\(\text{FIGURE XIV.1}\)
The circuit in Figure \(\text{XIV.1}\) contains two equal resistances, two equal capacitances, and a battery. The battery is connected at time \(t=0\). Find the charges held by the capacitors after time \(t\).
Apply Kirchhoff’s second rule to each half:
\[(\dot Q_1 + \dot Q_2)RC + Q_2 = CV, \tag{14.12.1}\]
and \[\dot Q_1 RC + Q_1 - Q_2 = 0.\tag{14.12.2}\]
Eliminate \(Q_2\):
\[R^2C^2\ddot Q_1 + 3RC\dot Q_1 + Q_1 = CV. \tag{14.12.3}\]
Transform, with \(Q_1\) and \(\dot Q_1\) initially zero:
\[(R^2C^2s^2 + 3RCs + 1) \bar{Q_1} = \frac{CV}{s}.\tag{14.12.4}\]
I.e. \[R^2C\bar{Q_1} = \frac{1}{s(s^2 + 3as + a^2)} \cdot V , \tag{14.12.5}\]
where \[a=1/(RC). \tag{14.12.6}\]
That is \[R^2C \bar{Q_1}= \frac{1}{s(s+2.618a)(s+0.382a)}V. \tag{14.12.7}\]
Partial fractions: \[R^2C\bar{Q_1} = \left[\frac{1}{s} + \frac{0.1708}{s+2.618a} - \frac{1.1708}{s+0.382a} \right] \frac{V}{a^2}. \tag{14.12.8}\]
That is, \[\bar{Q_1} = \left[ \frac{1}{s} + \frac{0.1708}{s+2.618a} - \frac{1.1708}{s+0.382a} \right]CV. \tag{14.12.9}\]
Inverse transform: \[Q_1 = \left[ 1 + 0.1708e^{-2.618t/(RC)} - 1.1708e^{-0.382t/(RC)} \right]CV. \tag{14.12.10}\]
The current can be found by differentiation.
I leave it to the reader to eliminate \(Q_1\) from equations 14.12.1 and 2 and hence to show that
\[Q_2 = \left[1 - 0.2764 e^{-2.618 t/(RC)} - 0.7236 e^{-0.382 t/(RC)} \right]CV. \tag{14.12.11}\]
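These closed forms can be sanity-checked numerically. Below is a minimal sketch (my own check, not part of the original text), assuming \(R=C=V=1\) so that \(a=1\): it integrates equations 14.12.1 and 14.12.2 with a classical RK4 step and compares the result against equation 14.12.10.

```python
import math

R1 = (3 + math.sqrt(5)) / 2   # 2.618..., the faster decay rate
R2 = (3 - math.sqrt(5)) / 2   # 0.382..., the slower decay rate

def closed_q1(t):
    # eq. 14.12.10 with R = C = V = 1, using exact partial-fraction constants
    a1 = 1 / (R1 * math.sqrt(5))   # 0.1708...
    a2 = 1 / (R2 * math.sqrt(5))   # 1.1708...
    return 1 + a1 * math.exp(-R1 * t) - a2 * math.exp(-R2 * t)

def rhs(q1, q2):
    # eq. 14.12.2 gives dQ1/dt = Q2 - Q1; eq. 14.12.1 gives dQ1/dt + dQ2/dt = 1 - Q2
    dq1 = q2 - q1
    return dq1, (1 - q2) - dq1

def integrate(t_end, dt=1e-4):
    q1 = q2 = 0.0   # both capacitors uncharged when the battery is connected
    for _ in range(int(round(t_end / dt))):
        # classical fourth-order Runge-Kutta step
        k1 = rhs(q1, q2)
        k2 = rhs(q1 + 0.5 * dt * k1[0], q2 + 0.5 * dt * k1[1])
        k3 = rhs(q1 + 0.5 * dt * k2[0], q2 + 0.5 * dt * k2[1])
        k4 = rhs(q1 + dt * k3[0], q2 + dt * k3[1])
        q1 += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        q2 += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return q1, q2
```

The numerical solution agrees with the closed form to machine precision at this step size.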
I am trying to figure out the details on how to implement the 3D structure tensor in C/C++ in an easy but efficient way and need some advice!
For a discrete function $ I(x_i,y_j,z_k)$ the 3D structure tensor is given by: $$ S=\begin{pmatrix} W \ast I_x^2 & W \ast (I_xI_y) & W \ast (I_xI_z)\\ W \ast (I_xI_y) & W \ast I_y^2 & W \ast I_yI_z \\ W \ast (I_xI_z) & W \ast I_yI_z & W \ast I_z^2 \\ \end{pmatrix}$$ where W is a smoothing kernel, $\ \ast $ denotes convolution, and a subscript denotes the partial derivative with respect to the corresponding variable.
The calculation of the structure tensor has two main steps:
1. Calculate the partial derivatives of the function in a way that is robust to noise over a window.
2. Smooth products of the partial derivatives over another, larger window.
I start by looking at step 2. I want to use a Gaussian as smoothing kernel. The normal distribution in 3 dimensions is separable: $$ g(x,y,z) = g(x)g(y)g(z) $$ where $$ g(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{1}{2}(x/\sigma)^2\right) $$ etc.
The Fourier transform of the normal distribution in 3 dimensions is also separable: $$ G(k_x,k_y,k_z) = G(k_x)G(k_y)G(k_z) $$ where, with the convention $G(k)=\int g(x)e^{-ikx}\,dx$, $$ G(k_x) = \exp\left(-\frac{1}{2}(k_x\sigma)^2\right) $$ etc.
How do I implement the Gaussian smoothing?
The simplest way to implement the Gaussian smoothing would be to loop over a 3D Gaussian kernel for each point in I.
Since the Gaussian is separable, however, it should be more efficient to perform a convolution with a 1D Gaussian in the x direction, followed by the y direction and the z direction.
Another even more efficient approach (?) would be to do the convolution in the Fourier space (where it becomes a multiplication): $$ g*a=\mathcal{F}^{-1}[GA] $$
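The separable option can be sketched in a few lines of numpy (hypothetical helper names; the kernel is truncated at roughly 3σ, and no attention is paid to boundary handling beyond `mode="same"`):

```python
import numpy as np

def gauss1d(sigma):
    # sampled 1D Gaussian, truncated near 3 sigma and renormalized to unit sum
    radius = max(1, int(3 * sigma + 0.5))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth3d(vol, sigma):
    # three 1D convolutions (one per axis) instead of one dense 3D kernel
    k = gauss1d(sigma)
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out
```

For an n³ volume and a kernel of width m, the dense 3D convolution costs O(n³ m³) while the separable version costs O(3 n³ m), which is why separability is usually worth exploiting before reaching for the FFT.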
Next I look at step 1.
The partial derivative of the 3D Gaussian with respect to $x$ is given by: $$ g_x(x,y,z) = -\frac{x}{\sigma^2}\,g(x)g(y)g(z) $$ and similarly for $y$ and $z$.
The Gaussian derivative can be used to estimate the partial derivatives of I in a way that is robust to noise: $$ I_x=\left(-\frac{x}{\sigma^2}g(x)\right)*g(y)*g(z)*I $$ where the three 1D convolutions are applied along the respective axes.
Here I am faced with the same decision: implement it without using separability, implement it using separability or implement it in Fourier space?
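Whichever smoothing route is chosen, assembling S afterwards is mechanical. A numpy-only sketch (hypothetical names; plain central differences stand in for the Gaussian-derivative filters, and `smooth` is whatever implementation of the window W you settle on):

```python
import numpy as np

def structure_tensor_3d(I, smooth=lambda v: v):
    # gradients (central differences here; swap in Gaussian derivatives for noise robustness)
    Ix, Iy, Iz = np.gradient(I.astype(float))
    grads = (Ix, Iy, Iz)
    S = np.empty(I.shape + (3, 3))
    for a in range(3):
        for b in range(3):
            # each tensor entry is W * (I_a I_b)
            S[..., a, b] = smooth(grads[a] * grads[b])
    return S
```

By construction the result is symmetric in the last two axes, and the diagonal entries are nonnegative whenever the smoothing kernel is nonnegative, which is a cheap correctness check.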
Any advice or comments?
All graphs considered will be directed graphs $G=(V,E)$, with $E \subseteq V \times V$ (so possibly with self-loops). For $k \in \mathbb{N}_{\geq 1}$, I will write $[k]$ for the set $\{1,\ldots,k\}$. A
$k$-valuation of $G$ is a mapping $\nu: V \to [k]$. Given a $k$-valuation $\nu$ of $G=(V,E)$, I define the graph $\nu(G)=(V_\nu,E_\nu)$ by $V_\nu = \{\nu(v) \mid v \in V\}$ and $E_\nu = \{(\nu(v),\nu(v')) \mid (v,v') \in E\}$. I am interested in the following counting problem:
INPUT: a directed graph $G=(V,E)$, and integer $k \in \mathbb{N}_{\geq 1}$.
OUTPUT: $|\mathrm{Span}_k(G)|$, where $\mathrm{Span}_k(G) = \{\nu(G) \mid \nu:V \to [k]\}$.
In other words, I want to count the number of distinct graphs that can be obtained from $G$ by a $k$-valuation of $G$.
My question: has this problem already been studied? What is its complexity?

Small example
Let $G=(V,E)$ be the triangle graph, i.e., $V=\{a,b,c\}$ and $E = \{(a,b),(b,c),(c,a)\}$, and let $k=2$. Then $\mathrm{Span}_k(G)$ contains $4$ graphs:
$G_1 = (V_1,E_1)$ where $V_1 = \{1\}$ and $E_1 = \{(1,1)\}$;
$G_2 = (V_2,E_2)$ where $V_2 = \{2\}$ and $E_2 = \{(2,2)\}$;
$G_3 = (V_3,E_3)$ where $V_3 = \{1,2\}$ and $E_3 = \{(1,2),(2,2),(2,1)\}$;
$G_4 = (V_4,E_4)$ where $V_4 = \{1,2\}$ and $E_4 = \{(2,1),(1,1),(1,2)\}$.
So the output should be $4$. Note that, although $G_1$ and $G_2$ (and $G_3$ and $G_4$) are isomorphic, they are still counted as different.
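For small instances the quantity can be brute-forced directly, which is handy for checking small cases like the one above (a sketch with hypothetical names; exponential in $|V|$, so only useful for tiny graphs):

```python
from itertools import product

def span_count(vertices, edges, k):
    # enumerate every k-valuation nu and collect the distinct image graphs nu(G)
    seen = set()
    for labels in product(range(1, k + 1), repeat=len(vertices)):
        nu = dict(zip(vertices, labels))
        V = frozenset(nu.values())
        E = frozenset((nu[u], nu[v]) for u, v in edges)
        seen.add((V, E))
    return len(seen)
```

On the triangle with k = 2 this enumeration returns 4, matching the example.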
Preliminary observations

It seems to be a variant of counting the number of quotient graphs of $G$, but I don't see an obvious reduction. Also, I have not found any work on counting quotient graphs.

My problem is in the class Span-P, the class of counting problems that can be defined as the number of distinct outputs of a nondeterministic Turing machine running in polynomial time. This class was introduced in Köbler, J., Schöning, U., & Torán, J. (1989). On counting and approximation. Acta Informatica, 26(4), 363–379. Indeed, a machine can just guess a $k$-valuation $\nu$ and then write the graph $\nu(G)$ (in the right order). Ideally I would like to show that it is Span-P complete. For the hardness part, if it helps, I don't mind considering the version of the problem where edges can be labeled with a fixed, finite alphabet.

Since it is in Span-P, according to this same paper (Theorem 7.2) this problem can be approximated in polynomial time, but using an oracle to NP. Can we get rid of the oracle to show that the problem has an FPRAS?

Is it in #P? It doesn't seem so, but I don't see a simple #P-hardness proof either...

If we define $\mathrm{SurjSpan}_k(G)$ to be just like $\mathrm{Span}_k(G)$ but restricted to $k$-valuations that are surjective, then clearly we have $|\mathrm{Span}_k(G)| = \sum_{i=1}^k \binom{k}{i} |\mathrm{SurjSpan}_i(G)|$. So the problems of counting $|\mathrm{SurjSpan}_k(G)|$ and $|\mathrm{Span}_k(G)|$ are reducible to each other (using Turing reductions here).
I'm trying to derive an approximation for the zero-rebate barrier option under the Heston model: $$dS_t=\mu S_tdt+\sqrt{v_t}S_tdW^S_t$$ $$dv_t=\kappa(\bar{v}-v_t)dt+\eta\sqrt{v_t}dW^v_t,\quad d\langle W^S,W^v\rangle_t=\rho dt$$ The payoff of a down-and-out call is: $$\mathcal{C}_T=(S_T-K)\,\mathbb{I}_{\{S_T\geq K\}}\,\mathbb{I}_{\{m_T\geq H\}},\qquad m_T:=\min_{0\leq t\leq T}S_t$$
Under the Black-Scholes dynamics we have the closed-form solution: $$\mathcal{C}_t=S_te^{r\tau}\left(\Phi\left(\frac{\ln(S_t/K)+\nu\tau}{\sigma\sqrt{\tau}}\right)-\left(\frac{H}{S_t}\right)^{1+2r/\sigma^2}\Phi\left(\frac{\ln(H^2/(S_tK))+\nu\tau}{\sigma\sqrt{\tau}}\right)\right) - K\left(\Phi\left(\frac{\ln(S_t/K)+(\nu-\sigma^2)\tau}{\sigma\sqrt{\tau}}\right)-\left(\frac{H}{S_t}\right)^{-1+2r/\sigma^2}\Phi\left(\frac{\ln(H^2/(S_tK))+(\nu-\sigma^2)\tau}{\sigma\sqrt{\tau}}\right)\right)$$ where $\nu=r+\frac{\sigma^2}{2}$ Obviously, under the Heston dynamics we don't have a closed-form solution. However, I would like to know whether there exists approximations allowing to price the down-and-out call option in a similar fashion.
As a starting point, I can use the result on the hitting probability for Heston with $\rho$ and $\mu$ equal to 0, i.e. the probability that the process, started at $x$, stays positive during the time interval $[0,t]$, where $x$ and $y$ are the initial values of the process and of the variance; it is solved via the Fourier transform in $x$: $$\frac{2}{\pi}\int^\infty_0 \frac{\sin (\omega x)}{\omega}\bigg(\frac{\Delta(\omega)e^{-m_-(\omega)\kappa t}}{m_+(\omega)+m_-(\omega)e^{-\Delta(\omega)\kappa t}}\bigg)^{\frac{2\theta\kappa}{\xi^2}}e^{-\frac{2y\kappa}{\xi^2}B(\omega,\kappa t)}d\omega$$ where $B(\omega,\tau)$ is a solution of the Riccati equation $$\frac{\partial B}{\partial \tau}(\omega,\tau)=-B(\omega,\tau)-B(\omega,\tau)^2+\frac{\xi^2 \omega^2}{4\kappa^2}, B(\omega,0)=0;$$ $$\Delta(\omega)=\sqrt{1+(\xi\omega/\kappa)^2}, m_{\pm}(\omega)=\frac{\Delta(\omega)\pm 1}{2}$$
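Lacking a closed form, a crude Monte Carlo benchmark is useful for validating any approximation you derive. A full-truncation Euler sketch (hypothetical function and parameter names; note that checking the barrier only on the grid biases the price slightly upward versus continuous monitoring):

```python
import numpy as np

def heston_do_call_mc(S0, v0, K, H, T, r, kappa, vbar, eta, rho,
                      n_paths=20000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    alive = np.ones(n_paths, dtype=bool)      # True while the barrier is unhit
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)               # full truncation of the variance
        S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (vbar - vp) * dt + eta * np.sqrt(vp * dt) * z2
        alive &= S > H                        # down-and-out knock-out check
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return float(np.exp(-r * T) * payoff.mean())
```

With a fixed seed the paths are identical across calls, so raising the barrier can only knock out more paths and the price is monotonically decreasing in H, which gives a cheap consistency test.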
Any hints in this direction, as well as completely different solutions, are much appreciated.
Answer
$d=r(1-\cos \frac{\theta}{2})$
Work Step by Step
Step 1: $d$ is the difference between $r$ and $h$.
Step 2: $d=r-h$
Step 3: Since $h=r\cos \frac{\theta}{2}$, $d=r-h=r-r\cos \frac{\theta}{2}=r(1-\cos \frac{\theta}{2})$
The Chi-Square Distribution
The goodness–of–fit test can be used to decide whether a population fits a given distribution, but it will not suffice to decide whether two populations follow the same unknown distribution. A different test, called the test for homogeneity, can be used to draw a conclusion about whether two populations have the same distribution. To calculate the test statistic for a test for homogeneity, follow the same procedure as with the test of independence.
The expected value for each cell needs to be at least five in order for you to use this test.
Hypotheses
\(H_0\): The distributions of the two populations are the same. \(H_a\): The distributions of the two populations are not the same.
Test Statistic: Use a \({\chi }^{2}\) test statistic. It is computed in the same way as the test for independence.
Degrees of Freedom (df): df = number of columns – 1
Requirements: All values in the table must be greater than or equal to five.
Common Uses: Comparing two populations. For example: men vs. women, before vs. after, east vs. west. The variable is categorical with more than two possible response values.
Do male and female college students have the same distribution of living arrangements? Use a level of significance of 0.05. Suppose that 250 randomly selected male college students and 300 randomly selected female college students were asked about their living arrangements: dormitory, apartment, with parents, other. The results are shown in [link]. Do male and female college students have the same distribution of living arrangements?
          Dormitory  Apartment  With Parents  Other
Males        72         84           49         45
Females      91         86           88         35

\(H_0\): The distribution of living arrangements for male college students is the same as the distribution of living arrangements for female college students.

\(H_a\): The distribution of living arrangements for male college students is not the same as the distribution of living arrangements for female college students.

Degrees of Freedom (df): df = number of columns – 1 = 4 – 1 = 3

Distribution for the test: \({\chi }_{3}^{2}\)

Calculate the test statistic: χ2 = 10.1287 (calculator or computer)

Probability statement: p-value = P(χ2 > 10.1287) = 0.0175

Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 4 ENTER. Enter the table values by row, pressing ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B]. Arrow down to Calculate. Press ENTER. The test statistic is 10.1287 and the p-value = 0.0175. Do the procedure a second time but arrow down to Draw instead of Calculate.
Compare α and the p-value: Since no α is given, assume α = 0.05. p-value = 0.0175. α > p-value. Make a decision: Since α > p-value, reject \(H_0\). This means that the distributions are not the same. Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that the distributions of living arrangements for male and female college students are not the same.
Notice that the conclusion is only that the distributions are not the same. We cannot use the test for homogeneity to draw any conclusions about how they differ.
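The statistic above can also be reproduced without a calculator. A stdlib-only sketch (hypothetical helper names; for df = 3 the right-tail probability has a closed form via the complementary error function):

```python
import math

def chi2_homogeneity(table):
    # Pearson chi-square statistic and degrees of freedom for an r x c table
    r, c = len(table), len(table[0])
    row = [sum(t) for t in table]
    col = [sum(table[i][j] for i in range(r)) for j in range(c)]
    n = sum(row)
    stat = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(r) for j in range(c))
    return stat, (r - 1) * (c - 1)

def chi2_sf_df3(x):
    # P(chi-square with 3 df > x), exact closed form for odd df = 3
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
```

Feeding in the living-arrangements table gives a statistic near 10.13 with df = 3 and a p-value near 0.0175, matching the calculator output.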
Do families and singles have the same distribution of cars? Use a level of significance of 0.05. Suppose that 100 randomly selected families and 200 randomly selected singles were asked what type of car they drove: sport, sedan, hatchback, truck, van/SUV. The results are shown in [link]. Do families and singles have the same distribution of cars? Test at a level of significance of 0.05.
         Sport  Sedan  Hatchback  Truck  Van/SUV
Family     5     15       35       17      28
Single    45     65       37       46       7

With a p-value of almost zero, we reject the null hypothesis. The data show that the distribution of cars is not the same for families and singles.
Both before and after a recent earthquake, surveys were conducted asking voters which of the three candidates they planned on voting for in the upcoming city council election. Has there been a change since the earthquake? Use a level of significance of 0.05. [link] shows the results of the survey. Has there been a change in the distribution of voter preferences since the earthquake?
         Perez  Chung  Stevens
Before    167    128     135
After     214    197     225

\(H_0\): The distribution of voter preferences was the same before and after the earthquake.

\(H_a\): The distribution of voter preferences was not the same before and after the earthquake.

Degrees of Freedom (df): df = number of columns – 1 = 3 – 1 = 2

Distribution for the test: \({\chi }_{2}^{2}\)

Calculate the test statistic: χ2 = 3.2603 (calculator or computer)

Probability statement: p-value = P(χ2 > 3.2603) = 0.1959

Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 3 ENTER. Enter the table values by row, pressing ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B]. Arrow down to Calculate. Press ENTER. The test statistic is 3.2603 and the p-value = 0.1959. Do the procedure a second time but arrow down to Draw instead of Calculate.
Compare α and the p-value: α = 0.05 and the p-value = 0.1959. α < p-value. Make a decision: Since α < p-value, do not reject \(H_0\). Conclusion: At a 5% level of significance, from the data, there is insufficient evidence to conclude that the distribution of voter preferences was not the same before and after the earthquake.
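For df = 2 the chi-square right-tail probability collapses to the closed form \(P(\chi^2 > x) = e^{-x/2}\), so the p-value for the earthquake example can be checked in one line (stdlib only):

```python
import math

# P(chi-square with 2 df > x) = exp(-x/2), applied to the statistic 3.2603
p_value = math.exp(-3.2603 / 2)
```

This reproduces the reported p-value of 0.1959 to four decimal places.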
Ivy League schools receive many applications, but only some can be accepted. At the schools listed in [link], two types of applications are accepted: regular and early decision.
Application Type Accepted   Brown  Columbia  Cornell  Dartmouth  Penn   Yale
Regular                     2,115   1,792     5,306     1,734    2,685  1,245
Early Decision                577     627     1,228       444    1,195    761
We want to know if the number of regular applications accepted follows the same distribution as the number of early applications accepted. State the null and alternative hypotheses, the degrees of freedom and the test statistic, sketch the graph of the p-value, and draw a conclusion about the test of homogeneity.

\(H_0\): The distribution of regular applications accepted is the same as the distribution of early applications accepted.
\(H_a\): The distribution of regular applications accepted is not the same as the distribution of early applications accepted.
df = 5
χ2 test statistic = 430.06
Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 6 ENTER. Enter the table values by row, pressing ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B]. Arrow down to Calculate. Press ENTER. The test statistic is 430.06 and the p-value = 9.80E-91. Do the procedure a second time but arrow down to Draw instead of Calculate.
References
Data from the Insurance Institute for Highway Safety, 2013. Available online at www.iihs.org/iihs/ratings (accessed May 24, 2013).
“Energy use (kg of oil equivalent per capita).” The World Bank, 2013. Available online at http://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE/countries (accessed May 24, 2013).
“Parent and Family Involvement Survey of 2007 National Household Education Survey Program (NHES),” U.S. Department of Education, National Center for Education Statistics. Available online at http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2009030 (accessed May 24, 2013).
“Parent and Family Involvement Survey of 2007 National Household Education Survey Program (NHES),” U.S. Department of Education, National Center for Education Statistics. Available online at http://nces.ed.gov/pubs2009/2009030_sup.pdf (accessed May 24, 2013).
Chapter Review
To assess whether two data sets are derived from the same distribution (which need not be known), you can apply the test for homogeneity that uses the chi-square distribution. The null hypothesis for this test states that the populations of the two data sets come from the same distribution. The test compares the observed values against the expected values if the two populations followed the same distribution. The test is right-tailed. Each observation or cell category must have an expected value of at least five.
Formula Review
\(\sum _{i\cdot j}\frac{{\left(O-E\right)}^{2}}{E}\) Homogeneity test statistic, where:
O = observed values
E = expected values
i = number of rows in data contingency table
j = number of columns in data contingency table

df = (i − 1)(j − 1) Degrees of freedom
A math teacher wants to see if two of her classes have the same distribution of test scores. What test should she use?
test for homogeneity
What are the null and alternative hypotheses for [link]?
A market researcher wants to see if two different stores have the same distribution of sales throughout the year. What type of test should he use?
test for homogeneity
A meteorologist wants to know if East and West Australia have the same distribution of storms. What type of test should she use?
What condition must be met to use the test for homogeneity?
All values in the table must be greater than or equal to five.
Use the following information to answer the next five exercises: Do private practice doctors and hospital doctors have the same distribution of working hours? Suppose that a sample of 100 private practice doctors and 150 hospital doctors are selected at random and asked about the number of hours a week they work. The results are shown in [link].
                  20–30  30–40  40–50  50–60
Private Practice    16     40     38      6
Hospital             8     44     59     39
State the null and alternative hypotheses.
df = _______
3
What is the test statistic?
What is the p-value?
0.00005
What can you conclude at the 5% significance level?
Homework

For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places.
A psychologist is interested in testing whether there is a difference in the distribution of personality types for business majors and social science majors. The results of the study are shown in [link]. Conduct a test of homogeneity. Test at a 5% level of significance.
                 Open  Conscientious  Extrovert  Agreeable  Neurotic
Business          41        52            46         61        58
Social Science    72        75            63         80        65

\(H_0\): The distribution of personality types is the same for both majors.
\(H_a\): The distribution of personality types is not the same for both majors.
df = 4; chi-square with df = 4
test statistic = 3.01
p-value = 0.5568
Check student’s solution.
Alpha: 0.05
Decision: Do not reject the null hypothesis.
Reason for decision: p-value > alpha
Conclusion: There is insufficient evidence to conclude that the distribution of personality types is different for business and social science majors.
Do men and women select different breakfasts? The breakfasts ordered by randomly selected men and women at a popular breakfast place is shown in [link]. Conduct a test for homogeneity at a 5% level of significance.
        French Toast  Pancakes  Waffles  Omelettes
Men          47           35        28        53
Women        65           59        55        60
A fisherman is interested in whether the distribution of fish caught in Green Valley Lake is the same as the distribution of fish caught in Echo Lake. Of the 191 randomly selected fish caught in Green Valley Lake, 105 were rainbow trout, 27 were other trout, 35 were bass, and 24 were catfish. Of the 293 randomly selected fish caught in Echo Lake, 115 were rainbow trout, 58 were other trout, 67 were bass, and 53 were catfish. Perform a test for homogeneity at a 5% level of significance.
\(H_0\): The distribution for fish caught is the same in Green Valley Lake and in Echo Lake.
\(H_a\): The distribution for fish caught is not the same in Green Valley Lake and in Echo Lake.
df = 3; chi-square with df = 3
test statistic = 11.75
p-value = 0.0083
Check student’s solution.
Alpha: 0.05
Decision: Reject the null hypothesis.
Reason for decision: p-value < alpha
Conclusion: There is evidence to conclude that the distribution of fish caught is different in Green Valley Lake and in Echo Lake.
In 2007, the United States had 1.5 million homeschooled students, according to the U.S. National Center for Education Statistics. In [link] you can see that parents decide to homeschool their children for different reasons, and some reasons are ranked by parents as more important than others. According to the survey results shown in the table, is the distribution of applicable reasons the same as the distribution of the most important reason? Provide your assessment at the 5% significance level. Did you expect the result you obtained?
Reasons for Homeschooling                                    Applicable Reason (thousands)  Most Important Reason (thousands)  Row Total
Concern about the environment of other schools                        1,321                           309                       1,630
Dissatisfaction with academic instruction at other schools            1,096                           258                       1,354
To provide religious or moral instruction                             1,257                           540                       1,797
Child has special needs, other than physical or mental                  315                            55                         370
Nontraditional approach to child’s education                            984                            99                       1,083
Other reasons (e.g., finances, travel, family time, etc.)               485                           216                         701
Column Total                                                          5,458                         1,477                       6,935
When looking at energy consumption, we are often interested in detecting trends over time and how they correlate among different countries. The information in [link] shows the average energy use (in units of kg of oil equivalent per capita) in the USA and the joint European Union countries (EU) for the six-year period 2005 to 2010. Do the energy use values in these two areas come from the same distribution? Perform the analysis at the 5% significance level.
Year           European Union  United States  Row Total
2010                3,413          7,164        10,557
2009                3,302          7,057        10,359
2008                3,505          7,488        10,993
2007                3,537          7,758        11,295
2006                3,595          7,697        11,292
2005                3,613          7,847        11,460
Column Total       20,965         45,011        65,976

\(H_0\): The distribution of average energy use in the USA is the same as in Europe between 2005 and 2010.
\(H_a\): The distribution of average energy use in the USA is not the same as in Europe between 2005 and 2010.
df = 5; chi-square with df = 5
test statistic = 2.7434
p-value = 0.7395
Check student’s solution.
Alpha: 0.05
Decision: Do not reject the null hypothesis.
Reason for decision: p-value > alpha
Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the average energy use values in the US and EU come from different distributions for the period from 2005 to 2010.
The Insurance Institute for Highway Safety collects safety information about all types of cars every year, and publishes a report of Top Safety Picks among all cars, makes, and models. [link] presents the number of Top Safety Picks in six car categories for the two years 2009 and 2013. Analyze the table data to conclude whether the distribution of cars that earned the Top Safety Picks safety award has remained the same between 2009 and 2013. Derive your results at the 5% significance level.
Year \ Car Type   Small  Mid-Size  Large  Small SUV  Mid-Size SUV  Large SUV  Row Total
2009                12       22       10      10           27           6         87
2013                31       30       19      11           29           4        124
Column Total        43       52       29      21           56          10        211
In fractional reserve banking, commercial banks create money when they make loans. When these loans are paid back, the account is zeroed and the created money disappears, but the bank is still entitled to interest. Where does the money to pay the interest come from? Does the central bank necessarily have to inflate the currency to pay it? Does money in circulation generally cover it?
Repayment of interest does require an expansion of the money supply, but not in a way that is inflationary.
Consider first the way that commercial banks make loans. The whole purpose of loans is to borrow against future output— so if you imagine a firm that wants to purchase a machine that will allow it to produce more stuff— and thus have greater future income— the firm may go to a commercial bank and ask to take out a loan. The commercial bank will price that loan at a premium to the risk-free interest rate that reflects the risk of the loan, so that in expectation (taking into account that some loans will be repaid only in part or not at all), banks making riskier loans will be repaid at a slightly higher rate than ones making safer loans.
So firms will be offered an interest rate, and if that rate is less than their expected return on their investment, they'll make the investment. Notably, in this case, they're only borrowing money and investing
because their future output will be higher. So when all of this works out (that is, when there isn't a credit bubble in which people are borrowing against future output that won't materialize), future output $Y$ (i.e., GDP) will rise by at least as much as the additional money $M$ required to repay the interest on the loans:
$$ \% \Delta Y \geq \% \Delta M $$
Now if you recall the relationship between the money stock $M$, output $Y$, the price level $P$, and money velocity $V$:
$$ PY=MV \rightarrow P=\frac{MV}{Y} $$
It's easy to see that, in growth rates (with $V$ constant):

$$ \% \Delta P \approx \% \Delta M - \% \Delta Y $$
Which, when combined with the observation that $ \% \Delta Y \geq \% \Delta M $, implies that an expansion of the money supply that is
merely sufficient to allow for repayment of interest would actually be deflationary. This should make perfect sense: in expectation, output must increase by more than the amount captured by banks in the form of interest, otherwise firms in the real economy would never bother investing in fixed capital. So the money supply will in all likelihood increase by an amount strictly greater than the level required to repay interest, yet without any increase in inflation.
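The growth-rate arithmetic can be made concrete with hypothetical numbers (velocity held fixed; all values are illustrative, not data):

```python
# P = MV/Y with V fixed; suppose M grows 3% while Y grows 5%
M0, V, Y0 = 100.0, 2.0, 50.0
M1, Y1 = M0 * 1.03, Y0 * 1.05

P0 = M0 * V / Y0           # initial price level
P1 = M1 * V / Y1           # price level after the growth
p_growth = P1 / P0 - 1     # exact change, close to the approximation 3% - 5% = -2%
```

The exact change works out to about -1.9%, near the log-linear approximation of -2%: money growing more slowly than output is mildly deflationary, exactly the case described above.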
Loans to consumers are similar, so I won't give them a full treatment. Consumers are borrowing against their own future output, and foreclosures are generally priced into the interest rate on a loan.
You'll note that I left out the case where in the aggregate, future output is less than that required to repay outstanding loans. This is intentional, as it's worth treating as a separate question.
The key thing to note is that the interest paid does not simply accumulate at the bank. Nobody runs a business in order to just make a big pile of money to keep in a safe. The banks spend the money back into the economy through staff wages, running costs and dividend payments to shareholders. All these outflows of money will ultimately be spent on real goods and services produced by the rest of the economy. This flow of money is a source of money for the interest payments.
You can get an idea of what is going on by imagining an economy with a fixed money supply. The figure below shows a hypothetical flow of the money in the economy. The flow marked "trade" is simply the money circulating back and forth between households and industry as people earn money and spend it on what has been produced in the factories.
In a steady state, the rate of flow of money being created as new loans (shown as "loans" in the diagram) will be equal to the rate of flow of money being paid back in principal repayments (i.e. before interest payments).
At the same time the loan interest payments are equal to the spending by banks (those staff wages, running costs and dividend payments to shareholders).
As you can see, the flows can balance. Nothing is broken, no new money needs to be added to pay the interest. The system can continue indefinitely.
The issue regarding the source of macroeconomic interest (and profits) appears to be unsettled among economists. A free paper on this issue, entitled “What is the Source of Profit and Interest? A Classical Conundrum Reconsidered,” by Gunnar Tomasson and Dirk J. Bezemer, dated January 29, 2010, and posted March 11, 2010, can be found online at https://mpra.ub.uni-muenchen.de/21292/. Personally, although I have not exhaustively researched this issue or economists’ attempts to address it, of the explanations I have studied I find the monetary-circuit approach of Professor Louis-Philippe Rochon the most plausible resolution of the conundrum. On that view, in firms’ investment cycles, the cash outflow required for the purchase of capital goods and financed by long-term bank loans occurs in the first period of production, while the loans may be paid back over multiple periods of production until the end of the investment cycle. Accordingly, based on my understanding of Professor Rochon's view, the central bank does not necessarily have to inflate the currency to pay macroeconomic interest, and money in circulation can generally cover it.
Interest is income, the reward for a service offered: the renting of (here, financial) capital. From this aspect it is no different from, say, a salary earned for a month's work. So the question generalizes: how is any kind of income paid? Does the central bank necessarily have to inflate the currency in order for it to be paid? Does money in circulation generally cover it?
The answer, simply and perhaps boringly, is: "it depends". It depends on whether the existing money supply adequately covers the transactions needed for the existing level of total income. If interest income increases while salaries fall, it may be the case that no additional money is needed, for example.
Or it may be the case that both kinds of income increase, but the velocity of money also increases; so again, no need for extra money.
Perhaps the most compelling evidence that fractional banking must be supported by a constant expansion of the monetary base by the central bank is that the Fed, while maintaining a roughly constant target Fed Funds Rate, almost always experiences growth in MB.
The demand to pay interest spikes up the interbank lending rate which incites the Fed to put more money into the Fed Funds market. If there were enough money to pay interest, then such pressure should not exist.
The ethical implications of this are quite alarming. If the money supply increases and a disproportionate amount goes to support the banking system, then this is an indirect transfer from the poor to the rich. As the system continues, the divide will increase unless alleviated by government actions (like welfare) or by bank failures.
Consider the regular Sturm-Liouville problem in self-adjoint form$$(A-\zeta)\,v=f$$whose explicit solution is$$v=\int_a^bG(x,s)\,f(s)\,ds$$where the Green function$$G(x,s)=\begin{cases}\frac{\varphi_b(x)\,\varphi_a(s)}{W(s)},&a\leq{s}\leq{x}\\[0.1in]\frac{\varphi_a(x)\,\varphi_b(s)}{W(s)},&x\leq{s}\leq{b}\end{cases}$$is built from the solutions $\varphi_a$, $\varphi_b$ of the equation $A\,\varphi=0$ which satisfy the respective boundary conditions at $x=a,b$. It's a long way to get here, but you can verify this. The resolvent associated to $A$ is given by$$R(A,\zeta)\,f=\int_a^bG(x,s)\,f(s)\,ds$$Now, the fact that an operator $K$ whose kernel $k(x,s)$ satisfies$$\int_a^b\int_a^b\left|k(x,s)\right|^2ds\,dx<\infty\hspace{0.5in}(*)$$is compact (such square-integrable kernels are called Hilbert-Schmidt) may be used as follows: if the Green function $G$ of the resolvent operator $R$ is continuous, then $G$ satisfies equation $(*)$ on a finite interval $[a,b]$, so $R$ is a compact self-adjoint operator for regular Sturm-Liouville problems. You could just do this last thing in general, and then show that with the operator$$\hat{H}\equiv{T}=-\frac{\hbar^2}{2m}\nabla^2+V$$(the Hamiltonian), the eigenvalue problem $\hat{H}\psi=E\psi$ (known as the stationary -or time-independent- Schrödinger equation) can be reduced to a regular Sturm-Liouville problem.
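As a concrete illustration (my own example, not from the original argument), take the simplest regular problem $Av=-v''$ on $[0,1]$ with $v(0)=v(1)=0$. Here $\varphi_a(x)=x$ and $\varphi_b(x)=1-x$ solve $A\varphi=0$ with the respective boundary conditions, the Wronskian factor is constant, and everything above can be verified by hand:

```latex
G(x,s)=\begin{cases} s\,(1-x), & 0\le s\le x \\[0.05in] x\,(1-s), & x\le s\le 1 \end{cases}
\qquad
\int_0^1\!\!\int_0^1 \left|G(x,s)\right|^2 ds\,dx < \infty
```

Since $G$ is continuous and bounded on the unit square, condition $(*)$ holds trivially, so the resolvent here is indeed a compact self-adjoint (Hilbert-Schmidt) operator.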
All of this matters because, complementing what you said, the eigenvalues of the problem $T\varphi_k=\lambda_k\varphi_k$ for a compact symmetric operator $T$ on a Hilbert space $H$ with inner product $\langle\cdot\mid\cdot\rangle_H$ form a bounded, countably infinite set that converges to zero, $\displaystyle{\lim_{k\to\infty}\lambda_k=0}$. Other (pretty relevant) properties are that each eigenvalue $\lambda_k$ has finite multiplicity, and that the set of all eigenfunctions $\varphi_k$ defines a complete basis of the space $H$, so that any element $f$ of $H$ can be expanded as $\displaystyle{f=\sum_{k=1}^\infty{f_k}\,\varphi_k}$.
Update
Please note the relevance of the condition $\boxed{\displaystyle{E<\lim_{|x|\rightarrow \infty}V(x)}}$. Consider the next image as an illustration (I've taken it directly from the web, so read the potential as $V=U$); as the math is pretty straightforward, I'll lean a little more on the physical picture this time:
With the condition above, the results found are valid for the equation $\hat{H}\psi=E_i\psi$ with $i=0,1$. Indeed, both equations may presumably be reduced to a regular S-L problem because they describe a finite (penetrable) potential well, where we can know the energy spectrum just from $\psi$ inside the well, i.e. on a finite interval. For $i=2$, by contrast, the interval for $\psi$ would inevitably be infinite, so it is evident that $\hat{H}\psi=E_i\psi$ couldn't be reduced to a regular S-L problem, and thus the energy spectrum presumably wouldn't be discrete. Feel free to consider any potential you like.
This corresponds to a beautiful analogy with classical mechanics, where we find closed or open orbits; here these are bound and unbound states (hope I got the translations right) respectively. Think of a massive nucleus and an electron that interact by means of an attractive Coulomb potential. If the condition above is satisfied, the system may be a hydrogen-like atom, which has a discrete energy spectrum, as is well known. But if someone shot the electron at the nucleus from very far away and with enough kinetic energy, the total energy would be positive and the nucleus would deflect the electron without capturing it and without changing its energy, which evidently is free to take any value. The simplest case of a continuous energy spectrum in QM is that of the free particle, i.e. $V(\mathbf{r})=0,\,\forall\mathbf{r}\in\mathbb{R}^3$; there the normalization condition $\int\psi^*_n\psi_m\,dx=\delta_{nm}$ is no longer valid and must be generalized using the Dirac delta, and such wavefunctions do not strictly belong to the Hilbert space. |
Introduction
The perfect gas equation of state \(PV=NkT\) is manifestly incapable of describing actual gases at low temperatures, since they undergo a discontinuous change of volume and become liquids. In the 1870s, the Dutch physicist Van der Waals came up with an improvement: a gas law that recognized that the molecules interact with each other. He put in two parameters to mimic this interaction. The first, an attractive intermolecular force at long distances, helps draw the gas together and therefore reduces the necessary outside pressure to contain the gas in a given volume—the gas is a little thinner near the walls. The attractive long range force can be represented by a negative potential \(-aN/V\) on going away from the walls—the molecules near the walls are attracted inwards, those in the bulk are attracted equally in all directions, so effectively the long range attraction is equivalent to a potential well extending throughout the volume, ending close to the walls. Consequently, the gas density \(N/V\) near the walls is decreased by a factor \(e^{-E/kT}=e^{-aN/VkT}\cong 1-aN/VkT\). Therefore, the pressure measured at the containing wall is from slightly diluted gas, so \(P=(N/V)kT\) becomes \(P=(N/V)(1-aN/VkT)kT\), or \((P+a(N/V)^2)V=NkT\). The second parameter van der Waals added was to take account of the finite molecular volume. A real gas cannot be compressed indefinitely—it becomes a liquid, for all practical purposes incompressible. He represented this by replacing the volume \(V\) with \(V-Nb\), where \(Nb\) is referred to as the “excluded volume”, roughly speaking the volume of the molecules. Putting in these two terms gives his famous equation \[ \left[ P+a\left(\frac{N}{V}\right)^2\right] (V-Nb)=NkT. \label{9.3.1}\]
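The equation of state is easy to check numerically; the sketch below uses the molar form \((P+a(n/V)^2)(V-nb)=nRT\), with illustrative textbook constants for CO2 (the specific values of \(a\) and \(b\) are my additions, not taken from this text):

```python
# Van der Waals equation in molar form: (P + a (n/V)^2) (V - n b) = n R T.
# Approximate constants for CO2 (assumed illustrative values):
a = 0.364       # Pa m^6 / mol^2
b = 4.27e-5     # m^3 / mol
R_gas = 8.314   # J / (mol K)

def pressure_vdw(n, V, T):
    """Van der Waals pressure, i.e. the equation solved for P."""
    return n * R_gas * T / (V - n * b) - a * (n / V) ** 2

def pressure_ideal(n, V, T):
    return n * R_gas * T / V

n, V, T = 1.0, 1e-3, 300.0          # 1 mol in 1 litre at 300 K
P = pressure_vdw(n, V, T)

# Consistency check: substituting P back reproduces nRT exactly.
lhs = (P + a * (n / V) ** 2) * (V - n * b)
print(P, pressure_ideal(n, V, T), lhs, n * R_gas * T)
```

At this density the attractive term dominates the excluded-volume correction, so the van der Waals pressure comes out below the ideal-gas pressure, as the argument above suggests.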
This rather crude approximation does in fact give sets of isotherms representing the basic physics of a phase transition quite well. (For further details, and an enlightening discussion, see for example Appendix D of Thermal Physics, by R. Baierlein.)
Ground State Hydrogen Atoms
Our interest here is in understanding the van der Waals long-range attractive force between electrically neutral atoms and molecules in quantum mechanical terms. We begin with the simplest possible example, two hydrogen atoms, both in the ground state:
We label the atoms \(A\) and \(B\), the vectors from the protons to the electron position are denoted by \(\vec{r_A}\) and \(\vec{r_B}\) respectively, and \(\vec{R}\) is the vector from proton \(A\) to proton \(B\).
Then the Hamiltonian \(H=H^0+V\), where \[ H^0=-\frac{\hbar^2}{2m}(\nabla^2_A+\nabla^2_B)-\frac{e^2}{r_A}-\frac{e^2}{r_B} \label{9.3.2}\]
and the electrostatic interaction between the two atoms \[ V=\frac{e^2}{R}+\frac{e^2}{|\vec{R}+\vec{r_B}-\vec{r_A}|}-\frac{e^2}{|\vec{R}+\vec{r_B}|}-\frac{e^2}{|\vec{R}-\vec{r_A}|} \label{9.3.3}\]
The ground state of \(H^0\) is just the product of the ground states of the atoms \(A,B\), that is, \[ |0\rangle =|100\rangle_A\otimes |100\rangle_B. \label{9.3.4}\]
Assuming now that the distance between the two atoms is much greater than their size, we can expand the interaction \(V\) in the small parameters \(r_A/R,\; r_B/R\). As one might suspect from the diagram above, the leading order terms in the electrostatic energy are just those of a dipole-dipole interaction: \[ V=-e^2(\vec{r_A}\cdot \vec{\nabla})(\vec{r_B}\cdot \vec{\nabla})\frac{1}{R}=e^2\left[ \frac{\vec{r_A}\cdot \vec{r_B}}{R^3}-\frac{3(\vec{r_A}\cdot \vec{R})(\vec{r_B}\cdot \vec{R})}{R^5}\right] \label{9.3.5}\]
Taking now the \(z\)-axis in the direction \(\vec{R}\), this interaction energy is \[ V=\frac{e^2}{R^3}(x_Ax_B+y_Ay_B-2z_Az_B) \label{9.3.6}\]
Now the first-order correction to the ground state energy of the two-atom system from this interaction is \(E^1_n=\langle n^0|H^1|n^0\rangle\) , where here \(H^1=V\) and \(|n^0\rangle =|100\rangle_A\otimes |100\rangle_B\). Beginning with the first term \(x_Ax_B\) in \(V\) \[ (_A\langle 100|\otimes_B\langle 100|)(x_Ax_B)(|100\rangle_A\otimes |100\rangle_B)=(_A\langle 100|x_A|100\rangle_A)(_B\langle 100|x_B|100\rangle_B) \label{9.3.7}\]
is clearly zero since the ground states are spherically symmetric. Similarly, the other terms in \(V\) are zero to first order.
Recall that the second-order energy correction is \(E^2_n=\sum_{m\neq n} \frac{|\langle m^0|H^1|n^0\rangle |^2}{E^0_n-E^0_m} \).
That is, \[ E^{(2)}=\sum_{\begin{matrix}n,l,m\\ n′,l′,m′ \end{matrix}} \frac{|(_A\langle nlm|\otimes_B\langle n′l′m′|)V(|100\rangle_A\otimes |100\rangle_B)|^2}{2E_1-E_n-E_{n′}}. \label{9.3.8}\]
A typical term here is \[ (_A\langle nlm|\otimes_B\langle n′l′m′|)(x_Ax_B)(|100\rangle_A\otimes |100\rangle_B)=(_A\langle nlm|x_A|100\rangle_A)(_B\langle n′l′m′|x_B|100\rangle_B), \label{9.3.9}\]
so the single-atom matrix elements are exactly those we discussed for the Stark effect (as we would expect—this is an electrostatic interaction!). As before, only \(l=1,\;\; l′=1\) contribute. To make a rough estimate of the size of \(E^{(2)}\), we can use the same trick used for the quadratic Stark effect: replace the denominators by the constant \(2E_1\) (the other terms are a lot smaller for the bound states, and continuum states have small overlap terms in the numerator). The sum over intermediate states \(n,l,m,n′,l′,m′\) can then be taken to be completely unrestricted, including even the ground state, giving \[ \sum_{\begin{matrix}n,l,m\\ n′,l′,m′ \end{matrix}} (|nlm\rangle_A\otimes |n′l′m′\rangle_B)(_A\langle nlm|\otimes_B\langle n′l′m′|)=I, \label{9.3.10}\]
the identity operator. In this approximation, then, just as for the Stark effect, \[ E^{(2)}\simeq \frac{e^4}{R^6}\frac{1}{2E_1}(_A\langle 100|\otimes_B\langle 100|)(x_Ax_B+y_Ay_B-2z_Az_B)^2(|100\rangle_A\otimes |100\rangle_B) \label{9.3.11}\]
where \(E_1=-1\) Ryd., so this is a lowering of energy.
In multiplying out \((x_Ax_B+y_Ay_B-2z_Az_B)^2\), the cross terms will have expectation values of zero. The ground state wave function is symmetrical, so all we need is \(\langle 100|x^2|100\rangle =a^2_0\), where \(a_0\) is the Bohr radius.
This gives
\[ E^{(2)}\simeq \frac{e^4}{R^6}\frac{1}{2E_1}6a^4_0\simeq -6\frac{e^2}{R}\left( \frac{a_0}{R}\right)^5 \label{9.3.12}\]
using \(E_1=-e^2/2a_0\). Bear in mind that this is an approximation, but a pretty good one—a more accurate calculation replaces the 6 by 6.5.
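In atomic units (\(e=a_0=1\), energies in hartrees, so \(E_1=-\tfrac{1}{2}\)), equation \ref{9.3.12} reduces to \(E^{(2)}\simeq -6/R^6\); a quick numerical sketch of how fast this falls off:

```python
def vdw_energy(R):
    """Leading van der Waals estimate -6 (e^2/R)(a0/R)^5 in atomic units
    (e = a0 = 1, so R is in Bohr radii and the result is in hartrees)."""
    return -6.0 * (1.0 / R) * (1.0 / R) ** 5

# At a separation of 10 Bohr radii the attraction is already tiny,
# and halving the separation strengthens it by a factor 2^6 = 64.
E10 = vdw_energy(10.0)
E5 = vdw_energy(5.0)
print(E10, E5 / E10)
```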
Forces between a 1s Hydrogen Atom and a 2p Hydrogen Atom
With one atom in the \(|100\rangle\) and the other in \(|210\rangle\) , say, a typical leading order term would be
\[ (_A\langle 100|\otimes_B\langle 210|)(x_Ax_B)(|100\rangle_A\otimes |100\rangle_B)=(_A\langle 100|x_A|100\rangle_A)(_B\langle 210|x_B|100\rangle_B), \label{9.3.13}\]
and this is certainly zero, as are all the other leading terms. Baym (Lectures on Quantum Mechanics) concluded from this that there is no leading order energy correction between two hydrogen atoms if one of them is in the ground state. This is incorrect: the first excited state of the two-atom system (without interaction) is degenerate, so, exactly as for the 2-D simple harmonic oscillator treated in the previous lecture, we must diagonalize the perturbation in the subspace of these degenerate first excited states. (For this section, we follow fairly closely the excellent treatment in Quantum Mechanics, by C. Cohen-Tannoudji et al.)
The space of the degenerate first excited states of the two noninteracting atoms is spanned by the product-space kets: \[ \begin{matrix} (|100\rangle_A\otimes |200\rangle_B), &(|200\rangle_A\otimes |100\rangle_B),&(|100\rangle_A\otimes |211\rangle_B),&(|211\rangle_A\otimes |100\rangle_B),\\ (|100\rangle_A\otimes |210\rangle_B),&(|210\rangle_A\otimes |100\rangle_B),&(|100\rangle_A\otimes |21-1\rangle_B),&(|21-1\rangle_A\otimes |100\rangle_B). \end{matrix} \label{9.3.14}\]
The task, then, is to diagonalize \(V=\frac{e^2}{R^3}(x_Ax_B+y_Ay_B-2z_Az_B)\) in this eight-dimensional subspace.
We begin by representing \(V\) as an \(8\times 8\) matrix using these states as the basis. First, note that all the diagonal elements of the matrix are zero—in all of them, we’re finding the average of \(x\), \(y\) or \(z\) for one of the atoms in the ground state. Second, writing \(V=\frac{e^2}{R^3}(\vec{r_A}\cdot \vec{r_B}-3z_Az_B)\), it is evident that \(V\) is unchanged if the system is rotated around the \(z\)-axis (the line joining the two protons). This means that the commutator \([V,L_z]=0\), where \(L_z\) is the total angular momentum component in the \(z\)-direction, so \(V\) will only have nonzero matrix elements between states having the same total \(L_z\). Third, from parity (or Wigner-Eckart) all matrix elements in the subspace spanned by \((|100\rangle_A\otimes |200\rangle_B),\; (|200\rangle_A\otimes |100\rangle_B)\) must be zero.
This reduces the nonzero part of the \(8\times 8\) matrix to a direct product of three \(2\times 2\) matrices, corresponding to the three values of \(L_z=m\). For example, the \(m=0\) subspace is spanned by \((|100\rangle_A\otimes |210\rangle_B),\; (|210\rangle_A\otimes |100\rangle_B)\). The diagonal elements of the \(2\times 2\) matrix are zero, the off-diagonal elements are equal to \(-2\frac{e^2}{R^3}(_A\langle 100|z_A|210\rangle_A)(_B\langle 210|z_B|100\rangle_B)\), where we have kept the unnecessary labels \(A,B\) to make clear where this term comes from. (The \(x_A\) and \(y_A\) terms will not contribute for \(m=0\).)
This is now a straightforward integral over hydrogen wave functions. The three \(2\times 2\) matrices have the form \[ \begin{pmatrix} 0&k_m/R^3\\ k_m/R^3 &0 \end{pmatrix} \label{9.3.15}\]
(following the notation of Cohen-Tannoudji) where \(k_m\sim e^2a^2_0\), and the energy eigenvalues are \(\pm k_m/R^3\), with corresponding eigenkets \( (1/\sqrt{2})[(|100\rangle_A\otimes |210\rangle_B)\pm (|210\rangle_A\otimes |100\rangle_B)]\).
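The eigensystem claimed for the matrix in equation \ref{9.3.15} is easy to verify directly (the numerical value standing in for \(k_m/R^3\) below is arbitrary):

```python
import math

k = 1.7  # arbitrary positive value standing in for k_m / R^3

# M = [[0, k], [k, 0]]; the characteristic polynomial is lambda^2 - k^2,
# so the eigenvalues are +k and -k, with eigenvectors (1, ±1)/sqrt(2).
eigvals = (k, -k)
eigvecs = ((1 / math.sqrt(2), 1 / math.sqrt(2)),
           (1 / math.sqrt(2), -1 / math.sqrt(2)))

for lam, (v0, v1) in zip(eigvals, eigvecs):
    Mv = (k * v1, k * v0)   # M applied to (v0, v1)
    assert abs(Mv[0] - lam * v0) < 1e-12 and abs(Mv[1] - lam * v1) < 1e-12
print("eigenvalues:", eigvals)
```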
So for two hydrogen atoms, one in the ground state and one in the first excited state, the van der Waals interaction energy goes as \(1/R^3\), much more important than the \(1/R^6\) energy for two hydrogen atoms in the ground state. Notice also that the \(1/R^3\) term can be positive or negative, depending on whether the atoms are in an even or an odd state—so the atoms sometimes repel each other.
Finally, if two atoms are initially in a state \((|100\rangle_A\otimes |210\rangle_B)\), note that this is not an eigenstate of the Hamiltonian when the interaction is included. Writing the state as a sum of the even and odd states, which have slightly different phase frequencies from the energy difference, we find the excitation moves back and forth between the two atoms with a period \(hR^3/2k_{m=0}\). |
(This question came up in a conversation with my professor last week.)
Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$.
Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ? What if $G$ is finite?
The Mathieu group $M_{11}$ does not have this property. A quote from Example 2.16 in this paper: "Hence there is no automorphism of $M_{11}$ that maps $x$ to $x^{-1}$."
Background how I found this quote as I am no group theorist: I used Google on "groups with no outer automorphism" which led me to this Wikipedia article, and from there I jumped to this other Wikipedia article. So I learned that $M_{11}$ has no outer automorphism. Then I used Google again on "elements conjugate to their inverse in the mathieu group" which led me to the above mentioned paper.
EDIT: Following Geoff Robinson's comment, let me show that any element $x\in M_{11}$ of order 11 has this property, using only basic group theory and the above Wikipedia article. The article tells us that $M_{11}$ has 7920 elements, of which 1440 have order 11. So $M_{11}$ has 1440/10=144 Sylow 11-subgroups, each cyclic of order 11. These subgroups are conjugate to each other by one of the Sylow theorems, so each of them has a normalizer subgroup of order 7920/144=55. In particular, if $x$ and $x^{-1}$ were conjugate to each other, the conjugating element would normalize the Sylow subgroup $\langle x\rangle$ and hence have odd order (its order divides 55). This, however, is impossible, as any element of odd order acts trivially on a 2-element set.
No, such an isomorphism does not always exist, and the smallest counterexample is $G=C_5\rtimes C_4$ with $C_4$ acting faithfully. It is not hard to see that the only automorphisms of $G$ are inner, and that they cannot map an element of order 4 to its inverse.
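The conjugacy part of this claim (though not the statement that all automorphisms of $G$ are inner) is easy to verify by brute force; here is a sketch realizing $C_5\rtimes C_4$ as the group of affine maps $x\mapsto cx+d$ on $\mathbb{Z}/5$:

```python
# F20 = C5 ⋊ C4 realized as the affine maps x -> c*x + d (mod 5),
# with c in {1,2,3,4} and d in {0,...,4}; |F20| = 4*5 = 20.
def compose(f, g):
    """(f ∘ g) for affine maps: f = (a, b) means x -> a*x + b (mod 5)."""
    (a, b), (c, d) = f, g
    return (a * c % 5, (a * d + b) % 5)

def inverse(f):
    a, b = f
    a_inv = pow(a, -1, 5)  # modular inverse; pow(a, -1, m) needs Python >= 3.8
    return (a_inv, (-a_inv * b) % 5)

group = [(c, d) for c in (1, 2, 3, 4) for d in range(5)]
t = (2, 0)            # 2 has multiplicative order 4 mod 5, so t has order 4
t_inv = inverse(t)    # (3, 0), since 2*3 = 6 = 1 (mod 5)

conjugates = {compose(compose(h, t), inverse(h)) for h in group}
print(t_inv in conjugates)  # False: t is not conjugate to its inverse
```

Conjugation by any affine map preserves the multiplier $c$, so every conjugate of $t$ has multiplier 2 while $t^{-1}$ has multiplier 3, which is what the brute-force search confirms.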
Here's a comment which might as well be written down. If $f$ is required to be an inner automorphism, then for $G$ finite this question can be understood using the character table of $G$:
$x$ is conjugate to its inverse if and only if $\chi(x)$ is real for all characters $\chi$.
Since $\chi(x^{-1}) = \overline{ \chi(x) }$, one direction is clear. In the other direction, if $\chi(x)$ is real then $\chi(x) = \chi(x^{-1})$ for all characters $\chi$, hence $c(x) = c(x^{-1})$ for all class functions $c$; applying this to the indicator function of the conjugacy class of $x$ shows that $x$ and $x^{-1}$ are conjugate. One also has the following cute result: the number of conjugacy classes which are closed under inversion is equal to the number of irreducible characters all of whose values are real (equivalently, the number of self-dual irreps). Since there exist plenty of groups (even simple groups) whose character tables have complex entries, there are plenty of groups with elements not conjugate to their inverses.
This is one way to address the question for finite groups with no outer automorphisms. |
$$\large \sum_{n=1}^{\infty} \left(\dfrac{\zeta(2n)}{n} - \dfrac{1}{2n} - \dfrac{1}{2n+1} \right)$$
Find the value of the closed form of the above series.
Give your answer to 1 decimal place.
Notation: $\zeta(\cdot)$ denotes the Riemann zeta function.
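A numerical sanity check (my own sketch, using a rearrangement of the sum that converges fast enough to evaluate term by term):

```python
import math

# Split the summand:
#   zeta(2n)/n - 1/(2n) - 1/(2n+1)
#     = (zeta(2n) - 1)/n  +  (1/(2n) - 1/(2n+1)).
# Swapping the order of summation in the first piece gives
#   sum_{n>=1} (zeta(2n) - 1)/n = -sum_{k>=2} ln(1 - 1/k^2),
# whose terms decay like 1/k^2 and can be summed directly.
N = 200_000
part1 = -sum(math.log1p(-1.0 / (k * k)) for k in range(2, N))
part2 = sum(1.0 / (2 * n) - 1.0 / (2 * n + 1) for n in range(1, N))
total = part1 + part2
print(round(total, 4))
```

The printed value agrees with the requested one-decimal answer; analytically `part1` tends to $\ln 2$ (a telescoping product) and `part2` to $1-\ln 2$.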
|
We know that there are two types of risk: systematic and unsystematic. Systematic risk can be estimated through the calculation of $\beta$ in the CAPM formula. But how can we estimate unsystematic risk quantitatively? Is there any formula or calculation that can be related to the measurement of unsystematic risk?
I'm not sure about the "CAPM formula" that you are referring to.
I assume you are referring to the estimated coefficient of a regression of a security on a market portfolio. That is to say
\begin{equation} \beta_{security,market} = \frac{\sigma_{security,market}}{\sigma^2_{market}} \end{equation}
The idiosyncratic risk is the portion of risk unexplained by the market factor. The value of $1 - R^2$ of the regression will tell you this proportion.
Empirically, the idiosyncratic risk in a single-factor contemporaneous CAPM model with US equities is around 60-70%.
I would use the following identity and three-step process:
$$\textrm{Total Variance} = \textrm{Systematic Variance} + \textrm{Unsystematic Variance}$$
You can calculate systematic variance via:
$$\textrm{Systematic Risk} = \beta \cdot \sigma_\textrm{market} \Rightarrow \; \textrm{Systematic Variance} = (\textrm{Systematic Risk})^2$$
then you can rearrange the identity above to get:
$$\textrm{Unsystematic Variance} = \textrm{Total Variance} - \textrm{Systematic Variance}$$
Or if you want the number as "risk" (i.e. standard deviation), then:
$$\textrm{Unsystematic Risk} = \sqrt{(\textrm{Total Variance} - \textrm{Systematic Variance})}$$
NOTE: You're assuming here that the covariance of the unsystematic and systematic components is 0 (which in my experience holds up a good bit of the time).
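The three steps above as a small helper function (a sketch; the numbers in the example are invented):

```python
import math

def unsystematic_risk(total_sd, beta, market_sd):
    """Back out unsystematic risk (a standard deviation) from total risk,
    beta, and market risk, assuming zero covariance between the
    systematic and unsystematic components."""
    systematic_var = (beta * market_sd) ** 2
    unsystematic_var = total_sd ** 2 - systematic_var
    return math.sqrt(unsystematic_var)

# Example: 25% total volatility, beta 1.2 against a market with 15% volatility.
r = unsystematic_risk(0.25, 1.2, 0.15)
print(r)
```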
Do a regression where the stock return is the dependent variable and the market return is the independent variable. The value of $R^2$ gives the proportion of variance explained by systematic risk, and $1-R^2$ the unsystematic proportion.
If Y is the excess returns of your asset and X is that of the market, then CAPM tells you $Y = \beta X + \epsilon$ Taking the variance of both sides yields $$ \\ \sigma^2_{Y} = \beta^2 \sigma^2_{X} + \sigma^2_{\epsilon} \\ $$ We know that $$\beta = \frac{\sigma_{X,Y}}{\sigma^2_{X}} = \rho_{X,Y}\frac{\sigma_{Y}}{\sigma_{X}}$$ Where $\sigma_{X,Y}$ is the covariance and $\rho_{X,Y}$ the correlation. Hence, substituting for $\beta$ and solving for $\sigma^2_{\epsilon}$ we get: $$\sigma^2_{\epsilon}= \sigma^2_{Y}(1-\rho^2_{X,Y}) $$
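The same decomposition can be checked empirically with an OLS beta on synthetic data (a sketch; the return series and their parameters are made up):

```python
import random

random.seed(0)
n = 5000
market = [random.gauss(0.0, 0.01) for _ in range(n)]
beta_true = 1.3
# Synthetic asset: beta * market plus independent idiosyncratic noise.
asset = [beta_true * m + random.gauss(0.0, 0.007) for m in market]

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

beta_hat = cov(asset, market) / cov(market, market)
var_total = cov(asset, asset)
var_systematic = beta_hat ** 2 * cov(market, market)
var_idio = var_total - var_systematic   # sigma^2_epsilon in the notation above
print(beta_hat, var_idio)
```

With enough observations `beta_hat` recovers the true beta and `var_idio` the idiosyncratic variance (here $0.007^2$).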
I have studied unsystematic risk [USR] for more than two decades. In fact, I wrote a book (which is here) whose central focus is how to deal with USR in the valuation of non-public companies. It is a multifaceted, complex, and difficult issue. Modern Portfolio Theory did professionals in my line of work no favors when it assumed away the existence of USR, because few small-business owners hold diversified investment portfolios.
Actually, the value of $R^2$ is the percent of total risk explained by systematic risk. So you need to compute total risk, which is the standard deviation of your stock returns, and then annualize it (i.e. if your data is monthly, just multiply the sd you computed by the square root of 12), and then multiply it by $R^2$ to obtain your systematic risk. The rest is unsystematic.
For a company listed on a stock exchange, systematic risk (beta) can be calculated in Excel through the following steps:
1. compute the covariance of the stock's returns with the stock exchange index;
2. divide it by the variance of the stock exchange index.
For further details see http://en.wikipedia.org/wiki/Beta_(finance)
— Akhtar Rasheed, International Islamic University Islamabad, BBA 24(A)
I guess one can figure out the unsystematic risk by using the following formula:
$$\text{Unsystematic Risk} = [R_A - E(R_A)] - [R_M - E(R_M)] \cdot \beta$$
Where:
$R_A$ is the actual return on the asset
$E(R_A)$ is the expected return on the asset
$R_M$ is the actual return on the market
$E(R_M)$ is the expected return on the market
You can think of ACTUAL − EXPECTED as how far the actual returns deviate from the expected returns, i.e. the residuals.
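That deviation-of-deviations formula as a one-line helper (a sketch; the example numbers are invented):

```python
def unsystematic_return(r_a, e_r_a, r_m, e_r_m, beta):
    """Residual (unsystematic) component of a single period's return:
    the asset's surprise minus beta times the market's surprise."""
    return (r_a - e_r_a) - beta * (r_m - e_r_m)

# Asset returned 8% vs. 6% expected, while the market beat its own
# expectation by 1%; with beta = 1.5 the residual is 0.5%.
resid = unsystematic_return(0.08, 0.06, 0.04, 0.03, 1.5)
print(resid)
```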
The simple answer is to make an adjustment to the beta of the company. Let me give you an example: say beta is 1.0 and the correlation of the company with the market is 0.5 (i.e. 50% of the movement in the prices is explained by the market and the rest is because of some other reason). Now one thing is clear: if we somehow make this correlation equal to 1 (i.e. 100% of the movement is explained by the market itself) we can get the total risk.
So, total beta = total risk = beta/correlation(r) = 1/0.5 = 2.
Unsystematic risk of a single stock can be calculated as follows:
$$\sigma_\lambda-\rho_{\lambda,m}\sigma_\lambda=\sigma_\lambda(1-\rho_{\lambda,m})$$
where $\sigma_\lambda$ is the volatility of the stock $\lambda$ and $\rho_{\lambda,m}$ is the correlation between this stock and the market.
Written differently this is the same as:
$$\sigma_\lambda-\beta_\lambda\sigma_m$$
which means that the unsystematic risk of a single stock is its volatility minus its beta scaled by the market volatility.
Sources:
Cara M. Marshall (2015) Isolating the systematic and unsystematic components of a single stock’s (or portfolio’s) standard deviation, Applied Economics, 47:1, 1-11, DOI: 10.1080/00036846.2014.959652 Fabio Pizzutilo (2015): Isolating the systematic and unsystematic components of a single stock’s (or portfolio’s) standard deviation: a comment, Applied Economics, DOI: 10.1080/00036846.2015.1068925
I assume here you're trying to calculate the appraisal ratio, the measure of systematic risk-adjusted excess return relative to idiosyncratic risk. I also agree with a previous comment that the current trend is to call unsystematic risk either specific or idiosyncratic risk.
Specific risk equals the standard deviation of alpha, or alpha plus an error term. You can't really use any result with an error term ex ante, because you can't predict when a factory will blow up and such.
I think I saw a correct description of alpha earlier, but it is: $$r_P - [r_F + \beta_{PB}(r_B - r_F)]$$ where $r_P$ is the portfolio return, $r_F$ is the risk-free rate, $\beta_{PB}$ is the beta of the portfolio against the benchmark, and $r_B$ is the benchmark return. You can use $\beta_{PM}$ and $r_M$ (market measures rather than benchmark measures), but a portfolio manager should be able to beat his benchmark rather than a market index... unless he's an index manager.
I'm not sure how deep your desire to know this goes, but the benchmark should include all the securities from which the manager could select to implement his strategy in the weights appropriate to implement it. If he's just trying to beat the S&P500, use $r_M$ and $\beta_{PM}$.
Systematic risk and unsystematic risk:
1) When total risk is assumed to equal the standard deviation of the portfolio:
Systematic risk = $\beta$ × standard deviation of the market portfolio
And unsystematic risk = standard deviation of the portfolio − systematic risk (i.e. total risk − systematic risk)
|
In the Mass Effect series there's a terrestrial planet called Dekuuna with 10 times the mass of Earth, a surface gravity of 4G and a native intelligent race. I was just wondering if it was possible for a planet like that to exist in reality, and if yes, what are some factors that might lead to its formation assuming it formed around a Sun like star?
If $R$ is the planet's radius and $\rho$ is the planet's average density, then its surface gravity is $\propto \rho R$ and its mass is $\propto \rho R^3$.
Let's measure in units where the Earth's radius and average density are both 1. Then for Dekuuna we have $\rho R=4$ and $\rho R^3=10$. Therefore we get \begin{align} R &= \sqrt{\frac{\rho R^3}{\rho R}} = \sqrt{\frac{10}{4}} \approx 1.6\\ \rho &= \sqrt{\frac{(\rho R)^3}{\rho R^3}} = \sqrt{\frac{64}{10}} \approx 2.5 \end{align} So that planet would have a radius of about 1.6 times the Earth's radius and an average density of about 2.5 times the Earth's average density. The radius is definitely possible, and I think the density should be, too. To begin with, the higher radius and mass would already cause an increased density, although that alone will not get a factor 2.5. But then, the planet could have a relatively bigger core, and it might have more heavy elements in its core.
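The arithmetic above can be reproduced in a few lines (working in units where Earth's radius and mean density are both 1):

```python
import math

surface_gravity = 4.0   # rho * R in Earth units
mass = 10.0             # rho * R**3 in Earth units

R = math.sqrt(mass / surface_gravity)          # ~1.58 Earth radii
rho = math.sqrt(surface_gravity ** 3 / mass)   # ~2.53 Earth densities

# Consistency: the pair (rho, R) reproduces both inputs.
assert abs(rho * R - surface_gravity) < 1e-12
assert abs(rho * R ** 3 - mass) < 1e-12
print(R, rho)
```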
I also think 4 times earth gravity should not preclude the evolution of intelligent life.
I'm going to work off of celtschk's excellent answer, which correctly comes up with a radius of $\sim1.6 R_{\oplus}$ and a density of $\sim2.5\rho_{\oplus}$, where $_{\oplus}$ denotes Earth. If we look at the mass-radius curves of Mocquet et al. (2014), we see that the planet lies very close to the line for pure iron planets:
Some fun facts about iron planets:
- They're essentially just cores of terrestrial planets.
- They likely cannot hold water.
- They have no tectonic activity or magnetic field.
- They may be close to their parent star, meaning surface temperatures will be extremely high.
This doesn't seem like a very pleasant place for life.
The earlier answers have done this better, but I found a mass/gravity/distance-from-center calculator at http://www.ajdesigner.com/phpgravity/gravity_acceleration_equation_planet_mass.php#ajscroll and ran some rough numbers through it. After a little trial and error, it looks as though a planet 10x as massive as Earth would have a surface gravity of 4G if its radius was approximately 10,106 km. That would make it about 2.55 (again, rough math) times as dense as Earth, at about 13.85 grams per cubic centimeter. That falls between the elements americium (13.67 g/cc) and berkelium (14.78 g/cc). Both of those are radioactive with relatively short half-lives - not sure if they decay into something just as dense? My guess is that such a planet couldn't exist in reality, unless it was created and maintained by some highly tech-advanced race. Even so, I would think that the probable radioactivity of the thing would preclude the existence of intelligent life as we know it. |
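As a cross-check of the rough numbers in this answer, one can compute $g=GM/R^2$ and the implied density directly (the approximate Earth constants are my additions):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

M = 10.0 * M_earth
R = 10106e3          # the radius quoted in this answer, in metres

# Surface gravity relative to Earth, and mean density from M and R.
g_ratio = (G * M / R ** 2) / (G * M_earth / R_earth ** 2)
density = M / (4.0 / 3.0 * math.pi * R ** 3)   # kg/m^3

print(g_ratio, density / 1000.0)  # ~4 g's and ~13.8 g/cm^3
```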
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of an bubblesort:FOR i := 0 TO arraylength(list) STEP 1switched := falseFOR j := 0 TO arraylength(list)-(i+1) STEP 1IF list[j] > list[j + 1] THENswitch(list,j,j+1)switched := trueENDIFNEXTIF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction. In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay. In the instruction pipeline, where the fetching is more of a problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436, however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science? Example: "How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar: $S \rightarrow a a$, $S \rightarrow b b$, $S \rightarrow a S a$, $S \rightarrow b S b$. EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children. Inductive LTree : Set := Node : list LTree -> LTree. The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic? I see four main classes of questions: Modeling a problem in a formal setting; going from the object of study to the definitions and theorems. Proving theorems in a way that can be automated in the chosen formal setting. Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS; examples include: Computer architecture (Operating system, Compiler design, Programming language design), Software engineering, Artificial intelligence, Computer graphics, Computer security. Source: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes), with an assumed global counting variable. So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automaton. But, apparently, Buchi automata are a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because, as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
For general discussion about Conway's Game of Life.
https://catagolue.appspot.com/haul/b3s/C1?committed=2
This would suggest that Catagolue does not accept hauls containing zero objects. This greatly skews the statistics for this rule, because, from the times of submission, it appears that most hauls produce zero objects, meaning that the results displayed on Catagolue show a much greater frequency of objects than actually exists.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}{2}$$
$$P_a=1-\frac{1}{\int^\infty_{t_0}p(t)^{l(t)}\,dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
In my computer, parallelization makes apgmera run slower. Are you sure apgmera can be parallelized properly?
I am using Intel Core i7-2670QM, and Windows 10 Pro 64bit.
Call me "Dannyu NDos" in Forum. Call me "Park Shinhwan"(박신환) in Wiki.
David wrote: In my computer, parallelization makes apgmera run slower. Are you sure apgmera can be parallelized properly? I am using Intel Core i7-2670QM, and Windows 10 Pro 64bit.
Yes, Windows has some difficulty with OpenMP parallelisation (unlike Linux, for instance). A suggested workaround is to run N single-core instances of apgmera; this is what biggiemac did on his Windows machines.
What do you do with ill crystallographers? Take them to the
! mono-clinic
calcyman wrote: Yes, Windows has some difficulty with OpenMP parallelisation (unlike Linux, for instance). A suggested workaround is to run N single-core instances of apgmera; this is what biggiemac did on his Windows machines.
Well, that is parallel, but not concurrent. I can't expect any speed improvement from it.
What about using the standard parallelized algorithms?
David wrote: Well, that is parallel, but not concurrent. I can't expect any speed improvement from it.
I don't understand -- if you have n instances running on separate cores, then you'll produce n times as many soups per unit time.
calcyman wrote: I don't understand -- if you have n instances running on separate cores, then you'll produce n times as many soups per unit time.
Suppose that the time complexity of apgmera is O(f(n)). As I have 4 cores, the time complexity would be O(f(n) / 4), but by the definition of big O notation, the time complexity falls back to O(f(n)).
If you use concurrent algorithms, I can expect an improvement in time complexity. For example, a sorting algorithm's time complexity would be O(n log n) if it is neither parallel nor concurrent, but it could be O(log³ n), O(log² n), or even O(log n) if it is parallel and concurrent.
You can use the thread support library, or if possible, the parallelism TS.
All apgsearch is doing is many independent iterations of the same task - evolve random soup and census objects. The operative word there is "independent," which differentiates it from the sorting algorithm you mentioned. Soups need to share no information. As such, there is no way to have a better asymptotic complexity using a finite number of cores, just speed things up by a constant factor as Adam mentioned.
Physics: sophistication from simplicity.
calcyman wrote: What is n?
biggiemac wrote: All apgsearch is doing is many independent iterations of the same task - evolve random soup and census objects. The operative word there is "independent," which differentiates it from the sorting algorithm you mentioned. Soups need to share no information. As such, there is no way to have a better asymptotic complexity using a finite number of cores, just speed things up by a constant factor as Adam mentioned.
Whatever n is, the point is the time complexity cannot be improved by just speeding things up by a constant factor. But within a single soup, there might be a way to have a better complexity. Generate a random soup with parallelism and concurrency, evolve it with parallelism and concurrency, census objects with parallelism and concurrency, etc.
A soup takes O(1) time to run; the only possible improvements are those that reduce the constant factor.
Also (speaking of algorithms more generally), if you have k cores, you can't get any better than a k-fold time improvement over a single-core machine. In many cases, even that is optimistic (see Amdahl's law).
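Amdahl's law, mentioned above, is easy to illustrate with a short calculation. This is a generic sketch (not apgmera code); the function name is ours:

```python
def amdahl_speedup(p, k):
    """Amdahl's law: upper bound on speedup with k cores when a
    fraction p of the runtime is parallelisable."""
    return 1.0 / ((1.0 - p) + p / k)

# Perfectly parallel work (p = 1) scales linearly with cores...
print(amdahl_speedup(1.0, 4))       # 4.0
# ...but a 10% serial portion caps the speedup below 10x, forever:
print(amdahl_speedup(0.9, 4))       # ~3.08
print(amdahl_speedup(0.9, 10**6))   # ~10.0
```

The independent-soups case is the p = 1 row: with no shared state, k cores give a k-fold constant-factor speedup, and nothing better.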
Hello, I am now a Linux user. Specifically, Linux Mint 17.3 'Rosa', MATE 64-bit.
I've installed apgmera and tried to compile it, but I get the following error message:
Code: Select all
ndos@AcerAspireONE ~/바탕화면/apgmera $ bash recompile.sh
Skipping updates; use --update to update apgmera automatically.
Rule unspecified; assuming b3s23.
Symmetry unspecified; assuming C1.
Configuring rule b3s23; symmetry C1
Valid rulestring: b3s23
Valid symmetry: C1
Rule integer: 6152
Rule circuit: [-131-124-450-014-672]
Rule integer: 6152
Rule circuit: [-131-124-450-014-672]
Rule integer: 6152
Rule circuit: [-131-124-450-014-672]
Success!
g++ -c -Wall -O3 -march=native -fopenmp -DUSE_OPEN_MP main.cpp -o main.o
make: g++: 명령을 찾지 못했음
make: *** [main.o] 오류 127
The last two lines can be translated to:
Code: Select all
make: g++: Couldn't find the command.
make: *** [main.o] Error 127
Call me "Dannyu NDos" in Forum. Call me "Park Shinhwan"(박신환) in Wiki.
Rich Holmes wrote: You appear not to have the g++ compiler installed. Try
Code: Select all
sudo apt-get update && sudo apt-get install build-essential
Well, since I had already updated the system, I typed only "sudo apt-get install build-essential", and that worked. Thank you. And I can now do parallel processing practically.
Professor and cloverleaf interchange haven't been named in catagolue yet.
Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace!
It appears Apgsearch doesn't separate pseudo still lifes properly:
https://catagolue.appspot.com/object/xs ... zx11/b3s23 The two boats should be separated as well as the other two.
David wrote: It appears Apgsearch doesn't separate pseudo still lifes properly: https://catagolue.appspot.com/object/xs ... zx11/b3s23 The two boats should be separated as well as the other two.
That is... strange, to say the least.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
I have a vague idea: use apgsearch to generate a random string, encode it into a QR code, then treat it like a normal soup in apgsearch.
Current status: outside the continent of cellular automata. Specifically, not on the plain of life.
An awesome gun firing cool spaceships:
Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry
2bo$2o$o$obo$b2o!
GUYTU6J wrote: I have a vague idea: use apgsearch to generate a random string, encode it into a QR code, then treat it like a normal soup in apgsearch.
That was my original idea, but I chose SHA-256 for cryptographic security: it's impossible to reverse-engineer (say) a loafer into a trivial predecessor and submit it as part of a haul.
calcyman wrote: That was my original idea, but I chose SHA-256 for cryptographic security: it's impossible to reverse-engineer (say) a loafer into a trivial predecessor and submit it as part of a haul.
No, that is not his idea, I think. I think he means to generate a random string, make a QR code with that random string, and use the QR code as the soup.
Airy Clave White It Nay
(Check gen 2)
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
Saka wrote: No, that is not his idea, I think. I think he means to generate a random string, make a QR code with that random string, and use the QR code as the soup.
Yes, that's what I meant. But you can construct a QR code which is also a loafer predecessor, and then compute the string which yields that particular QR code. Then submit a haul to Catagolue containing that string, and the census will mistakenly believe that a loafer has occurred naturally.
calcyman wrote: Yes, that's what I meant. But you can construct a QR code which is also a loafer predecessor, and then compute the string which yields that particular QR code. Then submit a haul to Catagolue containing that string, and the census will mistakenly believe that a loafer has occurred naturally.
Ah, I see
How many distinct 16x16 soups are there of the two different x8 symmetries?
I calculate 2^36, or about 70B. Haven't we already covered most of this space, with high probability? We are above the 100B mark for both symmetries. Or is this just well known and understood and people are focusing on other symmetries?
rokicki wrote: How many distinct 16x16 soups are there of the two different x8 symmetries?
We use 32x32 for those, though.
Edit: Ah, disregard, I'm an idiot.
Last edited by drc on December 7th, 2016, 6:01 pm, edited 1 time in total.
drc wrote: We use 32x32 for those, though.
That's an important point: All symmetries (excl. D8, I think) are based off of a 16x16 soup with additional copies translated, rotated, and superimposed upon each other accordingly.
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
Is there any limit to the number of soups? I've been running apgsearch for Day & Night [b3678s34678].
The haul of 10000 soups is uploaded, but the haul of 100000000 soups is not.
Given an infinite number of samples $(N)$, a higher (or lower) number of samples $(cN)$ can be derived using sinc interpolation followed by sampling. How can this be applied to finite length signals?
With $\mathrm{sinc}$ interpolation, one can derive a continuous-time signal as:
$$y(t) = \sum^{\infty}_{n=-\infty} y[n]\mathrm{sinc}\left({t\over T}-n\right)$$
For a finite number of sample points, should (can) we consider the $x[n]$ in the picture as
$$y[n] = \begin{cases} x[n], & \text{if } n \in [0, N-1] \\0, & \text{otherwise} \end{cases} $$
Or should $y[n]$ be considered as a periodic version of $x[n]$? (This link briefly addresses the same. The original stated form cannot be directly used with periodic signals)
$$y[n] = x\left[n\pmod N\right]$$
In the first consideration, outside the region $[0,\ N-1]$, if I understand correctly, the Gibbs phenomenon would result in a ringing effect. Would this
completely invalidate any values predicted outside the non-zero region or is it only that the degree of inconsistency is high? (More specifically for points close to but just outside the boundary in the interpolated continuous-time signal)
I am interested to know whether the addition of zeros would pollute the input set of points during the interpolation stage. |
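For what it's worth, the two candidate extensions can be compared numerically. The sketch below (pure Python; function names are ours, and the periodic sum is truncated to a finite number of periods) evaluates $y(t) = \sum_n y[n]\,\mathrm{sinc}(t/T - n)$ for both choices of $y[n]$. At the original sample instants both extensions reproduce $x[n]$; the differences only appear between and outside the samples:

```python
import math

def sinc(u):
    # normalised sinc: sin(pi*u)/(pi*u), with sinc(0) = 1
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def sinc_interp(x, times, T=1.0, mode="zero", periods=50):
    """Evaluate y(t) = sum_n y[n] sinc(t/T - n) for a finite x.
    mode="zero":     y[n] = x[n] on [0, N-1], 0 elsewhere
    mode="periodic": y[n] = x[n mod N], truncated to `periods` copies per side
    """
    N = len(x)
    if mode == "zero":
        terms = list(enumerate(x))
    else:
        terms = [(n, x[n % N]) for n in range(-periods * N, (periods + 1) * N)]
    return [sum(xn * sinc(t / T - n) for n, xn in terms) for t in times]

x = [0.0, 1.0, 0.0, -1.0]
print(sinc_interp(x, [0, 1, 2, 3], mode="zero"))  # ≈ [0.0, 1.0, 0.0, -1.0]
```

Evaluating at non-integer times (e.g. `t = 4.5`) then shows how far the zero-extended reconstruction decays outside $[0, N-1]$ while the periodic one keeps oscillating.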
If you consider particular enough models, the assignment of the allowed hypercharges may be easily calculated. The models themselves are constrained by various additional conditions such as anomaly cancellation.
For example, take an $SO(10)$ Grand Unified Theory. In that theory, all the fermions arise from the chiral spinorial ${\bf 16}$ representations of $SO(10)$ - or $spin(10)$, to be more accurate. It's the simplest case relevant for the Standard Model where it is easy to show that there are no anomalies carried by this ${\bf 16}$ representation: it boils down to the vanishing of $$\mbox{Tr}\left(\gamma_{ab} \{\gamma_{cd},\gamma_{ef}\}\right)$$for all $a,b,c,d,e,f=1,\dots, 10$ which is trivial to check.
The weights of ${\bf 16}$ under $SO(10)$ are$$ (\pm \frac12, \pm \frac12, \pm \frac12, \pm \frac12, \pm \frac12) $$with an even number of pluses (and odd number of minuses) - or vice versa, for the antifermions. The hypercharge is the inner product of the weight of a state with the generator$$ (+2,+2,+2,-3,-3)/3 $$The first three entries are linked to the three colors; the last two entries are associated with the weak doublets. The sum of the coordinates above had to vanish because the hypercharge is also a generator of $SU(5)$ where all generators have a vanishing trace (the sum of 5 diagonal elements) because of the $S$.
Now, take all the inner products. They will be sums of $k_1=-3,-1,+1$, or $+3$ terms $(+2/3)\cdot (+1/2)=1/3$, and $k_2=-2, 0$, or $+2$ times $(+3/3)\cdot(+1/2)=1/2$. List all $4\times 3$ possible values of$$ k_1\cdot \frac 13 + k_2\cdot \frac 12 $$for the allowed values of $k_1,k_2$ and you will get exactly the 11+1 possibilities you listed (zero will appear twice because $SO(10)$ also predicts the right-handed neutrino).
This is the derivation of your list from a single representation, the ${\bf 16}$ of $SO(10)$. It's completely inherited by $SU(5)$ and the Standard Model - without the right-handed neutrino. (Some nonzero values of the hypercharge also appear multiple times - for several basis vectors - e.g. quarks appear thrice.)
Note that the maximum hypercharge I could have obtained from the inner products was $$(2+2+2+3+3)/3/2=6/3$$I could have obtained any smaller multiple of $1/3$ because the terms $2/3$ and $3/3$ may conspire in various ways. However, I can't reduce $6/3$ just by $1/3$ because the minimum amounts I can subtract by changing a sign are $2/3$ or $3/3$. |
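The enumeration above is small enough to check mechanically. A quick sketch in exact rational arithmetic, using the weight and generator conventions of this answer (enumerating all $2^5$ sign choices, i.e. ${\bf 16}\oplus\overline{\bf 16}$):

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
# hypercharge generator (+2, +2, +2, -3, -3)/3
Y_gen = [Fraction(2, 3)] * 3 + [Fraction(-1)] * 2

# all sign choices for the weights (±1/2, ±1/2, ±1/2, ±1/2, ±1/2)
charges = sorted({
    sum(s * half * g for s, g in zip(signs, Y_gen))
    for signs in product((1, -1), repeat=5)
})
print(charges)  # 11 distinct values from -2 to 2, in steps that multiples
                # of 1/3 can reach; 0 occurs for two different weights
```

The output is the expected ladder $-2, -4/3, -1, -2/3, -1/3, 0, 1/3, 2/3, 1, 4/3, 2$: eleven distinct hypercharges, with zero realised twice, matching the "11+1" count.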
STMicro describes the IIS3DHHC as low-noise and high-stability, important factors in an IC aiming to suit precision applications. According to its datasheet, it could be used in leveling, incline measurement, and positioning of systems such as antennas.
Graphic depicting the positive accelerations in three orthogonal axes.
Measurement Details and Possible Markets
This triaxial accelerometer from STMicroelectronics has a ±2.5 g (g = 9.807 m/s²) range and a 16-bit data output—that provides a minimum acceleration detection of $$\frac{5\cdot 9.807 \; \tfrac{\text{m}}{\text{s}^2}}{2^{16} \; \text{bits}}\approx7.482\times10^{−4} \; \tfrac{\text{m}}{\text{s}^2}$$.
If used as an inclinometer, that corresponds to approximately 12-16 arc-seconds of division; 16 arc-seconds is an error of 5 inches over a 1-mile distance. To put it another way, if you attached this IC to a meter-stick, and then placed a single human hair under one end of the meter stick, this accelerometer should be able to detect that the angle of inclination has changed.
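The arithmetic behind those figures can be reproduced directly; the constants below are the datasheet numbers quoted above, and the small-angle step uses $da \approx g\,d\theta$ near level:

```python
import math

FS = 2.5      # full-scale range, ±2.5 g
BITS = 16
G = 9.807     # m/s² per g

lsb_g = 2 * FS / 2**BITS       # one count ≈ 76.3 µg
lsb_ms2 = lsb_g * G            # ≈ 7.482e-4 m/s²

# Near level, a tilt dθ produces da ≈ g·dθ, so one count corresponds to
# lsb_g radians of inclination:
tilt_arcsec = math.degrees(lsb_g) * 3600           # ≈ 15.7 arc-seconds

# 16 arc-seconds of slope over one mile, expressed in inches:
drop_inches = math.tan(math.radians(16 / 3600)) * 5280 * 12   # ≈ 4.9 in

print(lsb_ms2, tilt_arcsec, drop_inches)
```

This recovers the ≈7.482×10⁻⁴ m/s² resolution, the mid-teens arc-second tilt step, and the roughly 5-inch-per-mile figure.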
Unfortunately, this accelerometer has 2% full-scale nonlinearity, which means that, depending on the angle, the measurements can be off by as much as ±50 mg. In other words, the device can notice that a hair has been placed under the end of the meter stick, but the measurement might be off by the width of a baseball.
This level of precision allows this IC to act as an accurate inclinometer for antenna aiming, platform leveling, or machinery feedback loops. For reference, a NEMA17 stepper motor similar to the type used in 3D printing has a minimum resolution of around a dozen arc-minutes (using 1/16 step divisions.) And a Starrett machinist level has a precision of 80-90 arc-seconds.
This device might also find a market in determining building and structure health—a sagging bridge or a leaning building can be diagnosed long before cracks or other failures show. Or perhaps seismologists might use it to track shifting ground around an active volcano.
Device Footprint and Pin Function
Footprint for the IIS3DHHC
ST chose an interesting footprint for this device—the ceramic cavity land grid array (CC LGA). The design is similar to that of a quad flat no-leads (QFN) package except that pin 1 starts in the middle of the device (instead of at a corner), and the pins at the corners have mitered edges to allow for a smaller overall package.
The ceramic cavity LGA-16 package
Fully half of the pins (9-16) are connected to ground and another two provide input power (7-8). This leaves four pins for the Serial Peripheral Interface and two pins to act as programmable interrupts.
Inside the IC
The triaxial accelerometers send data to an analog-to-digital converter (ADC), which passes data to a digital filter. The filtered data is sent to a FIFO buffer or directly to the SPI.
Signal processing within the IIS3DHHC.
The low-pass filter can be configured to act as a finite impulse response (FIR) or an infinite impulse response (IIR) filter with a cutoff frequency of either 235 Hz or 440 Hz. The output data is updated at 1100 Hz, i.e., the device produces 1100 acceleration data points per second.
Interrupt pins can be configured to notify a host processor that the FIFO threshold has been reached, that the FIFO buffer is full, or that a buffer overrun has occurred. The FIFO threshold level is configurable (0 to 31).
The FIFO can be configured in a variety of modes. “Bypass mode” does not use the FIFO memory; data is sent directly to the SPI. “FIFO mode” passes data to the buffer until the buffer is full—then it stops collecting new data until that old data is offloaded by the host processor. In “continuous mode,” the FIFO memory will fill and then begin overwriting previously stored data. A fourth mode, “continuous-to-FIFO,” allows the host processor to switch between continuous and FIFO mode by using the first interrupt pins as an input, rather than an output. Finally, “bypass-to-continuous mode” switches between bypass and continuous modes based on the value of that same interrupt pin.
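The difference between "FIFO mode" and "continuous mode" is easy to model with a ring buffer. This toy sketch is ours (it is not the device's register interface), but it shows the two fill behaviours:

```python
from collections import deque

class ToyFifo:
    """32-sample buffer with the two main fill behaviours described above."""
    def __init__(self, mode, depth=32):
        self.mode, self.depth, self.buf = mode, depth, deque()

    def push(self, sample):
        if len(self.buf) < self.depth:
            self.buf.append(sample)
        elif self.mode == "continuous":
            self.buf.popleft()   # overwrite the oldest sample
            self.buf.append(sample)
        # "fifo": once full, new samples are dropped until the host reads

f, c = ToyFifo("fifo"), ToyFifo("continuous")
for s in range(100):
    f.push(s)
    c.push(s)
print(list(f.buf)[:3], list(c.buf)[:3])   # [0, 1, 2] [68, 69, 70]
```

After 100 pushes, "fifo" still holds the first 32 samples (it stopped collecting), while "continuous" holds the most recent 32, which is exactly the trade-off the mode bits select.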
A built-in temperature sensor can provide 12-bit ambient temperature measurements in two’s complement format. The temperature sensor is intended to provide internal acceleration measurement compensation for the device. You can also use the temperature data to perform external compensation in a host microcontroller.
But compensating via a microcontroller isn’t terrifically straightforward since the relationship between temperature and acceleration offsets is not stated. Datasheets from other manufacturers will often provide a second or third-order polynomial that describes offset compensation. You should also know that the temperature data does not update as frequently as the acceleration data (it does not really need to since temperature is more stable than acceleration). Expect an update rate of approximately 68 Hz. The device has a stated stability of <0.4 mg/°C.
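For illustration, external compensation in a host MCU typically looks like the polynomial correction below. The coefficients are invented placeholders (the IIS3DHHC datasheet does not publish them; they were chosen to sit under the stated <0.4 mg/°C bound), so a real design would fit them per unit:

```python
def compensate(a_g, temp_c, t_ref=25.0, coeffs=(-0.0003, 1e-6)):
    """Subtract a fitted polynomial temperature offset (in g) from a raw
    reading. coeffs = (c1, c2) gives offset = c1*dT + c2*dT**2; these
    are hypothetical example values, not datasheet figures."""
    dT = temp_c - t_ref
    offset = sum(c * dT ** (i + 1) for i, c in enumerate(coeffs))
    return a_g - offset

print(compensate(1.0, 25.0))   # 1.0 — no correction at the reference temp
```

At 35 °C this sketch would remove a ~2.9 mg offset; with real fitted coefficients the same two lines of math run at the (slower) ~68 Hz temperature update rate.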
Calibration
ST provides “design tip” DT0053, which describes a method of performing a six-point tumble sensor calibration. The sensor is mounted in its final enclosure (provided that the enclosure has right-angle sides) and moved from position to position to collect values. The values are used to calculate cross-axis gains (for example, perhaps the x-axis measurement is affected partially by the z-axis measurement), as well as calculate acceleration offsets.
6-point tumble calibration from DT0053
This method determines any individual axis offsets and cross-axis gains and uses them to generate a transformation matrix. Then all measured accelerations $A_n$ are fed into the matrix to generate new acceleration values $A_n'$ that more closely coincide with the real-world acceleration.
$$\left(\begin{array}{c} A_X' \\ A_Y' \\ A_Z' \end{array}\right)=\left(\begin{array}{cccc} A_{X\text{gain}} & A_{Y \to X} & A_{Z \to X} & A_{X\text{offset}} \\ A_{X \to Y} & A_{Y\text{gain}} & A_{Z \to Y} & A_{Y\text{offset}} \\ A_{X \to Z} & A_{Y \to Z} & A_{Z\text{gain}} & A_{Z\text{offset}} \end{array}\right)\left(\begin{array}{c} A_X \\ A_Y \\ A_Z \\ 1 \end{array}\right)$$
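Applying the fitted matrix in a host processor is a single matrix-vector product on the homogeneous vector $(A_X, A_Y, A_Z, 1)$. A minimal sketch; the matrix values here are made-up examples for illustration, not numbers from DT0053:

```python
def apply_cal(M, accel):
    """M: 3x4 calibration matrix; accel: (ax, ay, az) raw reading in g."""
    v = (*accel, 1.0)   # homogeneous coordinates so offsets ride along
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(3))

# Hypothetical small cross-axis gains and offsets:
M = [
    [1.002, 0.010, 0.000, -0.004],   # x gain, y→x, z→x, x offset
    [0.008, 0.998, 0.000,  0.002],   # x→y, y gain, z→y, y offset
    [0.000, 0.000, 1.001,  0.000],   # x→z, y→z, z gain, z offset
]
print(apply_cal(M, (0.0, 0.0, 1.0)))   # ≈ (-0.004, 0.002, 1.001)
```

The appended 1 in the input vector is what lets a single 3×4 matrix carry both the gain/cross-axis terms and the additive offsets.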
ST also provides the IIS3DHHC carrier board to allow you to experiment with the sensor before you integrate it into a final design.
The evaluation board from the IIS3DHHC. Image from STMicro.com
This EVM is a simple carrier board that is designed to plug into the STEVAL-MKI109V3 motherboard.
STMicro’s software libraries should allow a user to get up and running rather quickly. Alternatively, users can interface the IC to a host microprocessor of their choosing—SPI has been around for so long that just about everyone knows how to use it.
STEVAL-MKI109V3 motherboard can interface directly to the IIS3DHHC eval kit.
What's Missing?
I personally would like to see some graphs that illustrate the accelerometer linearity over the measurement range. The datasheet mentions that the device’s non-linearity is 2% FS. The full-scale range is ±2.5g, and 2% of that is ±50 mg, which is a non-trivial amount. That corresponds to around 10 of the 16 measurement bits that drift away from the actual value.
Without knowing which way the device drifts, which orientations are most impacted, by how much, and in which directions, it becomes very difficult for an engineer to use this device correctly in applications that measure absolute inclination. As an example, an angle gauge for a CNC machine might be accurate at 0° pitch, but inaccurate when reading a 45° pitch (when the machine is actually several degrees from that position).
What types of products have you found 16-bit accelerometers in? Let us know below. |
Problem 435
Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$.
Define the map $f:\R^2 \to \calF[0, 2\pi]$ by \[\left(\, f\left(\, \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\] We put \[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\]
(a) Prove that the map $f$ is a linear transformation.
(b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$.
(c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.)
(d) Define a map $g:V \to V$ by \[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\] Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$.
(Kyoto University, Linear Algebra exam problem)
An introduction on using arrays can be found here.
Whether taught formally in school or not, the properties that apply to numbers in operations are encountered by children during their learning of mathematics. A sound understanding of these properties provides a good basis for developing operations, including mental calculation. The operation of addition comes easily to most children, but working with multiplication requires more sophisticated thinking and therefore usually needs more support. Modelling number properties involving multiplication using an array of objects not only allows children to represent their thinking with concrete materials, but it can also assist the children to form useful mental pictures to support memory and reasoning.
Commutative property
The commutative property of multiplication can be neatly illustrated using an array. For example, the array above could be read as $2$ rows of $6$, or as $6$ columns of $2$. Or the array could be physically turned around to show that $2$ rows of $6$ has the same number as $6$ rows of $2$. Regardless of the way you look at it, there remain $12$ objects. Therefore, the array illustrates that $2 \times 6 = 6 \times 2$, which is an example of the commutative property for multiplication. Being able to apply the commutative property means that the number of multiplication facts that have to be memorised is halved.
Division as the Inverse Operation of Multiplication
Of the four operations, division is the most troublesome for young students. Full understanding of division tends to lag well behind the other operations. For many children opportunities to explore the concept with concrete materials are curtailed well before they perceive the relationships between division and the other operations. One such relationship, the inverse relationship between division and multiplication, can be effectively illustrated using arrays.
For example; $3 \times 5 = 15$ ($3$ rows of $5$ make $15$), can be represented by the following array. Looking at the array differently reveals the inverse, that is; $15 \div 3 = 5$ ($15$ put into $3$ rows makes $5$ columns - or $5$ in each row). Language clearly plays an important role in being able to express the mathematical relationships and the physical array supports this aspect of understanding by giving the students something concrete to talk about.
Placing the mathematics into a real-life context through word problems can facilitate both understanding of the relationship and its expression through words. For example, "The gardener planted $3$ rows of $5$ seeds. How many seeds did she plant?" poses quite a different problem to "The gardener planted $15$ seeds in $3$ equal rows. How many seeds in each row?" yet both these word problems can be modelled using the same array.
Further exploration of the array reveals two more ways of expressing inverse relationships: $5 \times 3 = 15$ and $15 \div 5 = 3$. The word problems can be adapted to describe these operations and highlight the similarities and differences between the four expressions modelled by the one array.
Distributive property of multiplication over addition
This rather long title not only names one of the basic properties that govern our number system, it also names a personally invented mental strategy that many people regularly use. This strategy often comes into play when we try to recall one of the handful of multiplication facts that, for various reasons, are difficult to remember. For example, does this kind of thinking seem familiar?
"I know $7 \times 7$ is $49$. I need two more lots of $7$, which is $14$. So if I add $49$ and $14$... that makes $63$. Ah yes! $7 \times 9 = 63$." Symbolically, this process can be represented as... \begin{eqnarray} 7 \times 9 &=& 7 \times (7 + 2)\\ &=& (7 \times 7)+(7 \times 2) \\ &=& 49 + 14 \\ &=& 63 \end{eqnarray} Another way to explain this process is through an array. The whole array represents $7 \times 9$ ($7$ rows of $9$). The smaller array to the left of the line shows $7 \times 7$ ($7$ rows of $7$). The small array to the right of the line shows $7 \times 2$ ($7$ rows of $2$). It can now be easily seen that $7 \times 9$ is the same as $(7 \times 7) + (7 \times 2)$, which leads to $49 + 14 = 63$.
A slightly different approach to looking at this partitioned array fully illustrates the distributive property by highlighting the first step of splitting the $9$ into $7 + 2$, before the multiplying begins. With the partition line in place, each individual row of the whole array represents $9 = 7 + 2$. Therefore, all $7$ rows represent $7 \times (7 + 2)$, and as can be seen on the array, this is the same as $(7 \times 7) + (7 \times 2)$.
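The partitioned-array picture can be checked with a throwaway count (my own sketch, just tallying cells of the grid described above):

```python
# Count a 7-by-9 array of objects whole, and as a 7-by-7 part plus a
# 7-by-2 part split by the partition line, to see the distributive
# property 7 * 9 = (7 * 7) + (7 * 2) in action.

rows, cols = 7, 9
split = 7  # column position of the partition line

whole = sum(1 for r in range(rows) for c in range(cols))
left = sum(1 for r in range(rows) for c in range(split))
right = sum(1 for r in range(rows) for c in range(split, cols))

print(whole, "=", left, "+", right)  # 63 = 49 + 14
```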
Positronium consists of an electron and a positron. By what factor is a positronium atom bigger than a hydrogen atom?
The solution has been explained to me. The characteristic length when solving for the energy levels of the hydrogenic atom is the Bohr radius: $$a_0 \equiv \frac{4 \pi \epsilon_0 \hbar^2}{\mu e^2}$$
For positronium, we can calculate the reduced mass, $\mu$:
\begin{equation} \mu = \frac{m_1 m_2}{m_1 + m_2} = \frac{m_e m_e}{m_e + m_e} = \frac{1}{2}m_e \end{equation}
giving a reduced mass of about half that for hydrogen. Therefore the Bohr radius for positronium is almost twice that of hydrogen.
However this is the distance between the two particles, rather than the distance of the electron from the centre of rotation. The size of the atom is double the latter since the atom is symmetric, meaning that an atom of positronium is in fact the same size as a hydrogen atom.
My question is:
Does this mean that an atom of, say, muonium will also be the same size as a hydrogen atom, or was the positronium atom a special case because the two particles have exactly the same mass?
If we calculate $\mu$ for muonium, we get a value of
\begin{equation} \mu = \frac{m_1 m_2}{m_1 + m_2} = \frac{m_\mu m_e}{m_\mu + m_e} = 0.995 m_e \end{equation}
So the Bohr radius for a muonium atom is $\frac{1}{0.995} = 1.005$ times larger than that of a hydrogen atom.
But this, again, is the distance between the two particles rather than the size of the atom.
So we multiply by $\frac{\mu}{m_e}$ again to get the distance of the electron to the barycenter of the system (as we did for positronium). We end up cancelling our previous factor of $\frac{m_e}{\mu}$, giving us the result of muonium being the same size as hydrogen.
This seems wrong! |
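For concreteness, here is a quick numeric sketch of the reduced masses involved (my own illustration; the masses are standard approximate values in units of the electron mass, and `reduced_mass` is a name I made up):

```python
# Compare reduced masses for hydrogen, positronium and muonium.
# The Bohr radius scales as 1/mu, so mu fixes the electron-nucleus
# separation relative to hydrogen.

def reduced_mass(m1, m2):
    """Reduced mass mu = m1*m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

m_e = 1.0        # electron mass (our unit)
m_p = 1836.15    # proton mass, approximate
m_mu = 206.77    # muon mass, approximate

mu_H = reduced_mass(m_e, m_p)    # ~0.99946 m_e
mu_Ps = reduced_mass(m_e, m_e)   # exactly 0.5 m_e
mu_Mu = reduced_mass(m_e, m_mu)  # ~0.99519 m_e

for name, mu in [("positronium", mu_Ps), ("muonium", mu_Mu)]:
    print(name, "separation / hydrogen separation =", mu_H / mu)
```

This confirms the factors quoted in the question: the positronium separation is about twice hydrogen's, while muonium's is only about 0.5% larger.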
In mechanics, the space can be described as a Riemann manifold. Forces, then, can be defined as vector fields of this manifold. Accelerations are linear functions of forces, so they are covector fields. But what about velocities and many other kinds of vectors?
Of course velocities are not forces, so I don't think it is right to reuse vector fields of this manifold. But does this mean that this manifold has many different tangent spaces at each point?
This sounds very strange to me. I think the problem is that math models have no physical units, maybe somehow we can create a many-sorted manifold to accommodate units?
Velocities and Spatial Accelerations are twists, and Forces and Momenta are wrenches. Both are screws (two-vectors) with one vector free and the other a spatial field. All of them transform with the same laws and their interactions have many dual properties. NOTE: See "A treatise on the theory of screws", Robert Stawell Ball, https://archive.org/details/theoryscrews00ballrich
The proportionality tensor transforming twists to wrenches is the 6×6 spatial mass matrix converting motion into momentum and acceleration into forces.
For example, below I am comparing a velocity twist and a momentum wrench. Do you spot the similarities?
$$\begin{aligned} {\hat v} &= \begin{pmatrix} {\bf \omega} \\{\bf r} \times {\bf \omega} \end{pmatrix} & {\hat p} &= \begin{pmatrix} {\bf p} \\{\bf r} \times {\bf p} \end{pmatrix}\end{aligned} $$
In classical mechanics a system is described by a Lagrangian $\mathscr{L}\colon TQ\to \mathbb{R}$, with $Q$ being the configuration space and $TQ$ its tangent bundle, namely the union over $q\in Q$ of all tangent spaces $T_qQ$: $TQ = \cup_q T_qQ$. A local chart on $Q$ looks like $(q_1, \ldots, q_n)$, the $q_k$ being the degrees of freedom of the system. The Lagrangian is then $\mathscr{L}\equiv\mathscr{L}\big(q(t), v(t)\big)$ and the equations of motion are: $$\frac{d}{dt}\frac{\partial \mathscr{L}}{\partial v^{\mu}} - \frac{\partial \mathscr{L}}{\partial q^{\mu}}=0.$$ The solution is a collection of $\big(q^{\mu}(t), v^{\mu}(t)\big)$ that live on $TQ$; if we make the further requirement that, on those solutions, $v=\dot{q}$, then the path on $TQ$ projects uniquely onto a path on $Q$, whose flow is given by the velocity fields.
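As a concrete illustration (my own minimal example, not part of the original answer): take $Q=\mathbb{R}$ with $\mathscr{L}=\tfrac12 m v^2 - U(q)$. Then \[\frac{\partial\mathscr{L}}{\partial v}=mv,\qquad \frac{\partial\mathscr{L}}{\partial q}=-U'(q),\] and the equation of motion above becomes \[\frac{d}{dt}(mv)+U'(q)=0,\] i.e. Newton's second law $m\ddot{q}=-U'(q)$ once we impose $v=\dot{q}$.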
To directly answer your questions:
Forces, then, can be defined as vector fields of this manifold. Accelerations are linear functions of forces, so they are covector fields. But what about velocities and many other kinds of vectors?
Wrong. Positions and velocities are coordinates of local charts $\phi$ of the tangent bundle, $\phi\colon U\subset TQ\to\mathbb{R}^{2n}$: as such, they transform contra-variantly. Forces, in the above formalism, are related to the conjugate momenta $p_{\mu}=\partial\mathscr{L}/\partial{v^{\mu}}$ and hence transform co-variantly, with the inverse matrix.
Of course velocities are not forces, so I don't think it is right to reuse vector fields of this manifold. But does this mean that this manifold has many different tangent spaces at each point?
See above. Also, manifolds just have one tangent space at each point, defined as the set of all directional derivatives calculated in that point.
I think the problem is that math models have no physical units, maybe somehow we can create a many-sorted manifold to accommodate units?
That has absolutely nothing to do with units. |
I can't address this question specifically because you don't give enough context. However, as a general rule, the reason we often put things into the form $1+x$ is so we can approximate them using a binomial expansion.
In this case we can write your final equation as:
$$ G(s) = \frac{k}{a}\left(1 +\frac{s}{a}\right)^{-1} \tag{1} $$
and if $s \ll a$ the bracket expands to:
$$ \left(1 +\frac{s}{a}\right)^{-1} = 1 - \frac{s}{a} + \mathcal{O}\left(\frac{s}{a}\right)^2$$
and if $s/a$ is small then the squared terms are very small and to a good approximation we can ignore them. That means your equation (1) simplifies to:
$$ G(s) \approx \frac{k}{a}\left(1 -\frac{s}{a}\right) $$
Whether this is a useful approximation for your equation I can't say because I don't know the details of your system. However it is a massively useful approximation in many areas of physics and one I have used many times on this site. |
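A quick numeric check of how good the approximation is (my own sketch; the values of $k$ and $a$ are arbitrary placeholders, not taken from the original system):

```python
# Compare the exact G(s) = k/(a+s) with the first-order binomial
# approximation (k/a)*(1 - s/a); the relative error scales as (s/a)^2,
# matching the dropped O((s/a)^2) term.

def G_exact(s, k, a):
    return k / (a + s)

def G_approx(s, k, a):
    return (k / a) * (1 - s / a)

k, a = 2.0, 100.0
for s in [0.1, 1.0, 10.0]:
    rel_err = abs(G_approx(s, k, a) - G_exact(s, k, a)) / G_exact(s, k, a)
    print(f"s/a = {s/a:6.3f}  relative error = {rel_err:.2e}")
```

For $s/a = 0.01$ the relative error is about $10^{-4}$, i.e. exactly $(s/a)^2$, which is why the approximation is so useful when $s \ll a$.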
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 470
Let $G$ be a finite group of order $p^n$, where $p$ is a prime number and $n$ is a positive integer.
Suppose that $H$ is a subgroup of $G$ with index $[G:H]=p$. Then prove that $H$ is a normal subgroup of $G$.
(Michigan State University, Abstract Algebra Qualifying Exam)
Problem 332
Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group.
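For the normality claim, the key computation is a one-liner (a standard sketch, not part of the original problem statement): for any $X\in\SL(n,\R)$ and any $A\in\GL(n,\R)$, the multiplicativity of the determinant gives \[\det\left(AXA^{-1}\right)=\det(A)\,\det(X)\,\det(A)^{-1}=\det(X)=1,\] so $AXA^{-1}\in\SL(n,\R)$, which is exactly the condition $A\,\SL(n,\R)\,A^{-1}\subset \SL(n,\R)$ for all $A\in G$.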
That was an excellent post and qualifies as a treasure to be found on this site!
wtf wrote:
When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply.
I totally get that. In fact, even our friend Max gets that: http://blogs.discovermagazine.com/crux/ ... g-physics/
Thanks for the link and I would have showcased it all on its own had I seen it first
The point I am making is something different. I am pointing out that:
All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics. That doesn't mean that they necessarily do; only that so far, that's how the history has worked out.
I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate.
But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate.
There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets.
Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity?
1) The rigorization of Newton's calculus culminated with infinitary set theory.
Newton discovered his theory of gravity using calculus, which he invented for that purpose.
I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it.
However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun.
I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant.
2) Einstein's general relativity uses Riemann's differential geometry.
In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. Like spheres, and tori, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics.
Isn't this the same problem as previous? dx=0?
3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series.
In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math.
I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infnite-th 'a'?
4) Quantum theory is functional analysis.
If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series.
Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers.
QM rests on the mathematics of uncountable sets, in an essential way.
Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces.
Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations."
We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such.
Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it."
He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god.
I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it.
ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig.
Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol
5) Cantor was led to set theory from Fourier series.
In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity.)
I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity.
In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here:
If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory.
Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake.
I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light. I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat:
https://www.quora.com/What-is-heat-1
https://www.quora.com/What-is-meant-by-heat
https://www.quora.com/What-is-heat-in-physics
https://www.quora.com/What-is-the-definition-of-heat
https://www.quora.com/What-distinguishes-work-and-heat
Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing....
I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality.
It just means we're using averages rather than discrete actualities and it's close enough.
I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is.
I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself.
Anyway, great post! Please don't be mad. Everyone here values your presence and is intimidated by your obvious mathematical prowess.
Don't take my pushback too seriously
I'd prefer if we could collaborate as colleagues rather than competing. |
OK, I'll try to answer your questions: Q1: the number of taps is not equal to the filter order. In your example the filter length is 5, i.e. the filter extends over 5 input samples [$x(n), x(n-1), x(n-2), x(n-3), x(n-4)$]. The number of taps is the same as the filter length. In your case you have one tap equal to zero (the coefficient for $x(n-1)$), so ...
filtfilt is zero-phase filtering, which doesn't shift the signal as it filters. Since the phase is zero at all frequencies, it is also linear-phase. Filtering backwards in time requires you to predict the future, so it can't be used in "online" real-life applications, only for offline processing of recordings of signals.lfilter is causal forward-in-time ...
If you are optimizing for engineering time and are on a platform that supports large FFTs well (i.e. not fixed point), then take hotpaw2's advice and use fast convolution. It will perform much better than a naive FIR implementation and should be relatively easy to implement. On the other hand, if you have some time to spend on this to get the best ...
Digital filter design is a very large and mature topic and - as you've mentioned in your question - there is a lot of material available. What I want to try here is to get you started and to make the existing material more accessible. Instead of digital filters I should actually be talking about discrete-time filters because I will not consider coefficient ...
There's a nice discussion of this problem in Embedded Signal Processing with the Micro Signal Architecture, roughly between pages 63 and 69. On page 63, it includes a derivation of the exact recursive moving average filter (which niaren gave in his answer),$$H(z) = { 1 \over{N} } { 1 - z^{-N} \over { 1 - z^{-1} } }.$$For convenience with respect to ...
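The recursion implied by that transfer function can be sketched directly (my own illustration; the function names are mine): $H(z)$ above corresponds to $y[n] = y[n-1] + (x[n] - x[n-N])/N$, which matches the naive sliding mean while doing only two additions per sample.

```python
# Show that the recursive moving average from the transfer function
# H(z) = (1/N) * (1 - z^-N) / (1 - z^-1) matches a naive length-N
# sliding mean (with implicit zero-padding before the signal starts).

def moving_average_naive(x, N):
    out = []
    for n in range(len(x)):
        window = x[max(0, n - N + 1): n + 1]
        out.append(sum(window) / N)  # divide by N even at the edges
    return out

def moving_average_recursive(x, N):
    out = []
    y = 0.0
    for n in range(len(x)):
        x_old = x[n - N] if n >= N else 0.0
        y = y + (x[n] - x_old) / N  # O(1) update per sample
        out.append(y)
    return out

x = [1.0, 2.0, 4.0, 8.0, 16.0, 8.0, 4.0, 2.0]
a = moving_average_naive(x, 3)
b = moving_average_recursive(x, 3)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

Note that in long-running fixed- or floating-point implementations the recursive form can accumulate rounding error, which is exactly the issue the quoted book discusses.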
It's the last thing you said ("Or does the output of the first filter feed as x_in in to the second filter and so on?"). The idea is simple: you treat the biquads as separate second-order filters that are in cascade. The output from the first filter is the input to the second, and so on, so the delay lines are spread out among the filters. If you need to ...
My favorite "Rule of thumb" for the order of a low-pass FIR filter is the "fred harris rule of thumb":
$$N = \frac{f_s}{\Delta f}\cdot\frac{\mathrm{atten}(\mathrm{dB})}{22}$$
where
- $\Delta f$ is the transition band, in the same units as $f_s$
- $f_s$ is the sample rate of the filter
- $\mathrm{atten}(\mathrm{dB})$ is the target rejection in dB
For example if you have a transition band of 100 Hz in a system sampled at 1KHz,...
You are correct. FFT based processing adds inherent latency to your system. However there are ways to tweak this.Let's assume you have an FIR filter of length "N". This can be implement FFT-based using the standard overlap add or overlap save method, where the FFT length would be 2*N. Overall system latency will also be roughly 2*N: you need to accumulate ...
If linear phase is a requirement, that will probably steer you toward an FIR implementation. It is possible to build IIR filters that have approximate linear phase, but it is easy to design a linear-phase FIR.If you're concerned about latency, forward-backward filtering as in filtfilt isn't really a good option. In general, it's really meant to be used an ...
I'm not sure exactly what you're looking for. As you noted in your question, the transfer functions of the Butterworth filter family are well-understood and easily calculated analytically. It is pretty simple to implement a Butterworth filter structure that is tunable by filter order and cutoff frequency:Based on the selected filter order, cutoff frequency,...
what is fraction saving? can you write a code so that i can understand more clearly?
Let's call the quantizer operator $\operatorname{Quant}\{\cdot\}$. So the output of the quantizer, with $v[n]$ going in, is $$y[n] = \operatorname{Quant}\{ v[n] \}$$ which we shall model as an additive error source: $$y[n] = v[n] + q[n]$$ No matter how the ...
A little dated but may deserve a more comprehensive answer, especially since Direct Form II can get you into a lot of trouble.First of all, there is no "one size fits all" and the best choice depends on your specific application and constraints. What you can consider isMemory: Direct Form II and Transposed Form II take a little less state memory then ...
First of all, a bit from wikipedia on Direct Form I and II implementation.Direct form I requires more memory, but is a somewhat simpler strategy, and is less likely to have round-off and resonance problems.Direct form II requires less memory, but it has the potential for unusual interactions, larger numbers, and more round off error. Much of this can be ...
There is no analytic solution for $\alpha$ being a scalar (I think). Here is a script that gives you $\alpha$ for a given $K$. If you need it online you can build a LUT. The script finds the solution that minimizes$$\int_{0}^{\pi} dw \quad \left|H_1(jw) - H_2(jw)\right|^2$$where $H_1$ is the FIR frequency response and $H_2$ is the IIR frequency ...
There are actually two ways to implement second order sections: parallel and serial.In the serial version, the outputs of section N are the inputs to section N+1. In the parallel version all sections have the same input (and only one real zero instead of a conjugate complex pair of zeros) and each sections output is simply summed up.The two methods are ...
Note that for stable IIR filters, the impulse response does approach zero as $n$ goes to infinity. It just never becomes exactly zero. However, the sum of the absolute values is finite. Just as an example, take the exponential impulse response$$h[n]=a^nu[n],\qquad |a|<1\tag{1}$$where $u[n]$ is the unit step function. The sum$$\sum_{n=-\infty}^{\...
Although this seems like a remarkably simple questions, it requires a remarkably complicated answer.I don't think there is a "one-size" fits all solution. The best choice of algorithm will depend on what noise you can tolerate and the type of low pass (steepness & frequency). For example at 44.1 KHz sample rate a 4th order Butterworth at 10 kHz is ...
The Butterworth filter's frequency response is the result of specific formulas and its characteristic is the flat passband frequency response. Consequently, if the coefficients of the IIR filter are modified in any way, the filter might not maintain the "Butterworth" characteristics.In addition to the responses by "Hilmar" and "Jason R", maybe you could ...
This isn't really a MATLAB-specific issue; I see a couple more general questions:How do you implement a digital IIR filter?You can apply any general digital filter by convolving its impulse response with the signal that you want to filter. That looks like:$$y[n] = \sum_{k=0}^{N-1} x[k] h[n-k]$$This works great for FIR filters, but you run into ...
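The convolution sum quoted above can be written out in a few lines (my own sketch; `fir_filter` is a name I made up, and I sum over the taps $h[k]$, which is equivalent by commutativity of convolution):

```python
# Direct implementation of FIR filtering as a convolution,
# y[n] = sum_k h[k] * x[n-k], for a finite-length impulse response h.

def fir_filter(x, h):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:          # samples before the signal are zero
                acc += hk * x[n - k]
        y.append(acc)
    return y

# Filtering an impulse recovers the impulse response itself:
print(fir_filter([1.0, 0.0, 0.0, 0.0], [1/3, 1/3, 1/3]))
```

This naive double loop is $O(N M)$; as the quoted answer goes on to note, it is fine for short FIR filters but not for IIR filters, whose impulse responses never end.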
The frequency response of a single FFT bin filter looks like a Sinc function, which has a massive amount of overshoot or ripple at frequencies between FFT bins. So your filter is only useful if you can strictly guarantee that the input to the FFT only contains unmodulated frequencies that are strictly and exactly periodic in the FFT aperture length (e.g. ...
In more standard DSP terms, you have the following filter:$$y[n] = (1-a) x[n] + a y[n-1]$$where $x[n]$ and $y[n]$ are the input and output signals at time $n$ respectively.The transfer function (which you didn't ask for) is:$$H(z) = \frac{1-a}{1 - az^{-1}}$$so here is your single pole, at $z=a$ in the complex plane. This filter is also known as ...
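The difference equation above is a one-liner to implement (my own sketch; `single_pole` is a name I made up):

```python
# Single-pole IIR filter y[n] = (1-a)*x[n] + a*y[n-1], i.e. exponential
# smoothing with the pole at z = a (need |a| < 1 for stability).

def single_pole(x, a, y0=0.0):
    out = []
    y = y0
    for sample in x:
        y = (1.0 - a) * sample + a * y
        out.append(y)
    return out

# Step response: the output approaches 1 geometrically, the remaining
# error shrinking by a factor of `a` each sample.
step = [1.0] * 10
y = single_pole(step, a=0.5)
print(y[:4])  # [0.5, 0.75, 0.875, 0.9375]
```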
The two solutions in a floating point implementation are assumed to be identical, with the two BiQuads being a factored version of the standard difference equation. The BiQuad is the better way to go for fixed point as you isolate two 2nd order systems and in doing so will be easier to keep stable under variations due to the quantization involved.For more ...
If you apply two filters in a series cascade, then the behavior of the cascade can be expressed in two different ways. In the time domain, the overall system's impulse response can be calculated by convolving the impulse responses of $y[n]$ and $y_2[n]$ together. For IIR filters, this can be somewhat cumbersome.In the frequency domain, the overall system's ...
Answer by @endolith is complete and correct! Please read his post first, and then this one in addition to it. Due to my low reputation I was unable to respond to comments where @Thomas Arildsen and @endolith argue about the effective order of the filter obtained by filtfilt: lfilter does apply the given filter, and in Fourier space this is like applying the filter transfer ...
I would say that the answer to your question - if taken literally - is 'no', there is no general way to simply convert an FIR filter to an IIR filter.I agree with RBJ that one way to approach the problem is to look at the FIR filter's impulse response and use a time domain method (such as Prony's method) to approximate that impulse response by an IIR ...
Short answer:You can't. If an attacker can insert a signal that covers the whole bandwidth (e.g. a white signal, or at least one that has no spectral zeros) into the system (and he can do that over an arbitrarily long time, or add up observations), they will get an output, and can through the magic of correlation get the impulse response.
Here is a little bit of demo code to show why you are better off cascading 2nd order sections.

clc
sr = 44100;
order = 13;
[b,a] = butter(order,1000/(sr/2),'low');
[sos] = tf2sos(b,a);
x = [1; zeros(299,1)]; % impulse
% all in one
Y = filter(b,a,x);
% cascaded biquads
Z = x;
for nn = 1:size(sos,1);
    Z = filter(sos(nn,1:3), sos(nn,4:6), Z );
end
cla; ...
Try an overlap add/save convolution filter with the longest FFT/IFFT that fits your latency and computational performance constraints. You can design extremely long FIR filters when using this method with even longer FFTs.If you can FFT the entire song, or your entire audio signal file, in one very long FFT+IFFT (there are special FFT algorithms for long ...
The frequency response of the two complementary filters are $H_2(e^{j\theta}) = 1 - H_1(e^{j\theta})$, or the impulse responses $h_2[n] = \delta[n] - h_1[n]$.For an IIR filter, $H_1(z)$ can be written as $\frac{b_0 + b_1 z^{-1} + \ldots}{a_0 + a_1 z^{-1} + \ldots}$. Then $H_2(z)$ should be something like $\frac{(a_0 - b_0) + (a_1 - b_1) z^{-1} + \ldots}{... |
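As a quick illustration of the FIR case, here is a small Python sketch (the filter coefficients and test signal are my own toy values, not from the answer): the complementary filter is obtained by negating the taps and adding 1 to the zero-lag tap, and the two outputs sum back to the input.

```python
def complement(h1):
    """h2[n] = delta[n] - h1[n]: outputs of h1 and h2 sum to the input."""
    h2 = [-c for c in h1]
    h2[0] += 1.0
    return h2

def conv(h, x):
    """Plain full convolution."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

h1 = [0.25, 0.5, 0.25]   # toy FIR low-pass
h2 = complement(h1)      # its complementary high-pass

x = [1.0, 2.0, -1.0, 0.5]
total = [a + b for a, b in zip(conv(h1, x), conv(h2, x))]
# total equals x padded with zeros, since h1 + h2 is a unit impulse
```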
If total energy is conserved, only ever transformed and never newly created, is there a sum of all energies that is constant? Why is it probably not that easy?
No. The universe is dominated by dark energy, which is consistent with a cosmological constant $\Lambda$. In other words, as the universe expands, the energy density stays roughly the same. So the (energy density)*volume is growing exponentially at late times.
Although the total energy is not well defined (as the volume of the universe may be infinite), the fractional rate of growth is certainly nonzero.
You might wonder how the total energy can grow without violating energy conservation. The answer is that in general relativity, we just need $\boldsymbol{\nabla} \cdot \boldsymbol{T} = 0$, so a cosmological constant is perfectly consistent, as $\boldsymbol{\nabla} \cdot (\Lambda \boldsymbol{g}) = 0$.
For a nice explanation by Sean Carroll, see http://blogs.discovermagazine.com/cosmicvariance/2010/02/22/energy-is-not-conserved/
Energy conservation stems from Noether's theorem applied to time (i.e., time-invariance leads to energy conservation, similarly to how spatial-invariance leads to momentum conservation). Since the universe is expanding (and accelerating at that), the state of the universe today is different than it was yesterday and will be tomorrow, hence
energy conservation cannot be established for the whole universe.
Locally, however, the stress-energy tensor, $$T^{\mu\nu}=\left(p+\rho\right)u^\mu u^\nu - pg^{\mu\nu},$$ will satisfy the conservation law of energy and momentum, $$T^{\mu\nu}{}_{;\nu}=0,$$ derived through the Bianchi identity (the $;\nu$ subscript denotes the covariant derivative).
Wald states (emphasis his) in Chapter 4:
The issue of energy in general relativity is a rather delicate one. In general relativity there is no known meaningful notion of local energy density of the gravitational field. The basic reason for this is closely related to the fact that the spacetime metric, $g_{\mu\nu}$, describes both the background spacetime structure and the dynamical aspects of the gravitational field, but no natural way is known to decompose it into its "background" and "dynamical" parts. Since one would expect to attribute energy to the dynamical aspect of gravity but not to the background spacetime structure, it seems unlikely that a notion of local energy density could be obtained without a corresponding decomposition of the spacetime metric. However, for an isolated system, the
total energy can be defined by examining the gravitational field at large distances from the system. In addition, for an isolated system the flux of energy carried away from the system by gravitational radiation also is well defined.
Later, in Chapter 11,
...the most likely candidate for the energy density of the gravitational field in general relativity would be an expression quadratic in the first derivatives of the metric. However, since no tensor other than $g_{\mu\nu}$ itself can be constructed locally from only the coordinate basis components of $g_{\mu\nu}$ and their first derivatives, a meaningful expression quadratic in first derivatives of the metric can be obtained only if one has additional structure on spacetime, such as a preferred coordinate system or a decomposition of the spacetime metric into a "background part" and a "dynamical part" (so that, say one could take derivatives of the "dynamical part" of the metric with respect to the derivative operator associated with the background part). Such additional structure would be completely counter to the spirit of general relativity, which views the spacetime metric as fully describing all aspects of spacetime structure and the gravitational field.
Your question is tagged as general-relativity and cosmology, and as textbooks remark (e.g. Peebles [1]), "there is not a general global energy conservation law in general relativity theory." Therefore: "The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved" [2].
[1] Peebles, P. J. E., 1993, Principles of Physical Cosmology (Princeton University Press).
[2] Harrison, E., 1981, Cosmology (Cambridge University Press).
What we like to call the energy, i.e., the total matter/energy content of space-time, might not be conserved. However, there is a lot of reason to suspect that fundamentally the universe is some big quantum system, and that space-time and particles and fields are emergent from this underlying idea. In that case, we expect there to be a Hamiltonian $H$ and some time evolution rule $i\hbar \partial_t \left|\psi\right\rangle = H \left|\psi\right\rangle$, and unitarity requires that energy be conserved. Papers by Page and Wootters have interesting things to say on the subject.
The only thing that prevents us defining a total conserved energy for the entire universe is that if the universe is infinite then the total energy could be infinite or indeterminate.
The statements that say energy is not conserved in general relativity are wrong, irrespective of who says them. You can define energy over any finite volume of space and you can define the flux of energy over the boundary surrounding the volume. The rate at which energy decreases in the volume is equal to the flux of energy across the boundary. This is the the most general way to express energy conservation globally.
All statements to the contrary can be refuted and to avoid arguing around in circles I have done that at length in my write-up at http://vixra.org/abs/1305.0034
I want to solve 3 coupled PDEs equations. They depend on time, radius and length. I used the method of lines (MOL) and converted them to a system of ODEs in time. Now I want to solve them using MATLAB. Here are the equations:
\begin{align} &\frac{\partial C_{i,p}}{\partial t} + \frac{u_{cp}C_{i,p}}{r} + C_{i,p}\frac{\partial u_{cp}}{\partial r} + u_{cp} \frac{\partial C_{i,p}}{\partial r} - D_{eff}\left(\frac{\partial^2 C_{i,p}}{\partial r^2} + \frac{1}{r} \frac{\partial C_{i,p}}{\partial r}\right) = - \frac{(1-\epsilon_f)}{\epsilon_f} \rho_f \frac{\partial q_i}{\partial t}\\ &\frac{\partial q_i}{\partial t} = k_s (q_i^{eq} - q_i)\\ &q_i^{eq} = \frac{q_s b P_i}{[1 + (bP_i)^n]^{1/n}}\\ &q_s = q_{s0} \exp\left[\eta \left(1 - \frac{T}{T_0}\right)\right]\\ &b = b_0 \exp\left[\frac{-\Delta H}{R T_0}\left(1 - \frac{T_0}{T}\right)\right]\\ &\eta = A + B\left(1 -\frac{T_0}{T}\right)\\ &P_i = C_{i,p}/RT\\ &T = T_f\\ &\rho C_{pf}\frac{\partial T_f}{\partial t}-\frac{\lambda_f}{(1-\epsilon_f)} \left[\frac{\partial^{2}T_f}{\partial r^{2}}+\frac{1}{r}\frac{\partial T_f}{\partial r}+\frac{\partial^{2}T_f}{\partial z^{2}}\right] = \rho_f \Delta H_{ads} \frac{\partial q_{i}}{\partial t} \end{align}
I want to use ode15s to solve them. I create one matrix where the rows indicate radius nodes and the columns indicate length nodes. I have a problem because the equations are coupled together, and I can't use ode15s to solve them separately, since that would integrate one equation over the whole time span and then integrate another. I want to advance all of the equations together, step by step: first for time = 1, then time = 2, etc. Can anyone help me solve this set of ODEs using ode15s in MATLAB?
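To make the question concrete, here is the stacking idea in a language-neutral sketch (plain Python; a toy 1-D diffusion equation plus a first-order sink standing in for the C/q coupling, with made-up coefficients and a simple explicit Euler step instead of a stiff solver): every unknown at every node is flattened into one state vector, and one right-hand-side function advances all equations together, which is what a single ode15s call does when given the stacked vector.

```python
def rhs(t, u, nr, D, ks):
    """Coupled MOL system: u = [C_1..C_nr, q_1..q_nr] stacked in ONE vector."""
    C, q = u[:nr], u[nr:]
    dq = [ks * (C[i] - q[i]) for i in range(nr)]   # toy linear isotherm q_eq = C
    dC = [0.0] * nr
    for i in range(nr):
        left  = C[i - 1] if i > 0 else C[i]        # zero-flux boundaries
        right = C[i + 1] if i < nr - 1 else C[i]
        dC[i] = D * (left - 2 * C[i] + right) - dq[i]
    return dC + dq

def euler(rhs, u0, t1, steps, **kw):
    """Minimal fixed-step integrator; a stiff solver would replace this."""
    u, dt, t = list(u0), t1 / steps, 0.0
    for _ in range(steps):
        du = rhs(t, u, **kw)
        u = [ui + dt * dui for ui, dui in zip(u, du)]
        t += dt
    return u

nr = 10
u0 = [1.0] * nr + [0.0] * nr       # C starts at 1 everywhere, q at 0
u = euler(rhs, u0, t1=5.0, steps=5000, nr=nr, D=0.5, ks=1.0)
```

In MATLAB the same stacking gives one solver call for the whole system, along the lines of `[t,U] = ode15s(@(t,u) rhs(t,u), tspan, u0)`, after which each row of `U` is reshaped back into the radius-by-length grid.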
This would lead to a hypergeometric distribution: X follows Hypergeometric(K, N, n), where:
K = the number of defect units in one day's production
N = the population size = the number of units produced per day
n = the sample size = 25 draws
p = the proportion of success/defect units in a daily production: p = K/N
However, we assume that the daily production is more than 10 times the sample size, \(N >10n\Leftrightarrow N>250\), which means that we can approximate with the binomial distribution \(X\sim Bin(n;p)\). With a 10% probability of defect units, i.e. p = 0.1, we get the distribution, the expected value, and the variance:
\({X\sim \operatorname {Bin} (25;0.1)}\)
\({\operatorname {E} [X]=np}=2.5\)
\({\operatorname {Var} (X)=np(1-p)=25\cdot 0.1\cdot (1-0.1)=2.25}\)
The criteria for returning a one-day production to inspection is if there are 4 or more defect units in the daily sample, which can be expressed as:
\(\displaystyle P(X\ge 4)=1-P(X\leq 3)\)
\(\displaystyle\Rightarrow P(X\leq k)=\sum _{i=0}^{k}{n \choose i}p^{i}(1-p)^{n-i}\)
\(\displaystyle\Leftrightarrow P(X\leq 3)=\sum _{i=0}^{3}{25 \choose i}0.1^{i}(1-0.1)^{25-i}\)
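For reference, this tail probability can be checked numerically with a short standard-library Python snippet:

```python
from math import comb

n, p = 25, 0.1
# P(X <= 3) for X ~ Bin(25, 0.1)
p_le_3 = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(4))
p_ge_4 = 1 - p_le_3   # probability the day's production is returned
```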
yxTherm
The Swedish company, yxTherm, produces thermostats and wishes to revise their established quality assurance parameters following a change of machinery.
The criteria for returning a one-day production to inspection is if there are 4 or more defect units in the daily sample, which can be expressed as:
\(\displaystyle P(X\ge 4)=1-P(X\leq 3)\)
\(\displaystyle\Rightarrow P(X\leq k)=\sum _{i=0}^{k}{n \choose i}p^{i}(1-p)^{n-i}\) \(\displaystyle\Leftrightarrow P(X\leq 3)=\sum _{i=0}^{3}{25 \choose i}0.1^{i}(1-0.1)^{25-i}\)
The step-by-step sum of the probabilities of having 3 or fewer defect units:
\(\displaystyle{\Pr(0{\text{ defect units}})=f(0)=\Pr(X=0)={25 \choose 0}0.1^{0}(1-0.1)^{25-0}=0.071789799}\)
\(\displaystyle{\Pr(1{\text{ defect unit}})=f(1)=\Pr(X=1)={25 \choose 1}0.1^{1}(1-0.1)^{25-1}=0.199416108}\)
\(\displaystyle{\Pr(2{\text{ defect units}})=f(2)=\Pr(X=2)={25 \choose 2}0.1^{2}(1-0.1)^{25-2}=0.265888144}\)
\(\displaystyle{\Pr(3{\text{ defect units}})=f(3)=\Pr(X=3)={25 \choose 3}0.1^{3}(1-0.1)^{25-3}=0.226497308}\)
\(\displaystyle\Rightarrow P(X\ge 4) = 1 - (0.071789799+0.199416108+0.265888144+0.226497308) = \underline{0.2364}\)
I think this is a question about some very elementary circuit design, but... I want to use a varactor diode to tune the resonance frequency of a resonator I have built.
I understand that a varactor is a voltage dependent capacitor, but I can't get my head around the correct configuration of using them.
Usually if I want to tune my resonator I insert a capacitor between the hot end of the resonator and the ground this will obviously shift the resonance frequency according to $$\nu_0 = \frac{1}{2\pi \sqrt{L(C_p+C_{capacitor})}}$$
I want to do the same but with the varactor diode; my current thinking feels very wrong, as the DC voltage source is surely just going to go straight through the resonator and into ground.
I know this is basic but I am a little stuck! |
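As an addendum: the usual arrangement I have seen is to feed the tuning voltage through a large resistor (or RF choke) and place a DC-blocking capacitor in series with the varactor, so no DC flows into the resonator. The resulting tuning range then follows directly from the formula above; here is a numerical sketch with made-up component values:

```python
from math import pi, sqrt

def f_res(L, C_p, C_var):
    """Resonance of the tank with the varactor capacitance added to C_p."""
    return 1.0 / (2 * pi * sqrt(L * (C_p + C_var)))

L_coil, C_p = 1e-6, 100e-12         # hypothetical 1 uH coil, 100 pF tank
f_hi = f_res(L_coil, C_p, 10e-12)   # strong reverse bias: small varactor C
f_lo = f_res(L_coil, C_p, 60e-12)   # weak reverse bias: large varactor C
```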
In an old paper by Gudder http://www.ams.org/journals/proc/1969-021-02/S0002-9939-1969-0243793-1/S0002-9939-1969-0243793-1.pdf,
he defined: "a quantum probability space is a triple $(\Omega, C, M)$ where $C$ is a $\sigma$-class of subsets of $\Omega$ and $M$ is the set of states on $C$."
What does a $\sigma$-class mean in math? I understand what a $\sigma$-field or $\sigma$-algebra is.
What is "states" on $C$? Is $m\in M$ a mapping that $m:C\to[0,1]$?
I wonder if there are better (newer) notes or books on the formalism of quantum probability.
A paper https://arxiv.org/pdf/quant-ph/0601158.pdf explained the quantum probability space in detail. Are those concepts well-accepted? |
I'm studying statistical mechanics, in particular the classical regime of Fermi-Dirac and Bose-Einstein gases. The time-averaged occupation numbers in FD/BE statistics are: $$ \langle n_\epsilon\rangle_{FB} = \frac{1}{e^{(\epsilon-\mu)\beta}\pm1} $$ For Boltzmann statistics: $$ \langle n_\epsilon \rangle_B = e^{(\mu-\epsilon)\beta} $$ How can one work out a nice condition for the classical regime in which $ \langle n_\epsilon\rangle_{FB} \rightarrow \langle n_\epsilon\rangle_B $?
An obvious option is $e^{\frac{(\epsilon-\mu)}{kT}}\gg1$. However, I don't really like it, since it implies convergence at low temperature. Moreover I'm expecting an $\epsilon$-free asymptotic expression in terms of temperature and density.
@Adam: I've read your comment again and things are much clearer now :)! Here's what I've got:
I'll assume $ \beta|\mu| \gg 1 $ with $\mu<0 $, i.e. $z \rightarrow 0 $.
In terms of z:
$$ \langle n_\epsilon\rangle_{FB} = \frac{1}{e^{\epsilon\beta}/z \pm 1} \;\xrightarrow{\;z\rightarrow 0\;}\; \langle n_\epsilon\rangle_{B}$$
Since $z=\lambda^3_t \rho$, I can say FD/BE gases behave like a classical one when the particles' thermal wavelength is small compared to the typical interparticle distance. This is almost the "low density, high temperature" condition I was looking for.
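A quick numerical sanity check of this limit (my own sketch, with arbitrary values of $\epsilon\beta$): at small fugacity both quantum distributions agree with the Boltzmann one to better than a part in a thousand.

```python
from math import exp

def n_fd(eps_beta, z):
    """Fermi-Dirac occupation, written in terms of the fugacity z."""
    return 1.0 / (exp(eps_beta) / z + 1.0)

def n_be(eps_beta, z):
    """Bose-Einstein occupation."""
    return 1.0 / (exp(eps_beta) / z - 1.0)

def n_boltz(eps_beta, z):
    """Boltzmann occupation: z * exp(-eps*beta)."""
    return z * exp(-eps_beta)

z = 1e-4                       # small fugacity: classical regime
for eb in (0.0, 1.0, 5.0):
    nb = n_boltz(eb, z)
    assert abs(n_fd(eb, z) - nb) / nb < 1e-3
    assert abs(n_be(eb, z) - nb) / nb < 1e-3
```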
At low temperature Boltzmann statistics loses its physical meaning (for example, it is easy to recover the classical Sackur-Tetrode entropy from its thermodynamics). Approximating in this scenario, although it may look mathematically legitimate, is conceptually wrong. Quantum statistics have to be handled carefully on their own.
Am I doing it right? :)
Sorry for the poor English. Thank you so much!
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
The Annals of Mathematical Statistics (Ann. Math. Statist.), Volume 31, Number 1 (1960), 222-224.
Sums of Small Powers of Independent Random Variables
Abstract
Let $(x_{nk}), k = 1, 2, \cdots, k_n; n = 1, 2, \cdots$ be a double sequence of infinitesimal random variables which are rowwise independent (i.e. $\lim_{n \rightarrow \infty} \max_{1 \leqq k \leqq k_n} P(|x_{nk}| > \epsilon) = 0$ for every $\epsilon > 0$, and for each $n, x_{n1}, \cdots, x_{nk_n}$ are independent). Let $S_n = x_{n1} + \cdots + x_{nk_n} - A_n$ where the $A_n$ are constants and let $F_n(x)$ be the distribution function of $S_n$. In a previous paper [3] the system of infinitesimal, rowwise independent random variables $(|x_{nk}|^r)$ was studied for $r \geqq 1$. Specifically, let $S^r_n = |x_{n1}|^r + \cdots + |x_{nk_n}|^r - B_n(r),$ where the $B_n(r)$ are suitably chosen constants. Let $F_n^r(x)$ be the distribution function of $S^r_n$. Necessary and sufficient conditions for $F_n^r(x)$ to converge $(n \rightarrow \infty)$ to a distribution function $F^r(x)$ and for $F^r(x)$ to converge $(r \rightarrow \infty)$ to a distribution function $H(x)$ were given, together with the form that $H(x)$ must take. In Section 2 of this paper we consider the system $(|x_{nk}|^r)$ for $0 < r < 1$. Results similar to the above are found, replacing $(r \rightarrow \infty)$ by $(r \rightarrow 0^+)$. However different assumptions must be made at certain points. Various remarks are made in this paper to show where the results here differ from [3]. In particular it is shown that, if $F^r(x)$ converges $(r \rightarrow 0^+)$ to a distribution function $H(x)$, then $H(x)$ will be the distribution function of the sum of two independent random variables, one Poisson and the other Gaussian. Furthermore, while the Gaussian summand may or may not be degenerate, the Poisson summand will be nondegenerate in all but one special case.
Article information
Source: Ann. Math. Statist., Volume 31, Number 1 (1960), 222-224.
Dates: First available in Project Euclid: 27 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177705999
Digital Object Identifier: doi:10.1214/aoms/1177705999
Mathematical Reviews number (MathSciNet): MR119237
Zentralblatt MATH identifier: 0100.34602
JSTOR: links.jstor.org
Citation
Shapiro, J. M. Sums of Small Powers of Independent Random Variables. Ann. Math. Statist. 31 (1960), no. 1, 222--224. doi:10.1214/aoms/1177705999. https://projecteuclid.org/euclid.aoms/1177705999 |
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Revision as of 23:47, 20 April 2018
Introduction and Motivation
There is currently much progress in probabilistic models, which could lead to the development of rich generative models. These models have been applied with neural networks, with implicit densities, and with scalable algorithms for Bayesian inference on very large data. However, most of the models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e., a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. A genome is the totality of an organism's DNA and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and having a particular disease is treated as a trait, i.e., the outcome. In order to understand why a disease develops and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges in combining modern probabilistic models and causality. The first is how to build rich causal models that meet the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise term. For simplicity, [math]f[/math] is usually assumed to be a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are an issue when we apply causal models, since we can observe neither them nor the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sampled individuals produce spurious correlations between SNPs and the trait of interest. Existing methods cannot easily accommodate this complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically make strong assumptions, such as Gaussian processes for the noise variable or restricted nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: deterministic functions, and noise. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise. Each [math]\beta[/math] and [math]x[/math] is a function of noise, and [math]y[/math] is a function of noise and [math]x[/math]:
[math]\beta = f_\beta(\epsilon_\beta), \qquad x = f_x(\epsilon_x, \beta), \qquad y = f_y(\epsilon_y, x, \beta)[/math]
The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we set [math]X[/math] to the value [math]x[/math] under the fixed structure [math]\beta[/math]. Following earlier work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of a probabilistic causal model is the additive noise model, [math]y = f(x \mid \theta) + \epsilon[/math].
[math]f(\cdot)[/math] is usually a linear function, or a spline for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, so [math]y[/math] given [math]x[/math] is normal as well. Thus the posterior [math]p(\theta \mid x, y, \beta)[/math] can be represented as
[math]p(\theta \mid x, y, \beta) \propto p(\theta)\, p(y \mid x, \beta, \theta),[/math]
where [math]p(\theta)[/math] is the prior, which is known. Then variational inference or MCMC can be applied to approximate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models lies in the noise variable. Instead of using an additive noise term, implicit causal models take the noise [math]\epsilon[/math] directly as input and output [math]x[/math] given parameters [math]\theta[/math]:
[math] x=g(\epsilon \mid \theta), \qquad \epsilon \sim s(\cdot) [/math]
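A minimal sketch of such a mechanism in plain Python (the architecture, weights, and the parent value are hypothetical, chosen only to show the noise entering as an ordinary network input rather than as an additive term):

```python
import math
import random

random.seed(0)

def mlp(inputs, W1, b1, W2, b2):
    """One-hidden-layer network; the noise is just another input."""
    h = [math.tanh(sum(w * i for w, i in zip(row, inputs)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hj for w, hj in zip(W2, h)) + b2

H = 8  # hidden units
W1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

def sample_x(beta):
    """x = g(eps | theta): draw noise, push it through the network."""
    eps = random.gauss(0, 1)
    return mlp([eps, beta], W1, b1, W2, b2)

xs = [sample_x(beta=0.5) for _ in range(1000)]
```

The distribution of `xs` is defined only implicitly, by sampling: there is no tractable density, which is why the later inference section resorts to likelihood-free methods.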
The causal diagram has changed to:
They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description.
Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, standard inference methods cannot be used in this case.
The paper proposed a new method which include the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section gives the algorithm and functions for implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should make the latent space as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Given how each SNP is coded,
the authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs a full [math]N*M[/math] matrix due to the variables [math]w_m[/math], which act as principal components, as in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key of applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors therefore applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulation on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]

The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives are SNPs correctly identified as having a causal relation with the trait; false positives are SNPs reported as causal when they are not. The closer the rate is to 1, the better the model, since false positives count as wrong predictions.
The results presented above show that the implicit causal model has the best performance among these four models in every situation. In particular, the other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, whereas the ICM still achieves a high rate. The only method comparable to the ICM is GCAT, when applied to simpler configurations.
Real-data Analysis
They also applied the ICM to a GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), and the same preprocessing as Song et al. was used. Ten implicit causal models were fitted, one for each trait. For each of the 10 models the dimension of the confounders was set to six, the same as in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256. See Table 2 for comparable models.
The numbers in the above table are the numbers of significant loci for each of the 10 traits. The numbers for the other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples), are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and at the population level, but can also adjust for latent confounders by incorporating the latent variables into the model.
In the simulation study, the authors showed that the implicit causal model beats other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. Its main contribution is to connect statistical genetics with machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too simple and far from realistic. Although the simulation study shows competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors themselves note. SNPs are not completely independent of each other; they are often correlated when alleles are at nearby loci. The authors did not consider this complex case; rather, they only considered the simplest case, in which all the SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain a causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
Radius of the base of a cylinder: \(R\)
Generatrix of a cylinder: \(L\)
Height of a cylinder: \(H\)
Heights of a truncated cylinder: \({h_1},\) \({h_2}\)
Area of the base: \({S_B}\)
Lateral surface area: \({S_L}\)
Total surface area: \(S\)
Volume: \(V\)
A cylinder is a geometric solid bounded by a cylindrical surface and two parallel planes crossing it. The cylindrical surface is formed by a straight line (called the generatrix) moving parallel to itself, so that any fixed point of the line moves along a plane curve called the directrix. A cylinder is called a circular cylinder if its directrix is a circle. A cylinder is called a right cylinder if its generatrix is perpendicular to the bases. A right circular cylinder is determined by the radius of the base \(R\) and the generatrix \(L,\) which is equal to the height \(H\) of the cylinder.

Lateral surface area of a right circular cylinder: \({S_L} = 2\pi RH\)

Total surface area of a right circular cylinder: \(S = {S_L} + 2{S_B} = 2\pi R\left( {H + R} \right)\)

Volume of a right circular cylinder: \(V = {S_B}H = \pi {R^2}H\)

A truncated right circular cylinder, or briefly a truncated cylinder, is determined by the radius of the base \(R,\) the shortest height \({h_1},\) and the greatest height \({h_2}.\)

Lateral surface area of a truncated cylinder: \({S_L} = \pi R\left( {{h_1} + {h_2}} \right)\)

Area of the bases of a truncated cylinder: \({S_B} = \pi {R^2} + \pi R\sqrt {{R^2} + {{\left( {\frac{{{h_1} - {h_2}}}{2}} \right)}^2}} \)

Total surface area of a truncated cylinder: \(S = {S_L} + {S_B} = \pi R\left[ {{h_1} + {h_2} + R + \sqrt {{R^2} + {{\left( {\frac{{{h_1} - {h_2}}}{2}} \right)}^2}} } \right]\)

Volume of a truncated cylinder: \(V = \frac{{\pi {R^2}\left( {{h_1} + {h_2}} \right)}}{2}\) |
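As a numerical sanity check on these formulas, here is a short Python sketch (the function names are my own, not from the source). A truncated cylinder with \({h_1} = {h_2}\) degenerates to a right cylinder, so the two volume formulas must agree in that case.

```python
import math

def cylinder_volume(R, H):
    # V = pi * R^2 * H
    return math.pi * R**2 * H

def cylinder_total_area(R, H):
    # S = S_L + 2*S_B = 2*pi*R*(H + R)
    return 2 * math.pi * R * (H + R)

def truncated_cylinder_volume(R, h1, h2):
    # V = pi * R^2 * (h1 + h2) / 2
    return math.pi * R**2 * (h1 + h2) / 2

def truncated_cylinder_lateral_area(R, h1, h2):
    # S_L = pi * R * (h1 + h2)
    return math.pi * R * (h1 + h2)

# Degenerate check: h1 == h2 == H reduces to the right cylinder of height H.
```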
Hi Quantitative Finance Stack Exchange,
It's my first go at GARCH models so give me a chance with my phrasing. I'm looking for an answer to a general question.
First, I understand that you can have a forecasting model to forecast returns and a GARCH model to forecast volatility. Let's proceed with the simplest example:
Forecasting returns:
$$\hat{y_t}=\alpha\cdot y_{t-1} + \epsilon_t$$
GARCH(1,1):
$$\hat{\sigma}^2_t=\omega+\beta_1\epsilon^2_{t-1}+\beta_2\sigma^2_{t-1}$$
Now, I've developed my trading strategy and let's say I found that it works, namely buy when $\hat{y_t} > 0.0020\%$. My question is this. What is the standard way of looking at how GARCH complements my strategy, if at all?
The way I see it is that the two models predict different things. One predicts $\hat{y_t}$ and the other predicts $\hat{\sigma^2_{t}}$. Therefore, GARCH is only readily implementable if you somehow find a way to incorporate volatility into your strategy. If my existing strategy $\hat{y_t} > 0.0020\%$ works fine, there isn't a need for GARCH, correct?
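For concreteness, here is a minimal numpy sketch of the GARCH(1,1) variance recursion described above (all parameter values are invented for illustration; in a real model the $\epsilon_t$ would be the residuals of the mean equation, not the demeaned returns):

```python
import numpy as np

def garch11_variance(returns, omega, beta1, beta2):
    """Filter sigma^2_t = omega + beta1 * eps^2_{t-1} + beta2 * sigma^2_{t-1}.

    eps_t is approximated here by the demeaned return.
    """
    eps = returns - returns.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()  # common initialization: unconditional variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + beta1 * eps[t - 1] ** 2 + beta2 * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=500)                      # stand-in return series
sig2 = garch11_variance(r, omega=1e-6, beta1=0.05, beta2=0.9)
```

One common way GARCH "complements" a return-forecast strategy is as a filter or position sizer: for example, only act on the $\hat{y_t} > 0.0020\%$ signal when the forecast volatility is below a threshold, or scale position size by $1/\hat{\sigma}_t$.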
Thank you for your help, Donny |
Difference between revisions of "stat946w18/Implicit Causal Models for Genome-wide Association Studies"
== Implicit causal model in Edward ==
[[File: coddde.png|600px]]
Revision as of 23:48, 20 April 2018

Contents
1 Introduction and Motivation
2 Implicit Causal Models
3 Implicit Causal Models with Latent Confounders
4 Likelihood-free Variational Inference
5 Empirical Study
6 Conclusion
7 Critique
8 References
9 Implicit causal model in Edward

Introduction and Motivation
There is currently much progress in probabilistic modeling, which could lead to the development of rich generative models. These models have been combined with neural networks and implicit densities, and fitted with scalable Bayesian inference algorithms on very large data. However, most of these models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another, i.e., a cause and effect. Causal models give us a sense of how manipulating the generative process would change the final results.
Genome-wide association studies (GWAS) are an example of causal relationships. A genome is the sum of all DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease in humans. Here the genetic factors are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to understand why a disease develops and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models with the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are a problem when we apply causal models, since we can neither observe them nor know their underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sampled individuals produce spurious correlations between SNPs and the trait of interest. Existing methods cannot easily accommodate this complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models are built from deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math] so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. By other paper’s work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of a probabilistic causal model is the additive noise model.
[math]f(.)[/math] is usually a linear function, or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as is [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
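Since an additive noise model with a Gaussian prior and Gaussian noise admits a conjugate posterior, the inference described above can be illustrated in a few lines. This is a toy linear-Gaussian instance of my own, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Additive noise model: y = theta * x + eps, eps ~ N(0, 1), prior theta ~ N(0, 1).
theta_true = 2.0
x = rng.normal(size=200)
y = theta_true * x + rng.normal(size=200)

# Conjugate Gaussian posterior for theta:
#   precision = 1 + sum(x^2),  mean = sum(x * y) / precision
post_prec = 1.0 + (x ** 2).sum()
post_mean = (x * y).sum() / post_prec

# With 200 observations, the posterior mean should sit close to theta_true.
```

For nonlinear [math]f[/math] there is no such closed form, which is why the paper turns to variational inference or MCMC.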
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x=g(\epsilon | \theta), \quad \epsilon \sim s(\cdot) [/math]
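The sampling step above can be sketched as noise pushed through a nonlinear parametric function, yielding samples whose density is never written down. All shapes and parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(eps, theta):
    # A small two-layer transform of the noise; tanh provides the nonlinearity.
    W1, b1, W2, b2 = theta
    h = np.tanh(eps @ W1 + b1)
    return h @ W2 + b2

theta = (rng.normal(size=(2, 8)), np.zeros(8),
         rng.normal(size=(8, 1)), np.zeros(1))
eps = rng.normal(size=(1000, 2))   # eps ~ s(.) = standard normal
x = g(eps, theta)                  # samples from the implicit distribution
```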
The causal diagram has changed to:
They used a fully connected neural network with a sufficient number of hidden units to approximate each causal mechanism. Below is the formal description.

Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the scenario where it is unobserved is considered.
Causal Inference with a Latent Confounder
Similar to before, the quantity of interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also taken into consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposed a new method which includes the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], which explains how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 places previous methods on a rigorous footing within the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math], as there are infinitely many of them.
Implicit Causal Model with a Latent Confounder
This section describes the algorithm and functions for implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of the confounders is set to a standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen so that the latent space matches the true population structure as closely as possible.
Generative Process of SNPs [math]x_{nm}[/math].
Given how each SNP is coded,
the authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs a full [math]N*M[/math] matrix due to the variables [math]w_m[/math], which act as principal components, as in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
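The SNP generative process can be sketched with a plain logistic factor model; the paper's neural variant would replace the inner product with a network. The dimensions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 50, 100, 3            # individuals, SNPs, confounder dimension

z = rng.normal(size=(N, K))     # confounders z_n ~ Normal(0, I)
w = rng.normal(size=(M, K))     # per-SNP factors w_m

logits = z @ w.T                        # N x M matrix of logits
pi = 1.0 / (1.0 + np.exp(-logits))      # pi_nm in (0, 1)

x = rng.binomial(2, pi)         # x_nm ~ Binomial(2, pi_nm): allele counts in {0,1,2}
```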
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
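The trait mechanism can likewise be sketched as a small network that takes the SNP vector, the confounder, and a noise draw, and returns a scalar. This is a toy stand-in with invented shapes, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 100, 3                   # number of SNPs, confounder dimension

def f_y(x_n, z_n, eps, params):
    # Two-layer net over the concatenated inputs; output is one trait value.
    W1, W2 = params
    inp = np.concatenate([x_n, z_n, [eps]])
    return float(np.maximum(inp @ W1, 0.0) @ W2)   # ReLU hidden layer

params = (rng.normal(size=(M + K + 1, 16)) * 0.1, rng.normal(size=16))
y = f_y(rng.binomial(2, 0.3, size=M), rng.normal(size=K), rng.normal(), params)
```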
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key of applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors therefore applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
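The Normal variational family mentioned above is just a set of per-variable Gaussians; with the reparameterization trick, drawing from it looks like the following (names and shapes are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q(mu, log_sigma, rng):
    # Reparameterized draw: q = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(log_sigma) * eps

# Variational parameters for a single confounder z_n of dimension 3.
mu_z, log_sigma_z = np.zeros(3), np.zeros(3)
z_sample = sample_q(mu_z, log_sigma_z, rng)
```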
Empirical Study
The authors performed simulation on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]

The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives are SNPs correctly identified as having a causal relation with the trait; false positives are SNPs reported as causal when they are not. The closer the rate is to 1, the better the model, since false positives count as wrong predictions.
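The accuracy described here is exactly the precision, TP / (TP + FP); a one-line helper for reference:

```python
def precision(true_positives, false_positives):
    # Fraction of reported causal SNPs that are truly causal.
    return true_positives / (true_positives + false_positives)
```

For example, a method reporting 10 causal SNPs of which 9 are real scores `precision(9, 1)`, i.e. 0.9.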
The results presented above show that the implicit causal model has the best performance among these four models in every situation. In particular, the other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, whereas the ICM still achieves a high rate. The only method comparable to the ICM is GCAT, when applied to simpler configurations.
Real-data Analysis
They also applied the ICM to a GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), and the same preprocessing as Song et al. was used. Ten implicit causal models were fitted, one for each trait. For each of the 10 models the dimension of the confounders was set to six, the same as in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256. See Table 2 for comparable models.
The numbers in the above table are the numbers of significant loci for each of the 10 traits. The numbers for the other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples), are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and at the population level, but can also adjust for latent confounders by incorporating the latent variables into the model.
In the simulation study, the authors showed that the implicit causal model beats other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. Its main contribution is to connect statistical genetics with machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too simple and far from realistic. Although the simulation study shows competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors themselves note. SNPs are not completely independent of each other; they are often correlated when alleles are at nearby loci. The authors did not consider this complex case; rather, they only considered the simplest case, in which all the SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain a causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
2.1 IS FINALLY OUT
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
?muzik wrote: OH MY GOD 2.1 IS FINALLY OUT
I waited over a year for this.
Saka wrote: ?muzik wrote: OH MY GOD 2.1 IS FINALLY OUT
http://geometry-dash.wikia.com/wiki/Update_2.1
Code: Select all
/bin/ls
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Haha, brilliant! A for awesome wrote: Messing around with Bash. I just figured out that if you repeat the command (separated by spaces) exactly N times, and recursively run its output, it will run exactly N times before producing an error.
Code: Select all
/bin/ls
If you let:
Code: Select all
one="/bin/ls"
Code: Select all
y="$one $x"
Code: Select all
y=`$x`
Code: Select all
z="$w $x $y"
Code: Select all
z=`echo $x | sed "s:$one:$y:g"`
! mono-clinic I beat Nine Circles (I am epilepsy-proof!) and had 10% at Conical Depression. muzik wrote: Trying to beat Problematic in Geometry Dash. And failing horribly like I do at everything. https://www.youtube.com/watch?v=-UirK_v1gsk
Now 2.1 is out and I'm really sad that I can't download it.
EDIT: Have 2.101.
How come?gameoflifemaniac wrote:I had 71% at Nine Circles (I am epilepsy-proof!) and 10% at Conical Depression.muzik wrote:Trying to beat Problematic in Geometry Dash. And failing horribly like I do at everything. https://www.youtube.com/watch?v=-UirK_v1gsk Now 2.1 is out and I'm really sad that I can't download it.
Right now I'm trying to wrap my head around composing in the 5/4 time signature.
I always imagine this as a tech support forum. muzik wrote: I just finished all of my prelim exams today. Today was my computing one, and I'm ashamed to say in a community like this that I probably did absolutely terribly.
Tomorrow I will take some tests for middle school.
You should really try 5/8 once you're finished with that. I feel like it's one of the most underused time signatures, and I've come up with really interesting things in it.muzik wrote:Right now I'm trying to wrap my head around composing in the 5/4 time signature.
Code: Select all
#Because this is the sandbox, I can post random incomprehensible things that, redundantly, no one can understand.|-`-i - | --i-a/ |- -i-|-i--| -`-i - | --i-a / |--i - |--i-c-dcg_c^b._-ag-ab.ab.agfg- cgc^-dcg_c^b._-ag-d^efedcb._c^---
I've written a song in the 7/4 time signature. 'song'. It was a birthday gift for my friendo a little while ago.muzik wrote:Right now I'm trying to wrap my head around composing in the 5/4 time signature.
Current rule interest: B2ce3-ir4a5y/S2-c3-y
Sorry about the double post.muzik wrote:Right now I'm trying to wrap my head around composing in the 5/4 time signature.
Today in choir, the director tried to trick us and gave us a sight-reading exercise in 5/4. We failed miserably for the most part, except for me and a few other band people -- we're working on a piece with a long 5/4 section in band at the moment, so we weren't thrown off.
Sort of related, I made this song a few years ago that starts off in an odd time signature, I'm not sure which because I have no idea how time signatures work, but it's like bars of length 7,8,6,8 repeating once it gets going.drc wrote:I've written a song in the 7/4 time signature. 'song'. It was a birthday gift for my friendo a little while ago.muzik wrote:Right now I'm trying to wrap my head around composing in the 5/4 time signature. Yay, someone else that does origami! I used to do a whole bunch of modular, geometric-y type stuff a while back because I was doing a finance apprenticeship and had access to copious amounts of post-it notes:gameoflifemaniac wrote:Folding origami magic ball ( ͡° ͜ʖ ͡°)
Just googled the origami magic ball, assume it's the one you're making, it looks crazy! And difficult to put together
If you want, you could try to make an origami icosahedron. It looks pretty difficult to put together, though:Lewis wrote:Yay, someone else that does origami! I used to do a whole bunch of modular, geometric-y type stuff a while back because I was doing a finance apprenticeship and had access to copious amounts of post-it notes: (image) Just googled the origami magic ball, assume it's the one you're making, it looks crazy! And difficult to put together
Code: Select all
x = 81, y = 96, rule = LifeHistory
58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27.A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A$4.2A18$4.2A$4.2A2.2A$8.2A!
In math class Gamedziner wrote: If you want, you could try to make an origami icosahedron. It looks pretty difficult to put together, though. Lewis wrote: Yay, someone else that does origami! I used to do a whole bunch of modular, geometric-y type stuff a while back because I was doing a finance apprenticeship and had access to copious amounts of post-it notes: (image) Just googled the origami magic ball, assume it's the one you're making, it looks crazy! And difficult to put together (Mesh of paper)
Geometry
Everyone was making cubes
And pyramids
I made an icosahedron
True story, btw
I actually have one of those at home. It uses a different construction technique, though.Gamedziner wrote:If you want, you could try to make an origami icosahedron. It looks pretty difficult to put together, though: [image]
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
We have just had a "New Year's Eve" meal. Happy new "Rooster" year!
An awesome gun firing cool spaceships:
Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry
2bo$2o$o$obo$b2o!
I have actually quit the Magic Ball, but I have made a nice Spring into Action!Lewis wrote:Sort of related, I made this song a few years ago that starts off in an odd time signature, I'm not sure which because I have no idea how time signatures work, but it's like bars of length 7,8,6,8 repeating once it gets going.drc wrote:I've written a song in the 7/4 time signature. 'song'. It was a birthday gift for my friendo a little while ago.muzik wrote:Right now I'm trying to wrap my head around composing in the 5/4 time signature. Yay, someone else that does origami! I used to do a whole bunch of modular, geometric-y type stuff a while back because I was doing a finance apprenticeship and had access to copious amounts of post-it notes:gameoflifemaniac wrote:Folding origami magic ball ( ͡° ͜ʖ ͡°) Just googled the origami magic ball, assume it's the one you're making, it looks crazy! And difficult to put together
http://imgur.com/a/glnvu
https://www.reddit.com/r/Minecraft/comm ... spawn_for/ |
Latent Position Two-Graph Testing¶
[1]:
import numpy as np
np.random.seed(88889999)
from graspy.inference import LatentPositionTest
from graspy.embed import AdjacencySpectralEmbed
from graspy.simulations import sbm, rdpg
from graspy.utils import symmetrize
from graspy.plot import heatmap, pairplot
%matplotlib inline
Generate a stochastic block model graph to model as a random dot product graph¶
To start, we generate a binary stochastic block model graph (SBM). An SBM is composed of ‘communities’ or ‘blocks,’ where a node’s block membership in a graph determines its probability of connection to the other nodes in the graph.
[2]:
n_components = 4  # the number of embedding dimensions for ASE
P = np.array([[0.9, 0.11, 0.13, 0.2],
              [0, 0.7, 0.1, 0.1],
              [0, 0, 0.8, 0.1],
              [0, 0, 0, 0.85]])
P = symmetrize(P)
csize = [50] * 4
A = sbm(csize, P)
X = AdjacencySpectralEmbed(n_components=n_components).fit_transform(A)
heatmap(A, title='4-block SBM adjacency matrix')
pairplot(X, title='4-block adjacency spectral embedding')
[2]:
<matplotlib.axes._subplots.AxesSubplot at 0x127829400>
[2]:
<seaborn.axisgrid.PairGrid at 0x127a0c7b8>
In the adjacency matrix above, there is a clearly defined block structure corresponding to the 4 communities in the graph that we established. On the right, we see the adjacency spectral embedding (ASE) of this graph. ASE(A) recovers an estimate of the latent positions of \(A\). Latent positions refer to the idea of a random dot product graph (RDPG), which can be modeled as follows:
For an adjacency matrix \(A \in \mathbb{R}^{n \times n}\), the probability of an edge existing between node \(i\) and node \(j\) (i.e., whether or not \(A_{ij}\) is a 1) is determined by the matrix \(P \in \mathbb{R}^{n \times n}\)
\(P = XX^T\), where \(X \in \mathbb{R}^{n \times d}\) is referred to as the latent positions of the graph: each node \(n_i\) is modeled as having a hidden, usually unobserved location in \(\mathbb{R}^d\) (we'll call it \(x_i\)). The probability of an edge existing between \(n_i\) and \(n_j\) is equal to the dot product \(x_i \cdot x_j\)
ASE is one way to obtain an estimate of the latent positions of a graph, \(\hat{X}\)
In the above embedding, we see 4 clusters of nodes corresponding to the 4 blocks that we prescribed. ASE recovers the fact that all of the nodes in a block have similar latent positions. So, RDPGs can also model an SBM graph.
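The edge-probability construction described above can be sketched in a few lines of plain NumPy (the latent positions and seed here are illustrative only, not the ones used elsewhere in this tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent positions: two "communities" in R^2
X = np.array([[0.8, 0.1],
              [0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.8]])

# RDPG edge-probability matrix: P_ij = x_i . x_j
P = X @ X.T

# Sample a symmetric adjacency matrix with no self-loops from P
A = (rng.random(P.shape) < P).astype(int)
A = np.triu(A, k=1)
A = A + A.T
```

Within a block the edge probability is \(0.8 \cdot 0.8 + 0.1 \cdot 0.1 = 0.65\) and across blocks it is \(0.16\), so blocks are denser internally, exactly the SBM-like behavior described above.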
Sample new RDPGs from this latent position¶
Given the estimate of X, we now sample two new RDPGs from the same latent position above
[3]:
A1 = rdpg(X, loops=False, rescale=False, directed=False)
A2 = rdpg(X, loops=False, rescale=False, directed=False)
Xhat1 = AdjacencySpectralEmbed(n_components=n_components).fit_transform(A1)
Xhat2 = AdjacencySpectralEmbed(n_components=n_components).fit_transform(A2)
heatmap(A1, title='Sampled RDPG 1 adjacency matrix')
heatmap(A2, title='Sampled RDPG 2 adjacency matrix')
pairplot(Xhat1, title='Sampled RDPG 1 adjacency spectral embedding')
pairplot(Xhat2, title='Sampled RDPG 2 adjacency spectral embedding')
[3]:
<matplotlib.axes._subplots.AxesSubplot at 0x12c535390>
[3]:
<matplotlib.axes._subplots.AxesSubplot at 0x12c6287b8>
[3]:
<seaborn.axisgrid.PairGrid at 0x12bf77a20>
[3]:
<seaborn.axisgrid.PairGrid at 0x1279c46d8>
Qualitatively, both of the simulated RDPGs above match the behavior we would expect, with 4 clear blocks and the corresponding 4 clusters in the embedded space. But, can we say they were generated from the same latent positions?
Latent position test where null is true¶
Now, we want to know whether the above two graphs were generated from the same latent position. We know that they were, so the test should predict that the differences between Sampled RDPG 1 and 2 (up to a rotation, see below) are no greater than those differences observed by chance. In this case, we will use the
LatentPositionTest in
GraSPy because we know the true alignment between the vertices of the two graphs we are testing. In other words, node \(i\) in graph 1 can be thought of as equivalent to node \(i\) in graph 2 because of the way we generated these graphs.
In other words, we are testing \(H_0: X^{(1)} = X^{(2)}R\) against \(H_A: X^{(1)} \neq X^{(2)}R\),
and want to see that the p-value for the latent position test is high (fail to reject the null)
Here, R is an orthogonal rotation matrix found from solving the orthogonal procrustes problem (Note: this constraint can be relaxed for other versions of semipar)
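As a small aside, the orthogonal Procrustes alignment mentioned here has a closed-form solution via the SVD. A self-contained NumPy sketch (sizes and seed are arbitrary, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent positions, and a rotated copy of them
X1 = rng.normal(size=(50, 3))
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # a random orthogonal matrix
X2 = X1 @ Q

# Orthogonal Procrustes: R = argmin ||X1 R - X2||_F over orthogonal R,
# obtained from the SVD of X1^T X2
U, _, Vt = np.linalg.svd(X1.T @ X2)
R = U @ Vt

err = np.linalg.norm(X1 @ R - X2)   # near zero: the rotation is recovered
```

This is why latent positions are only identifiable up to an orthogonal transformation: rotating every \(x_i\) by the same \(R\) leaves all dot products, and hence \(P\), unchanged.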
Note that LatentPositionTest.fit() may take several minutes
[4]:
lpt = LatentPositionTest(n_bootstraps=200, n_components=n_components)
lpt.fit(A1, A2)
print('p = {}'.format(lpt.p_value_))
[4]:
0.8325
p = 0.8325
We see that the corresponding p-value is high, indicating that the observed differences between latent positions of Sampled RDPG 1 and 2 are likely due to chance
Matched test where the null is false¶
Now, we distort the latent position of one of the sampled graphs by adding noise. The matched test should have a low p-value, indicating that we should reject the null hypothesis
[5]:
A3 = rdpg(X, loops=False, rescale=False, directed=False)
A4 = rdpg(X + np.random.normal(0.05, 0.02, size=(X.shape)), loops=False, rescale=False, directed=False)
Xhat3 = AdjacencySpectralEmbed(n_components=n_components).fit_transform(A3)
Xhat4 = AdjacencySpectralEmbed(n_components=n_components).fit_transform(A4)
heatmap(A3, title='Sampled RDPG 3 adjacency matrix')
heatmap(A4, title='Sampled RDPG 4 (distorted) adjacency matrix')
pairplot(Xhat3, title='Sampled RDPG 3 adjacency spectral embedding')
pairplot(Xhat4, title='Sampled RDPG 4 (distorted) adjacency spectral embedding')
[5]:
<matplotlib.axes._subplots.AxesSubplot at 0x12dc56438>
[5]:
<matplotlib.axes._subplots.AxesSubplot at 0x12e084470>
[5]:
<seaborn.axisgrid.PairGrid at 0x12bf6e860>
[5]:
<seaborn.axisgrid.PairGrid at 0x12e0a4080>
[6]:
lpt = LatentPositionTest(n_bootstraps=200, n_components=n_components)
lpt.fit(A3, A4)
print('p = {}'.format(lpt.p_value_))
[6]:
0.0175
p = 0.0175 |
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$.
Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$, and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is$$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx= \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right)\phantom. dt= \int_0^\infty \frac{dt}{t^2+1},$$which is the arctangent integral for $\pi/2$, QED.
The theorem is correct, and usually obtained as an application of contour integration, or of Fourier inversion ($\sin x / x$ is a multiple of the Fourier transform of the characteristic function of an interval). The poof, which is the first one I saw (given in a footnote in an introductory textbook on quantum physics), is not correct, because the integral does not converge absolutely. One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$ as a double integral in the same way, obtaining$$\int_0^M \sin x \frac{dx}{x} =\int_0^\infty \frac{dt}{t^2+1}- \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$and showing that the second integral approaches $0$ as $M \rightarrow \infty$; but this detour makes for a much less appealing alternative to the usual proof by complex or Fourier analysis.
Still the double-integral trick can be used legitimately to evaluate $\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such that the integral converges absolutely (that is, with $2 \leq n \leq m$; NB unlike the contour or Fourier approach this technique applies also when $m \not\equiv n \bmod 2$). Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain$$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right)\phantom. dt,$$in which the inner integral is a rational function of $t$, and then the integral with respect to $t$ is elementary. For example, when $m=n=2$ we find$$\int_0^\infty \sin^2 x \frac{dx}{x^2}= \int_0^\infty t \frac2{t^3+4t} dt= 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$As a bonus, we recover a correct proof of our starting theorem by integration by parts:
$$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$since $2 \sin x \cos x = \sin 2x$, the desired$\int_0^\infty \sin x \phantom. dx/x = \pi/2$follows by a linear change of variable.
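As a quick numerical sanity check (not a proof), the $m=n=2$ evaluation can be confirmed with SciPy's adaptive quadrature:

```python
import numpy as np
from scipy.integrate import quad

# np.sinc(t) = sin(pi t)/(pi t), so np.sinc(x/np.pi) = sin(x)/x;
# this also handles the removable singularity at x = 0.
val, err = quad(lambda x: np.sinc(x / np.pi) ** 2, 0, np.inf, limit=200)

# val should be close to pi/2 = 1.5707963...
```

Unlike $\int_0^\infty \sin x \, dx/x$ itself, this integrand is absolutely convergent, which is exactly why both the double-integral trick and routine numerical quadrature behave well on it.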
Exercise. Use this technique to prove that $\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$, and more generally$$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$when the integral converges. [Both are in Gradshteyn and Ryzhik, page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited to D. Bierens de Haan, Nouvelles tables d'intégrales définies, Amsterdam 1867; the general case is 3.827#1, from Gröbner and Hofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.]
Answer
Second quadrant
Work Step by Step
$\theta=-3485^{\circ}$ is coterminal with $-3485^{\circ}+10\cdot360^{\circ}=115^{\circ}$. $\cos\theta = -0.423$ $\sin\theta = 0.906$ As $\cos\theta$ is negative and $\sin\theta$ is positive, the angle lies in the second quadrant.
Angle: \(\alpha\)
Trigonometric functions: \(\sin \alpha,\) \(\cos \alpha,\) \(\tan \alpha,\) \(\cot \alpha,\) \(\sec \alpha,\) \(\csc \alpha\)
Set of integers: \(\mathbb{Z}\)
Integers: \(n\)
Trigonometric identities establish a connection between trigonometric functions of the same argument (angle \(\alpha\)).

Pythagorean trigonometric identity
\({\sin ^2}\alpha + {\cos ^2}\alpha = 1\)
This identity is the result of applying the Pythagorean theorem to a triangle in the unit circle.

Relationship between the cosine and tangent
\(\large{\frac{1}{{\cos^2}\alpha}}\normalsize – {\tan ^2}\alpha = 1\) or \(\sec^2\alpha – {\tan ^2}\alpha = 1.\)
This identity follows from the Pythagorean trigonometric identity and is obtained by dividing both sides by \(\cos^2 \alpha\). It is assumed that \(\alpha \ne \large{\frac{\pi}{2}}\normalsize + \pi n,\) \(n \in \mathbb{Z}\).

Relationship between the sine and cotangent
\(\large{\frac{1}{{\sin^2}\alpha}}\normalsize – {\cot ^2}\alpha = 1\) or \(\csc^2\alpha – {\cot ^2}\alpha = 1.\)
This formula also follows from the Pythagorean trigonometric identity; it is obtained by dividing both sides by \(\sin^2 \alpha\). It is assumed that \(\alpha \ne \pi n,\) \(n \in \mathbb{Z}\).

Definition of tangent
\(\tan \alpha = \large{\frac{\sin \alpha}{\cos \alpha}}\normalsize ,\) where \(\alpha \ne \large{\frac{\pi}{2}}\normalsize + \pi n,\) \(n \in \mathbb{Z}\).

Definition of cotangent
\(\cot \alpha = \large{\frac{\cos \alpha}{\sin \alpha}}\normalsize ,\) where \(\alpha \ne \pi n,\) \(n \in \mathbb{Z}\).

Consequence of the definitions of tangent and cotangent
\(\tan \alpha \cdot \cot \alpha = 1,\) where \(\alpha \ne \large{\frac{\pi n}{2}}\normalsize ,\) \(n \in \mathbb{Z}.\)

Definition of secant
\(\sec \alpha = \large{\frac{1}{\cos \alpha}}\normalsize,\) \(\alpha \ne \large{\frac{\pi}{2}}\normalsize +\pi n,\) \(n \in \mathbb{Z}.\)

Definition of cosecant
\(\csc \alpha = \large{\frac{1}{\sin \alpha}}\normalsize,\) \(\alpha \ne \pi n,\) \(n \in \mathbb{Z}.\)
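These identities are easy to spot-check numerically. A minimal sketch in Python (the sample angle is arbitrary, chosen to avoid the excluded values \(\pi n/2\)):

```python
import math

alpha = 0.7  # arbitrary angle avoiding the excluded values pi*n/2

sin_a, cos_a = math.sin(alpha), math.cos(alpha)
tan_a, cot_a = sin_a / cos_a, cos_a / sin_a
sec_a, csc_a = 1 / cos_a, 1 / sin_a

assert abs(sin_a**2 + cos_a**2 - 1) < 1e-12   # Pythagorean identity
assert abs(sec_a**2 - tan_a**2 - 1) < 1e-12   # secant-tangent form
assert abs(csc_a**2 - cot_a**2 - 1) < 1e-12   # cosecant-cotangent form
assert abs(tan_a * cot_a - 1) < 1e-12         # tan * cot = 1
```

A check like this does not prove the identities, but it catches sign and transcription errors quickly when working with them.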
Can you use the digits 2, 0, 1 and 7 each only
once to create the number 88?
What about this
$\left(\frac{0!}{.\overline1}\right)^2 + 7 = 88$
where
$.\overline1 = 0.1111\ldots$
Because modern math is done with computers, here's some Python:
>>> int(str(0 + 1 + 7) * 2)
88
For that matter:
In base 86: $12 + 0*7$
If floor were allowed, then this works:
$\left\lfloor\sqrt{10!!}\right\rfloor + 27 $
because
$10!!$ is $10\cdot 8\cdot 6\cdot 4\cdot 2 = 3840 $
$\sqrt{3840} = 61.9677335393\cdots.$
The only digits used here are 2, 0, 1, 7 to reach 88:
$(10+(i\times i))^2+7 = 88$
We can do it without the $0$...
$S=\{1,2,7\}$
$(\sum{S}-|S|)\times\prod{S}-\sum{S}$ (using the sum, $\sum$, cardinality, $||$, and product,$\prod$, of the set $S$.) Evaluated: $=(10-3)\times 14-10$ $=7\times 14-10$ $=98-10$ $=88$
So, obviously we could just add zero afterwards.
Mind you, I suppose that we could also do it with just one of the numbers in that case too.
$S = \{x\}$
$(|S|+|S|+|S|+|S|+|S|+|S|+|S|+|S|+|S|+|S|+|S|)\times(|S|+|S|)\times(|S|+|S|)\times(|S|+|S|)$ $=(1+1+1+1+1+1+1+1+1+1+1)\times(1+1)\times(1+1)\times(1+1)$ $=11\times2 \times2 \times2$ $=88$
...so
For $x = $...$7$, $2$, or $1$ just multiply the rest together and add them on. For $x=0$ one can add $(7\times 2)\pmod{1}=0$, or $(7+1)\pmod{2}=0$.
The only question is: Does doing what I have done here count as using the given numbers more than once?
Here is an alternative, sneaky way...
Subtract the one from the seven, turn the resulting six upside-down, append the zero, then subtract the two. $7-1=6$ $\text{turn}(6)=9$ $\text{append}(9,0)=90$ $90-2=88$
If ceiling or nearest integer function is allowed,
$\lceil{\tan^{-1}(27+0!)}\rceil = 88^{\circ}$
As a perl one-liner you could write:
perl -le 'for ($_=-1-2,$i = 0; $i<7; $i++) {$_+= $i*$i }; print'
or
without a zero:
perl -le 'print ((7+1)x2)'
or
without a zero OR a two in the bash shell:
x=$((7+1)) && echo $x$x
or
without a zero, one, or two in bash:
false || x=$((7+$?)) && echo $x$x
or
without any numbers at all:
false || x=$(($?+$?+$?+$?+$?+$?+$?+$?)) && echo $x$x
If we use base 36
We now have access to the digits 2, 0, 1, A, N, D, 7. So:
$= (N \times D) - 7 + \left(\frac{A}{2}\right) - 1 + 0$ $= 8B - 7 + 5 - 1 + 0$ $= 8B - 3$ $= 88$ Using base 10 math gives us 2, 0, 1, 10, 13, 23, 7 $= (23 \times 13) - 7 + \left(\frac{10}{2}\right) - 1 + 0$ $= 299 - 7 + 5 - 1 + 0$ $= 299 - 3$ $= 296$ 296 is 88 in base 36
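The base-36 arithmetic above can be spot-checked with Python's built-in `int()` parser, which accepts bases up to 36 (letter digits: A=10, D=13, N=23):

```python
# Parse the letter digits in base 36
N = int('N', 36)   # 23
D = int('D', 36)   # 13
A = int('A', 36)   # 10

# (N x D) - 7 + (A / 2) - 1 + 0, evaluated in ordinary integers
value = (N * D) - 7 + (A // 2) - 1 + 0

# '88' read as a base-36 numeral is 8*36 + 8 = 296
assert value == int('88', 36) == 296
```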
In base 9: $$88 = 102 - \lceil\sqrt 7\rceil$$
Here is my first answer after about 5 minutes of brute-force checks!
$\lceil\log{\sqrt{102!}}\rceil+7=88$
where log means logarithm in base 10.
By the way, as a wild guess, I think that 88 is very likely to be the OP's birth year.
Here is yet another one.
$$\lceil{\sqrt{\sqrt{\left(\sqrt{\sqrt{7!}}\right)!} \times \left( 2 + 0! + 1\right)!}\rceil}$$
Breaking it down:
$$7! = 5040$$ $$\sqrt{7!} = \sqrt{5040} = 70.992957$$ $$\sqrt{\sqrt{7!}} = \sqrt{70.992957} = 8.42573$$ $$\left(\sqrt{\sqrt{7!}}\right)! = 8.42573! = 101358.44566$$ $$\sqrt{\left(\sqrt{\sqrt{7!}}\right)!} = \sqrt{101358.44566} = 318.368411$$ $$ \sqrt{\left(\sqrt{\sqrt{7!}}\right)!} \times \left(2 + 0! + 1\right)! = 318.368411 \times 24 = 7640.84$$ $$\sqrt{\sqrt{\left(\sqrt{\sqrt{7!}}\right)!} \times \left(2 + 0! + 1\right)!} = \sqrt{7640.84} = 87.412$$
$$\lceil{\sqrt{\sqrt{\left(\sqrt{\sqrt{7!}}\right)!} \times \left( 2 + 0! + 1\right)!}\rceil} = \lceil{ 87.412 \rceil} = 88$$
Another use of mathematical functions and flooring...
$\lfloor\ln\Gamma(\frac{7\times10}{2})\rfloor$
$=\lfloor\ln\Gamma(35)\rfloor$ $=\lfloor88.58082754219768\rfloor$ $=88$ Reference: $\ln\Gamma(x)$
$0!-(.7-.1)\times.2$
$= 1 - (0.6)(0.2)$ $= 1 - 0.12 = 0.88$ remove the decimal point to get $088=88$.
If subfactorial is allowed:
$!(7-2)\times(1+0!)=44\times2=88$
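The subfactorial (derangement count) in the last expression is easy to verify with the standard alternating-series formula; a small Python sketch:

```python
from math import factorial

def subfactorial(n):
    # Number of derangements: !n = n! * sum_{k=0}^{n} (-1)^k / k!
    return round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))

# !(7-2) * (1 + 0!) = !5 * 2 = 44 * 2 = 88
result = subfactorial(7 - 2) * (1 + factorial(0))
```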
Now showing items 1-10 of 26
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP) [1]. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
The target signal is generated by the following analytically solvable hybrid chaotic oscillator
\begin{equation}\label{eq:1} \ddot{u} - 2\beta \dot{u} +(\omega^2+\beta^2)(u-s)=0 \end{equation}
where
$\omega$ determines the symbol period as $T = \frac{2\pi}{\omega}$.
$\beta \in (0,T^{-1}\ln 2)$ affects the fluctuating amplitude.
$s \in \{\pm 1\}$ is the random bipolar symbol sequence driving the system.
Its typical continuous waveform $u(t)$ for $f=100 \, \mathrm{Hz}$ is as follows:
I am interested in estimating the basis frequency $f$ of the signal. However, the amplitude spectrum of $u(t)$ has no obvious feature that can be used to estimate the frequency. Hence, I calculate
$$z(t) = |u(t)| - 1$$
The amplitude spectrum of $z(t)$ is as follows
Obviously, we can see a spike at $f = 100 \, \mathrm{Hz}$. But it is hard to calculate the accurate expression of $z(f)$ since $u(t)$ is a kind of non-stationary signal.
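One way to build intuition for why a rectification-type nonlinearity such as $|u(t)|$ regenerates spectral lines is the classical full-wave-rectifier example: $|\sin(2\pi f_0 t)|$ is periodic with period $1/(2f_0)$, so its spectrum has a strong line at $2f_0$ even though the sign information is destroyed. A minimal Python sketch of this (plain NumPy; parameters chosen only for illustration):

```python
import numpy as np

fs = 10_000.0                   # sample rate, Hz
f0 = 100.0                      # tone frequency, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)

x = np.sin(2 * np.pi * f0 * t)
z = np.abs(x)                   # full-wave rectification

# Spectrum of the rectified signal with DC removed; the dominant
# line sits at 2*f0, the fundamental of |sin|.
Z = np.abs(np.fft.rfft(z - z.mean()))
freqs = np.fft.rfftfreq(len(z), 1.0 / fs)
peak = freqs[np.argmax(Z)]
```

For the chaotic waveform here the spike appears at $f$ rather than $2f$, since $|u(t)|-1$ retains structure at the symbol period $T = 1/f$; but the underlying mechanism, a nonlinearity concentrating power at harmonically related lines, is the same.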
The STFT time-frequency spectrum is shown below
As shown in the STFT results, in some time intervals, no obvious frequency component can be found. However, in the part of approximately periodic waveform, the frequency component at $f = 100 \, \mathrm{Hz}$ is strong.
Is there any theory that can support my method of frequency estimation from the FFT of the whole $z(t)$?
Here is the MATLAB code for generating the chaotic waveform from an analytical expression (not provided above) and my current experiments on frequency estimation.
clear; close all;
finit = 100; % set frequency
% finit = 2000;
% finit = 2;
om = 2*pi*finit;
be = om/(2*pi)*log(2) - 0.01*rand(1,1);
T = 2*pi/om; % Period
N = 100; % number of symbols
tall = N*T; % total time
h = T*0.01; % time step
t = 0:h:tall-h;
ut = zeros(1,length(t));
u_in = 2*rand(1)-1;
% u_in = 0.99;
for i = 1:N
    if u_in >= 0
        s_in = 1;
    else
        s_in = -1;
    end
    t_f = (i-1)*T:h:i*T-h; % Time of each frame
    n_f = (i-1)*length(t_f)+1:i*length(t_f);
    ut(n_f) = s_in + (u_in-s_in)*exp(be*(t_f+(1-i)*T)).*...
        (cos(om*t_f) - be/om*sin(om*t_f));
    u_in = s_in + (u_in-s_in)*exp(be*T)*cos(om*i*T);
end
figure
plot(t, ut)
xlabel('Time t/s');
title('u(t)')
y = abs(ut) - 1; % Observed signal

%% FFT spectrum
L = length(ut);
Ts_v = t./(0:L-1);
Fs = ceil(1/Ts_v(2));
deltaf = 0.1;
NFFT = floor(Fs/deltaf);
fs = Fs;
f = Fs/2*linspace(0,1,floor(NFFT/2+1));
Y = fft(y,NFFT)/L;
Yabs = 20*log10(abs(Y(1:floor(NFFT/2+1))));
figure;
plot(f,Yabs);
xlim([0,max(f)/4]);
ylim([-100,0]);
xlabel('Frequency f/Hz');
title('FFT Spectrum');

%% Wavelet Transform Time-Frequency spectrum
wavename = 'cmor3-3';
totalscal = 256*2;
Fc = centfrq(wavename);
c = 2*Fc*totalscal;
scals = c./(1:totalscal);
f = scal2frq(scals,wavename,1/fs);
coefs = cwt(y,scals,wavename);
figure;
subplot(211)
plot(t,y,'linewidth',1)
xlim([0 1])
ylim([-1.5 1.5])
subplot(212)
imagesc(t,f,abs(coefs));
set(gca,'YDir','normal')
ylim([0 200]);
% colorbar;
xlabel('Time t/s');
ylabel('Frequency f/Hz');
title('Wavelet Time-Frequency Spectrum');

%% STFT Time-Frequency spectrum
window_len = 10*2*pi/om/h;
noverlap = 50;
nfft = NFFT;
[~,F,T,P] = spectrogram(y,hamming(window_len),noverlap,nfft,fs);
figure;
subplot(211)
plot(t,y,'linewidth',1)
xlabel('Time t/s');
title('|u(t)|-1')
xlim([0 1])
ylim([-1.5 1.5])
subplot(212)
surf(T,F,abs(P),'edgecolor','none')
axis tight;
view(0,90)
ylim([0 200]);
xlabel('Time t/s');
ylabel('Frequency f/Hz');
title('STFT Time-Frequency Spectrum');
It is common to read that the lifetime of a virtual particle is given by the uncertainty relation: $$\tau \sim \frac{\hbar}{E}$$ on the premise that the virtual particle 'borrows energy'. This statement is in fact wrong (at least I think it is) since energy is conserved in Feynman diagrams and thus no energy needs to be borrowed. Given this, how do we actually determine the lifetime of a virtual particle, and why is it not just the same as for the real particle?
That relation you quote has nothing to do with borrowing energy. It is just Heisenberg's uncertainty principle. However, in my humble opinion, it is best not to ascribe any "reality" to virtual particles. They are just pictorial representations of terms called propagators, which appear when performing perturbation theory on a quantum field theory. There is very little understanding to be gained along the route you took in your question.
Virtual particles are defined only within the mathematical framework of Feynman diagram calculations for measurable quantities, such as cross sections and lifetimes.
This is the first order in an expansion to get the cross section of $e^-e^-$ scattering.
The wavy line represents a mathematical term under an integral; it is called a photon because it carries the quantum numbers of a photon but not the mass, which is off mass shell. Have a look at this lecture, where the propagator representing the virtual line mathematically has the mass of the named particle in the denominator, but the four-vector describing the virtual particle within the integral is off mass shell.
It is all under an integration and not real. Real, on-mass-shell particles are the incoming and outgoing dark lines. So there is no lifetime for individual virtual lines; a lifetime can be calculated only for the total interaction.
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Now showing items 1-10 of 27
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
This problem reminds me of tension field theory and related problems in studying the shape of inflated inextensible membranes (like helium balloons). What follows is far from a solution, but some initial thoughts about the problem.
First, since you're allowing creasing and folding, by Nash-Kuiper it's enough to consider short immersions $$\phi:P\subset\mathbb{R}^2\to\mathbb{R}^3,\qquad \|d\phi^Td\phi\|_2 \leq 1$$of the piece of paper $P$ into $\mathbb{R}^3$, the intuition being that you can always "hide" area by adding wrinkling/corrugation, but cannot "create" area. It follows that we can assume, without loss of generality, that $\phi$ sends the paper boundary $\partial P$ to a curve $\gamma$ in the plane.
We can thus partition your problem into two pieces: (I) given a fixed curve $\gamma$, what is the volume of the volume-maximizing surface $M_{\gamma}$ with $\phi(\partial P) = \gamma$? (II) Can we characterize $\gamma$ for which $M_{\gamma}$ has maximum volume?
Let's consider the case where $\gamma$ is given. We can partition $M_{\gamma}$ into
1) regions of pure tension, where $d\phi^Td\phi = I$; in these regions $M_{\gamma}$ is, by definition, developable;
2) regions where one direction is in tension and one in compression, $\|d\phi^Td\phi\|_2 = 1$ but $\det d\phi^Td\phi < 1$.
We need not consider $\|d\phi^Td\phi\|_2 < 1$ as in such regions of pure compression, one could increase the volume while keeping $\phi$ a short map.
Let us look at the regions of type (2). We can trace on these regions a family of curves $\tau$ along which $\phi$ is an isometry. Since $M_{\gamma}$ maximizes volume, we can imagine the situation physically as follows: pressure inside $M_{\gamma}$ pushes against the surface, and is exactly balanced by stress along inextensible fibers $\tau$. In other words, for some stress $\sigma$ constant along each $\tau$, at all points $\tau(s)$ along $\tau$ we have$$\hat{n} = \sigma \tau''(s)$$where $\hat{n}$ the surface normal; it follows that (1) the $\tau$ follow geodesics on $M_{\gamma}$, (2) each $\tau$ has constant curvature.
The only thing I can say about problem (II) is that for the optimal $\gamma$, the surface $M_\gamma$ must meet the plane at a right angle. But there are many locally-optimal solutions that are not globally optimal (for example, consider a half-cylinder (type 1 region) with two quarter-spherical caps (type 2 region); it has volume $\approx 1.236$ liters, less than Joriki's solution).
I got curious so I implemented a quick-and-dirty tension field simulation that optimizes for $\gamma$ and $M_{\gamma}$. Source code is here (needs the header-only Eigen and Libigl libraries): https://github.com/evouga/DaurizioPaper
Here is a rendering of the numerical solution, from above and below (the volume is roughly 1.56 liters).
EDIT 2: A sketch of the orientation of $\tau$ on the surface: |
Previous Article in This Series
Supporting Information
A Quick Review
In the previous article we covered the relationship between irradiance and illuminance in the context of light-sensitive electronic components. The first factor governing this conversion is the optical sensor’s sensitivity, i.e., the relationship between light intensity and output current (or digital counts, if the sensor is an IC that processes the raw sensor signal into digital information). The second factor—actually, a continuous function of factors—is the number, based on the spectral sensitivity of the human eye, that we use to convert irradiance (in watts per square meter) to illuminance (in lux). This number is 683 for electromagnetic radiation at 555 nm, and it decreases as follows for higher or lower wavelengths:
One way to measure illuminance, as discussed in the preceding article, is to use an optical sensor that is designed to approximate the spectral response of human vision. The datasheets for these devices usually have information that allows you to convert directly from output to lux; however, owing to the discrepancy between the ideal spectral response (i.e., the luminosity function) and the sensor’s spectral response, the accuracy of the measurement will vary depending on the spectral composition of the ambient illumination.
The RGB Approximation
A highly accurate illuminance measurement could be performed as follows: first, you need an optical sensor with a “flat” spectral response—i.e., the sensitivity is the same for every wavelength in the visible spectrum. Call this the wideband detector. In addition, you arrange in close proximity numerous narrowband detectors fine-tuned to a particular wavelength. The wideband detector produces an output proportional to the overall light intensity, with each wavelength contributing equally, and the narrowband detectors reveal the spectral composition of the light. You can then apply the luminosity function,
based on this spectral information, to the output of the wideband detector.
The precision achieved with this approach will be proportional to the number of narrowband detectors—more detectors means a more accurate representation of the actual spectral composition. Practical limitations quickly come into play, though, and furthermore—as discussed in the first article—highly precise illuminance measurements are unnecessary and, in a certain sense, impossible.
Thus, we can implement this approach but in a highly simplified form: we need a wideband detector with a fairly flat spectral response and three narrowband detectors. The obvious choice for the narrowband detectors are red, green, and blue—first, because RGB sensors are readily available, and second, because the RGB wavelengths divide the visible spectrum into three more or less equal portions:
This is the spectral sensitivity information for the Rohm RGBC sensor IC (p/n BH1745NUC) mentioned in the previous article. As you can see, the blue detector corresponds to about the lower one-third of the visible spectrum, the green detector corresponds to the middle one-third, and the red detector corresponds to the upper one-third. Note also that the response of the clear detector is fairly flat—actually, it is
very flat compared to the curves presented in the previous article, such as this one for the Fairchild phototransistor (p/n KDT00030TR) :
Nevertheless, the response of the Rohm part drops significantly at the lower wavelengths . . . I fully understand the difficulty this causes for those inclined to perfectionism—but remember, low-cost, low-complexity illuminance measurements are not an exercise in precision!
Step by Step
Let’s go through the process of calculating a lux measurement using the BH1745NUC. One piece of information we will need is the sensitivity of the clear detector, which we calculated in the preceding article as
\[\frac{160\ counts}{0.2\ \frac{W}{m^2}}=\frac{800\ counts}{\frac{W}{m^2}}=\frac{1\ count}{0.00125\frac{W}{m^2}}\]
1. We need to adjust the RGB measurements to compensate for the major differences in sensitivity. From the above plot we see that R is about 0.72 and B is about 0.56 when G is 1. Thus, we multiply the R and B values by the appropriate correction factor:
\[CF_R=\frac{1}{0.72}=1.39,\ \ \ CF_B=\frac{1}{0.56}=1.79\]
2. Determine the “portion” of the ambient light contained in the R, G, and B bands by adding up the three values and dividing each by the total. This approach relies on the assumption that the R, G, and B values represent all the irradiance in the upper, middle, and lower thirds of the visible spectrum, respectively. For example, let’s say the corrected RGB outputs are as follows:
\[R=75\ counts,\ \ G=100\ counts,\ \ B=75\ counts\]
The “irradiance portion” (IP) for each color is thus
\[IP_R=\frac{75}{250}=30\%,\ \ IP_G=\frac{100}{250}=40\%,\ \ IP_B=\frac{75}{250}=30\%\]
3. Find the irradiance-to-illuminance conversion factor for each color:
From this overlay plot we can estimate the following irradiance-to-illuminance factors, recalling that the maximum at 555 nm is 683:
\[IRtoIL_R=683\times0.3=205,\ \ IRtoIL_G=683\times0.9=615,\ \ IRtoIL_B=683\times0.06=41\]
4. Multiply the irradiance portions by the appropriate IRtoIL factor, and add up the results to determine the overall IRtoIL factor for the particular spectral composition of the currently measured illumination:
\[IRtoIL_{overall}=\left(205\times0.3\right)+\left(615\times0.4\right)+\left(41\times0.3\right)=320\]
5. Use the clear detector’s sensitivity to determine the wavelength-independent irradiance. Let’s say the clear output is 300 counts:
\[300\ counts\div\frac{800\ counts}{\frac{W}{m^2}}=0.375\ \frac{W}{m^2}\]
6. Finally—multiply the irradiance by the overall irradiance-to-illuminance conversion factor:
\[0.375\ \frac{W}{m^2}\times320=120\ lux\]
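The six steps above can be condensed into a short function. This is only a sketch: the function name is mine, the inputs are assumed to be the already-corrected R and B counts from step 1, and the constants 800, 205, 615, and 41 are the rough estimates read off the plots earlier in the article, not calibrated values.

```python
def lux_from_rgbc(r_corr, g, b_corr, clear_counts, sensitivity=800.0):
    # Step 2: irradiance "portion" of each color band
    total = r_corr + g + b_corr
    ip_r, ip_g, ip_b = r_corr / total, g / total, b_corr / total
    # Steps 3-4: spectrally weighted irradiance-to-illuminance factor
    ir_to_il = 205 * ip_r + 615 * ip_g + 41 * ip_b
    # Step 5: wavelength-independent irradiance from the clear channel (W/m^2)
    irradiance = clear_counts / sensitivity
    # Step 6: illuminance in lux
    return irradiance * ir_to_il

# Worked example from the text: corrected R, G, B = 75, 100, 75; clear = 300
lux = lux_from_rgbc(75, 100, 75, 300)   # roughly 120 lux
```

Plugging in the article’s example numbers reproduces the result above (319.8 as the overall conversion factor, about 120 lux).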
Discussion and Conclusion
If you read through this process a few times, I think you will find that it is quite intuitive. One way to think about the fundamental concept is the following: if the clear detector were illuminated by a pure red light and then a pure green light of the same intensity, the output would not change (assuming that the clear detector is equally sensitive to all wavelengths). This means that at least one of the corresponding lux values cannot be anywhere near correct, because the irradiance-to-illuminance factor for red is much lower than for green—in this case, 205 vs. 615. So what we are doing here
is finding the “average” conversion factor based on how much R, G, and B radiation is in the ambient light. If the light were pure red, IP_R would be 100%, and IP_G and IP_B would be 0%. Thus, the overall conversion factor would be the same as the conversion factor for red, i.e., 205. If the light were half pure red and half pure green, the overall conversion factor would be the average of the red and green conversion factors, i.e., (205 + 615)/2 = 410.
You might be wondering if the clear detector is really necessary here. That’s a difficult question. We could ignore the clear detector and simply interpret the sum of the three RGB measurements as the total irradiance, but we would end up ignoring irradiance that falls in the wavelengths not covered by the RGB sensitivity bands. However, if the wavelengths of this ignored irradiance are toward the edges of the luminosity function, it would actually be better to ignore it—if the RGB detectors don’t see it and the clear detector does, it will make an erroneously large contribution to the final lux value. I suppose the bottom line is that both approaches are feasible, with one or the other being more accurate depending on the spectral characteristics of the lighting conditions and of the clear detector—ignoring the clear detector is an easy way to eliminate error caused by its nonideal (i.e., non-wavelength-independent) sensitivity.
The procedure described here is perhaps more of a starting point. It should provide reasonably accurate illuminance measurements with minimal investment of time and money, but you should definitely consider ways to refine your algorithm based on empirical observations or, even better, by comparing your results to those of a high-quality lux meter. |
I have a question regarding the Mordell–Weil theorem over a number field $K$. I read the proof of the Mordell–Weil theorem in "Rational Points on Elliptic Curves" by Silverman and Tate. They present a proof for the case where $E[2] \subset E(\mathbb{Q})$, with $E : Y^2 = X(X^2 + AX + B)$, and mention beforehand that it is possible to prove the case over number fields in the same fashion (thus without the use of group cohomology) with a little help from algebraic number theory.
After a bit of investigating, I concluded that all the reasoning also holds for number fields except for the proof that the image of the map $\alpha : E(K) \rightarrow K^* / K^{*2}$ is finite (Proposition 3.8(c)). In the proof they claim that every squarefree integer representing the corresponding quadratic residue class divides $B$.
My reasoning for number fields was as follows:
Let $P = (x,y) \in E(K)$, so that $\alpha(P) = x \pmod {K^{*2}}$. Since $K$ is the field of fractions of $O_K$ (its ring of integers), we have $x = \frac{a}{b}$ for some $a,b \in O_K$. Now consider $S_P := \{ \rho \in \text{Max}(O_K) \: : \: v_{\rho}(\alpha(P)) \not\equiv 0 \pmod 2 \}$, where $v_\rho$ is the valuation coming from the prime ideal factorization. My claim is that every prime ideal in $S_P$ must be a divisor of the ideal generated by $B$.
My question: is this correct? And if so, how do I prove this?
Thanks in advance. |
Two-dimensional polar coordinates
Sometimes the symbols \(r\) and \(θ\) are used for two-dimensional polar coordinates, but in this section I use \((ρ , \phi)\) for consistency with the \((r, θ, \phi)\) of three-dimensional spherical coordinates. In what follows I am setting vectors in \(\textbf{boldface}\). If you make a print-out, you should be aware that some printers apparently do not print Greek letter symbols in boldface, even though they appear in boldface on screen. You should be on the look-out for this. Symbols with ^ above them are intended as unit vectors, so you will know that they should be in boldface even if your printer does not recognize this. If in doubt, look at what appears on the screen.
\(\text{FIGURE III.8}\)
Figure \(\text{III.8}\) shows a point \(\text{P}\) moving along a curve such that its polar coordinates are changing at rates \(\dot{ρ}\) and \(\dot{\phi}\). The drawing also shows fixed unit vectors
\(\hat{x}\) and \(\hat{y}\) parallel to the \(x\)- and \(y\)-axes, as well as unit vectors \(\hat{\rho}\) and \(\hat{\phi}\) in the radial and transverse directions. We shall find expressions for the rate at which the unit radial and transverse vectors are changing with time. (Being unit vectors, their magnitudes do not change, but their directions do.)
We have \[\boldsymbol{\hat{\rho}} = \cos \phi \boldsymbol{\hat{x}} + \sin \phi \boldsymbol{\hat{y}} \label{3.4.1} \tag{3.4.1}\]
and \[\boldsymbol{\hat{\phi}} = -\sin \phi \boldsymbol{\hat{x}} + \cos \phi \boldsymbol{\hat{y}}. \label{3.4.2} \tag{3.4.2}\]
\[\therefore \quad \boldsymbol{\dot{\hat{\rho}}} = - \sin \phi \dot{\phi} \boldsymbol{\hat{x}} + \cos \phi \dot{\phi} \boldsymbol{\hat{y}} = \dot{\phi} (-\sin \phi \boldsymbol{\hat{x}} + \cos \phi \boldsymbol{\hat{y}}) \label{3.4.3} \tag{3.4.3}\]
\[\therefore \quad \boldsymbol{\dot{\hat{\rho}}} = \dot{\phi} \boldsymbol{\hat{\phi}} \label{3.4.4} \tag{3.4.4}\]
In a similar manner, by differentiating Equation \(\ref{3.4.2}\) with respect to time and then making use of Equation \(\ref{3.4.1}\), we find
\[\boldsymbol{\dot{\hat{\phi}}} = - \dot{\phi} \boldsymbol{\hat{\rho}} \tag{3.4.5} \label{3.4.5}\]
Equations \(\ref{3.4.4}\) and \(\ref{3.4.5}\) give the rate of change of the radial and transverse unit vectors. It is worthwhile to think carefully about what these two equations mean.
The position vector of the point \(\text{P}\) can be represented by the expression \(\boldsymbol{\rho} = \rho \boldsymbol{\hat{\rho}}\). The velocity of \(\text{P}\) is found by differentiating this with respect to time:
\[\textbf{v} = \boldsymbol{\dot{\rho}} = \dot{\rho} \boldsymbol{\hat{\rho}} +\rho \boldsymbol{\dot{\hat{\rho}}} = \dot{\rho} \boldsymbol{\hat{\rho}} + \rho \dot{\phi} \boldsymbol{\hat{\phi}}. \label{3.4.6} \tag{3.4.6}\]
The radial and transverse components of velocity are therefore \(\dot{\rho}\) and \(\rho \dot{\phi}\) respectively. The acceleration is found by differentiation of Equation \(\ref{3.4.6}\), and we have to differentiate the products of two and of three quantities that vary with time:
\begin{array}{c c c c l}
\textbf{a} & = & \dot{\textbf{v}} & = & \ddot{\rho}\boldsymbol{\hat{\rho}} + \dot{\rho} \boldsymbol{\dot{\hat{\rho}}} + \dot{\rho} \dot{\phi} \boldsymbol{\hat{\phi}} + \rho \ddot{\phi} \boldsymbol{\hat{\phi}} + \rho \dot{\phi} \boldsymbol{\dot{\hat{\phi}}} \\ &&& = & \ddot{\rho} \boldsymbol{\hat{\rho}} + \dot{\rho} \dot{\phi} \boldsymbol{\hat{\phi}} + \dot{\rho} \dot{\phi} \boldsymbol{\hat{\phi}} + \rho \ddot{\phi} \boldsymbol{\hat{\phi}} - \rho \dot{\phi}^2 \boldsymbol{\hat{\rho}} \\ &&& = & \left( \ddot{\rho} - \rho \dot{\phi}^2 \right) \boldsymbol{\hat{\rho}} + \left( \rho \ddot{\phi} + 2 \dot{\rho} \dot{\phi} \right) \boldsymbol{\hat{\phi}} . \\ \tag{3.4.7} \label{3.4.7} \end{array}
The radial and transverse components of acceleration are therefore \((\ddot{\rho} − \rho \dot{\phi}^2)\) and \((\rho \ddot{\phi} + 2 \dot{\rho} \dot{\phi})\) respectively.
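These components can be verified independently of the unit-vector argument by differentiating the Cartesian position twice and projecting onto the radial and transverse directions. The following sketch (not part of the text) does this symbolically with Python's sympy:

```python
import sympy as sp

t = sp.symbols('t')
rho = sp.Function('rho')(t)
phi = sp.Function('phi')(t)

# Cartesian components of the position vector rho * rho_hat
x = rho * sp.cos(phi)
y = rho * sp.sin(phi)

# Differentiate twice to get the Cartesian acceleration
ax = sp.diff(x, t, 2)
ay = sp.diff(y, t, 2)

# Project onto rho_hat = (cos phi, sin phi) and phi_hat = (-sin phi, cos phi)
a_rad = sp.simplify(ax * sp.cos(phi) + ay * sp.sin(phi))
a_trans = sp.simplify(-ax * sp.sin(phi) + ay * sp.cos(phi))

# Compare with Equation (3.4.7)
assert sp.simplify(a_rad - (sp.diff(rho, t, 2) - rho * sp.diff(phi, t)**2)) == 0
assert sp.simplify(a_trans - (rho * sp.diff(phi, t, 2)
                              + 2 * sp.diff(rho, t) * sp.diff(phi, t))) == 0
```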
Three-Dimensional Spherical Coordinates
In figure \(\text{III.9}\), \(\text{P}\) is a point moving along a curve such that its spherical coordinates are changing at rates \(\dot{r}, \dot{θ}, \dot{\phi}\). We want to find out how fast the unit vectors \(\hat{\textbf{r}}\), \(\boldsymbol{\hat{\theta}}\), \(\boldsymbol{\hat{\phi}}\) in the radial, meridional and azimuthal directions are changing.
\(\text{FIGURE III.9}\)
We have \[\hat{\textbf{r}} = \sin θ \cos \phi \hat{\textbf{x}} + \sin θ \sin \phi \hat{\textbf{y}} + \cos θ \hat{\textbf{z}} \label{3.4.8} \tag{3.4.8}\]
\[\boldsymbol{\hat{\theta}} = \cos θ \cos \phi \hat{\textbf{x}} + \cos θ \sin \phi \hat{\textbf{y}} - \sin θ \hat{\textbf{z}} \label{3.4.9} \tag{3.4.9}\]
\[\boldsymbol{\hat{\phi}} = - \sin \phi \hat{\textbf{x}} + \cos \phi \hat{\textbf{y}} \label{3.4.10} \tag{3.4.10}\]
\[\therefore \quad \dot{\hat{\textbf{r}}} = (\cos θ \dot{θ} \cos \phi - \sin θ \sin \phi \dot{\phi} ) \hat{\textbf{x}} + (\cos θ \dot{θ} \sin \phi + \sin θ \cos \phi \dot{\phi} ) \hat{\textbf{y}} - \sin θ \dot{θ} \hat{\textbf{z}}. \label{3.4.11} \tag{3.4.11}\]
We see, by comparing this with equations \(\ref{3.4.9}\) and \(\ref{3.4.10}\) that
\[\dot{\hat{\textbf{r}}} = \dot{θ} \boldsymbol{\hat{\theta}} + \sin θ \dot{\phi} \boldsymbol{\hat{\phi}} \label{3.4.12} \tag{3.4.12}\]
By similar arguments we find that
\[\boldsymbol{\dot{\hat{\theta}}} = \cos θ \dot{\phi} \boldsymbol{\hat{\phi}} - \dot{θ} \hat{\textbf{r}} \label{3.4.13} \tag{3.4.13}\]
and
\[\boldsymbol{\dot{\hat{\phi}}} = - \sin θ \dot{\phi} \hat{\textbf{r}} - \cos θ \dot{\phi} \boldsymbol{\hat{\theta}} \label{3.4.14} \tag{3.4.14}\]
These are the rates of change of the unit radial, meridional and azimuthal vectors. The position vector of the point \(\text{P}\) can be represented by the expression \(\textbf{r} = r \ \hat{\textbf{r}}\). The velocity of \(\text{P}\) is found by differentiating this with respect to time:
\begin{array}{c c c}
\textbf{v} & = & \dot{\textbf{r}} = \dot{r} \hat{\textbf{r}} + r \ \dot{\hat{\textbf{r}}} = \dot{r} \hat{\textbf{r}} + r(\dot{θ} \boldsymbol{\hat{\theta}} + \sin θ \dot{\phi} \boldsymbol{\hat{\phi}} ) \\ & = & \dot{r} \hat{\textbf{r}} + r \ \dot{θ} \boldsymbol{\hat{\theta}} + r \sin θ \dot{\phi} \boldsymbol{\hat{\phi}} \\ \label{3.4.15} \tag{3.4.15} \end{array}
The radial, meridional and azimuthal components of velocity are therefore \(\dot{r}, \ r \dot{θ}\) and \(r \sin θ \dot{\phi}\) respectively.
The acceleration is found by differentiation of Equation \(\ref{3.4.15}\).
It might not be out of place here for a quick hint about differentiation. Most readers will know how to differentiate a product of two functions. If you want to differentiate a product of several functions, for example four functions, \(a, \ b, \ c\) and \(d\), the procedure is
\((abcd)^\prime = a^\prime bcd + ab^\prime cd + abc^\prime d + abcd^\prime\).
In the last term of Equation \(\ref{3.4.15}\), all four quantities vary with time, and we are about to differentiate the product.
\[\textbf{a} = \dot{\textbf{v}} = \ddot{r} \hat{\textbf{r}} + \dot{r} ( \dot{θ} \boldsymbol{\hat{\theta}} + \sin θ \dot{\phi} \boldsymbol{\hat{\phi}}) + \dot{r} \dot{θ} \boldsymbol{\hat{\theta}} + r \ddot{\theta} \boldsymbol{\hat{\theta}} + r \dot{θ} ( \cos θ \dot{\phi} \boldsymbol{\hat{\phi}} - \dot{θ} \hat{\textbf{r}} ) + \dot{r} \sin θ \dot{\phi} \boldsymbol{\hat{\phi}} + r \cos θ \dot{θ} \dot{\phi} \boldsymbol{\hat{\phi}} + r \sin θ \ddot{\phi} \boldsymbol{\hat{\phi}} + r \sin θ \dot{\phi} ( - \sin θ \dot{\phi} \hat{\textbf{r}} - \cos θ \dot{\phi} \boldsymbol{\hat{\theta}}) \tag{3.4.16} \label{3.4.16}\]
On gathering together the coefficients of \(\hat{\textbf{r}}, \boldsymbol{\hat{\theta}}, \boldsymbol{\hat{\phi}}\), we find that the components of acceleration are:
Radial: \(\ddot{r} - r \dot{θ}^2 - r \sin^2 θ \dot{\phi}^2 \) Meridional: \(r \ddot{θ} + 2\dot{r} \dot{θ} - r \sin θ \cos θ \dot{\phi}^2 \) Azimuthal: \(2 \dot{r} \dot{\phi} \sin θ + 2r \dot{θ} \dot{\phi} \cos θ + r \sin θ \ddot{\phi}\) |
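As with the two-dimensional case, these three components can be cross-checked by brute force: differentiate the Cartesian position twice and project onto the radial, meridional and azimuthal unit vectors. A verification sketch (not part of the text) using sympy:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)
ph = sp.Function('phi')(t)

# Cartesian components of the position vector r * r_hat
x = r * sp.sin(th) * sp.cos(ph)
y = r * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

ax, ay, az = (sp.diff(c, t, 2) for c in (x, y, z))

# Project onto r_hat, theta_hat, phi_hat (Equations 3.4.8 - 3.4.10)
a_r = sp.simplify(ax*sp.sin(th)*sp.cos(ph) + ay*sp.sin(th)*sp.sin(ph) + az*sp.cos(th))
a_th = sp.simplify(ax*sp.cos(th)*sp.cos(ph) + ay*sp.cos(th)*sp.sin(ph) - az*sp.sin(th))
a_ph = sp.simplify(-ax*sp.sin(ph) + ay*sp.cos(ph))

rd, thd, phd = sp.diff(r, t), sp.diff(th, t), sp.diff(ph, t)
assert sp.simplify(a_r - (sp.diff(r, t, 2) - r*thd**2 - r*sp.sin(th)**2*phd**2)) == 0
assert sp.simplify(a_th - (r*sp.diff(th, t, 2) + 2*rd*thd - r*sp.sin(th)*sp.cos(th)*phd**2)) == 0
assert sp.simplify(a_ph - (2*rd*phd*sp.sin(th) + 2*r*thd*phd*sp.cos(th) + r*sp.sin(th)*sp.diff(ph, t, 2))) == 0
```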
I write down the solution for the Heston model. You can directly generalise the result.Let $f=f(t,s,v)\in C^{1,2,2}(\mathbb{R}_+^3)$ be a real-valued function (portfolio value) and consider the two-dimensional stochastic process $(S_t,v_t)$ with\begin{align*}\mathrm{d}S_t&=(r-q) S_t \mathrm{d}t+\sqrt{v_t} S_t \mathrm{d}W_{1,t}, \\\mathrm{d}v_t&=\...
I'll try my best to explain them. Both of them aim to match the implied volatility surface as shown by the empirical data. A local volatility process is a function of stock and time without any stochastic term (not moving randomly). It changes with different inputs of stock and time. It matches the implied volatility surface with short term maturity ...
I use Gatheral's notations. The SVI-Jump-Wings (SVI-JW) parameterization describes the implied variance $v$ (rather than the implied total variance). The raw and natural parametrizations describe the total implied variance for one slice (fixed tenor); the SVI-JW describes the implied variance for one slice (fixed tenor). The total implied variance slice for a fixed ...
Given your regression relationship between ATM IV and forward price, as long as beta < 1, ATM IV and forward price are negatively correlated, which is usually consistent with market observations - the higher the forward price (longer maturity), the lower the ATM IV. If beta is greater than 1, rather, ATM IV and forward price are positively correlated, ...
Given the variation ATM vol = alpha * F^(beta-1): if your stochastic process for the forward price is dF = alpha F^beta dW, that means your effective beta (CEV) is 1. This gives a horizontal backbone of the vol surface. I think it all depends on whether this is what you expect to see - that the vol surface is sticky under shocked price scenarios.
So I'm trying to decide whether the cosine part is intended to be plugged in for $z$ or whether it is strictly part of $h[n]$. (the number a lies in the open unit disk)
I mean I was pretty sure it was all part of $h[n]$ but then upon performing the z-transform I get this rational function
$$\frac{1 - a\cos(2\pi\frac{f_0}{F_s})z^{-1}}{1-2a\cos(2\pi\frac{f_0}{F_s})z^{-1} + a^2z^{-2}}$$
The thing is then I'm supposed to evaluate the poles and zeros and if you just ignore the cosine parts you get this really nice rational expression which factors and simplifies down to $\displaystyle\frac{z}{z-a}$.
So that has gotten me thinking that maybe I'm not understanding things correctly and the cosine portion is supposed to be plugged in for $z$ or something. Can anyone clarify this for me? |
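For what it's worth, the rational function quoted above is the standard z-transform of $h[n] = a^n \cos(2\pi \frac{f_0}{F_s} n)\,u[n]$, which supports the reading that the cosine is part of $h[n]$. A quick numeric sanity check of that identity (the values of $a$, $f_0/F_s$, and the evaluation point $z$ are arbitrary illustrative choices, with $|z| > a$ for convergence):

```python
import cmath, math

a, ratio = 0.5, 0.15            # |a| < 1; f0/Fs chosen arbitrarily
w0 = 2 * math.pi * ratio
z = 1.3 + 0.4j                  # any point with |z| > a (region of convergence)

# Partial sum of the z-transform of h[n] = a^n * cos(w0 * n) * u[n]
H_sum = sum((a**n) * math.cos(w0 * n) * z**(-n) for n in range(500))

# Closed form quoted in the question
H_closed = (1 - a * math.cos(w0) / z) / (1 - 2 * a * math.cos(w0) / z + a**2 / z**2)

assert abs(H_sum - H_closed) < 1e-9   # the two agree to numerical precision
```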
The textbook (Scott Dodelson, Modern Cosmology, Section 2.2 Distance, pp. 35-36) states as follows:
Another way of inferring distances in astronomy is to measure the flux from an object of known luminosity. Recall that (forgetting about expansion for the moment) the observed flux $F$ a distance d from a source of known luminosity $L$ is \begin{equation} F=\frac{L}{4\pi d^2} \tag{2.47} \end{equation}
since the total luminosity through a spherical shell with area $4\pi d^2$ is constant. How does this result generalize to an expanding universe? Again it is simplest to work on the comoving grid, this time with the source centered at the origin. The flux we observe is \begin{equation} F=\frac{L(\chi)}{4\pi \chi^2 (a)} \tag{2.48} \end{equation} where $L(\chi)$ is the luminosity through a (comoving) spherical shell with radius $\chi(a)$. To further simplify, let's assume that the photons are all emitted with the same energy. Then $L(\chi)$ is this energy multiplied by the number of photons passing through a (comoving) spherical shell per unit time. In a fixed time interval, photons travel farther on the comoving grid at early times than at late times since the associated physical distance at early times is smaller. Therefore, the number of photons crossing a shell in the fixed time interval will be smaller today than at emission, smaller by a factor of $a$. Similarly, the energy of the photons will be smaller today than at emission, because of expansion. Therefore, the energy per unit time passing through a comoving shell a distance $\chi(a)$ (i.e., our distance) from the source will be a factor of $a^2$ smaller than the luminosity at the source. The flux we observe therefore will be \begin{equation} F=\frac{La^2}{4\pi \chi^2 (a)} \tag{2.49} \end{equation} where $L$ is the luminosity at the source. We can keep Eq. (2.47) in an expanding universe as long as we define the
luminosity distance \begin{equation} d_L\equiv\chi/a \tag{2.50} \end{equation} The questions that bother me are: (1) According to Dodelson's statements, we should have $L(\chi)\propto \frac{1}{a^2}$. Why then does $L(\chi)$ equal $La^2$ in Eq. 2.49? (2) To my understanding, the physical distance is the comoving distance multiplied by the scale factor, i.e. $d=a\cdot \chi$. But Eq. 2.50 obviously violates this. Why?
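To make Eq. (2.50) concrete, here is a minimal numeric sketch (not from the textbook; the cosmological parameters are illustrative) that computes the comoving distance $\chi$ and the luminosity distance $d_L = \chi/a = (1+z)\chi$ in a flat model:

```python
import math

# Illustrative flat LambdaCDM parameters (an assumption, not from the text)
H0 = 70.0           # km/s/Mpc
Om, OL = 0.3, 0.7
c = 299792.458      # km/s

def H(a):
    # Hubble rate as a function of scale factor
    return H0 * math.sqrt(Om / a**3 + OL)

def comoving_distance(z, steps=20000):
    # chi = c * integral from a = 1/(1+z) to 1 of da / (a^2 H(a)),
    # evaluated with a simple midpoint rule
    a0 = 1.0 / (1.0 + z)
    da = (1.0 - a0) / steps
    return c * sum(da / ((a0 + (i + 0.5) * da)**2 * H(a0 + (i + 0.5) * da))
                   for i in range(steps))

z = 1.0
chi = comoving_distance(z)      # roughly 3.3 Gpc for these parameters
d_L = (1.0 + z) * chi           # luminosity distance, Eq. (2.50) with a = 1/(1+z)
```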
I'm looking for a solution to make number $75$ with numbers $1$ $9$ $6$ $2$ in that order and the same rules as in Use 2 0 1 and 8 to make 67.
Here a copy of those rules:
You must use all 4 digits. Only the digits $1$, $9$, $6$, and $2$ can be used in that order.
You can make multi-digit numbers out of the numbers. Examples: $19$, $96.2$
The square function may NOT be used. Nor may the cube, raise to a fourth power, or any other function that raises a number to a specific power. You may use the ^ operation if you use a digit, for example, $(1 + 9)^6 - 2!$ is acceptable (if you're trying to get $999998$), because 1, 9, 6, and 2 are used. However, $19 ^ 2 / 6 + 2$ can't be used to get $62.166...$ because it uses an extra 2.
Sorry, but the integer function may NOT be used. Nor may the round, floor, ceiling, or truncate functions.
$+$, $-$, $\times$, $\div$ or $\frac{\Box}{\Box}$, $()$, $!$, $\sqrt{\Box}$, ${\Box}^{\Box}$, and $!!$ may be used for functions.
Please no brute-force methods. Good luck. |
The problem comes up already with a "more minimal" example:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$\breve{\breve{a}\breve{b}}$
\end{document}
which produces
! Undefined control sequence.
\macc@adjust ->\dimen@ \macc@kerna
\advance \dimen@ \macc@kernb \kern -\dimen@
thus showing that the problem is not only in the repetition of the symbol. It happens with all math accents defined in terms of \mathaccentV,
\hat \check \tilde \acute \grave \dot \ddot \bar \vec \mathring
because these accents work by checking whether their argument contains another accented symbol, in order to stack the accents precisely. This requires doing some global definitions, but somehow, if the argument contains two of these accents, the mechanism fails.
The "repeated symbol" is contained in the macro \macc@nucleus: after $\tilde{\breve{X}}$, it expands to X, and this is why the symbol is repeated (though I've not dug much into the details), since the definition of \macc@nucleus is done via \gdef.
The amsmath documentation doesn't point out that only stacked accents on a single symbol should be used; actually, it's safe to put an accent on a subformula provided the subformula doesn't contain accents.
Solution.
As the macros are quite complex and require global assignments, it seems quite difficult to do surgery on them, so a different approach is easier. Define
\newsavebox{\accentbox}
\newcommand{\compositeaccents}[2]{%
\sbox\accentbox{$#2$}#1{\usebox\accentbox}}
Now
\breve{\breve{a}\breve{b}} can be changed into
\compositeaccents{\breve}{\breve{a}\breve{b}}
and all goes well. As one can see, the argument is typeset before applying the "global accent" and stored in a bin, over which we can safely put the "global accent".
Should one need such buildups also in superscripts or subscripts
\newcommand{\compositeaccentsX}[2]{%
\let\accenttemp#1\mathpalette\docompositeaccents{#2}}
\def\docompositeaccents#1#2{\compositeaccents\accenttemp{#1#2}}
and
$A_{\compositeaccentsX{\breve}{\breve{a}\breve{b}}}$ will work. It's better to stick with the simpler command, as the "extended" one requires typesetting the same formula four times.
Curiously enough, the
accents package shows a bug in situations like this one:
\documentclass{article}
\usepackage{amsmath,accents}
\begin{document}
$\breve{\breve{a}\breve{b}}$
\end{document}
will give no error, but will eat up the "a".
Very nice 10000th question on TeX.SE! |
stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers
Revision as of 00:18, 21 April 2018
Introduction
With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of 12 TFLOPS (tera-FLOPs per second), while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of 567 GFLOPS, less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices.
In general, model compression can be accomplished using four main non-mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. By non-mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable.
Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al. (2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and provide strong empirical evidence of these findings.
Motivation
Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss:
$$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$
where W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO loss by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers.
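This rescaling argument is easy to verify numerically. The sketch below (an illustration of the argument, using plain matrices to stand in for a deep linear network) halves the even-numbered layers and doubles the odd-numbered ones: the network output, and hence the least-squares loss, is unchanged, while the even-layer L1 penalty is halved.

```python
import numpy as np

rng = np.random.default_rng(0)
# A deep linear "network": output = W4 @ W3 @ W2 @ W1 @ x
Ws = [rng.standard_normal((4, 4)) for _ in range(4)]
x = rng.standard_normal(4)

def forward(weights):
    out = x
    for W in weights:
        out = W @ out
    return out

def lasso_even(weights):
    # L1 penalty on the even-numbered layers only (indices 1, 3 here)
    return sum(np.abs(W).sum() for W in weights[1::2])

# Double odd-numbered layers, halve even-numbered ones: the scalings cancel
# in the product, so the data term is untouched, but the LASSO term shrinks.
rescaled = [W * (2.0 if i % 2 == 0 else 0.5) for i, W in enumerate(Ws)]

assert np.allclose(forward(Ws), forward(rescaled))
assert np.isclose(lasso_even(rescaled), 0.5 * lasso_even(Ws))
```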
Furthermore, batch normalization (Ioffe, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the [math]l[/math]-th layer.
Due to the batch normalization, any uniform scaling of [math]W^l[/math] would change its [math]l_1[/math] and [math]l_2[/math] norms but have no effect on [math]x^{l+1}[/math]. Thus, when trying to minimize the weight norms of multiple layers, it is unclear how to properly choose penalties for each layer. Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective.
In contrast with these existing approaches, the authors focus on enforcing sparsity in a tiny set of parameters in the CNN: the scale parameter [math]\gamma[/math] in every batch normalization. Placing sparse constraints on [math]\gamma[/math] is simpler and easier to monitor; more importantly, they put forward two reasons:
1. Every [math]\gamma[/math] always multiplies a normalized random variable, thus the channel importance becomes comparable across different layers by measuring the magnitude values of [math]\gamma[/math];
2. The reparameterization effect across different layers is avoided if the subsequent convolution layer is also batch-normalized. In other words, the impact of scale changes in the [math]\gamma[/math] parameter is independent across different layers.
Thus, although not providing a complete theoretical guarantee on the loss, Ye et al. (2018) develop a pruning technique that they argue is better justified than norm-based pruning.
Method
At a high level, Ye et al. (2018) propose that, instead of discovering sparsity via penalizing the per-filter or per-channel norm, we penalize the batch normalization scale parameters [math]\gamma[/math] instead. The reasoning is that by having fewer parameters to constrain and working with normalized values, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning entire channels: if [math]\gamma[/math] is zero, then the output at that layer becomes constant (the bias term), and thus the preceding channels can be pruned.
Summary
The basic algorithm can be summarized as follows:
1. Penalize the L1-norm of the batch normalization scaling parameters in the loss
2. Train until loss plateaus
3. Remove channels that correspond to a downstream zero in batch normalization
4. Fine-tune the pruned model using regular learning
Details
There still exist a few problems that this summary has not addressed so far. Sub-gradient descent is known to have inverse square root convergence rate on subdifferentials (Gordon et al., 2012), so the sparsity gradient descent update may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to previous channel sizes, since the penalty should be roughly equally distributed across all convolution layers.
Slow Convergence
To address the issue of slow convergence, Ye et al. (2018) use an iterative shrinking-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) to update the batch normalization scale parameter. The intuition for ISTA is that the structure of the optimization objective can be taken advantage of. Consider: $$L(x) = f(x) + g(x).$$
Let
f be the model loss and g be the non-differentiable penalty (LASSO). ISTA is able to use the structure of the loss and converge in O(1/n), instead of O(1/sqrt(n)) when using subgradient descent, which assumes no structure about the loss. Even though ISTA is used in convex settings, Ye et. al (2018) argue that it still performs better than gradient descent. Penalty Normalization
In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area.
To control the global penalty, a hyperparamter
rho is multiplied with all the per-layer lambda in the final loss. Steps
The final algorithm can be summarized as follows:
1. Compute the per-layer normalized sparse penalty constant [math]\lambda[/math]
2. Compute the global LASSO loss with global scaling constant [math]\rho[/math]
3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent.
4. Remove channels that correspond to a downstream zero in batch normalization
5. Fine-tune the pruned model using regular learning
Results CIFAR-10 Experiment
Model A is trained with a sparse penalty of [math]\rho = 0.0002[/math] for 30 thousand steps, and then increased to [math]\rho = 0.001[/math]. Model B is trained by taking Model A and increasing the sparse penalty up to 0.002. Similarly Model C is a continuation of Model B with a penalty of 0.008.
For the convNet, reducing the number of parameters in the base model increased the accuracy in model A. This suggests that the base model is over-parameterized. Otherwise, there would be a trade-off of accuracy and model efficiency.
ILSVRC2012 Experiment
The authors note that while ResNet-101 takes hundreds of epochs to train, pruning only takes 5-10, with fine-tuning adding another 2, giving an empirical example how long pruning might take in practice. Both models were trained with an aggressive sparsity penalty of 0.1.
Image Foreground-Background Segmentation Experiment
The authors note that it is common practice to take a network with pre-trained on a large task and fine-tune it to apply it to a different, smaller task. One might expect there might be some extra channels that while useful for the large task, can be omitted for the simpler task. This experiment replicated that use-case by taking a NN originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network in all but the most challenging test dataset, which is in line with the initial expectation. The model was trained with a sparsity penalty of 0.5 and the results are shown in table below
The neural network used in this experiment is composed of two branches:
An inception branch that locates the foreground objects A DenseNet branch to regress the edges
It was found that the pruning primarily affected the inception branch as shown in Figure 1 below. This likely explains the poor performance on more challenging datasets as a result of a higher requirement on foreground objects, which has been impacted by the pruning of the inception branch.
Conclusion
Pruning large neural architectures to fit on low-power devices is an important task. For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; reduction in FLOPs doesn't necessarily correspond with vastly reduced power since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected.
It would also be interesting to combine multiple approaches, or "throw the whole kitchen sink" at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made.
In conclusion, this novel, theoretically-motivated interpretation of channel pruning was successfully applied to several important tasks.
Implementation
A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning
References Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). Cheng, Y., Wang, D., Zhou, P., & Zhang, T. (2017). A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv preprint arXiv:1710.09282. Ye, J., Lu, X., Lin, Z., & Wang, J. Z. (2018). Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124. Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference. Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). Gordon, G., & Tibshirani, R. (2012). Subgradient method. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1), 183-202. Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149 |
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 470
Let $G$ be a finite group of order $p^n$, where $p$ is a prime number and $n$ is a positive integer.
Suppose that $H$ is a subgroup of $G$ with index $[G:H]=p$. Then prove that $H$ is a normal subgroup of $G$.
(Michigan State University, Abstract Algebra Qualifying Exam)

Problem 332
Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group. |
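For the last problem, one quick route (a sketch, not the only possible proof; normality can also be checked directly from the definition) is to recognize $\SL(n,\R)$ as the kernel of the determinant homomorphism:

```latex
% det is a group homomorphism from GL(n,R) to the multiplicative
% group of nonzero real numbers, by multiplicativity of determinants:
\det \colon \GL(n,\R) \longrightarrow \left(\R\setminus\{0\},\,\cdot\,\right),
\qquad \det(XY)=\det(X)\det(Y).
% Its kernel is exactly the matrices of determinant 1, and kernels of
% group homomorphisms are always normal subgroups:
\SL(n,\R)=\ker(\det)\ \trianglelefteq\ \GL(n,\R).
```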
The mass spectrum can be found here.
I am interested to know what ion this is and why the peak at $m/z = 105$ is higher than the peak at $m/z = 106$ (which I believe corresponds to $\ce{C5H4NCO+}$).
From the general interpretation of electron ionization mass spectra, it would seem logical to report the $m/z$ 105 ion as a loss of $\ce{H_{2}O}$, thus $\ce{[C_{6}H_{3}NO]^{\bullet +}}$. As correctly suggested in the question, the peak at $m/z$ 106 likely arises from a loss of $\ce{OH^{\bullet}}$ through an $\alpha$-cleavage of the $\ce{C-O}$ bond.
So the issue is: why would niacin (nicotinic acid) undergo this $\ce{H_{2}O}$ loss? It is not observed for other similar molecules such as benzoic acid, which starts with an $\ce{OH^{\bullet}}$ loss. I did find two relevant publications in the literature on this point. Neeter and Nibbering [1] performed $\ce{D}$ labelling experiments, showing that the transferred hydrogen atom originates mostly from the $\alpha$ position (ortho to both the carboxylic acid and the nitrogen atom). Opitz [2] measured the appearance energy (AE) for $m/z$ 105 at 10.94 eV, more than 0.6 eV lower than the AE for $m/z$ 106 at 11.58 eV. He also suggests, based on metastable dissociation data showing loss of CO to $m/z$ 77, two possible structures for the $m/z$ 105 ion. So, to answer the question: formation of the $m/z$ 105 ion is thermodynamically favored compared to formation of the $m/z$ 106 ion.
Based on this (scarce) literature evidence, in particular the labelling experiments, I would suggest that some hydrogen transfer occurs between the $\alpha$ ring hydrogen and the hydroxyl group. This exchange can either move back and forth or lead to a loss of water through formation of a cyclic structure.
References
Neeter, R.; Nibbering, N. M. M. Mass spectrometry of pyridine compounds. II: Hydrogen exchange in the molecular ions of isonicotinic and nicotinic acid as a function of internal energy.
Org. Mass Spectrom. 1971, 5 (6), 735–742. DOI: 10.1002/oms.1210050612.
Opitz, J. Electron-impact ionization of benzoic acid, nicotinic acid and their
n-butyl esters: an approach to regioselective proton affinities derived from ionization and appearance energy data. Int. J. Mass Spectrom. 2007, 265 (1), 1–14. DOI: 10.1016/j.ijms.2007.04.014.
I did a quick simulation with MMass (open source, available on Windows, macOS, Linux) for the isotopic distribution in this region (Tools > Mass Calculator), and it looks like there are two $[\ce{M}-\ce{1e}]^+$ fragments:
$$ \begin{array}{rlr} \hline \text{avg.}~m/z & \text{Fragment} & \text{a.i.}\\ \hline 105.09 & \ce{C6H3NO+} & \approx45\%\\ 106.10 & \ce{C6H4NO+} & \approx21\%\\ \hline \end{array} $$
Addition of the signals from both fragments (blue and green colors) would give nearly ideal peaks distribution reflecting experimental data (red color).
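As a rough cross-check on the simulated values above, the average masses of the two fragments can be recomputed from standard average atomic weights (a sketch; MMass's exact values may differ slightly in the last digit):

```python
# Average atomic weights (standard IUPAC values, rounded)
WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def average_mass(formula):
    """formula is a dict of element counts, e.g. {"C": 6, "H": 3, ...}."""
    return sum(WEIGHTS[el] * n for el, n in formula.items())

m105 = average_mass({"C": 6, "H": 3, "N": 1, "O": 1})  # C6H3NO+
m106 = average_mass({"C": 6, "H": 4, "N": 1, "O": 1})  # C6H4NO+
# Both land close to the tabulated 105.09 and 106.10
```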
Technical note: MMass can import ASCII data, but NIST provides EI-MS spectra in unsupported JCAMP-DX format. To convert JDX to TXT, I used JDXview (free, Windows only). |
Assume we have a sample of data $\{x_1, \dots, x_n\}$ and a family of distributions $(f_\theta)_\theta$ indexed by some parameter vector $\theta$. We would like to fit $\theta$ to $\{x_1, \dots, x_n\}$ to obtain $\hat{\theta}$, then assess the goodness of fit of $f_{\hat{\theta}}$ to $\{x_1, \dots, x_n\}$.
Is there a general way to do this? Maybe for certain families of distributions? For specific ways of estimating $\hat{\theta}$?
The problem is of course that we are using the data we would like to assess the fit of to fit the distribution in the first place. In the specific case where $(f_\theta)$ is the family of normal distribution, parameterized by the mean and variance, $\theta=(\mu, \sigma^2)$, we can use the Lilliefors test. Is there something that works for other distribution families?
I had originally thought that a Probability Integral Transform (PIT) might work, but a quick simulation shows that it doesn't - the p value histograms look uniform enough in the middle, but they have too little mass for $p\approx 0$ and $p\approx 1$:
n_sims <- 1e4
nn <- 20
pp_pit_specified <- pp_pit_estimated <- matrix(NA, nrow=n_sims, ncol=20)
pb <- winProgressBar(max=n_sims)
for ( ii in 1:n_sims ) {
    setWinProgressBar(pb, ii, paste(ii, "of", n_sims))
    set.seed(ii)
    sim <- rnorm(nn)
    pp_pit_specified[ii,] <- pnorm(sim, mean=0, sd=1)
    pp_pit_estimated[ii,] <- pnorm(sim, mean=mean(sim), sd=sd(sim))
}
close(pb)
opar <- par(mfrow=c(1,2))
hist(pp_pit_specified, main="Parameters specified", xlab="", col="lightgray")
hist(pp_pit_estimated, main="Parameters estimated", xlab="", col="lightgray")
par(opar)
This is motivated by Non-uniform distribution of p-values. No, I don't have a specific use case in mind, I'm just curious. |
stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers
Latest revision as of 00:18, 21 April 2018

Introduction
With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of 12 TFLOPS (tera-FLOPs per second), while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of 567 GFLOPS, less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices.
In general, model compression can be accomplished using four main non-mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. By non-mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable.
Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al. (2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and they provide strong empirical evidence for these findings.
Motivation
Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss:
$$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$
where W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO loss by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers.
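This re-parameterization is easy to verify numerically. The sketch below uses made-up shapes, with matrix products standing in for convolutions: halving the penalized (even) layer and doubling the unpenalized (odd) layer leaves the network function, and hence the least-squares loss, unchanged while halving the LASSO term.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 5))   # odd-numbered layer: no L1 penalty
W2 = rng.normal(size=(5, 5))   # even-numbered layer: L1-penalized
x = rng.normal(size=(5, 100))  # a batch of inputs

def outputs(W2, W1):
    return W2 @ (W1 @ x)       # the least-squares loss depends only on this

def lasso_penalty(W2):
    return np.abs(W2).sum()    # penalty on even layers only

# Halve the penalized layer, double the unpenalized one:
W1p, W2p = 2.0 * W1, 0.5 * W2

# Network outputs are identical, but the LASSO penalty was halved.
```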
Furthermore, batch normalization (Ioffe, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the [math]l[/math]-th layer.
With batch normalization, any uniform scaling of [math]W^l[/math] changes its [math]l_1[/math] and [math]l_2[/math] norms but has no effect on [math]x^{l+1}[/math]. Thus, when trying to minimize the weight norms of multiple layers, it is unclear how to properly choose penalties for each layer. Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective.
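This scale invariance can also be checked numerically. A minimal sketch (plain numpy, hypothetical shapes): scaling the weights by any constant changes their norms but leaves the batch-normalized output essentially unchanged.

```python
import numpy as np

def batchnorm(z, gamma=1.0, beta=0.0, eps=1e-12):
    # Normalize each feature over the batch, then scale and shift.
    mu = z.mean(axis=0)
    var = z.var(axis=0)
    return gamma * (z - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))   # batch of 32, 8 input features
W = rng.normal(size=(8, 4))    # layer weights

out = batchnorm(x @ W)
out_scaled = batchnorm(x @ (10.0 * W))  # uniform scaling of W

# The L1 norm of W changed by 10x, but the BN output did not.
```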
In contrast with these existing approaches, the authors focus on enforcing sparsity of a tiny set of parameters in the CNN: the scale parameters [math]\gamma[/math] in all batch normalization layers. Not only is placing sparse constraints on [math]\gamma[/math] simpler and easier to monitor, but more importantly, the authors put forward two reasons:
1. Every [math]\gamma[/math] always multiplies a normalized random variable, thus the channel importance becomes comparable across different layers by measuring the magnitude values of [math]\gamma[/math];
2. The reparameterization effect across different layers is avoided if the subsequent convolution layer is also batch-normalized. In other words, the impact of scale changes in the [math]\gamma[/math] parameters is independent across different layers.
Thus, although not providing a complete theoretical guarantee on the loss, Ye et al. (2018) develop a pruning technique that they argue is better justified than norm-based pruning.
Method
At a high level, Ye et al. (2018) propose that, instead of discovering sparsity by penalizing the per-filter or per-channel norm, one should penalize the batch normalization scale parameters [math]\gamma[/math] instead. The reasoning is that with fewer parameters to constrain and normalized values to work with, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning entire channels: if [math]\gamma[/math] is zero, then the output of that channel becomes constant (the bias term), and thus the corresponding channels can be pruned.

Summary
The basic algorithm can be summarized as follows:
1. Penalize the L1-norm of the batch normalization scaling parameters in the loss
2. Train until loss plateaus
3. Remove channels that correspond to a downstream zero in batch normalization
4. Fine-tune the pruned model using regular learning
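Step 3 above can be sketched in a few lines of numpy (hypothetical shapes and values, not the authors' code): once a channel's BN scale is exactly zero, the matching output channel of the convolution weights can be dropped wholesale.

```python
import numpy as np

# Hypothetical BN scales for one conv layer's 6 output channels:
gamma = np.array([0.9, 0.0, 0.4, 0.0, 0.0, 1.2])
# Hypothetical conv weights: (out_channels, in_channels, kH, kW)
W = np.random.default_rng(0).normal(size=(6, 3, 3, 3))

keep = np.flatnonzero(gamma != 0.0)  # channels whose BN scale survived
W_pruned = W[keep]                   # drop whole output channels
gamma_pruned = gamma[keep]

# The next layer's weights would lose the matching *input* channels too.
```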
Details
There still remain a few problems that this summary has not addressed. Sub-gradient descent is known to have an inverse-square-root convergence rate on non-differentiable objectives (Gordon & Tibshirani, 2012), so a plain (sub)gradient descent update of the sparse penalty may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to the channel sizes of neighboring layers, since the penalty should be roughly equally distributed across all convolution layers.
Slow Convergence
To address the issue of slow convergence, Ye et al. (2018) use the iterative shrinkage-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) to update the batch normalization scale parameters. The intuition behind ISTA is that the structure of the optimization objective can be exploited. Consider: $$L(x) = f(x) + g(x).$$

Let [math]f[/math] be the model loss and [math]g[/math] the non-differentiable penalty (LASSO). ISTA uses this structure to converge in [math]O(1/n)[/math], instead of the [math]O(1/\sqrt{n})[/math] of subgradient descent, which assumes no structure in the loss. Even though ISTA is designed for convex settings, Ye et al. (2018) argue that it still performs better than plain gradient descent here.

Penalty Normalization
In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area.
To control the global penalty, a hyperparameter [math]\rho[/math] is multiplied with all the per-layer [math]\lambda[/math] in the final loss.

Steps
The final algorithm can be summarized as follows:
1. Compute the per-layer normalized sparse penalty constant [math]\lambda[/math]
2. Compute the global LASSO loss with global scaling constant [math]\rho[/math]
3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent.
4. Remove channels that correspond to a downstream zero in batch normalization
5. Fine-tune the pruned model using regular learning
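Step 3's ISTA update on the scale parameters amounts to a plain gradient step on the smooth loss followed by soft-thresholding (the proximal operator of the L1 penalty), which is what drives small scales to exactly zero. A minimal numpy sketch with hypothetical values (not the authors' code):

```python
import numpy as np

def ista_step(gamma, grad, lr, penalty):
    """Gradient step on the smooth loss, then soft-threshold (prox of L1)."""
    z = gamma - lr * grad
    return np.sign(z) * np.maximum(np.abs(z) - lr * penalty, 0.0)

gamma = np.array([0.8, -0.01, 0.3, 0.005])  # BN scales of one layer
grad = np.array([0.1, 0.0, -0.2, 0.0])      # dL/dgamma from backprop
new_gamma = ista_step(gamma, grad, lr=0.1, penalty=0.5)

# Small scales land exactly at zero, unlike with subgradient descent,
# so the corresponding channels can be pruned outright.
pruned = np.flatnonzero(new_gamma == 0.0)
```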
Results
The authors show state-of-the-art performance compared with other channel-pruning approaches. It is important to note that it would be unfair to compare against general pruning approaches; channel pruning specifically removes channels without introducing intra-kernel sparsity, whereas other pruning approaches introduce irregular kernel sparsity and hence computational inefficiencies.

CIFAR-10 Experiment
Model A is trained with a sparse penalty of [math]\rho = 0.0002[/math] for 30 thousand steps, after which the penalty is increased to [math]\rho = 0.001[/math]. Model B is trained by taking Model A and increasing the sparse penalty up to 0.002. Similarly, Model C is a continuation of Model B with a penalty of 0.008.
For the ConvNet, reducing the number of parameters in the base model increased the accuracy in Model A. This suggests that the base model is over-parameterized; otherwise, there would be a trade-off between accuracy and model efficiency.
ILSVRC2012 Experiment
The authors note that while ResNet-101 takes hundreds of epochs to train, pruning takes only 5-10 epochs, with fine-tuning adding another 2, giving an empirical sense of how long pruning might take in practice. Both models were trained with an aggressive sparsity penalty of 0.1.
Image Foreground-Background Segmentation Experiment
The authors note that it is common practice to take a network pre-trained on a large task and fine-tune it for a different, smaller task. One might expect that some channels, while useful for the large task, can be omitted for the simpler task. This experiment replicated that use case by taking a network originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network on all but the most challenging test dataset, which is in line with the initial expectation. The model was trained with a sparsity penalty of 0.5, and the results are shown in the table below.
The neural network used in this experiment is composed of two branches:
An Inception branch that locates the foreground objects
A DenseNet branch to regress the edges
It was found that the pruning primarily affected the Inception branch, as shown in Figure 1 below. This likely explains the poorer performance on the more challenging datasets, which place higher demands on foreground-object localization, the capability impacted by pruning the Inception branch.
Conclusion
Pruning large neural architectures to fit on low-power devices is an important task. For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; reduction in FLOPs doesn't necessarily correspond with vastly reduced power since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected.
It would also be interesting to combine multiple approaches, or "throw the whole kitchen sink" at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made.
In conclusion, this novel, theoretically-motivated interpretation of channel pruning was successfully applied to several important tasks.
Implementation
A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning
References

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
Cheng, Y., Wang, D., Zhou, P., & Zhang, T. (2017). A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv:1710.09282.
Ye, J., Lu, X., Lin, Z., & Wang, J. Z. (2018). Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. arXiv preprint arXiv:1802.00124.
Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710.
Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference.
Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (pp. 448-456).
Gordon, G., & Tibshirani, R. (2012). Subgradient method. Lecture slides. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf
Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183-202.
Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149. |
Heine Definition of Continuity
A real function \(f\left( x \right)\) is said to be continuous at \(a \in \mathbb{R}\) (\(\mathbb{R}-\) is the set of real numbers), if for any sequence \(\left\{ {{x_n}} \right\}\) such that
\[\lim\limits_{n \to \infty } {x_n} = a,\]
it holds that
\[\lim\limits_{n \to \infty } f\left( {{x_n}} \right) = f\left( a \right).\]
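As a numerical illustration of the Heine condition (not a proof, since continuity requires the condition for every sequence), one can check it for \(f\left( x \right) = {x^2}\) along the particular sequence \(x_n = a + 1/n:\)

```python
def f(x):
    return x * x

a = 3.0
# Heine: for any sequence x_n -> a we need f(x_n) -> f(a).
# Check one such sequence, x_n = a + 1/n, at increasing n:
diffs = [abs(f(a + 1.0 / n) - f(a)) for n in (10, 100, 1000, 10000)]
# The gap |f(x_n) - f(a)| shrinks toward zero as x_n approaches a.
```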
In practice, it is convenient to use the following three conditions of continuity of a function \(f\left( x \right)\) at point \(x = a:\)
Function \(f\left( x \right)\) is defined at \(x = a;\)
Limit \(\lim\limits_{x \to a} f\left( x \right)\) exists;
It holds that \(\lim\limits_{x \to a} f\left( x \right) = f\left( a \right).\)

Cauchy Definition of Continuity (\(\varepsilon\)-\(\delta\) Definition)
Consider a function \(f\left( x \right)\) that maps a set \(\mathbb{R}\) of real numbers to another set \(B\) of real numbers. The function \(f\left( x \right)\) is said to be continuous at \(a \in \mathbb{R}\) if for any number \(\varepsilon \gt 0\) there exists some number \(\delta \gt 0\) such that for all \(x \in \mathbb{R}\) with
\[\left| {x - a} \right| \lt \delta ,\]
the value of \(f\left( x \right)\) satisfies:
\[\left| {f\left( x \right) - f\left( a \right)} \right| \lt \varepsilon .\]
Definition of Continuity in Terms of Differences of Independent Variable and Function
We can also define continuity using differences of independent variable and function. The function \(f\left( x \right)\) is said to be continuous at the point \(x = a\) if the following is valid:
\[\lim\limits_{\Delta x \to 0} \Delta y = \lim\limits_{\Delta x \to 0} \left[ {f\left( {a + \Delta x} \right) - f\left( a \right)} \right] = 0,\]
where \(\Delta x = x - a.\)
All the definitions of continuity given above are equivalent on the set of real numbers.
A function \(f\left( x \right)\) is continuous on a given interval, if it is continuous at every point of the interval.
Continuity Theorems Theorem \(1.\)
Let the function \(f\left( x \right)\) be continuous at \(x = a\) and let \(C\) be a constant. Then the function \(Cf\left( x \right)\) is also continuous at \(x = a\).
Theorem \(2.\)
Let the functions \({f\left( x \right)}\) and \({g\left( x \right)}\) be continuous at \(x = a\). Then the sum of the functions \({f\left( x \right)} + {g\left( x \right)}\) is also continuous at \(x = a.\)
Theorem \(3.\)
Let the functions \({f\left( x \right)}\) and \({g\left( x \right)}\) be continuous at \(x = a.\) Then the product of the functions \({f\left( x \right)}{g\left( x \right)}\) is also continuous at \(x = a.\)
Theorem \(4.\)
Let the functions \({f\left( x \right)}\) and \({g\left( x \right)}\) be continuous at \(x = a\). Then the quotient of the functions \(\large\frac{{f\left( x \right)}}{{g\left( x \right)}} \normalsize\) is also continuous at \(x = a\) assuming that \({g\left( a \right)} \ne 0\).
Theorem \(5.\)
Let \({f\left( x \right)}\) be differentiable at the point \(x = a.\) Then the function \({f\left( x \right)}\) is continuous at that point.
Remark: The converse of the theorem is not true, that is, a function that is continuous at a point is not necessarily differentiable at that point.
Theorem \(6\) (Extreme Value Theorem).
If \({f\left( x \right)}\) is continuous on the closed, bounded interval \(\left[ {a,b} \right]\), then it is bounded above and below in that interval. That is, there exist numbers \(m\) and \(M\) such that
\[m \le f\left( x \right) \le M\]
for every \(x\) in \(\left[ {a,b} \right]\) (see Figure \(1\)).
Theorem \(7\) (Intermediate Value Theorem).
Let \({f\left( x \right)}\) be continuous on the closed, bounded interval \(\left[ {a,b} \right]\). Then if \(c\) is any number between \({f\left( a \right)}\) and \({f\left( b \right)}\), there is a number \({x_0}\) such that
\[f\left( {{x_0}} \right) = c.\]
The intermediate value theorem is illustrated in Figure \(2.\)
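The intermediate value theorem is exactly what Example 4 below relies on. A quick numerical illustration (a sketch, with bisection standing in for the pure existence argument):

```python
def f(x):
    return 2 * x**3 - 3 * x**2 - 15

# f is a polynomial, hence continuous on [2, 3], and f changes sign:
# f(2) = -11 < 0 < 12 = f(3), so by the Intermediate Value Theorem
# there is some x0 in (2, 3) with f(x0) = 0.
# A bisection search locates it:
a, b = 2.0, 3.0
for _ in range(60):
    m = (a + b) / 2
    if f(m) < 0:
        a = m
    else:
        b = m
root = (a + b) / 2
```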
Continuity of Elementary Functions
All elementary functions are continuous at any point where they are defined.
An elementary function is a function built from a finite number of compositions and combinations using the four operations (addition, subtraction, multiplication, and division) over basic elementary functions. The set of basic elementary functions includes:
Algebraic polynomials \(A{x^n} + B{x^{n-1}} + \ldots + Kx + L;\)
Rational fractions \(\large\frac{{A{x^n} + B{x^{n-1}} + \ldots + Kx + L}}{{M{x^m} + N{x^{m-1}} + \ldots + Tx + U}}\normalsize;\)
Power functions \({x^p};\)
Exponential functions \({a^x};\)
Logarithmic functions \({\log _a}x;\)
Trigonometric functions \(\sin x,\) \(\cos x,\) \(\tan x,\) \(\cot x,\) \(\sec x,\) \(\csc x;\)
Inverse trigonometric functions \(\arcsin x,\) \(\arccos x,\) \(\arctan x,\) \(\text{arccot }x,\) \(\text{arcsec }x,\) \(\text{arccsc }x;\)
Hyperbolic functions \(\sinh x,\) \(\cosh x,\) \(\tanh x,\) \(\coth x,\) \(\text{sech }x,\) \(\text{csch }x;\)
Inverse hyperbolic functions \(\text{arcsinh }x,\) \(\text{arccosh }x,\) \(\text{arctanh }x,\) \(\text{arccoth }x,\) \(\text{arcsech }x,\) \(\text{arccsch }x.\)

Solved Problems
Example 1. Using the Heine definition, prove that the function \(f\left( x \right) = {x^2}\) is continuous at any point \(x = a.\)
Example 2. Using the Heine definition, show that the function \(f\left( x \right) = \sec x\) is continuous for any \(x\) in its domain.
Example 3. Using the Cauchy definition, prove that \(\lim\limits_{x \to 4} \sqrt x = 2.\)
Example 4. Show that the cubic equation \(2{x^3} - 3{x^2} - 15 = 0\) has a solution in the interval \(\left( {2,3} \right).\)
Example 5. Show that the equation \({x^{1000}} + 1000x - 1 = 0\) has a root.

Example 1.
Using the Heine definition, prove that the function \(f\left( x \right) = {x^2}\) is continuous at any point \(x = a.\)
Solution.
Using the Heine definition we can write the condition of continuity as follows:
\[
{\lim\limits_{\Delta x \to 0} f\left( {a + \Delta x} \right) = f\left( a \right)\;\;\;}\kern-0.3pt {\text{or}\;\;\lim\limits_{\Delta x \to 0} \left[ {f\left( {a + \Delta x} \right) - f\left( a \right)} \right] } = {\lim\limits_{\Delta x \to 0} \Delta y = 0,} \]
where \(\Delta x\) and \(\Delta y\) are small numbers shown in Figure \(3.\)
At any point \(x = a:\)
\[{f\left( a \right) = {a^2},\;\;\;}\kern-0.3pt{f\left( {a + \Delta x} \right) = {\left( {a + \Delta x} \right)^2}.}\]
So that
\[\require{cancel}
{\Delta y = f\left( {a + \Delta x} \right) - f\left( a \right) } = {{\left( {a + \Delta x} \right)^2} - {a^2} } = {\cancel{a^2} + 2a\Delta x + {\left( {\Delta x} \right)^2} - \cancel{a^2} } = {2a\Delta x + {\left( {\Delta x} \right)^2}.} \]
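Hence, as the Heine condition requires,
\[{\lim\limits_{\Delta x \to 0} \Delta y = \lim\limits_{\Delta x \to 0} \left[ {2a\Delta x + {{\left( {\Delta x} \right)}^2}} \right] = 0,}\]
so the function \(f\left( x \right) = {x^2}\) is continuous at any point \(x = a.\)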
I added tags to comment
(A): multiple errors: it should be $=\exp \bigl[\nu \bigl( \log|x+iy|+i\arg (x+iy)\bigr)\bigr]$. First, the factor $\nu$ is applied to everything; second, we do not allow $(x+iy)$ to cross $(-\infty,0]$, so the angle is defined uniquely and runs from $-\pi$ to $\pi$.
Anyway: $[(x\pm i \varepsilon)^\nu]' =\nu (x\pm i \varepsilon)^{\nu-1}$ from complex variables and here $\nu$ could be even complex.
As $\varepsilon\to +0$, there is a limit in $\mathscr{D}'$ (even in $L^1_{loc}$) of $(x\pm i \varepsilon)^\nu$ provided $\Re\nu >-1$; but then there also exists a limit of its derivative, i.e. of $\nu (x\pm i \varepsilon)^{\nu-1}$, and we can divide by $\nu\ne 0$. So we have defined $(x\pm i0)^{\nu}$ as long as $\Re \nu >-2$ and $\nu \ne -1$.
Repeating, we define $(x\pm i0)^{\nu}$ as long as $\Re \nu >-3$ and $\nu \ne -1,-2$; ... and so on, provided $\nu\ne -1,-2,\ldots$.
Remark.
To mitigate the latter restriction, $f_\nu ^\pm :=\frac{(x\pm i0)^\nu}{\Gamma(\nu+1)}$ could be considered where $\Gamma$ is Euler's $\Gamma$-function; it has simple poles at $0,-1,-2,\ldots$ and $\Gamma(\nu+1)=\nu\Gamma(\nu)$.
(B), (C) are out of the window: they do not follow; also what is $x^{-\nu}$ for $x<0$ and $\nu\notin\mathbb{Z}$? What is $\delta^{\nu-1}$ for $\nu\ne 1,2,\ldots$? We can define those but a posteriori.
I suggested a simple way: look at $\log (x\pm i0)$ as $x>0$ and $x<0$; obviously $\log (x\pm i0)=\log|x|$ as $x>0$ and $\log (x\pm i0)=\log|x|\pm i\pi$ as $x<0$. In other words $\log (x\pm i0)=\log |x| \pm i\pi \theta(-x)$. Differentiating in $\mathscr{D}'$ we get
$$
(x\pm i0)^{-1}= (\log |x|)' \pm i \pi (\theta (-x))'=x^{-1} \mp i\pi \delta(x)
$$
where (see another of the bonus problems, http://forum.math.toronto.edu/index.php?topic=1167.0) $(\log |x|)' =x^{-1}$ in the v.p. sense, and
$(\theta (-x))'=-\delta(x)$.
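Spelled out against a test function $\varphi \in \mathscr{D}$, the displayed identity is exactly the Sokhotski–Plemelj formula:
$$
\langle (x\pm i0)^{-1},\varphi\rangle = \operatorname{v.p.}\!\int_{-\infty}^{\infty}\frac{\varphi(x)}{x}\,dx \mp i\pi\varphi(0).
$$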
PS. Gelfand--Shilov 1--6 (+ coauthors in higher volumes) is a truly remarkable book, but IMHO sometimes they go too far. E.g., considering Fourier transforms of distributions not in $\mathscr{S}'$ they get distributions over some classes of entire analytic functions; in particular they get $\delta (x-c)$ with any $c\in \mathbb{C}$ (which is definitely a perversion). Unfortunately, G. Shilov died too young (at 58) and I never met him. I. Gelfand was a great mathematician.
A propositional proof system according to Cook and Reckhow for a language $L \subseteq \Sigma^{\ast}$ is a deterministic polynomial time function $f : \Sigma^{\ast} \to L$ that is onto.
For $y \in L$ a word $x \in \Sigma^{\ast}$ with $f(x) = y$ is called a
proof for $y$.
Here is a post on the intuition, but I do not get it when I want to apply it.
For example, if I consider the language $$ EQUIV = \{ (u,v) : \mbox{$u$ and $v$ are equivalent regular expressions} \} $$ then I know an algorithm for this language would be to convert these regular expression into NFA's, determinize them and minimize them and then check if they are isomorphic.
But how would a proof system for $EQUIV$ look? Would it be a surjective function $f : \Sigma^{\ast} \to EQUIV$ where the argument somehow codes the regular expressions, DFA's for them and an isomorphism between those DFA's? Then $f$ would simply check whether the isomorphism is a valid isomorphism between the DFA's; I guess this would be a simple task.
But how to check that the regular expressions belong to the DFA's? I am not sure that would be an easy task, as it involves computing DFA's from regular expressions, which might take exponential time, no? Or the proof could code additional DFA's which can easily be composed to regexps by some fixed algorithm (but is it then easy to check that these give minimal DFA's isomorphic to the given ones? That again involves finding an isomorphism, which is not easy, so code that too into the input?).
Or am I on the wrong track, might a proof system look totally different? |
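To make the isomorphism-checking part concrete, here is the kind of verifier I have in mind; this is only a toy sketch, and the DFA encoding and all names are made up:

```python
# A DFA here is (states, alphabet, delta, start, accepting); phi is the claimed
# state bijection supplied as part of the "proof".
def is_dfa_isomorphism(dfa1, dfa2, phi):
    s1, a1, d1, q1, f1 = dfa1
    s2, a2, d2, q2, f2 = dfa2
    if a1 != a2 or set(phi) != s1 or set(phi.values()) != s2:
        return False
    if phi[q1] != q2 or {phi[q] for q in f1} != f2:
        return False
    # The bijection must commute with the transition functions.
    return all(phi[d1[(q, ch)]] == d2[(phi[q], ch)] for q in s1 for ch in a1)

# Two DFAs over {'a'} accepting words of odd length, differing only in labels:
A = ({0, 1}, {'a'}, {(0, 'a'): 1, (1, 'a'): 0}, 0, {1})
B = ({'p', 'q'}, {'a'}, {('p', 'a'): 'q', ('q', 'a'): 'p'}, 'p', {'q'})
print(is_dfa_isomorphism(A, B, {0: 'p', 1: 'q'}))  # True
print(is_dfa_isomorphism(A, B, {0: 'q', 1: 'p'}))  # False: start state not respected
```

This is clearly polynomial time, but it is at most the easy half of the check; relating the given DFA's back to the regular expressions is exactly the part I worry about.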
Let me rephrase what you want to do as:
I want to approximately calculate the integral of the product of two non-negative discrete-time signals over time. Each signal is sampled from a band-limited continuous-time signal at a sampling frequency $2N$ times its highest frequency. A method is preferred that gives about 5 % error at most, is computationally efficient and does not require a large oversampling factor $N.$
Oversampling factor
The bandwidth of the product of two signals is equal to the sum of their bandwidths. Therefore an oversampling factor of $N = 2$ will suffice to represent the product without aliasing.
Integration
Integration is a linear time-invariant operation, basically a continuous-time filter, and applying it does not increase the bandwidth of the signal. What we want to achieve is, with the product signal as "input":
$$\xrightarrow{\text{input}}\boxed{\text{integrate}}\xrightarrow{\text{output}}$$
However, the impulse response of an integrator is of infinite bandwidth, and sampling it to form the impulse response of a discrete-time integrator would cause aliasing, corrupting its frequency response. We can insert into the signal path an ideal lowpass filter that has its cutoff at the bandlimit of the input signal. It does nothing to the signal. The output will remain the same as before:
$$\xrightarrow{\text{input}}\boxed{\text{lowpass}}\xrightarrow{\text{input}}\boxed{\text{integrate}}\xrightarrow{\text{output}}$$
We can combine the lowpass filter and integrator to a single filtering operation. The impulse response of this lowpass integrator filter is the convolution of the impulse responses of the lowpass filter and the integrator filter. This shuffling of operations is enabled by the algebraic properties of convolution, namely associativity. We still get the same desired output as before:
$$\xrightarrow{\text{input}}\boxed{\text{lowpass integrate}}\xrightarrow{\text{output}}$$
The lowpass integrator impulse response:
$$\begin{align}h[k] &= \int_{-\infty}^k\frac{\sin(\pi x)}{\pi x}dx \\ &= \int_0^k\frac{\sin(\pi x)}{\pi x}dx + \frac{1}{2},\end{align}$$
is sampled from the integral of the sinc function, without aliasing. Sinc is the impulse response of the ideal continuous-time low-pass filter with a cutoff frequency at $\pi$.
Figure 1. Impulse response $h[k]$ of an integrator. The impulse response continues indefinitely beyond what is shown, approaching values 0 to the left and 1 to the right.
The impulse response almost represents the process of calculating the sum of (optionally the current sample and) the past samples. As this sum becomes larger and larger, the difference to true integration becomes arbitrarily small, proportionally. The values of $h[k]$ approaching 1 for large $k$ means that for very old samples just summing them gives virtually no error in the integral estimate.
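A quick numerical check of this behavior (a sketch; the trapezoid step count is an arbitrary choice) tabulates $h[k]$ directly from the normalized sinc:

```python
import numpy as np

# h[k] = integral_0^k sinc(x) dx + 1/2, where sinc(x) = sin(pi x)/(pi x);
# np.sinc is exactly this normalized sinc. Simple trapezoid quadrature.
def h(k, steps=20001):
    x = np.linspace(0.0, float(k), steps)
    y = np.sinc(x)
    return float((y[:-1] + y[1:]).sum() * (x[1] - x[0]) / 2.0 + 0.5)

print(round(h(0), 3))    # 0.5
print(round(h(20), 3))   # 0.995, approaching 1 to the right
print(round(h(-20), 3))  # 0.005, approaching 0 to the left
```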
Summary
To summarize, oversample the input signals by a factor of two and calculate with a large accumulator a running sum of obtained samples of the product. This will have an arbitrarily small error compared to true integration, as enough data gets accumulated.
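As a concrete sketch of this summary (all signal parameters below are invented for illustration), two band-limited tones are sampled, multiplied sample by sample, and their running sum scaled by the sample period tracks the true integral:

```python
import numpy as np

# Hypothetical test signals: both tones sit well below fs/2, so their product,
# whose bandwidth is the sum of the two bandwidths, is represented without
# aliasing.
fs = 1000.0                                   # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
v = 1.0 + 0.5 * np.sin(2 * np.pi * 50.0 * t)  # a voltage-like signal
c = 1.0 + 0.3 * np.sin(2 * np.pi * 75.0 * t)  # a current-like signal

p = v * c                      # sample-wise product; highest component 125 Hz
integral = np.cumsum(p) / fs   # running sum scaled by the sample period

# Over exactly 1 s every sinusoidal term completes whole periods and
# integrates to zero, so the exact integral of p is 1.0.
print(round(float(integral[-1]), 6))  # 1.0
```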
Analog filtering
You should also consider what the analog filtering of the current and voltage signals does to the integral of their product. This can be a big source of error. For example let's say that both signals equal $\sin^2(x).$ If this is filtered to remove all oscillation, it becomes a constant signal $\frac{1}{2}$. The average value (power) of the square of the unfiltered signal is $\frac{3}{8}$ and for the filtered signal $\frac{1}{4}$. |
I am not an expert in set theory, so this question could be trivial. I am sorry in that case.
Let $I$ be a set and $\{ X_i \}_{i \in I}$ be a collection of sets such that $X_i \neq \emptyset$ for all $i \in I$. The axiom of choice tells us precisely that the set $$ \prod_{i \in I} X_i $$ is not empty. The use of the word "choice" here is clear. To produce an element $(x_i)\in \prod_{i \in I} X_i$ we need to choose an $x_i \in X_i$ for all $i \in I$.
Now I am wondering: what if there is no choice at all? Namely, what if $X_i = \{x_i\}$ has one element for each $i \in I$? Indeed, in this case $\prod_{i \in I} X_i$ has only one element, namely $(x_i)$, and there is no choice at all. So the question is the following:
Question: is the axiom of choice needed to prove that the product of a family of sets, each with exactly one element, is not empty?
Thanks! |
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Introduction and Motivation
There is currently much progress in probabilistic models which could lead to the development of rich generative models. The models have been applied with neural networks, implicit densities, and with scalable algorithms to very large data for their Bayesian inference. However, most of the models are focused on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. Genome is basically the sum of all DNAs in an organism and contain information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to know about the reason of developing a disease and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models meeting the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise term [math]\epsilon[/math]. For working simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are issues when we apply the causal models since we cannot observe them nor know the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations among SNPs to the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent and global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. Following earlier work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of probabilistic causal models is additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x=g(\epsilon | \theta), \quad \epsilon \sim s(\cdot) [/math]
The causal diagram has changed to:
They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description:
Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, it is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposed a new method which include the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of confounders is set as standard normal: [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen to make the latent space as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP [math]x_{nm}[/math] is coded as a count (0, 1, or 2) of minor alleles.
The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], and used logistic factor analysis to model the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs a full [math]N*M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
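A toy numpy sketch of the SNP generative process just described (toy sizes; a plain linear map stands in for the neural network, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 50, 3                  # individuals, SNPs, confounder dimension

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = rng.normal(size=(N, K))           # confounders z_n ~ Normal(0, I_K)
w = rng.normal(size=(M, K))           # per-SNP variables w_m (PCA-like role)
logits = z @ w.T                      # linear stand-in for the net on (z_n, w_m)
pi = sigmoid(logits)                  # pi_nm in (0, 1)
x = rng.binomial(2, pi)               # x_nm ~ Binomial(2, pi_nm): 0, 1 or 2

print(x.shape)                        # (100, 50)
```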
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders.
can be reduced to
However, with implicit models, this requires integrating over a nonlinear function, which is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]; 1000 Genomes Project (TGP) [PCA]; Human Genome Diversity Project (HGDP) [PCA]; HGDP [Pritchard-Stephens-Donnelly model]; a latent spatial position of individuals for population structure [spatial]. The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives measure the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives state that SNPs have a causal relation with the trait when they don't. The closer the rate is to 1, the better the model, since false positives are wrong predictions.
The results presented above show that the implicit causal model has the best performance among these four models in every situation. In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM still achieves a significantly high rate. The only method comparable to ICM is GCAT, when applied to simpler configurations.
Real-data Analysis
They also applied ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP) and used the same preprocessing as Song et al. Ten implicit causal models were fitted, one for each trait to be modeled. For each of the 10 implicit causal models the dimension of the confounders was set to six, the same as used by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256. See Song et al. for the comparable models in Table 2.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and among population level, but also can adjust for latent confounders by taking account of the latent variables into the model.
By the simulation study, the authors proved that the implicit causal model could beat other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too easy, and far from a realistic situation. Despite the simulation study showing some competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors stated as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The authors did not consider this complex case; rather, they only considered the simplest case where all SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature Genetics, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
Side of a square: \(a\)
Diagonal of a square: \(d\)
Radius of the circumscribed circle: \(R\)
Radius of the inscribed circle: \(r\)
Perimeter: \(P\)
Area: \(S\)
A square is a regular quadrilateral. It has four equal sides and four equal angles (\(90^\circ\) angles).
Diagonal of a square: \(d = a\sqrt 2\)
Radius of the circumscribed circle: \(R = d/2 = a\sqrt 2/2\)
Radius of the inscribed circle: \(r = a/2\)
Perimeter of a square: \(P = 4a\)
Area of a square: \(S = {a^2} = {\large\frac{{{d^2}}}{2}\normalsize} = 2{R^2} = 4{r^2}\)
It isn't particularly clear to me what you mean by
I would like to understand how the parity-violated interaction between electron and proton can provide the photon with circular polarization in the $2s\rightarrow 1s$ transition with single photon emission.
and particularly what you really mean by "understand how ...".
The fact that the symmetries of this transition do allow for a nontrivial difference in the emission rates to different helicities is reasonably well explained in Problem 3 of one document you've linked to ('Final Solutions', to the 221B course at Berkeley, 2001, by Hitoshi Murayama). The specific meaning of 'circular polarization' is explained fairly well in footnote 18, p. 63, problem 1.14, of the book you've linked to, Atomic Physics: An Exploration Through Problems and Solutions (D. Budker, D. F. Kimball and David P. DeMille, Oxford University Press, 2004).
In short, you define a quantization axis (by convention, the $z$ axis), you prepare $2S_{1/2}$ atoms in definite-$\hat{S}_z$ states, and you put detectors at the positive and negative $z$ axis, which are able to detect the helicity of the produced photons, and which are insensitive to the two-photon radiation at half the $2S_{1/2}$-$1S_{1/2}$ energy spacing. Then preparing the $m_s=+1/2$ state will produce a right-circular photon ($S_z=1$) and preparing the $m_s=-1/2$ state will produce a left-circular photon ($S_z=-1$). Within standard QED, both of these channels
do come out with circular polarizations (despite your claims to the contrary in comments), but they do so at equal rates. If you include parity-violating weak interactions, then there will be a nonzero difference in the emission rates for the two, probably at the "large" level of one part in $10^4$. (If I understand the literature below correctly, that is.)
If you want to go beyond that, then there is just an enormous sliding scale of difficulty in terms of how technical you want to get, with no clear cutoff point for what constitutes "understanding" $-$ which is in any case a personal, subjective descriptor.
Still, since you've tagged this as resource-recommendations, here is one way into that rabbit hole:
Searching for "1s-2s" M1 hydrogen parity on Google Scholar turns up a useful trail:
The first paper from that list is 'Parity Nonconservation in Relativistic Hydrogenic Ions', M. Zolotorev and D. Budker, Phys. Rev. Lett. 78, 4717 (1997). Its references 1 and 2 deal with hydrogenic ions: '$2S_{1/2}\to1S_{1/2}+\text{one-photon}$ decay of muonic atoms and parity-violating neutral-current interactions', G. Feinberg and M. Y. Chen, Phys. Rev. D 10, 190 (1974), and 'Analysis of weak neutral currents in hydrogenic ions', R. W. Dunford and R. R. Lewis, Phys. Rev. A 23, 10 (1981). Reference 11 of the latter comprises two papers: 'Weak neutral currents in atoms and μ-mesic atoms', A. N. Moskalev, Zh. Eksp. Teor. Fiz. Pis'ma Red. 19, 229 (1974); JETP Lett. 19, 141 (1974); and 'Some parity-nonconservation effects in emission by hydrogenlike atoms', Ya. I. Azimov et al., Zh. Eksp. Teor. Fiz. 67, 17 (1974); Sov. Phys. JETP 40, 8 (1975).
That last paper is probably where all the real action is - but maybe there are other rabbit-hole branches coming from other reference follow-ups, so do look around.
Still, given how unclear the question is about what exactly it is you want to know, I don't really see how digging into any of these specific resources with a deeper analysis would make sense. (And also: if you don't find anything in this list accessible, then I would suggest that it's too ambitious of a target for you at this stage.)
In any case, though, I have to ask: why are you so interested in this? As the problem set you've linked to lays out very clearly, the two-photon decay mechanism dominates, with the M1 single-photon channel suppressed by a factor of $\alpha^6\approx 1.5 \times 10^{-13}$, or, in other words, "this channel just doesn't happen in real life": you need to prepare $10^{13}$ metastable atoms for one of them to decay via the M1 route, and you need to repeat this $10^4$ times to get any signal at all.
This is not to say that the channel isn't interesting, but that challenge does mean that if you want to use the $2s$-$1s$ transition to test this type of mechanism, your best bet is to look at higher hydrogenic ions or similar systems $-$ as the literature above demonstrates.
And, moreover, this is also why the current work on using precision spectroscopy to test CPT-related effects (which is quite substantial $-$ for a decent recent review on work with hydrogenic systems see e.g. 'Lorentz and CPT tests with hydrogen, antihydrogen, and related systems', V. Alan Kostelecký and Arnaldo J. Vargas, Phys. Rev. D 92, 056002 (2015)) does not use this system.
Angles (arguments of functions): \(x,\) \(y\)
Trigonometric functions: \(\sin x,\) \(\cos x,\) \(\tan x,\) \(\cot x,\) \(\sec x,\) \(\csc x\)
Hyperbolic functions: \(\sinh x,\) \(\cosh x,\) \(\tanh x,\) \(\coth x,\) \(\text{sech }x,\) \(\text{csch }x\)
Imaginary unit: \(i\)
Relationship between the sine and hyperbolic sine: \(\sin {(ix)} = i\sinh x\)
Relationship between the cosine and hyperbolic cosine: \(\cos {(ix)} = \cosh x\)
Relationship between the tangent and hyperbolic tangent: \(\tan {(ix)} = i\tanh x\)
Relationship between the cotangent and hyperbolic cotangent: \(\cot {(ix)} = -i\coth x\)
Relationship between the secant and hyperbolic secant: \(\sec {(ix)} = \text{sech }x\)
Relationship between the cosecant and hyperbolic cosecant: \(\csc {(ix)} = -i\,\text{csch }x\)
Sine of a complex number: \(\sin \left( {x + iy} \right) = \sin x\cosh y + i\cos x\sinh y\)
Cosine of a complex number: \(\cos \left( {x + iy} \right) = \cos x\cosh y - i\sin x\sinh y\)
In "Knots and Primes: An Introduction to Arithmetic Topology", the author uses the following proposition
Let $h: Y \to X$ be a covering. For any path $\gamma : [0,1] \to X$ and any $y \in h^{-1}(x)$ (where $x = \gamma(0)$), there exists a unique lift $\hat{\gamma} : [0,1] \to Y$ of $\gamma$ with $\hat{\gamma}(0) = y$. Furthermore, for any homotopy $\gamma_t$ ($t \in [0,1]$) of $\gamma$ with $\gamma_t(0) = \gamma(0)$ and $\gamma_t(1) = \gamma(1)$, there exists a unique lift $\hat{\gamma}_t$ of $\gamma_t$ such that $\hat{\gamma}_t$ is a homotopy of $\hat{\gamma}$ with $\hat{\gamma}_t(0) = \hat{\gamma}(0)$ and $\hat{\gamma}_t(1) = \hat{\gamma}(1)$.
The author then follows with; "In the following, we assume that any covering space is connected. By the preceding proposition, the cardinality of the fiber $h^{-1}(x)$ is independent of $x \in X$."
I am not sure why this result is true. This is my attempt at explaining it to myself. Take two different $x_1, x_2$ and their fibers $h^{-1}(x_1), h^{-1}(x_2)$ such that $y_1 \in h^{-1}(x_1)$ and $y_2 \in h^{-1}(x_2)$. Take two paths $\gamma_1, \gamma_2$ such that $\gamma_1(0) = x_1$ and $\gamma_2(0) = x_2$. Then we get two lifts $\hat{\gamma_1}, \hat{\gamma_2}$ with $\hat{\gamma_1}(0) = y_1$ and $\hat{\gamma_2}(0) = y_2$. Now, since our covering space is connected we can continuously deform $\hat{\gamma_1}$ into $\hat{\gamma_2}$ and conclude that every element in $h^{-1}(x_1)$ is also in $h^{-1}(x_2)$ and vice versa. I feel that this is wrong but cannot figure out the right way to see this. Any help would be appreciated.
I have calculated the exact time evolution of a simple 1-D qubit lattice (2008 paper) and this is what I've found for $\rho(t)$ containing one excitation of 2 qubit site $(|1\rangle,|2\rangle)$ + 1 sink $(|3\rangle)$ + a vacuum state $(|0\rangle)$:
We can observe that in the beginning the excitation starts in site $1$, hops into $2$, and ends in the sink (with stable population $\approx0.7$); some of it is dissipated (transferred to the vacuum). This corresponds to the evolution of the von Neumann entropy ($k_B\equiv1$) like
This is a complete system $(\mathrm{Tr}\rho(t)=1,\;t>0)$ and it is said, e.g. in this answer, that equilibrium is achieved when entropy is maximum. For my result this is clearly not the case, since the entropy bumps and then settles as $t\rightarrow\infty$. I know this is partially because the population $\rho_{11}$ goes from $1$ to $0$, but what is really going on here? What makes this result seemingly inconsistent with the statement "entropy always increases"?
Edit:
The Hamiltonian is \begin{equation} H=\sum_{k=1}^N \omega_k\sigma^+_k\sigma^-_k + \sum_{k<l}\nu_{kl}(\sigma^+_k\sigma^-_l + \sigma^-_k\sigma^+_l) \end{equation} and the system follows Lindbladian evolution with \begin{equation} \mathcal{L}_{\mathrm{dissipation}}(\rho)=\sum_{k=1}^N \Gamma_k [-\{\sigma_k^+\sigma_k^-,\rho\}+2\sigma^-_k\rho\sigma^+_k], \\ \mathcal{L}_{\mathrm{dephasing}}(\rho)=\sum_{k=1}^N \gamma_k [-\{\sigma_k^+\sigma_k^-,\rho\}+2\sigma_k^+\sigma_k^-\rho\sigma_k^+\sigma_k^-], \\ \end{equation} here site $2$ is connected to the sink, where the population cannot escape, \begin{equation} \mathcal{L}_{\mathrm{sink}}(\rho)=\Gamma_{N+1} [-\{\sigma_2^+\sigma_{N+1}^-\sigma_{N+1}^+\sigma_{2}^-,\rho\}+2\sigma_{N+1}^+\sigma_{2}^-\rho\sigma_{2}^+\sigma_{N+1}^-]. \\ \end{equation}
I took $N=2$ and work in one-exciton manifold (denoted by sites $|1\rangle,|2\rangle$ and the sink $|3\rangle$) plus a vacuum $|0\rangle$ so that \begin{equation} \rho=\rho_{11}|1\rangle\langle 1| +\rho_{22}|2\rangle\langle2|+\rho_{33}|3\rangle\langle3|+\rho_{00}|0\rangle\langle 0| + \mathrm{off-diagonals}, \end{equation} and also we can write $\sigma^-_n=|0\rangle\langle n|$. The parameters: $\Gamma_{1,2}=0.01,\Gamma_3=0.2,\nu_{12}=0.1,\gamma=0.02$. I got the same analytical result as in the paper. |
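As a sanity check on the behaviour described above, here is a small NumPy sketch of the von Neumann entropy $S=-\mathrm{Tr}\,\rho\ln\rho$ with $k_B=1$. The three diagonal states are hypothetical stand-ins for early, intermediate, and late times (not the actual $\rho(t)$ from the paper); they illustrate why the entropy can rise and then fall while the trace stays 1:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from eigenvalues (k_B = 1)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # drop numerical zeros
    return max(0.0, float(-np.sum(p * np.log(p))))  # clip tiny fp noise

# Hypothetical diagonal states (sites |1>, |2>, sink |3>, vacuum |0>):
early  = np.diag([1.0, 0.0, 0.0, 0.0])   # all population in site 1 -> S = 0
middle = np.diag([0.4, 0.3, 0.2, 0.1])   # spread over states -> S near maximum
late   = np.diag([0.0, 0.0, 0.7, 0.3])   # sink + vacuum only -> S drops again

for rho in (early, middle, late):
    print(round(von_neumann_entropy(rho), 3))
```

Pure unitary evolution preserves entropy; it is the Lindblad terms that first spread and then re-concentrate the population, which is why the curve is not monotone.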
stat946w18/Implicit Causal Models for Genome-wide Association Studies
Introduction and Motivation
There is currently much progress in probabilistic models that could lead to the development of rich generative models. These models have been combined with neural networks and implicit densities, and with scalable algorithms for Bayesian inference on very large data. However, most of these models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is the result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. Genome is basically the sum of all DNAs in an organism and contain information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to know about the reason of developing a disease and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges in combining modern probabilistic models and causality. The first is how to build rich causal models meeting the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For simplicity, [math]f[/math] is usually assumed to be a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are issues when we apply the causal models since we cannot observe them nor know the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations among SNPs to the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent and global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we set [math]X[/math] to the value [math]x[/math] under the fixed structure [math]\beta[/math]. Following earlier work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of probabilistic causal models is additive noise model.
[math]f(\cdot)[/math] is usually a linear function, or a spline function to capture nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as is [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x=g(\epsilon | \theta), \quad \epsilon \sim s(\cdot) [/math]
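A minimal sketch of this "noise in, sample out" structure: the network shape, sizes, and parameter names below are illustrative assumptions, not the architecture from the paper. The key point is that sampling from the model is trivial while the density of [math]x[/math] remains implicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network g(eps | theta); sizes are illustrative.
theta = {"W1": rng.normal(size=(1, 16)), "b1": np.zeros(16),
         "W2": rng.normal(size=(16, 1)), "b2": np.zeros(1)}

def g(eps, theta):
    h = np.maximum(eps @ theta["W1"] + theta["b1"], 0.0)  # ReLU hidden layer
    return h @ theta["W2"] + theta["b2"]

# x = g(eps | theta), eps ~ s(.): we can draw samples of x, but we never
# evaluate p(x | theta) in closed form -- that is what makes the model implicit.
eps = rng.standard_normal((1000, 1))
x = g(eps, theta)
print(x.shape)  # (1000, 1)
```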
The causal diagram has changed to:
They used fully connected neural networks with a fair number of hidden units to approximate each causal mechanism. Below is the formal description.
Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the quantity of interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, standard inference methods cannot be used in this case.
The paper proposed a new method which include the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] must be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that we can explain how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of the confounders is set to standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen to make the latent space as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP is coded as an allele count. The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math] and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs a full [math]N*M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
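A toy version of this generative process, using simple inner-product (logistic-factor-analysis-style) logits rather than the paper's neural network; all sizes and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 5, 8, 2  # individuals, SNPs, confounder dimension (illustrative sizes)

z = rng.standard_normal((N, K))  # per-individual confounders z_n ~ N(0, I)
w = rng.standard_normal((M, K))  # per-SNP variables w_m, shared across individuals

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Linear logits as the simple baseline; the paper replaces this inner product
# with a neural network of (z_n, w_m) to learn nonlinear interactions.
pi = sigmoid(z @ w.T)       # N x M matrix of allele probabilities pi_nm
x = rng.binomial(2, pi)     # x_nm ~ Binomial(2, pi_nm), allele counts in {0, 1, 2}
print(x.shape, x.min() >= 0, x.max() <= 2)  # (5, 8) True True
```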
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key of applying the implicit causal model with latent confounders.
which can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]; 1000 Genomes Project (TGP) [PCA]; Human Genome Diversity Project (HGDP) [PCA]; HGDP [Pritchard-Stephens-Donnelly model]; and a latent spatial position of individuals for population structure [spatial].
The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives. True positives are SNPs correctly identified as having a causal relation with the trait; false positives are SNPs reported as causal when they are not. The closer this rate is to 1, the better the model, since false positives count as wrong predictions.
The results presented above show that the implicit causal model has the best performance among the four models in every situation. In particular, the other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, whereas ICM achieves a significantly higher rate. The only method comparable to ICM is GCAT, when applied to simpler configurations.
Real-data Analysis
They also applied ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP) and used the same preprocessing as Song et al. Ten implicit causal models were fitted, one for each trait. For each of the 10 implicit causal models the dimension of the confounders was set to six, the same as used in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256; see Song et al. for comparable models in Table 2.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and among population level, but also can adjust for latent confounders by taking account of the latent variables into the model.
By the simulation study, the authors proved that the implicit causal model could beat other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in the paper is too simple and far from realistic situations. Although the simulation study shows competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors themselves state. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The authors did not consider this complex case; they only considered the simplest case, in which all SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain a causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
Search
Showing records 1-10 of 16
Asymptotic convergence of the solutions of a dynamic equation on discrete time scales
(Hindawi, 2012-01-03)
It is proved that, for the asymptotic convergence of all solutions, the existence of an increasing and asymptotically convergent solution is sufficient. Therefore, the main attention is paid to the criteria for the existence ...
Discrete matrix delayed exponential for two delays and its property
(Springer Nature, 2013-06-13)
In recent papers, a discrete matrix delayed exponential for a single delay was defined and its main property connected with the solution of linear discrete systems with a single delay was proved. In the present paper, a ...
Bounded solutions of delay dynamic equations on time scales
(Springer Nature, 2012-10-24)
In this paper we discuss the asymptotic behavior of solutions of a delay dynamic equation $$y^{\Delta}(t)=f(t,y(\tau(t)))$$ where $f\colon\mathbb{T}\times\mathbb{R}\rightarrow\mathbb{R}$ and $\tau\colon\mathbb{T}\rightarrow\mathbb{T}$ is a ...
Stabilization of Lure-type Nonlinear Control Systems by Lyapunov-Krasovskii Functionals
(Springer Nature, 2012-10-24)
The paper deals with the stabilization problem of Lure-type nonlinear indirect control systems with time-delay argument. The sufficient conditions for absolute stability of the control system are established in the form ...
An efficient new perturbative Laplace method for space-time fractional telegraph equations
(Springer Nature, 2012-11-27)
In this paper, we propose a new technique for solving space-time fractional telegraph equations. This method is based on perturbation theory and the Laplace transformation.
On a delay population model with quadratic nonlinearity
(Springer Nature, 2012-12-28)
In this paper, a nonlinear delay differential equation with quadratic nonlinearity is investigated. It is proved that the positive equilibrium is globally asymptotically stable without any further limitations on parameters ...
Using acoustic emission methods to monitor cement composites during setting and hardening
(MDPI AG, 2017-04-28)
Cement-based composites belong among the basic building materials used in civil engineering. Their properties are given not only by their composition but also by their behaviour after mixing as well as by the methods of ...
Explicit general solution of planar linear discrete systems with constant coefficients and weak delays
(Springer Nature, 2013-03-06)
In this paper, planar linear discrete systems with constant coefficients and two delays $$ x(k+1)=Ax(k)+Bx(k-m)+Cx(k-n) $$ are considered where $k\in\mathbb{Z}_0^{\infty}:=\{0,1,\dots\}$, $x\colon \mathbb{Z}_0^{\infty}\to\mathbb{R}^2$, ...
Modeling of applied problems by stochastic systems and their analysis using the moment equations
(Springer Nature, 2013-10-09)
The paper deals with systems of linear differential equations with coefficients depending on the Markov process. Equations for particular density and the moment equations for given systems are derived and used in the ...
The Priestley-Chao Estimator of Conditional Density with Uniformly Distributed Random Design
(Český statistický úřad, 2018-09-21)
The present paper is focused on non-parametric estimation of conditional density. Conditional density can be regarded as a generalization of regression thus the kernel estimator of conditional density can be derived from ... |
EViews 10 New Econometrics and Statistics: Estimation
EViews 9 introduced Threshold Regression (TR) and Threshold Autoregression (TAR) models, and EViews 10 expands up these model by adding Smooth Threshold Regression and Smooth Threshold Autoregression as options.
In STR models, the regime switching that occurs when an observed variable crosses unknown thresholds happens smoothly. As a result, STR models are often considered to have more “realistic” dynamics than their discrete TR model counterparts.
EViews' implementation of STR includes features such as:
Estimation of parameters for both shape and location of the smooth threshold. Model selection for the threshold variable. Specification of both regime-varying and regime non-varying regressors.
EViews has included both White and Heteroskedasticity and Autocorrelation Consistent Covariance (HAC) estimators of the least-squares covariance matrix for over twenty years.
EViews 10 expands upon these robust standard error options with the addition of a family of heteroskedasticity-consistent covariance estimators and clustered standard errors.
EViews 10 increases the options for heteroskedasticity-consistent covariance estimators beyond the familiar White estimator available in previous versions. The class of estimators supported belongs to the HC family described by Long and Ervin (2000) and Cribari-Neto and da Silva (2011).
The estimators differ in their choice of observation-specific weights used to improve the finite sample properties of the residual error covariance.
Specifically, EViews supports the following estimators and weight choices:
$$ \begin{array}{|l|c|} \hline \hfill \text{Method} \hfill & \text{Weight}\\ \hline \text{HC0 - White} & 1\\ \hline \text{HC1 - White with d.f. correction} & \sqrt{T/(T-k)}\\ \hline \text{HC2 - bias corrected} & (1-h_t)^{-1/2}\\ \hline \text{HC3 - pseudo-jackknife} & (1-h_t)^{-1}\\ \hline \text{HC4 - relative leverage} & (1-h_t)^{-\delta_t/2}\\ \hline \text{HC4m} & (1-h_t)^{-\gamma_t/2}\\ \hline \text{HC5} & (1-h_t)^{-\delta_t/4}\\ \hline \text{User - user specified} & \text{arbitrary} \\ \hline \end{array} $$ where $h_t = X_t^\top \left(X^\top X\right)^{-1}X_t$ are the diagonal elements of the familiar "hat matrix" $H = X \left(X^\top X\right)^{-1}X^\top$, and $\delta_t$ and $\gamma_t$ are discount factors.
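To make the table concrete, here is a NumPy sketch of how the HC0-HC3 variants differ (this is not EViews code; note that the table's per-observation weights multiply the residuals, so they enter the diagonal of Ω squared, e.g. HC2's $(1-h_t)^{-1/2}$ becomes $(1-h_t)^{-1}$ below):

```python
import numpy as np

def hc_std_errors(X, resid, kind="HC0"):
    """Sandwich covariance (X'X)^-1 X' Omega X (X'X)^-1 with diagonal Omega
    built from weighted squared residuals."""
    T, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ti,ij,tj->t", X, XtX_inv, X)  # hat-matrix diagonal h_t
    w2 = {"HC0": np.ones(T),                      # squared table weights
          "HC1": np.full(T, T / (T - k)),
          "HC2": 1.0 / (1.0 - h),
          "HC3": 1.0 / (1.0 - h) ** 2}[kind]
    omega = w2 * resid ** 2
    cov = XtX_inv @ (X.T * omega) @ X @ XtX_inv
    return np.sqrt(np.diag(cov))

# Usage on a small heteroskedastic regression (illustrative data):
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(50) * (1 + np.abs(X[:, 1]))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(hc_std_errors(X, y - X @ beta, "HC3"))
```

Since $(1-h_t)^{-2} \ge 1$, the HC3 standard errors are always at least as large as HC0's, which is why HC3 is often preferred in small samples.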
In many settings, observations may be grouped into different groups or “clusters” where errors are correlated for observations in the same cluster and uncorrelated for observations in different clusters. EViews 10 offers support for consistent estimation of coefficient covariances that are robust to either one and two-way clustering.
As with the HC estimators, EViews supports a class of cluster-robust covariance estimators, with each estimator differing on the weights it gives to observations in the cluster.
The weighting of each estimator is as follows:
$$ \begin{array}{|l|c|} \hline \hfill \text{Method} \hfill & \text{Weight}\\ \hline \text{CR0 - Ordinary} & 1\\ \hline \text{CR1 - finite sample corrected (default)} & \sqrt{\frac{G}{(G-1)} \cdot \frac{(T-1)}{(T-k)}}\\ \hline \text{CR2 - bias corrected} & (1-h_t)^{-1/2}\\ \hline \text{CR3 - pseudo-jackknife} & (1-h_t)^{-1}\\ \hline \text{CR4 - relative leverage} & (1-h_t)^{-\delta_t/2}\\ \hline \text{CR4m} & (1-h_t)^{-\gamma_t/2}\\ \hline \text{CR5} & (1-h_t)^{-\delta_t/4}\\ \hline \text{User - user specified} & \text{arbitrary} \\ \hline \end{array} $$ where $h_t = X_t^\top \left(X^\top X\right)^{-1}X_t$ are the diagonal elements of the familiar "hat matrix" $H = X \left(X^\top X\right)^{-1}X^\top$, $\delta_t$ and $\gamma_t$ are discount factors, and $G$ is the number of clusters.
The basic $k$-variable VAR(p) specification has $k(pk+d)$ coefficients so that even moderate sized VARs require estimation of a large number of parameters. When VARs are applied to macroeconomic data with limited sample sizes, model over-parameterization is a frequent problem as there are too few observations to estimate precisely the VAR parameters.
EViews now offers support for the linear restriction approach to handling this over-parameterization problem.
One of the key elements behind Structural VAR estimation is the necessary imposition of restrictions on the residual structure matrices.
These restrictions generally take the form of restrictions on the factorization matrices, A and B, restrictions on the short-run impulse response matrix S, or restrictions on the long-run impulse response matrix F (or C), or a combination of the above.
Previous versions of EViews only allowed restrictions on A and B, or on F. EViews 10 broadens the restriction engine by allowing restrictions on any of the four matrices, adding linear restrictions, and adds a new interface allowing easier specification of the restrictions.
In EViews 10 you may now, from an estimated standard VAR, easily perform historical decomposition, the innovation-accounting technique proposed by Burbidge and Harrison (1985).
Historical decomposition decomposes forecast errors into components associated with structural innovations (computed by weighting ordinary residuals).
Dynamic forecasting using simulation methods is now supported from the equation forecast dialog.
Autoregressive Distributed Lag (ARDL) estimation has been drastically improved for EViews 10. In particular, EViews now allows absolute control over lag specification.
Any of the variables (dependent or regressor) can be specified with a custom lag, and you can mix the specification, allowing certain variables to have fixed custom lags while the remainder have their lags chosen via model selection methods.
Moreover, in the context of the ARDL approach to the Bounds Cointegration Test of Pesaran, Shin and Smith (2001) (PSS), EViews now offers inference under all 5 deterministic cases considered in PSS. Also, alongside the asymptotic critical values provided in PSS, EViews now offers finite sample critical values from Narayan (2005).
Finally, in addition to the Bounds F-test, EViews now also reports the appropriate Banerjee, Dolado, Mestre (1998) (BDM) t-bounds test.
Classification of Elements and Periodicity in Properties
Atomic Radii, Ionic Radii, Ionization Energy, Electron Affinity, Electronegativity
Periodicity of Properties:
Repetition of similar properties at regular intervals
For groups IA, IIA and 0 the properties repeat at intervals of 2, 8, 8, 18, 18, 32; for groups IIIA - VIA at intervals of 8, 18, 18, 32.
Properties showing periodicity: atomic radius, ionization potential, electron affinity, electronegativity, metallic and non-metallic character, valency, electropositivity, nature of oxides.
Atomic Radius: the distance between the nucleus and the outermost electron, expressed in Å, nm, pm, etc. Experimental determination of the atomic radius is not possible, since atoms have no sharp boundary.
⇒ Radius of H atom: \(r_H = \frac{D_{H-H}}{2}=\frac{0.74}{2} = 0.37\) Å
Crystal Radius / Metallic Radius: half of the internuclear distance between two adjacent nuclei, e.g.
\(r_{Na} = \frac{d_{Na-Na}}{2}\)
Van der Waals Radius: defined at the minimum-force separation between particles; as size increases, the van der Waals force of attraction (VWFOA) increases. In Cl2 ----- Cl2, the VWFOA acts between the two chlorine molecules.
\(V_R > M_R > C_R\), where \(V_R\) = van der Waals radius, \(M_R\) = metallic radius, \(C_R\) = crystal radius.
Factors influencing atomic radius: effective nuclear charge, \(Z^{*} = \left(\frac{\text{No. of protons}}{\text{No. of electrons}}\right)\); e.g. \(Na^{+} = \frac{11}{10},\ Mg^{+2} = \frac{12}{10},\ Al^{+3} = \frac{13}{10}\)
Factors influencing ionization energy:
→ Atomic radius: \(IP \ (or) \ IE \propto \frac{1}{AR}\)
→ Nuclear charge: increases from left to right.
→ Screening / shielding effect: s > p > d > f; \(IP \propto \frac{1}{\text{Screening}}\)
→ Half-filled and completely filled sub-shells: C (2s²2p²), N (2s²2p³), O (2s²2p⁴); N, having a half-filled sub-shell, is more stable. Zn (4s²3d¹⁰), being completely filled, is more stable.
Effective nuclear charge (Z*) and screening constant: Z* = Z − σ (Z = atomic number). Screening constant (σ), following Slater's rules: electrons in the nth shell → 0.35; electrons in the (n − 1) shell → 0.85; electrons in the (n − 2) and inner shells → 1.0.
C: 1s² 2s² 2p²; for a 2p test electron, σ = 3(0.35) + 2(0.85) = 2.75, so Z* = 6 − 2.75 = 3.25.
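The Slater-rule computation for carbon can be scripted directly from the simplified weights given above (the function and argument names are my own, and these are the notes' simplified weights, not the full Slater rules):

```python
def slater_zeff(Z, n_same, n_minus1, n_inner):
    """Z* = Z - sigma with the simplified Slater weights from the notes:
    0.35 per other electron in the same shell, 0.85 per (n-1)-shell electron,
    1.00 per deeper electron."""
    sigma = 0.35 * n_same + 0.85 * n_minus1 + 1.00 * n_inner
    return Z - sigma

# Carbon 1s2 2s2 2p2: a 2p test electron sees 3 other n=2 electrons
# and 2 electrons in the n=1 shell.
print(round(slater_zeff(6, n_same=3, n_minus1=2, n_inner=0), 2))  # 3.25
```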
Lanthanoid Contraction: the differentiating electron enters the deep-seated 4f orbitals; since shielding follows s > p > d > f, the 4f orbitals shield poorly and the size decreases gradually.
Consequences of Lanthanoid Contraction:
→ the atomic sizes of corresponding 4d- and 5d-series elements are almost the same, e.g. Zr - Hf, Nb - Ta;
→ the IP values of 5d-series elements are greater than those of the 4d-series elements;
→ the basic nature of the hydroxides decreases (and their covalent nature increases) from Ce(OH)₃ to Lu(OH)₃.
Ionization Potential: the amount of energy required to remove the most loosely held electron from an isolated neutral gaseous atom.
\(X_{(g)} + IE_1 \rightarrow X_{(g)}^{+} + e^{-}\), \(\Delta H = +IE_1\)
1 eV = 1.6 × 10⁻¹⁹ J/atom ≈ 3.83 × 10⁻²⁰ cal/atom; per mole, 1 eV = 96.45 kJ/mol = 23.06 kcal/mol.
Electron Gain Enthalpy (\(\Delta_{eg}H\)):
\(X_{(g)} + e^{-}\rightarrow X_{(g)}^{-},\ \Delta H = -EA_{1}\)
Note: except for Be, Mg, N, He, Ne, Ar, Kr, Xe, EA₁ is always positive.
\(\Delta_{eg}H = -E_{A} - \frac{5}{2}RT\)
\(X_{(g)}^- + e^{-} \rightarrow X_{(g)}^{2-};\ \Delta H_{2} = +EA_{2}\) is endothermic.
Electronegativity: the tendency of a bonded atom to attract the bonding electrons.
Pauling scale of EN: \(X_{A} - X_{B} = 0.208\sqrt{\Delta}\) (Δ in kcal/mol); \(X_{A} - X_{B} = 0.1017\sqrt{\Delta}\) (Δ in kJ/mol), where \(\Delta = \left(E_{A - B}\right)_{exp} - \frac{1}{2}\left(E_{A - A} + E_{B - B}\right)_{theoretical}\)
Mulliken's scale of EN: \(EN = \frac{IP + EA}{2}\); converted to the Pauling scale, \(EN = \frac{IP + EA}{2 \times 2.8} = \frac{IP + EA}{5.6}\) (in eV) \(= \frac{IP + EA}{130}\) (kcal/mol) \(= \frac{IP + EA}{544}\) (kJ/mol)
Hybridization: as the s-character of the hybrid orbital increases, EN increases.
Hybridization vs. electronegativity:
sp³: 25% s-character, EN = 2.55
sp²: 33.33% s-character, EN = 2.75
sp: 50% s-character, EN = 3.25
Order: sp > sp² > sp³
Electronegativity Applications:
% ionic character = 16(X_A − X_B) + 3.5(X_A − X_B)²
Electronegativity difference = 1.7 ⇒ 50% covalent and 50% ionic character; = 0 ⇒ 100% covalent character; < 1.7 ⇒ more than 50% covalent; > 1.7 ⇒ more than 50% ionic.
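The % ionic character formula is easy to evaluate directly; a small sketch (the electronegativity values in the example are illustrative Pauling-style values, not from these notes):

```python
def percent_ionic(x_a, x_b):
    """% ionic character = 16*d + 3.5*d^2 with d = |X_A - X_B|,
    as given in the notes."""
    d = abs(x_a - x_b)
    return 16 * d + 3.5 * d * d

# Example with an electronegativity difference of 0.9 (e.g. roughly H vs Cl):
print(round(percent_ionic(3.0, 2.1), 2))
```

For a difference of 0.9 this gives about 17% ionic character, consistent with the rule of thumb that differences below 1.7 mean a predominantly covalent bond.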
1. Atomic radius, r_A = (r_A + r_A)/2 = d_{A−A}/2 [distance A–A = radius of A + radius of A]
2. For a heterodiatomic molecule AB (Schomaker–Stevenson), d_{A−B} = r_A + r_B − 0.09(X_A − X_B), where X_A and X_B are the electronegativities of A and B.
3. Mulliken scale: electronegativity (x) = (IE + Δ_eg H)/2
4. Pauling scale: the difference in electronegativity of two atoms A and B is given by x_B − x_A = 0.208√Δ, where Δ = E_{A−B} − √(E_{A−A} × E_{B−B}) (Δ is known as the resonance energy); E_{A−B}, E_{A−A} and E_{B−B} are the bond dissociation energies of the bonds A–B, A–A and B–B respectively.
5. Allred and Rochow's scale: electronegativity = 0.744 + 0.359 Z_eff / r², where Z_eff is the effective nuclear charge, Z_eff = Z − σ, and σ is the screening constant; its value can be determined by Slater's rules.
6. Covalent radius, r_c = ½ × (bond length)
7. The screening effect and effective nuclear charge are very closely related: Z' = Z − σ, where Z' = effective nuclear charge, Z = atomic number, and σ = screening constant.
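A quick numeric sketch using the Schomaker–Stevenson form d(A−B) = r_A + r_B − 0.09|X_A − X_B| (the covalent radii r_H ≈ 0.37 Å and r_Cl ≈ 0.99 Å, and the electronegativities, are assumed values for illustration):

```python
def bond_length(r_a, r_b, x_a, x_b):
    # Schomaker-Stevenson: d(A-B) = r_A + r_B - 0.09*|X_A - X_B|  (in angstroms)
    return r_a + r_b - 0.09 * abs(x_a - x_b)

print(round(bond_length(0.37, 0.99, 2.1, 3.0), 2))  # ~1.28 A for H-Cl
```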
I generally agree with @dm63's answer: A convex (concave) smile around the forward usually indicates a leptokurtic (platykurtic) implied risk-neutral probability density. Both situations may or may not admit arbitrage. I provide you with two counterexamples to your statements. A volatility smile that is concave around the forward does not necessarily ...
From an equities perspective, there are two concepts that should not be confused in my opinion and context should make the distinction self-explicit:Forward variance swap volatility (A)Forward implied volatility smile (B)I really recommend reading Bergomi's "Stochastic Volatility Modeling" which is an excellent book for equity practitioners. The topics ...
It's because of the settlement days you passed when you initialized the flat volatility curve. You're creating the spot, forward and flat volatilities as:boost::shared_ptr<BlackVarianceSurface> volatilitySurface(new BlackVarianceSurface(todaysDate, calendar,maturityArray, strikeArray,...
You are absolutely correct that they should be seen as approximations. While it would be nice to let h go to zero in a mathematical sense this is of course impossible in real life as the options are only traded in particular intervals. While the smallest interval may be less than 25, for historical reasons traders have gotten used to using the 25 point....
Either you or some reference you are following is in error here. At-the-money (or at least near-the-money) options are the most liquidly traded. And trading is much more heavy in out-of-the-money than in-the-money options.
Some NotationsIt's easy to get lost so let's introduce some notations and let$$ \sigma : (t, S, K, \tau) \to \sigma(K,\tau; S, t) $$denote the implied volatility smile prevailing at time $t$ when the spot price is $S_t=S$ for an option with strike level $K$ and time to expiry $\tau=T-t$. From here onward, we drop the $t$ argument to keep notations ...
There is nothing in simple cubic spline fitting routines that would prevent arbitrage. Even with conscientious use of knot points and smoothing techniques you may end up with simple spread and local volatility arbitrage conditions. Stochastic volatility models on the other hand can explicitly constrain your solutions to prevent call/ put spread arbitrage at ...
You can see concavity in mean-reverting underlying assets where the option tenor is comparable to the characteristic reversion time of the asset. For a geometric brownian motion, all underlying prices are possible, so any mean reversion or other limitation on large changes that might occur in reality would ultimately appear as a skinny tail and negative ...
There are lots of papers online and here are a few I would suggestmath.umnriskworxG. Dimitroff, J. de KockNowak, SibetzI you have matlab there is an step step example to calibrate SABR model. Since it uses the financial toolbox of matlab for a few functions I dont think you can replicate it in any other language. There must be C++ code available ...
It is possible, yes, but it requires assumptions. But, philosophically speaking, this is the case as with all pricing, of any instrument. For example, given only the price of a 6Y and 7Y IRS can you correctly price the 6.5Y IRS rate? Well, yes you can, but it depends upon your assumptions about interpolation which is a subjective choice.Lets look ...
One possible reason could be jumps. Over the longer maturity, there could be more jumps so the jumps average out in a way; whereas over the short term, a jump can make a bigger difference and hence the risk of jump increases demand.This reasoning is used to justify Stochastic volatility with jumps models in some books.
The central limit theorem guarantees, under fairly general assumptions, that the sum of returns becomes more normally distributed as the number of returns grows (technically, defining a return as $\mathrm{log}(S_{t+\Delta t}/S_t)$, $\sum_i ^n \mathrm{log}(S_{t+\Delta t i}/S_{t+\Delta t (i-1)} \to \mathcal{N}(\cdot,\cdot)$ as $ n \to \infty $). Thus, as $T$ ...
This is merely a question of notation, you should simply read$$ \sigma(K,T) = \sigma(S_t=K, t=T) $$For an easy to follow derivation see this excellent note from Fabrice RouahSome intuition behind the developments:The price of a European option, for instance a call, can be written in integral form:$$ C(t, S_t, K, T) = e^{-r(T-t)} \int_0^\infty (S_T-K)^...
Regarding your second question: Remember that Black/Scholes start by postulating a stochastic model for the dynamics of the underlying asset - a geometric Brownian motion with a constant diffusion coefficient $\sigma$. This asset price process should be the same no matter what option you want to value based on it. Saying that you allow for different values ...
I work in a relatively illiquid and old-fashioned market (options on power), where trades are arranged via phone & broker, so the issue of low underlying liquidity is definitely there. To remedy this, all options are dealt with delta hedge, where the price level of the delta hedge is pre-agreed, so market moves during arrange a trade do not matter as ...
I wonder if the reasons these approximations are widely used - instead of a whole set of estimates for different deltas, as proposed - have to do with liquidity and market structure.Liquidity: A market participant willing to trade e.g. a 10 delta option for no economic reason other than skew will find, for many products, that the edge evident from a fitted ...
Negative excess kurtosis leads to a concave vol smile. By the way, no-arbitrage arguments are of a theoretical nature: implied volatilities can exhibit no-arbitrage violations in the theoretical sense for extended periods, given that such arbitrage cannot be traded due to other factors, such as liquidity, spreads, transaction related costs... not saying this ...
Intuition: You can think of the vol smile as a reflection of the risk neutral distribution (compared to the Black Scholes Gaussian density). A fat tailed distribution creates the smile: fat tail -> higher prob of exercise than Gaussian with constant stdev -> higher option price than BS with ATM vol -> higher implied vol for given strike. Skewed distributions ...
The shape of the implied volatility smile is linked to the higher moments of the underlying return distribution though there is no one-to-one relationship. A convex (concave) smile usually indicates a distribution with positive (negative) excess kurtosis.Here is an example for a concave implied vol. smile. It is often observed in markets where a single ...
There is no simple way and you have to make correlation assumptions.For instance say you have a volatility surface for $\text{EURUSD}$ and another volatility surface for $\text{USDJPY}$ and you want to build a volatility surface for $\text{EURJPY}$.You start from the observation that a call with maturity $T$ and strike $K$ on $\text{EURJPY}$ with ...
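The variance triangle behind this construction can be sketched numerically (an illustration, not part of the quoted answer; the correlation ρ is precisely the assumption you must supply):

```python
from math import sqrt

def cross_vol(vol1, vol2, rho):
    """ATM vol of a cross pair that is the *product* of two quoted pairs
    (e.g. EURJPY = EURUSD * USDJPY): log-returns add, so variances combine
    with a +2*rho cross term. rho is the assumed correlation of the legs."""
    return sqrt(vol1 ** 2 + vol2 ** 2 + 2 * rho * vol1 * vol2)

# With perfectly negatively correlated legs the variances cancel:
print(cross_vol(0.10, 0.10, -1.0))  # 0.0
```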
First of all, a Bermudan Swaption does not have to be of American type. Consider a "9NC2 Bermudan" (9 non call 2), basically a Bermudan swaption with final maturity in 9 years which is not exercisable for the first 2 years.I have not worked at an exotic rates desk in a while (many years to be more precise) but from what I remember you need to use a ...
First note that the price of binary call is related to the price of an ordinary call in any model by$$BinC(T,K) = e^{-rT}\mathbb{E}^{\mathbb{Q}}[1_{S_T>K}] = - \frac{\partial}{\partial K}e^{-rT}\mathbb{E}^{\mathbb{Q}}[(S_T-K)_+] = - \frac{\partial}{\partial K}C(T,K)$$Now the volatility smile is implicitly defined by$$C(T,K) = C_{BS}(T,K,\Sigma(...
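The identity can be sanity-checked numerically in the flat-smile (Black–Scholes) special case, where the smile-sensitivity term drops out and the binary price reduces to $e^{-rT}N(d_2)$ (a sketch, not from the original answer):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r, sigma, h = 100.0, 100.0, 1.0, 0.02, 0.2, 0.01

# -dC/dK by central finite difference vs. the closed form e^{-rT} N(d2)
binary_fd = -(bs_call(S, K + h, T, r, sigma) - bs_call(S, K - h, T, r, sigma)) / (2 * h)
d2 = (log(S / K) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
binary_exact = exp(-r * T) * norm_cdf(d2)
print(abs(binary_fd - binary_exact) < 1e-6)  # True
```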
Would it be OK to mix put/call prices such that I only ever calculate implied volatility for in-the-money options?No. Use OTM options because they usually have narrower bid-ask spread. Ideally you calculate all IVs, and then use highest bid IV, smallest ask IV.If so, I assume this surface can then immediately be used to calculate ...Yes, then you ...
The choice of a model depends on what inputs you have, the complexity allowed (e.g. calculation time restrictions) and what you want to infer from it.The development of the LMM adressed the mathematical difficulty of finding a joint model for all Libor forwards and was a great achievement in the late 90'. But at that time the distribution of the Libors was ...
In addition to the presented answers, I just wanted to mention that such a situation is described in Hull, page 419 (Chapter 19 Volatility Smiles, 19.8: "When a single large jump is anticipated"). This happens when probability distribution of returns is binomial. It can occur in a situation when market is expecting some announcement which will either ...
The local volatility is just a $\mathbb{R}_+\times[0,T]\mapsto \mathbb{R}_+$ function where $T$ is some time horizon. It is the solution of a simple equation so it expression is written as $\sigma(K,t)$ but here $K$ is essentially a notation to denote a strike value as the Dupire equation relates the function $\sigma$ to vanilla market prices at a given ...
The implied Black-Scholes skew will be downward sloping in the limit on both the left and the right. (I believe @Gordon's derivation claiming upward slope may have a sign error somewhere).Left SideFor the left side it is sufficient to note that the lognormal model has no density below zero while the normal model has strictly positive density in that ...
Neither situation is necessarily an arbitrage. A negative (concave) smile is consistent with a thin-tailed density function, just as a positive (convex) smile is consistent with a fat-tailed density function. It's true that an extreme amount of negative smile could cause the implied density to be negative in places, i.e. an arbitrage.
@q.t.f. 's answer is 100% correct. As an OMM, I wanted to add some reasoning behind this.The practice of trading ATM options has been established for over a century now, and before formal mathematical methods were developed, traders have developed many heuristics for pricing ( proportional to vol ) and hedging ( delta = 1/2 ) .Typically in a newly ... |
For some odd reason, Django REST framework doesn't include any support for write-only fields. There's support for read-only fields, but not write-only. An example use case is for an API call that changes the user's password. You want to verify that their current password is correct as well. Searching online gives a lot of half-baked solutions implementing some hacky "delete it here, add it there" kind of patching.
Unfortunately, the side effects include these fields no longer appearing on the HTML REST interface, among other rather silly issues. Here's my solution:
class ProfileSerializer(serializers.Serializer):
    class Meta:
        write_only_fields = ('current_password', 'password')

    email = serializers.CharField(required=False)
    password = serializers.CharField(required=False)
    current_password = serializers.CharField()

    def to_native(self, obj):
        ret = self._dict_class()
        ret.fields = self._dict_class()
        for field_name, field in self.fields.items():
            if field.read_only and obj is None:
                continue
            elif field_name in getattr(self.opts, 'write_only_fields', ()):
                key = self.get_field_key(field_name)
                value = self.init_data.get(key, None) if self.init_data else None
                if value:
                    ret[key] = value
                    ret.fields[key] = self.augment_field(field, field_name, key, value)
            else:
                field.initialize(parent=self, field_name=field_name)
                key = self.get_field_key(field_name)
                value = field.field_to_native(obj, field_name)
                method = getattr(self, 'transform_%s' % field_name, None)
                if callable(method):
                    value = method(obj, value)
                ret[key] = value
                ret.fields[key] = self.augment_field(field, field_name, key, value)
        return ret

    def restore_object(self, attrs, instance=None):
        return super(ProfileSerializer, self).restore_object(
            dict((k, v) for (k, v) in filter(
                lambda x: x[0] not in getattr(self.opts, 'write_only_fields', ()), attrs.items())
            ), instance)

    def validate_current_password(self, attrs, source):
        if self.object is None:
            return attrs
        u = authenticate(username=self.object.email, password=attrs[source])
        if u is not None:
            return attrs
        else:
            raise serializers.ValidationError('OBJECTION!')
The addition of the above to_native and restore_object methods (which you can copy/paste) will permit you to add a write_only_fields property to the Meta class, defining the fields you wish to have short circuit. If you wish to use this in multiple classes, you can extend this to a general serializer as follows:
class ReceptiveSerializerOptions(serializers.SerializerOptions):
    def __init__(self, meta):
        super(ReceptiveSerializerOptions, self).__init__(meta)
        self.write_only_fields = getattr(meta, 'write_only_fields', ())

class ReceptiveSerializer(serializers.Serializer):
    _options_class = ReceptiveSerializerOptions

    def to_native(self, obj):
        ret = self._dict_class()
        ret.fields = self._dict_class()
        for field_name, field in self.fields.items():
            if field.read_only and obj is None:
                continue
            elif field_name in getattr(self.opts, 'write_only_fields', ()):
                key = self.get_field_key(field_name)
                value = self.init_data.get(key, None) if self.init_data else None
                if value:
                    ret[key] = value
                    ret.fields[key] = self.augment_field(field, field_name, key, value)
            else:
                field.initialize(parent=self, field_name=field_name)
                key = self.get_field_key(field_name)
                value = field.field_to_native(obj, field_name)
                method = getattr(self, 'transform_%s' % field_name, None)
                if callable(method):
                    value = method(obj, value)
                ret[key] = value
                ret.fields[key] = self.augment_field(field, field_name, key, value)
        return ret

    def restore_object(self, attrs, instance=None):
        return super(ReceptiveSerializer, self).restore_object(
            dict((k, v) for (k, v) in filter(
                lambda x: x[0] not in getattr(self.opts, 'write_only_fields', ()), attrs.items())
            ), instance)
Have it extend serializers.Serializer or serializers.ModelSerializer, whichever floats your boat. I've made a pull request for this update too. :D
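The filtering idiom inside restore_object can be seen in isolation: before the validated attributes reach the restored object, the write-only keys are simply dropped (a minimal sketch, independent of DRF):

```python
def strip_write_only(attrs, write_only_fields):
    # Drop write-only keys so they never land on the restored object.
    return dict((k, v) for (k, v) in attrs.items() if k not in write_only_fields)

attrs = {'email': 'user@example.com', 'password': 's3cret', 'current_password': 'old'}
print(strip_write_only(attrs, ('current_password', 'password')))
# {'email': 'user@example.com'}
```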
Tired of cooking your hot dogs the plain old boring way? Fear not! You can electrocute them! The most interesting thing about that article though, at least to me, was the multiple mentions of the Presto Hotdogger.
It was difficult to find information on this, but I did manage to buy one off ebay (you can get one too!). The company that made these, Presto, is still in business, and they still make a wide variety of kitchen cookware and appliances, even though the Hotdogger has been discontinued.
Born in 1905 in Eau Claire, Wisconsin, Presto actually started out as the Northwestern Steel & Iron Works company. At the time, they manufactured cement mixers, marine engines, farm engines, and a number of other products, and among these products was a steam pressure cooker developed in 1908 for the canning industry. In 1910, the USDA reported that using these cookers was a good way to prevent botulism, and as a result, these cookers became quite the hot commodity. The appliance portion of the company forked into the National Pressure Cooker Company in 1917, and in 1939, it became Presto. If you'd like to read more about Presto's history, they have a wonderful history page on their website.
Fast forward to 1960: Presto develops the Presto Hotdogger, which basically takes electricity directly from your wall outlet and pumps it into a hot dog! The appliance itself can cook up to six hot dogs simultaneously, and actually does not have a power switch. Instead, to turn it off, you just unplug it. On the bright side, it cooks your hot dogs in 60 seconds, and it does indeed cook them quite well. Do they taste good? Well, that's a different question. Over the next ten years, these Hotdoggers continue selling, but at some point, Presto stops producing them. I couldn't find any exact evidence for why they stopped producing them, but I'd guess that it had something to do with the introduction of the consumer countertop microwave oven, which was introduced in 1967.
Of course, there could be a variety of other reasons the Hotdogger stopped selling and no future iterations were produced, but the microwave seems to be a much more versatile option for cooking food, including hot dogs, and produces a cooked product that tastes just as mediocre. In fact, conceptually, the Hotdogger isn't that far from a microwave. It uses the hot dog as a resistor between the two electrodes, so it's effectively heating the water inside the hot dog. Similarly, a microwave does the same thing. But even though a microwave produces the same product, there's something cool about electrocuting hot dogs that I can't quite pinpoint.
The key idea behind PyExcelerate is that we want to go fast. We don't care if it's a micro-optimization or a macro-optimization; we want to squeeze every bit of performance out of Python without making the code an unmaintainable mess. One of the most heavily used sections of code was the alignment of Excel's styles. Each cell needs the ability to edit its own style, yet on compilation the styles need to be compressed so we don't have a million cells that look the same using different styles. To check whether a style already exists, we use a dict to map each style to its corresponding style id. That way, if we encounter the same style later, we can just add a reference to the existing style instead of creating a new one.
Turns out that this operation is pretty slow. Profiling the execution of PyExcelerate, we find that somewhere between 40-50% of the execution time is actually spent doing this compression (and we tried just turning off compression, it turns out to be slower as a lot more references need to be built). So what can we do to optimize this?
6266945 function calls (5983321 primitive calls) in 19.340 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 1.351 1.351 19.342 19.342 profile.py:1(<module>)
199970 1.207 0.000 3.454 0.000 Style.py:78(__eq__)
480089/200067 1.052 0.000 3.071 0.000 {hash}
2 0.949 0.474 8.281 4.140 Workbook.py:45(_align_styles)
301027 0.876 0.000 1.399 0.000 Utility.py:26(lazy_get)
179986 0.714 0.000 1.253 0.000 Font.py:62(__eq__)
151000 0.713 0.000 2.021 0.000 Style.py:42(font)
151017 0.561 0.000 0.561 0.000 Font.py:6(__init__)
399940 0.554 0.000 0.554 0.000 Style.py:104(_to_tuple)
Looking at the profile, it seems like _align_styles spends a good chunk of its time hashing, so let's see if we can speed that up. Now, obviously we can't rewrite the Python hashing function to make it faster. In most cases, the built-in hashing ends up being faster than whatever we can come up with for __hash__, but there is one neat trick for Python dictionaries that we can exploit without misidentifying equivalent styles.

The way Python dictionaries work is that the dictionary checks to see if the hash exists in the dictionary, and if it does, it checks for equality to make sure it isn't a collision. In most cases though, the hash function won't produce a collision, so equality is never checked. But hashing is slow, and we want to speed it up. What can we do? Hash less, and offload some of the work to checking equality! Consider the Font class before:
class Font(object):
    def __init__(self, bold=False, italic=False, underline=False, strikethrough=False, family="Calibri", size=11, color=None):
        self.bold = bold
        self.italic = italic
        self.underline = underline
        self.strikethrough = strikethrough
        self.family = family
        self.size = size
        self._color = color

    def __hash__(self):
        return hash(self._to_tuple())

    def __eq__(self, other):
        return self._to_tuple() == other._to_tuple()

    def _to_tuple(self):
        return (self.bold, self.italic, self.underline, self.strikethrough, self.family, self.size, self._color)
For anyone about to point out that I can use self.__dict__.keys() instead: we tried, it's too slow ;)

Right now, on each call of __hash__, all of the attributes are being considered to produce a hash, and __eq__ is rarely called because hash collisions are rare. Now, consider this optimized code:
class Font(object):
    def __init__(self, bold=False, italic=False, underline=False, strikethrough=False, family="Calibri", size=11, color=None):
        self.bold = bold
        self.italic = italic
        self.underline = underline
        self.strikethrough = strikethrough
        self.family = family
        self.size = size
        self._color = color

    def __eq__(self, other):
        return (self.family, self.size, self._color) == (other.family, other.size, other._color)

    def __hash__(self):
        return hash((self.bold, self.italic, self.underline, self.strikethrough))
Now, __hash__ is only hashing half of the attributes! As a result, we end up cutting off quite a bit of time in the hashing function overall. Let's compare the execution times:

What's going on here is that we're only hashing half of the attributes and most of the time, that's good enough to determine if the two styles are equal. If the two styles happen to have the same bold, italic, underline, and strikethrough values and only differ on something else, then we fall back to checking equality. But because hashing is now twice as fast and we only lost some of the granularity, we end up with a very noticeable improvement, with style compression now only taking about 20-30% of the execution time.

5465301 function calls (5166618 primitive calls) in 17.860 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 1.506 1.506 17.863 17.863 profile.py:1(<module>)
492089/200067 1.150 0.000 3.316 0.000 {built-in method hash}
2 0.966 0.483 5.479 2.739 Workbook.py:45(_align_styles)
309027 0.950 0.000 1.497 0.000 Utility.py:26(lazy_get)
150000 0.785 0.000 1.993 0.000 Style.py:42(font)
100000 0.695 0.000 0.934 0.000 Range.py:190(coordinate_to_string)
100000 0.510 0.000 1.436 0.000 Worksheet.py:97(set_cell_style)
200015 0.504 0.000 3.820 0.000 Style.py:75(__hash__)
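The dictionary trick can be seen in isolation with a toy class mirroring the split above (a simplified sketch, not PyExcelerate's actual Font):

```python
class Font(object):
    def __init__(self, bold=False, italic=False, family="Calibri", size=11):
        self.bold, self.italic = bold, italic
        self.family, self.size = family, size

    def __hash__(self):
        # Hash only a cheap subset of the attributes...
        return hash((self.bold, self.italic))

    def __eq__(self, other):
        # ...and let the (rarer) equality check cover the rest.
        return (self.family, self.size) == (other.family, other.size)

# Deduplicate styles the way _align_styles does: map each style to an id.
style_ids = {}
def style_id(font):
    return style_ids.setdefault(font, len(style_ids))

a = Font(bold=True, size=12)
b = Font(bold=True, size=12)   # same attributes          -> same id as a
c = Font(bold=True, size=14)   # same hash, __eq__ fails  -> new id
print(style_id(a), style_id(b), style_id(c))  # 0 0 1
```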
Optimal Ratio
In the above example, we implemented the trick by hashing half of the attributes. What if we tried different ratios? Let \(n\) be the number of attributes and take some ratio \(0 \leq r \leq 1\). The expected value for the number of attribute checks is:
$$E(calls) = E(calls|no\:collision) \times P(no\:hash\:collision) + E(calls|collision) \times P(hash\:collision)$$
$$E(calls) = nr \times r + n \times (1-r)$$
Minimizing this function:
$$\frac{d}{dr} (nr^2 + n(1-r)) = 0$$
$$\Rightarrow r = \frac{1}{2}$$
So our ratio of half the attributes is optimal. Applying this to \(E(calls)\), we find that \(E(calls|r=\frac{1}{2})=\frac{3}{4}E(calls|r=1)\), which appears to be fairly consistent with the experimental results in the execution time plot above. Hooray!
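The cost model and the \(r=\frac{1}{2}\) minimum can be checked numerically (a sketch; the value of \(n\) is arbitrary):

```python
# E(r) = n*r^2 + n*(1-r): expected attribute comparisons per lookup
n = 8
E = lambda r: n * r * r + n * (1 - r)

best = min((i / 100 for i in range(101)), key=E)
print(best, E(best) / E(1.0))  # 0.5 0.75
```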
__eq__ Violation

By the above construction, we find that the definition of __eq__ is actually violated. This is unfortunate because by enforcing the definition and recalculating the optimal ratio, we get \(r = 1\). Therefore, it's better to have this optimization performed on internal classes, or in the case of PyExcelerate, only when the performance gain can be expected.
This is merely a reformulation of Abdelmalek Abdesselam's answer, in a somewhat different language and with different references. It should be a comment to that answer, but it is unfortunately too long. Long story short: see Karlin, Total positivity, formula (2.10) in Section 1.2, with $u(t) = t$ and $\sigma(dt) = \mathbb{1}_{(0, \infty)}(t) t^{-1} e^{-t} dt$.
The kernel $K(x,y)$ is said to be totally positive on $X \times Y$, where $X, Y \subseteq \mathbb{R}$, if$$ K\pmatrix{x_1&x_2&\cdots&x_n\\y_1&y_2&\cdots&y_n} := \det \left|\matrix{K(x_1,y_1)&K(x_1,y_2)&\cdots&K(x_1,y_n)\\K(x_2,y_1)&K(x_2,y_2)&\cdots&K(x_2,y_n)\\\vdots&\vdots&&\vdots\\K(x_n,y_1)&K(x_n,y_2)&\cdots&K(x_n,y_n)\\}\right| \geqslant 0$$whenever $x_1 < x_2 < \ldots < x_n$ and $y_1 < y_2 < \ldots < y_n$ (and, of course, $x_1, x_2, \ldots, x_n \in X$, $y_1, y_2, \ldots, y_n \in Y$). It is strictly totally positive if strict inequality holds. A standard reference for totally positive kernels is Karlin's book Total positivity (Stanford, 1968). Our goal is thus to prove that the kernel $G(\mu,\nu) = \Gamma(\mu + \nu)$ is strictly totally positive.
It is known that the kernel $e^{x y}$ is strictly totally positive on $\mathbb{R} \times \mathbb{R}$; see, for example, Example (i) in Section 2.1 of Karlin's book. Substituting $t = e^y$, we see that $K(x, t) = t^x$ and $\check{K}(t, x) = t^x$ are strictly totally positive on $\mathbb{R} \times (0, \infty)$ and $(0, \infty) \times \mathbb{R}$, respectively.
Define $\sigma(dt) = t^{-1} e^{-t} dt$ on $(0, \infty)$. Observe that$$G(\mu,\nu) = \Gamma(\mu + \nu) = \int_0^\infty t^{\mu + \nu - 1} e^{-t} dt = \int_0^\infty K(\mu, t) \check{K}(t, \nu) \sigma(dt) .$$The basic composition formula (as it is called by Karlin, see (2.5) in Section 1.2 in his book; Karlin's reference for this formula is problem 68 in Pólya and Szegő, Aufgaben und Lehrsätze aus der Analysis, vol. 1) tells us that$$\begin{aligned} & G\pmatrix{\mu_1&\mu_2&\cdots&\mu_n\\\nu_1&\nu_2&\cdots&\nu_n} = \idotsint\limits_{0<t_1 < t_2 < \ldots < t_n} K\pmatrix{\mu_1&\mu_2&\cdots&\mu_n\\t_1&t_2&\cdots&t_n} \times \\ & \hspace{10em} \times \check{K}\pmatrix{t_1&t_2&\cdots&t_n\\\nu_1&\nu_2&\cdots&\nu_n} \sigma(dt_1) \sigma(dt_2) \ldots \sigma(dt_n) .\end{aligned}$$The right-hand side is clearly positive, and our claim is proved.
The above argument is (essentially) contained in Karlin's book, when he proves that moment sequences generate totally positive kernels, see formula (2.10) in Section 1.2 of his book. He is only concerned with integer moments, but the argument carries over with no modifications to arbitrary moments. |
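As a quick numerical illustration of the claim (not part of the proof), one can check a $2 \times 2$ minor of $G(\mu,\nu)=\Gamma(\mu+\nu)$ directly:

```python
from math import gamma

# det of [[G(m1,n1), G(m1,n2)], [G(m2,n1), G(m2,n2)]] with m1 < m2, n1 < n2
def minor2(m1, m2, n1, n2):
    return gamma(m1 + n1) * gamma(m2 + n2) - gamma(m1 + n2) * gamma(m2 + n1)

print(minor2(1.0, 2.0, 1.0, 2.0))  # Gamma(2)*Gamma(4) - Gamma(3)**2 = 6 - 4 = 2.0
```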
Answer
The required solution of probability is, $0$
Work Step by Step
We know that when the die is rolled, the sample space of equally likely outcomes is $ S=\left\{ 1,2,3,4,5,6 \right\}$. As there are six outcomes in the sample space, $ n\left( S \right)=6$. The event of getting a number greater than 7 can be represented as $ E=\phi $. Since there are no outcomes in this event, $ n\left( E \right)=0$. Thus, the probability of obtaining a number greater than 7 is: $\begin{align} & P\left( E \right)=\frac{n\left( E \right)}{n\left( S \right)} \\ & =\frac{0}{6} \\ & =0 \end{align}$
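The same count can be checked mechanically (a trivial sketch):

```python
sample_space = {1, 2, 3, 4, 5, 6}
event = {x for x in sample_space if x > 7}  # empty: no die face exceeds 7
probability = len(event) / len(sample_space)
print(probability)  # 0.0
```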
Chapters
Chapter 2: Relations
Chapter 3: Functions
Chapter 4: Measurement of Angles
Chapter 5: Trigonometric Functions
Chapter 6: Graphs of Trigonometric Functions
Chapter 7: Values of Trigonometric function at sum or difference of angles
Chapter 8: Transformation formulae
Chapter 9: Values of Trigonometric function at multiples and submultiples of an angle
Chapter 10: Sine and cosine formulae and their applications
Chapter 11: Trigonometric equations
Chapter 12: Mathematical Induction
Chapter 13: Complex Numbers
Chapter 14: Quadratic Equations
Chapter 15: Linear Inequations
Chapter 16: Permutations
Chapter 17: Combinations
Chapter 18: Binomial Theorem
Chapter 19: Arithmetic Progression
Chapter 20: Geometric Progression
Chapter 21: Some special series
Chapter 22: Brief review of cartesian system of rectangular co-ordinates
Chapter 23: The straight lines
Chapter 24: The circle
Chapter 25: Parabola
Chapter 26: Ellipse
Chapter 27: Hyperbola
Chapter 28: Introduction to three dimensional coordinate geometry
Chapter 29: Limits
Chapter 30: Derivatives
Chapter 31: Mathematical reasoning
Chapter 32: Statistics
Chapter 33: Probability
RD Sharma Mathematics Class 11, Chapter 27: Hyperbola — Exercise 27.10 solutions [Pages 13 - 14]
The equation of the directrix of a hyperbola is x − y + 3 = 0. Its focus is (−1, 1) and eccentricity 3. Find the equation of the hyperbola.
Find the equation of the hyperbola whose focus is (0, 3), directrix is x + y − 1 = 0 and eccentricity = 2 .
Find the equation of the hyperbola whose focus is (1, 1), directrix is 3x + 4y + 8 = 0 and eccentricity = 2.
Find the equation of the hyperbola whose focus is (1, 1), directrix is 2x + y = 1 and eccentricity = \[\sqrt{3}\].
Find the equation of the hyperbola whose focus is (2, −1), directrix is 2x + 3y
Find the equation of the hyperbola whose focus is (a, 0), directrix is 2x − y + a = 0 and eccentricity = \[\frac{4}{3}\].
Find the equation of the hyperbola whose focus is (2, 2), directrix is x + y = 9 and eccentricity = 2.
Find the eccentricity, coordinates of the foci, equations of the directrices and length of the latus-rectum of the hyperbola 9x² − 16y² = 144.
Find the eccentricity, coordinates of the foci, equations of the directrices and length of the latus-rectum of the hyperbola 16x² − 9y² = −144.
Find the eccentricity, coordinates of the foci, equations of the directrices and length of the latus-rectum of the hyperbola 4x² − 3y² = 36.
Find the eccentricity, coordinates of the foci, equations of the directrices and length of the latus-rectum of the hyperbola 3x² − y² = 4.
Find the eccentricity, coordinates of the foci, equations of the directrices and length of the latus-rectum of the hyperbola 2x² − 3y² = 5.
Find the axes, eccentricity, latus-rectum and the coordinates of the foci of the hyperbola 25x² − 36y² = 225.
Find the centre, eccentricity, foci and directrices of the hyperbola 16x² − 9y² + 32x + 36y − 164 = 0.
Find the centre, eccentricity, foci and directrices of the hyperbola x² − y² + 4x = 0.
Find the centre, eccentricity, foci and directrices of the hyperbola x² − 3y² − 2x = 8.
Find the equation of the hyperbola, referred to its principal axes as axes of coordinates, if the distance between the foci = 16 and eccentricity = \[\sqrt{2}\].
Find the equation of the hyperbola, referred to its principal axes as axes of coordinates, if the conjugate axis is 5 and the distance between the foci = 13.
Find the equation of the hyperbola, referred to its principal axes as axes of coordinates, if the conjugate axis is 7 and it passes through the point (3, −2).
Find the equation of the hyperbola whose foci are (6, 4) and (−4, 4) and eccentricity is 2.
Find the equation of the hyperbola whose vertices are (−8, −1) and (16, −1) and focus is (17, −1).
Find the equation of the hyperbola whose foci are (4, 2) and (8, 2) and eccentricity is 2.
Find the equation of the hyperbola whose vertices are at (0, ± 7) and foci at \[\left( 0, \pm \frac{28}{3} \right)\] .
Find the equation of the hyperbola whose vertices are at (± 6, 0) and one of the directrices is x = 4.
Find the equation of the hyperbola whose foci are at (± 2, 0) and eccentricity is 3/2.
Find the eccentricity of the hyperbola, the length of whose conjugate axis is \[\frac{3}{4}\] of the length of transverse axis.
Find the equation of the hyperbola whose focus is at (5, 2), vertex at (4, 2) and centre at (3, 2).
Find the equation of the hyperbola whose focus is at (4, 2), centre at (6, 2) and e = 2.
If P is any point on the hyperbola whose axes are equal, prove that SP · S'P = CP².
Find the equation of the hyperbola satisfying the given condition :
vertices (± 2, 0), foci (± 3, 0)
Find the equation of the hyperbola satisfying the given condition :
vertices (0, ± 5), foci (0, ± 8)
Find the equation of the hyperbola satisfying the given condition :
vertices (0, ± 3), foci (0, ± 5)
Find the equation of the hyperbola satisfying the given condition :
foci (± \[3\sqrt{5}\], 0), the latus-rectum = 8
Find the equation of the hyperbola satisfying the given condition :
foci (0, ± 13), conjugate axis = 24
Find the equation of the hyperbola satisfying the given condition:
foci (± \[3\sqrt{5}\], 0), the latus-rectum = 8
Find the equation of the hyperbola satisfying the given condition:
foci (± 4, 0), the latus-rectum = 12
Find the equation of the hyperbola satisfying the given condition:
vertices (± 7, 0), \[e = \frac{4}{3}\]
Find the equation of the hyperbola satisfying the given condition:
foci (0, ± \[\sqrt{10}\]), passing through (2, 3).
Find the equation of the hyperbola satisfying the given condition:
foci (0, ± 12), latus-rectum = 36
If the distance between the foci of a hyperbola is 16 and its eccentricity is \[\sqrt{2}\], then obtain its equation.
Show that the set of all points such that the difference of their distances from (4, 0) and (− 4,0) is always equal to 2 represents a hyperbola.
Chapter 27: Hyperbola solutions
Write the eccentricity of the hyperbola \[9x^2 - 16y^2 = 144\].
Write the eccentricity of the hyperbola whose latus-rectum is half of its transverse axis.
Write the coordinates of the foci of the hyperbola \[9x^2 - 16y^2 = 144\].
Write the equation of the hyperbola of eccentricity \[\sqrt{2}\], if it is known that the distance between its foci is 16.
If the foci of the ellipse \[\frac{x^2}{16} + \frac{y^2}{b^2} = 1\] and the hyperbola \[\frac{x^2}{144} - \frac{y^2}{81} = \frac{1}{25}\] coincide, write the value of \[b^2\].
Write the length of the latus-rectum of the hyperbola \[16x^2 - 9y^2 = 144\].
If the latus-rectum through one focus of a hyperbola subtends a right angle at the farther vertex, then write the eccentricity of the hyperbola.
Write the distance between the directrices of the hyperbola x = 8 sec θ, y = 8 tan θ.
Write the equation of the hyperbola whose vertices are (± 3, 0) and foci at (± 5, 0).
If \[e_1\] and \[e_2\] are respectively the eccentricities of the ellipse \[\frac{x^2}{18} + \frac{y^2}{4} = 1\] and the hyperbola \[\frac{x^2}{9} - \frac{y^2}{4} = 1\], then write the value of \[2e_1^2 + e_2^2\].
Equation of the hyperbola whose vertices are (± 3, 0) and foci at (± 5, 0), is
\[16x^2 - 9y^2 = 144\]
\[9x^2 - 16y^2 = 144\]
\[25x^2 - 9y^2 = 225\]
\[9x^2 - 25y^2 = 81\]
If \[e_1\] and \[e_2\] are respectively the eccentricities of the ellipse \[\frac{x^2}{18} + \frac{y^2}{4} = 1\] and the hyperbola \[\frac{x^2}{9} - \frac{y^2}{4} = 1\], then the relation between \[e_1\] and \[e_2\] is
\[3e_1^2 + e_2^2 = 2\]
\[e_1^2 + 2e_2^2 = 3\]
\[2e_1^2 + e_2^2 = 3\]
\[e_1^2 + 3e_2^2 = 2\]
The distance between the directrices of the hyperbola x = 8 sec θ, y = 8 tan θ, is
\[8\sqrt{2}\]
\[16\sqrt{2}\]
\[4\sqrt{2}\]
\[6\sqrt{2}\]
The equation of the conic with focus at (1, −1), directrix along x − y + 1 = 0 and eccentricity \[\sqrt{2}\] is
xy = 1
2xy + 4x − 4y − 1= 0
\[x^2 - y^2 = 1\]
2xy − 4x + 4y + 1 = 0
The eccentricity of the conic \[9x^2 - 16y^2 = 144\] is
\[\frac{5}{4}\]
\[\frac{4}{3}\]
\[\frac{4}{5}\]
\[\sqrt{7}\]
A point moves in a plane so that its distances PA and PB from two fixed points A and B in the plane satisfy the relation PA − PB = k (k ≠ 0), then the locus of P is
a hyperbola
a branch of the hyperbola
a parabola
an ellipse
The eccentricity of the hyperbola whose latus-rectum is half of its transverse axis, is
\[\frac{1}{\sqrt{2}}\]
\[\sqrt{\frac{2}{3}}\]
\[\sqrt{\frac{3}{2}}\]
none of these.
The eccentricity of the hyperbola \[x^2 - 4y^2 = 1\] is
\[\frac{\sqrt{3}}{2}\]
\[\frac{\sqrt{5}}{2}\]
\[\frac{2}{\sqrt{3}}\]
\[\frac{2}{\sqrt{5}}\]
The difference of the focal distances of any point on the hyperbola is equal to
length of the conjugate axis
eccentricity
length of the transverse axis
Latus-rectum
The foci of the hyperbola \[9x^2 - 16y^2 = 144\] are
(± 4, 0)
(0, ± 4)
(± 5, 0)
(0, ± 5)
The distance between the foci of a hyperbola is 16 and its eccentricity is \[\sqrt{2}\], then equation of the hyperbola is
\[x^2 + y^2 = 32\]
\[x^2 - y^2 = 16\]
\[x^2 + y^2 = 16\]
\[x^2 - y^2 = 32\]
If \[e_1\] is the eccentricity of the conic \[9x^2 + 4y^2 = 36\] and \[e_2\] is the eccentricity of the conic \[9x^2 - 4y^2 = 36\], then
\[e_1^2 - e_2^2 = 2\]
\[2 < e_2^2 - e_1^2 < 3\]
\[e_2^2 - e_1^2 = 2\]
\[e_2^2 - e_1^2 > 3\]
If the eccentricity of the hyperbola \[x^2 - y^2 \sec^2 \alpha = 5\] is \[\sqrt{3}\] times the eccentricity of the ellipse \[x^2 \sec^2 \alpha + y^2 = 25\], then α =
\[\frac{\pi}{6}\]
\[\frac{\pi}{4}\]
\[\frac{\pi}{3}\]
\[\frac{\pi}{2}\]
The equation of the hyperbola whose foci are (6, 4) and (−4, 4) and eccentricity 2, is
\[\frac{(x - 1 )^2}{25/4} - \frac{(y - 4 )^2}{75/4} = 1\]
\[\frac{(x + 1 )^2}{25/4} - \frac{(y + 4 )^2}{75/4} = 1\]
\[\frac{(x - 1 )^2}{75/4} - \frac{(y - 4 )^2}{25/4} = 1\]
none of these
The length of the straight line x − 3y = 1 intercepted by the hyperbola \[x^2 - 4y^2 = 1\] is
\[\frac{6}{\sqrt{5}}\]
\[3\sqrt{\frac{2}{5}}\]
\[6\sqrt{\frac{2}{5}}\]
none of these
The latus-rectum of the hyperbola \[16x^2 - 9y^2 = 144\] is
16/3
32/3
8/3
4/3
The foci of the hyperbola \[2x^2 - 3y^2 = 5\] are
\[( \pm 5/\sqrt{6}, 0)\]
(± 5/6, 0)
\[( \pm \sqrt{5}/6, 0)\]
none of these
The eccentricity of the hyperbola \[x = \frac{a}{2}\left( t + \frac{1}{t} \right), y = \frac{a}{2}\left( t - \frac{1}{t} \right)\] is
\[\sqrt{2}\]
\[\sqrt{3}\]
\[2\sqrt{3}\]
\[3\sqrt{2}\]
The equation of the hyperbola whose centre is (6, 2) one focus is (4, 2) and of eccentricity 2 is
\[3(x - 6)^2 - (y - 2)^2 = 3\]
\[(x - 6)^2 - 3(y - 2)^2 = 1\]
\[(x - 6)^2 - 2(y - 2)^2 = 1\]
\[2(x - 6)^2 - (y - 2)^2 = 1\]
The locus of the point of intersection of the lines \[\sqrt{3}x - y - 4\sqrt{3}\lambda = 0 \text{ and } \sqrt{3}\lambda x + \lambda y - 4\sqrt{3} = 0\] is a hyperbola of eccentricity
1
2
3
4
RD Sharma solutions for Class 11 Mathematics, Chapter 27: Hyperbola.
Learning Objectives
Use the work-energy theorem to analyze rotation to find the work done on a system when it is rotated about a fixed axis for a finite angular displacement
Solve for the angular velocity of a rotating rigid body using the work-energy theorem
Find the power delivered to a rotating rigid body given the applied torque and angular velocity
Summarize the rotational variables and equations and relate them to their translational counterparts
Thus far in the section, we have extensively addressed kinematics and dynamics for rotating rigid bodies around a fixed axis. In this final subsection, we define work and power within the context of rotation about a fixed axis, which has applications to both physics and engineering. The discussion of work and power makes our treatment of rotational motion almost complete, with the exception of rolling motion and angular momentum, which are discussed in Angular Momentum. We begin this subsection with a treatment of the work-energy theorem for rotation.
Work for Rotational Motion
Now that we have determined how to calculate kinetic energy for rotating rigid bodies, we can proceed with a discussion of the work done on a rigid body rotating about a fixed axis. Figure 10.39 shows a rigid body that has rotated through an angle d\(\theta\) from A to B while under the influence of a force \(\vec{F}\). The external force \(\vec{F}\) is applied to point P, whose position is \(\vec{r}\), and the rigid body is constrained to rotate about a fixed axis that is perpendicular to the page and passes through O. The rotational axis is fixed, so the vector \(\vec{r}\) moves in a circle of radius r, and the vector d \(\vec{s}\) is perpendicular to \(\vec{r}\).
From Equation 10.2, we have
$$\vec{s} = \vec{\theta} \times \vec{r} \ldotp$$
Thus,
$$d \vec{s} = d (\vec{\theta} \times \vec{r}) = d \vec{\theta} \times \vec{r} + \vec{\theta} \times d \vec{r} = d \vec{\theta} \times \vec{r} \ldotp$$
Note that d\(\vec{r}\) is zero because \(\vec{r}\) is fixed on the rigid body from the origin O to point P. Using the definition of work, we obtain
$$W = \int \sum \vec{F}\; \cdotp d \vec{s} = \int \sum \vec{F}\; \cdotp (d \vec{\theta} \times \vec{r}) = \int d \vec{\theta}\; \cdotp (\vec{r} \times \sum \vec{F})$$
where we used the identity \(\vec{a}\; \cdotp (\vec{b} \times \vec{c}) = \vec{b}\; \cdotp (\vec{c} \times \vec{a})\). Noting that \((\vec{r} \times \sum \vec{F}) = \sum \vec{\tau}\), we arrive at the expression for the rotational
work done on a rigid body:
$$W = \int \sum \vec{\tau}\; \cdotp d \vec{\theta} \ldotp \label{10.27}$$
The total work done on a rigid body is the sum of the torques integrated over the angle through which the body rotates. The incremental work is
$$dW = \left(\sum_{i} \tau_{i}\right) d \theta \label{10.28}$$
where we have taken the dot product in Equation 10.27, leaving only torques along the axis of rotation. In a rigid body, all particles rotate through the same angle; thus the work of every external force is equal to the torque times the common incremental angle d\(\theta\). The quantity \(\left(\sum_{i} \tau_{i}\right)\) is the net torque on the body due to external forces.
Similarly, we found the kinetic energy of a rigid body rotating around a fixed axis by summing the kinetic energy of each particle that makes up the rigid body. Since the work-energy theorem W
i = \(\Delta\)K i is valid for each particle, it is valid for the sum of the particles and the entire body.
Work-Energy Theorem for Rotation
The work-energy theorem for a rigid body rotating around a fixed axis is
$$W_{AB} = K_{B} - K_{A} \label{10.29}$$
where
$$K = \frac{1}{2} I \omega^{2}$$
and the rotational work done by a net force rotating a body from point A to point B is
$$W_{AB} = \int_{\theta_{A}}^{\theta_{B}} \left(\sum_{i} \tau_{i}\right) d \theta \ldotp \label{10.30}$$
We give a strategy for using this equation when analyzing rotational motion.
Problem-Solving Strategy: Work-Energy Theorem for Rotational Motion
1. Identify the forces on the body and draw a free-body diagram.
2. Calculate the torque for each force.
3. Calculate the work done during the body’s rotation by every torque.
4. Apply the work-energy theorem by equating the net work done on the body to the change in rotational kinetic energy.
Let’s look at two examples and use the work-energy theorem to analyze rotational motion.
Example 10.17
Rotational Work and Energy
A 12.0 N • m torque is applied to a flywheel that rotates about a fixed axis and has a moment of inertia of 30.0 kg • m². If the flywheel is initially at rest, what is its angular velocity after it has turned through eight revolutions?
Strategy
We apply the work-energy theorem. We know from the problem description what the torque is and the angular displacement of the flywheel. Then we can solve for the final angular velocity.
Solution
The flywheel turns through eight revolutions, which is 16\(\pi\) radians. The work done by the torque, which is constant and therefore can come outside the integral in Equation 10.30, is
$$W_{AB} = \tau (\theta_{B} - \theta_{A}) \ldotp$$
We apply the work-energy theorem:
$$W_{AB} = \tau (\theta_{B} - \theta_{A}) = \frac{1}{2} I \omega_{B}^{2} - \frac{1}{2} I \omega_{A}^{2} \ldotp$$
With \(\tau\) = 12.0 N • m, \(\theta_{B} - \theta_{A}\) = 16.0\(\pi\) rad, I = 30.0 kg • m², and \(\omega_{A}\) = 0, we have
$$(12.0\; N\; \cdotp m)(16.0 \pi\; rad) = \frac{1}{2} (30.0\; kg\; \cdotp m^{2})(\omega_{B}^{2}) - 0 \ldotp$$
Therefore,
$$\omega_{B} = 6.3\; rad/s \ldotp$$
This is the angular velocity of the flywheel after eight revolutions.
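The arithmetic of this example is easy to check numerically; a quick sketch in Python using the numbers above:

```python
from math import pi, sqrt

tau = 12.0           # N·m, constant applied torque
I = 30.0             # kg·m², moment of inertia of the flywheel
dtheta = 16.0 * pi   # rad, eight revolutions

W = tau * dtheta                 # work done by the constant torque
omega_B = sqrt(2.0 * W / I)      # from W = (1/2) I omega_B^2, starting at rest
print(round(omega_B, 1))         # 6.3 rad/s
```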
Significance
The work-energy theorem provides an efficient way to analyze rotational motion, connecting torque with rotational kinetic energy.
Example 10.18
Rotational Work: A Pulley
A string wrapped around the pulley in Figure 10.40 is pulled with a constant downward force \(\vec{F}\) of magnitude 50 N. The radius R and moment of inertia I of the pulley are 0.10 m and 2.5 × 10⁻³ kg • m², respectively. If the string does not slip, what is the angular velocity of the pulley after 1.0 m of string has unwound? Assume the pulley starts from rest.
Strategy
Looking at the free-body diagram, we see that neither \(\vec{B}\), the force on the bearings of the pulley, nor M\(\vec{g}\), the weight of the pulley, exerts a torque around the rotational axis, and therefore does no work on the pulley. As the pulley rotates through an angle \(\theta\), \(\vec{F}\) acts through a distance d such that d = R\(\theta\).
Solution
Since the torque due to \(\vec{F}\) has magnitude \(\tau\) = RF, we have
$$W = \tau \theta = (FR) \theta = Fd \ldotp$$
If the force on the string acts through a distance of 1.0 m, we have, from the work-energy theorem,
$$\begin{split} W_{AB} & = K_{B} - K_{A} \\ Fd & = \frac{1}{2} I \omega^{2} - 0 \\ (50.0\; N)(1.0\; m) & = \frac{1}{2} (2.5 \times 10^{-3}\; kg\; \cdotp m^{2}) \omega^{2} \ldotp \end{split}$$
Solving for \(\omega\), we obtain
$$\omega = 200.0\; rad/s \ldotp$$
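The same work-energy bookkeeping can be checked in a couple of lines:

```python
from math import sqrt

F = 50.0       # N, constant force on the string
d = 1.0        # m, length of string unwound
I = 2.5e-3     # kg·m², moment of inertia of the pulley

# W = F*d = (1/2) I omega^2, with the pulley starting at rest
omega = sqrt(2.0 * F * d / I)
print(omega)   # 200.0 rad/s
```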
Power for Rotational Motion
Power always comes up in the discussion of applications in engineering and physics. Power for rotational motion is equally as important as power in linear motion and can be derived in a similar way as in linear motion when the force is a constant. The linear power when the force is a constant is P = \(\vec{F}\; \cdotp \vec{v}\). If the net torque is constant over the angular displacement, Equation 10.25 simplifies and the net torque can be taken out of the integral. In the following discussion, we assume the net torque is constant. We can apply the definition of power derived in Power to rotational motion. From Work and Kinetic Energy, the instantaneous power (or just power) is defined as the rate of doing work,
$$P = \frac{dW}{dt} \ldotp$$
If we have a constant net torque, Equation 10.25 becomes W = \(\tau \theta\) and the power is
$$P = \frac{dW}{dt} = \frac{d}{dt} (\tau \theta) = \tau \frac{d \theta}{dt}$$
or
$$P = \tau \omega \ldotp \label{10.31}$$
Example 10.19
Torque on a Boat Propeller
A boat engine operating at 9.0 × 10⁴ W is running at 300 rev/min. What is the torque on the propeller shaft?
Strategy
We are given the rotation rate in rev/min and the power consumption, so we can easily calculate the torque.
Solution
$$300.0\; rev/min = 31.4\; rad/s;$$
$$\tau = \frac{P}{\omega} = \frac{9.0 \times 10^{4}\; N\; \cdotp m/s}{31.4\; rad/s} = 2864.8\; N\; \cdotp m \ldotp$$
Significance
It is important to note the radian is a dimensionless unit because its definition is the ratio of two lengths. It therefore does not appear in the solution.
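The unit conversion and the division P = τω can be sketched as:

```python
from math import pi

P = 9.0e4                        # W, engine power
omega = 300.0 * 2.0 * pi / 60.0  # 300 rev/min converted to rad/s
tau = P / omega                  # P = tau * omega for constant torque
print(round(omega, 1), round(tau, 1))  # 31.4 2864.8
```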
Exercise 10.8
A constant torque of 500 kN • m is applied to a wind turbine to keep it rotating at 6 rad/s. What is the power required to keep the turbine rotating?
Rotational and Translational Relationships Summarized
The rotational quantities and their linear analog are summarized in three tables. Table 10.5 summarizes the rotational variables for circular motion about a fixed axis with their linear analogs and the connecting equation, except for the centripetal acceleration, which stands by itself. Table 10.6 summarizes the rotational and translational kinematic equations. Table 10.7 summarizes the rotational dynamics equations with their linear analogs.
Table 10.5 - Rotational and Translational Variables: Summary

| Rotational | Translational | Relationship |
|---|---|---|
| $$\theta$$ | $$x$$ | $$\theta = \frac{s}{r}$$ |
| $$\omega$$ | $$v_{t}$$ | $$\omega = \frac{v_{t}}{r}$$ |
| $$\alpha$$ | $$a_{t}$$ | $$\alpha = \frac{a_{t}}{r}$$ |
| $$a_{c}$$ |  | $$a_{c} = \frac{v_{t}^{2}}{r}$$ |

Table 10.6 - Rotational and Translational Kinematic Equations: Summary

| Rotational | Translational |
|---|---|
| $$\theta_{f} = \theta_{0} + \bar{\omega} t$$ | $$x = x_{0} + \bar{v} t$$ |
| $$\omega_{f} = \omega_{0} + \alpha t$$ | $$v_{f} = v_{0} + at$$ |
| $$\theta_{f} = \theta_{0} + \omega_{0} t + \frac{1}{2} \alpha t^{2}$$ | $$x_{f} = x_{0} + v_{0} t + \frac{1}{2} at^{2}$$ |
| $$\omega_{f}^{2} = \omega_{0}^{2} + 2 \alpha (\Delta \theta)$$ | $$v_{f}^{2} = v_{0}^{2} + 2a (\Delta x)$$ |

Table 10.7 - Rotational and Translational Equations: Dynamics

| Rotational | Translational |
|---|---|
| $$I = \sum_{i} m_{i} r_{i}^{2}$$ | $$m$$ |
| $$K = \frac{1}{2} I \omega^{2}$$ | $$K = \frac{1}{2} mv^{2}$$ |
| $$\sum_{i} \tau_{i} = I \alpha$$ | $$\sum_{i} \vec{F}_{i} = m \vec{a}$$ |
| $$W_{AB} = \int_{\theta_{A}}^{\theta_{B}} \left(\sum_{i} \tau_{i}\right) d \theta$$ | $$W = \int \vec{F}\; \cdotp d \vec{s}$$ |
| $$P = \tau \omega$$ | $$P = \vec{F}\; \cdotp \vec{v}$$ |

Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
Summary: Wigner's friend seems to lead to certainty in two complementary contexts.
This is probably pretty dumb, but I was just thinking about Wigner's friend and wondering about the two contexts involved.
The basic set up I'm wondering about is as follows:
The friend does a spin measurement in the ##\left\{|\uparrow_z\rangle, |\downarrow_z\rangle\right\}## basis, i.e. of ##S_z## at time ##t_1##. And let's say the particle is undisturbed after that.
For experiments outside the lab Wigner considers the lab to be in the basis:
$$\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)$$
He then considers a measurement of the observable ##\mathcal{X}## which has eigenvectors:
$$\left\{\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right), \frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle - |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)\right\}$$
with eigenvalues ##\{1,-1\}## respectively.
At time ##t_2## the friend flips a coin and either he does a measurement of ##S_z## or Wigner does a measurement of ##\mathcal{X}##
However, if the friend does a measurement of ##S_z##, he knows for a fact he will get whatever result he originally got. He also knows that Wigner will obtain the ##1## outcome with certainty.
However ##\left[S_{z},\mathcal{X}\right] \neq 0##. Thus the friend seems to be predicting with certainty observables belonging to two separate contexts. Which is not supposed to be possible in the quantum formalism.
What am I missing? |
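For concreteness, the two observables can be written down in a toy model (my own simplification: the lab and device registers L, D are lumped into a single qubit, so the lab+spin state lives in a 4-dimensional space):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# State Wigner assigns after the friend's S_z measurement
plus = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
minus = (np.kron(up, up) - np.kron(down, down)) / np.sqrt(2)
psi = plus

# Wigner's observable X with eigenvalues +1, -1 on the entangled basis
X = np.outer(plus, plus) - np.outer(minus, minus)

# S_z on the spin qubit (in units of hbar/2)
Sz = np.kron(np.eye(2), np.diag([1.0, -1.0]))

print(round(psi @ X @ psi, 6))          # 1.0: Wigner's outcome is certain
print(np.linalg.norm(X @ Sz - Sz @ X))  # nonzero, so [S_z, X] != 0
```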
What happens when plane-polarized light is incident on a half-wave plate? I know what happens in the case of a quarter-wave plate, but there is no information available about the half-wave plate. Please help.
Half-wave plates have two principal axes. If the light is linearly polarized and the polarization direction coincides with one of the axes of the wave plate, the polarization remains the same. When the polarization direction of the incident light does not coincide with one of the principal axes, the half-wave plate will rotate the plane of polarization by twice the angle between the polarization of the incident light and the axis.
For example, if the angle between the polarization of the incident light and the principal axis of the half-wave plate is $45^\circ$, the polarization plane will be rotated by $90^\circ$.
A half-wave plate introduces a $\pi$ phase shift between the $x$ and $y$ components (a $\lambda/2$ difference in optical path). Hence, we get a rotation of $2\theta$ if the incident polarization makes an angle of $\theta$ with the axis.
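In Jones-calculus terms this is a two-line calculation; a sketch (angles in radians, numpy assumed) that reproduces the $45^\circ \to 90^\circ$ example:

```python
import numpy as np

def half_wave_plate(theta):
    # Jones matrix of an ideal half-wave plate, fast axis at angle theta
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def linear(phi):
    # Jones vector of light linearly polarized at angle phi
    return np.array([np.cos(phi), np.sin(phi)])

# Fast axis at 45 degrees, input polarized along x: the output is along y,
# i.e. the polarization plane is rotated by 2 * 45 = 90 degrees.
out = half_wave_plate(np.pi / 4) @ linear(0.0)
print(np.round(out, 6))  # [0. 1.]
```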
The general trick to calculating such odds is that the probability of rolling a result that matches some criterion equals the number of possible matching rolls, divided by the total number of possible rolls.
(By "roll", here, we mean a sequence of numbers obtained by rolling a certain number (e.g. 6) of a certain kind of fair dice (e.g. d6) in sequence. The important feature here is that each such roll, by itself, is equally likely, which is why the simple formula above works. If the rolls were not all equally likely, we'd have to resort to more complicated maths.)
For 6d6, the total number of possible rolls is \$6^6\$ = 46,656. (More generally, for NdX, the total number of possible rolls is \$X^N\$.) Next, we just need to figure out in how many ways we can roll each of the results we're interested in.
Straights
For example, let's look at straights first. A straight on 6d6 obviously consists of the numbers 1, 2, 3, 4, 5 and 6, in any order. How many ways are there to order them?
Well, imagine that we have six dice, each showing one of the numbers from 1 to 6, and six positions marked 1 to 6 on the table that we want to put the dice in. For the first position, we can choose any of the dice, so we have 6 choices there; for the second position, we only have five dice left, so the number of possible choices we can make for the second die is 5, giving us a total of 6 × 5 = 30 possible choices for the first two dice.
Continuing in this manner, we find that the total number of different orders in which we can set down the six distinct dice is \$ 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720 \$. (Mathematicians have a specific name and a notation for such products, because they come up pretty often in math: they call them factorials, and write them by putting an exclamation point after the upper limit, as in 6! = 720.) Thus, the probability of rolling a straight on 6d6 is 6! out of \$6^6\$, or \$720 \div 46,656 \approx 0.0154 = 1.54\%\$.
(For straights shorter than 6 dice, things get more complicated; see the computer results below.)
\$n\$ of a kind
What about \$n\$ of a kind? Well, it's pretty obvious that there are exactly six ways to roll 6 of a kind — either all 1, all 2, all 3, all 4, all 5 or all 6. Thus, the probability of rolling six of a kind is \$ 6 \div 6^6 \approx 0.00013 = 0.013\%\$. This is just about the rarest kind of combination you can get.
5 of a kind
For five of a kind, we clearly have six choices for the number that occurs five times, and five choices for the single mismatched roll (or vice versa; it really doesn't matter which way you count them, since the result is the same), for a total of \$6 \times 5 = 30\$ possibilities. But since we're considering ordered die rolls (which we must do, to ensure that every roll is equally likely), we also have six choices for the position of the mismatched die in the sequence, giving us a total of \$30 \times 6 = 180\$ ways to roll 5-of-a-kind on 6d6, and thus a probability of \$180 \div 6^6 \approx 0.00386 = 0.386\%\$.
4 of a kind
How about four of a kind? Again, we have six choices for the matched dice, but now there are more possibilities for the mismatched ones. We could consider the cases where the two mismatched dice are the same or different separately, but that quickly gets a bit complicated.
The easy way here is to first assign the two mismatched dice into specific positions in the sequence; we can put the first one in any of 6 positions, and the second in any of the remaining 5, for a total of 30 choices — but, since we haven't yet assigned values for those dice, they're identical, and so we need to divide by 2 to avoid counting identical positions twice (because putting the first mismatched die in position 1, and the second in position 2, gives the same result as putting the first in position 2 and the second in position 1), giving us 15 ways to place the mismatched dice into the sequence of 6 rolls.
Having done that, we just need to pick arbitrary values for those two die rolls; they can be identical, but neither of them can equal the four matched dice, so we have \$5 \times 5 = 25\$ choices total here. Putting this together with the 6 choices for the matched dice, and the 15 ways of picking the positions of the mismatched dice, we get \$6 \times 15 \times 25 = 2,250\$ ways of rolling 4-of-a-kind on 6d6, with a probability of \$2,250 \div 6^6 \approx 0.0482 = 4.82\%\$, or slightly under one in 20—a lot more likely than 5-of-a-kind.
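The hand counts so far (6, 180 and 2,250) are small enough to double-check by brute force before trusting the algebra; a quick Python enumeration of all ordered rolls:

```python
from itertools import product
from collections import Counter

# All 6**6 = 46,656 equally likely ordered rolls of 6d6
rolls = list(product(range(1, 7), repeat=6))

# Tally the largest n-of-a-kind in each roll
groups = Counter(max(Counter(r).values()) for r in rolls)
print(groups[6], groups[5], groups[4])  # 6 180 2250, matching the hand counts

# Full straights use all six faces, so there should be 6! = 720 of them
straights = sum(1 for r in rolls if len(set(r)) == 6)
print(straights)  # 720
```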
3 of a kind
We could do the same thing for three-of-a-kind, but that gets even more complicated, mainly because it's now also possible to roll two different sets of three in a single 6d6 roll. Counting the possible combinations, in a similar manner as above, isn't really difficult as such, but it does get tedious and error-prone.
...and so on
Fortunately, we can cheat and use a computer! Since there are only about 47 thousand possible 6d6 rolls, a computer can loop through all of them in a fraction of a second, and count how many times the most common die occurs in each of them. We can also do the same for straights, counting the longest sequence of consecutive dice rolled:
Using the dice_pool() helper function (which enumerates all possible sorted outcomes of rolling NdX dice and their respective probabilities) from this answer, here's a simple Python program to calculate the probabilities of various groups and straights:
# generate all possible sorted NdD rolls and their probabilities
# see http://en.wikipedia.org/wiki/Multinomial_distribution for the math
factorial = [1.0]
def dice_pool(n, d):
for i in range(len(factorial), n+1):
factorial.append(factorial[i-1] * i)
nom = factorial[n] / float(d)**n
for roll, den in _dice_pool(n, d):
yield roll, nom / den
def _dice_pool(n, d):
if d > 1:
for i in range(0, n+1):
pair = (d, i)
for roll, den in _dice_pool(n-i, d-1):
yield roll + (pair,), den * factorial[i]
else:
yield ((d, n),), factorial[n]
# the actual calculation and output code starts here
groups = {}
straights = {}
for roll, prob in dice_pool(6, 6):
# find largest n-of-a-kind:
largest = max(count for num, count in roll)
if largest not in groups: groups[largest] = 0.0
groups[largest] += prob
# find longest straight:
longest = length = 0
for num, count in roll:
if count > 0:
length += 1
else:
length = 0
if longest < length: longest = length
if longest not in straights: straights[longest] = 0.0
straights[longest] += prob
# print out results
for n in groups:
print("max %d of a kind: %9.6f%%" % (n, 100*groups[n]))
for n in straights:
print("max %d in a row: %9.6f%%" % (n, 100*straights[n]))
And here's the output:
max 1 of a kind: 1.543210%
max 2 of a kind: 61.728395%
max 3 of a kind: 31.507202%
max 4 of a kind: 4.822531%
max 5 of a kind: 0.385802%
max 6 of a kind: 0.012860%
max 1 in a row: 5.971365%
max 2 in a row: 34.615055%
max 3 in a row: 32.407407%
max 4 in a row: 17.746914%
max 5 in a row: 7.716049%
max 6 in a row: 1.543210%
Note that this output doesn't distinguish e.g. two or three pairs from a single pair, or a triple and a pair from just a triple. If you know some Python, it would not be difficult to modify the program to check for those as well.
Also note that it's actually really hard to get no more than one die of each kind (since that actually requires rolling a perfect straight), and also pretty hard to get no more than one in a row (although still a lot easier than getting six of a kind, since e.g. rolling 1,1,3,3,5,5 also counts). Three in a row is also only slightly less likely than two in a row (although some of the rolls counted as three in a row by the program actually include both), but larger groups and straights show the expected downward trend in probability as the group size increases. |
Here's an answer from a non-particle physicist to complement what (former) professional particle physicist Anna V has written.
"Real particles" enter and leave Feynman diagrams. Therefore, in principle, they can be detected in an experiment - they are the "terminals" of a Feynman diagram: ports through which we can "see" the system within.
In contrast, the path of a virtual particle begins and ends within a Feynman diagram. It has no "free ends" dangling over the "boundaries" of the diagram and is therefore not directly measurable. We can't detect them in experiment.
None of this is likely new to you. You're still left wondering what reality we can ascribe to virtual particles, if we can't directly detect them. You can think of virtual particles more literally, as Feynman liked to do, or you might try this approach: I personally like to think of them a little more abstractly, simply as mathematical terms in a perturbation series.
A good starting point to visualise this gist is the kinds of ideas explored in the following papers:
as well as the works of the late Hilary Booth of the Australian National University. This is not standard QED and it is very specialised and contrived: think of it as an illustrative "Baby QED" for someone (like me) who hasn't mastered quantum field theory. We consider here the system of one electron, a proton (the latter thought of as a classical particle, simply setting up an inverse-square electrostatic field) in a hydrogen atom, and the "virtual photons" that are swapped between them. The electron in the classical potential is of course simply described by the first-quantised Dirac equation. Now we add the electromagnetic field by adding Maxwell's equations and coupling the system as follows:
$$\gamma^\mu\left(i \partial_\mu - q A_\mu\right) \psi + V \psi - m \psi = 0$$
$$\partial_\nu F^{\nu\,\mu} = q\,\bar{\psi} \gamma^\mu \psi$$
$$F_{\mu\,\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$$
with the Lorenz Gauge
$$\partial_\mu A^\mu = 0$$.
The first equation is the Dirac equation, the second Maxwell's equations with a charge / current (4-current) distribution determined by the probability density of the Dirac electron. The third relates the Maxwell tensor (containing the $\vec{E}$ and $\vec{B}$ fields) to the four-potential, which couples back into the Dirac equation through the "gauge covariant derivative". So we have a rather elegant, but thorny-to-solve, coupled nonlinear system.
In the papers, the equations lead to a fixed point problem $X=F(X)$ of a certain integro-differential operator $F$, which is contractive, so the solution is the limit of the sequence:
$$X_0,\, F(X_0),\, F^2(X_0),\,\cdots$$
and can thus be solved nonperturbatively by the contraction mapping principle; it also gives an infinite series of terms corresponding to virtual pairs. It yields an exact solution which is an infinite series, what a mathematician would call the Peano-Baker series (see Baake and Schlaegel, "The Peano-Baker Series") and what a theoretical particle physicist would call (I believe) the Dyson series.
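The fixed-point iteration $X_0,\, F(X_0),\, F^2(X_0),\,\cdots$ is easy to illustrate with a scalar toy contraction; this is only a stand-in for the integro-differential operator $F$, and nothing here is specific to the papers:

```python
# Toy contraction on the real line: F has Lipschitz constant 1/2 < 1,
# so by the contraction mapping principle the iterates converge to the
# unique fixed point x* satisfying x* = F(x*), i.e. x* = 2.
def F(x):
    return 0.5 * x + 1.0

x = 0.0              # arbitrary starting point X_0
for _ in range(60):  # X_0, F(X_0), F^2(X_0), ...
    x = F(x)
print(x)             # converges to the fixed point 2.0
```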
Now the terms in this infinite series are: $X_0$, Dirac's solution of the Hydrogen atom, and the higher order terms, which are iterated integral operators. These iterations can be thought of as the perturbations wrought by one "virtual photon"; the next term involves virtual photons and virtual pair production followed by virtual pair annihilation, and so forth.
The "virtual particles" in this viewpoint can be thought of simply as an evocative "mnemonic" for the structure of the mathematical terms in the infinite series.
Radius of the base of a circular cone: \(R\)
Generatrix of a cone: \(m\)
Height of cone: \(H\)
Volume: \(V\)
Area of the base: \({S_B}\)
Lateral surface area: \({S_L}\)
Total surface area: \(S\)
A cone or a conical surface is a three-dimensional shape formed by the movement of a straight line (called the generatrix) that passes through a fixed point (the vertex of the cone) and crosses a given curve called the directrix. The cone is often defined as a three-dimensional shape bounded by the interior of a plane crossing the conical surface and the portion of the conical surface between the vertex and the boundary of crossing. The portion of the plane lying inside the conical surface is called the base of the cone, and the portion of the conical surface is called the lateral surface.

A cone is called a circular cone if its base is a circle. A cone is called a right circular cone if the line from the vertex of the cone to the centre of its base is perpendicular to the base. A right circular cone is formed by rotating a right triangle about its leg. A right circular cone is determined by the radius of the base \(R\) and height \(H\) (or equivalently, by the radius of the base \(R\) and generatrix \(m\)).

Relationship between the height, radius of the base and generatrix (slant height) of a right circular cone:
\(H = \sqrt {{m^2} - {R^2}} \)

Lateral surface area of a right circular cone:
\({S_L} = \pi Rm\)

Base area of a right circular cone:
\({S_B} = \pi {R^2}\)

Total surface area of a right circular cone:
\(S = {S_L} + {S_B} = \pi R\left( {m + R} \right)\)

Volume of a right circular cone:
\(V = {\large\frac{{{S_B}H}}{3}\normalsize} = {\large\frac{{\pi {R^2}H}}{3}\normalsize}\)
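The formulas above bundle naturally into a small helper; a sketch (the function name is mine):

```python
import math

def cone_properties(R, H):
    """Right circular cone quantities from the formulas above,
    given base radius R and height H."""
    m = math.sqrt(R ** 2 + H ** 2)   # generatrix (slant height): H^2 = m^2 - R^2
    S_B = math.pi * R ** 2           # base area
    S_L = math.pi * R * m            # lateral surface area
    S = S_L + S_B                    # total surface area, = pi*R*(m + R)
    V = S_B * H / 3                  # volume, = pi*R^2*H/3
    return m, S_B, S_L, S, V

m, S_B, S_L, S, V = cone_properties(R=3.0, H=4.0)
print(m)   # 5.0 (a 3-4-5 right triangle rotated about its leg)
```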
How can I find the general formula for the following real sequence $$(x_n)_{n \ge0}=(1,0,-1,0,\frac{1}{2},0,\frac{-1}{6},0,\frac{1}{24},0,\frac{-1}{120},\ldots)$$ I just know $x_0$ to $x_{10}$ so how can I find the general formula for this real sequence?
To get it into one equation, you can use that $$ \cos{\tfrac{1}{2}n\pi} = \begin{cases} 1 & n = 4k \\ 0 & n= 4k \pm 1 \\ -1 & n=4k+2 \end{cases}, $$ where $k \in \mathbb{Z}$. This then gives the sequence as $$ x_n = \frac{\cos{\frac{1}{2}n\pi}}{(n/2)!} $$ (What is $(n/2)!$ when $n$ is odd? You don't need to know for this, but you can get it from $(-1/2)!=\sqrt{\pi}$, which you'll learn about later on. The point is, it's not zero, so the odd terms are still zero.)
It appears that you have $$x_n=\begin{cases} 0 & n\text{ odd}\\ (-1)^{n/2}\frac{1}{(n/2)!} & n\text{ even}\end{cases}$$
HINT: You will need to split the formula into two parts, one for odd subscripts and one for even subscripts. The one for odd subscripts should be pretty obvious. For the even subscripts, note that $x_{2n}$ is negative when $n$ is odd and positive when $n$ is even; you should know a simple function of $n$ that is $-1$ when $n$ is odd and $1$ when $n$ is even, and you can make this function a factor in your formula. Finally, you should recognize the denominators $1,1,2,6,24,120,\ldots$ as a familiar sequence; to give you a little more help, the next two terms are $720$ and $5040$.
x(n) = ((-1)^(n/2)) * (1/((n/2)!)), if n is even
x(n) = 0, if n is odd
Inspired by the solution by Chappers above, here's my contribution:
$$\large x_n=\frac{\Re (i^n)}{\big(\frac n2\big)!}$$ |
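A quick check of the piecewise closed form from the answers against the terms listed in the question:

```python
import math

def x(n):
    # x_n = 0 for odd n, (-1)^(n/2) / (n/2)! for even n, as in the answers above
    if n % 2 == 1:
        return 0.0
    k = n // 2
    return (-1) ** k / math.factorial(k)

print([x(n) for n in range(11)])
# matches 1, 0, -1, 0, 1/2, 0, -1/6, 0, 1/24, 0, -1/120
```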
In the paper "Liquidity Risk and Risk Measure Computation" the authors describe a linear supply curve model for liquidity risk in the presence of market impact, i.e. the impact-affected asset price $S(t,x)$ is proportional to the unaffected price $S(t,0)$ and to the traded volume $x$ with some coefficient $\alpha$.
Under the diffusion (with constant drift $\mu$ and volatility $\sigma$) assumption for unaffected price process, the model parameter $\alpha$ is estimated through the regression on returns (see (9)):
$$\log\left(\frac{S(t_2,x_{t_2})}{S(t_1,x_{t_1})}\right) \simeq \int_{t_1}^{t_2}(\mu-\frac{1}{2}\sigma^2)dt + \int_{t_1}^{t_2}\sigma dW_t + \alpha(x_{t_2}-x_{t_1})$$
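To make the regression concrete, here is a hedged sketch with simulated data: $\alpha$ is recovered by OLS of log returns on the signed volume change, with an intercept absorbing the drift term. All names and parameter values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
mu, sigma, dt = 0.05, 0.2, 1.0 / 252   # hypothetical drift, vol, time step
alpha_true = 2e-6                       # hypothetical impact coefficient

# Signed traded volumes x_t and their increments
x = rng.integers(-1000, 1000, size=n + 1).astype(float)
dx = np.diff(x)

# Simulated log returns under the model
log_ret = ((mu - 0.5 * sigma ** 2) * dt
           + sigma * np.sqrt(dt) * rng.normal(size=n)
           + alpha_true * dx)

# OLS: the intercept absorbs the (mu - sigma^2/2) dt term, the slope estimates alpha
X = np.column_stack([np.ones(n), dx])
beta, *_ = np.linalg.lstsq(X, log_ret, rcond=None)
print("estimated alpha:", beta[1])   # close to alpha_true
```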
There are a couple of basic questions that come up regarding this regression:
Should one use the signed values for the buy/sell volumes $x_{t_1}$ and $x_{t_2}$? In the section 2 of the article this is mentioned, however the regression on $x_{t_2}-x_{t_1}$ seems to be complicated in some cases. For example, if there are only buy trades of the same size or when one has only external trades data with no indication of buy/sell available.
How should we treat $\int_{t_1}^{t_2}(\mu-\frac{1}{2}\sigma^2)dt$ term? Should we estimate it in the same regression or estimate it separately using unaffected price time series $S(t,0)$?
Is it fine to use intraday data throughout a certain period for this regression (e.g. shouldn't one try to account for price jumps between trading days)? |
Fund managers are acting in a highly stochastic environment. What methods do you know to systematically separate skillful fund managers from those that were just lucky?
Every idea, reference, paper is welcome! Thank you!
Quantitative Finance Stack Exchange is a question and answer site for finance professionals and academics.
Larry Harris has a chapter on performance evaluation in Trading and Exchanges. He states that over a long period of time, a skilled asset manager will consistently have excess returns whereas a lucky one will be expected to have random and unpredictable returns. Thus, we start with the portfolio's market-adjusted return standard deviation:
\begin{equation} \sigma_{adj} = \sqrt{\sigma^2_{port} + \sigma^2_{mk} - 2\rho\sigma_{port}\sigma_{mk}} \end{equation}
where $\rho$ is the correlation between the market and portfolio returns.
For a sample size $n$ (generally number of years), the average excess returns, and the adjusted standard deviation from above, we have a t-statistic:
\begin{equation} t = \frac{\overline{R_{port}} - \overline{R_{mk}}}{\frac{\sigma_{adj}}{\sqrt{n}}} \end{equation}
Now we can simply determine the probability that the manager's excess returns were luck by plugging this t-statistic into the t-distribution's survival function (one minus the CDF) with degrees of freedom $n - 1$. The lower the probability, the more we can believe the manager's excess returns came from skill.
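A minimal sketch of this computation; the return series here are made up for illustration, and `stats.t.sf` gives the one-sided tail probability:

```python
import numpy as np
from scipy import stats

# Hypothetical: ten years of portfolio and market returns
port = np.array([0.12, 0.08, -0.03, 0.15, 0.09, 0.11, -0.01, 0.14, 0.07, 0.10])
mkt  = np.array([0.10, 0.05, -0.06, 0.12, 0.07, 0.08, -0.04, 0.11, 0.04, 0.06])

rho = np.corrcoef(port, mkt)[0, 1]
sigma_adj = np.sqrt(port.var(ddof=1) + mkt.var(ddof=1)
                    - 2 * rho * port.std(ddof=1) * mkt.std(ddof=1))

n = len(port)
t_stat = (port.mean() - mkt.mean()) / (sigma_adj / np.sqrt(n))
# One-sided tail probability: chance of excess returns at least this large by luck
p_luck = stats.t.sf(t_stat, df=n - 1)
print(t_stat, p_luck)
```

Note that $\sigma_{adj}$ is exactly the sample standard deviation of the excess return series $R_{port} - R_{mk}$.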
Some links:
Below is some code that I used recently to illustrate luck (and con-games). The story went like this:
I'll dream up your lucky lottery number for 2010.....let's say it's 20639. The number doesn't matter because we're going to use that for the seed of a random number generator. Then, I'll take the first three digits of your lucky lottery number (206) and reverse them (-.602) and use that as a multiplier on that random number. Since the S&P500 started off in 2010 at about 1100, I'll start the model at that level. Here is the code:
library(quantmod)

# Read 2010 S&P500 data from Yahoo
tem <- as.zoo(getSymbols("^gspc", from = "2010-01-01", to = "2011-01-01",
                         auto.assign = FALSE, src = "yahoo"))

# Build a lucky lottery model based on the seed
set.seed(20639)  # Your lucky lottery number
yt <- tem$GSPC.Adjusted
coredata(yt) <- 1100 * exp(-0.602 * cumsum(rnorm(length(yt), sd = 0.0113)))

# Plot the results
plot(tem$GSPC.Adjusted, type = "l", main = "S&P500 for Year 2010",
     ylab = "S&P500", xlab = "", lwd = 3, col = "darkgray")
lines(yt, lwd = 2, col = "red")
legend("bottomright", legend = c("Actual S&P500", "Dredged S&P500"),
       lwd = c(3, 2), col = c("darkgray", "red"))
This is all meaningless BS, but it took a while for some people to untangle luck from the con. The really hard part is to convince yourself that your "skill" actually found something real. Something "real" is a very rare event in the world of investing.
In order to have a shot at separating skill from luck, you need a sense of what luck looks like. I think the best chance of understanding luck is to use random portfolios. See, for instance: http://www.portfolioprobe.com/about/random-portfolios-in-finance/
Read Fooled by Randomness by Nassim Taleb. In a nutshell, he says that you can only tell the difference by understanding the risks that were taken. Lucky investors can win for many years before blowing up. Even if he doesn't blow up, there is no way to know what might have happened if the risks turned out badly.
Take a look at White's Reality Check.
Another very crude way would be to calculate a "skill score" (from The Mathematics of Technical Analysis, p. 325):
$$\tt{skill\ score} = \frac{SKILL\_correct - NOSKILL\_correct}{Total\ decisions - NOSKILL\_correct}$$
SKILL_correct: the profitable trades
NOSKILL_correct: randomly assigned trades that were profitable
Total decisions: number of trades
If this number is 0 or negative, it indicates that you are mostly dealing with a lucky investor, and not a skilled one.
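Plugging hypothetical numbers into the formula (none of these figures come from the book):

```python
total_decisions = 250    # trades made
skill_correct = 150      # trader's profitable trades (hypothetical)
noskill_correct = 125    # profitable trades from randomly assigned calls (hypothetical)

skill_score = (skill_correct - noskill_correct) / (total_decisions - noskill_correct)
print(skill_score)   # 25 / 125 = 0.2 -> some evidence of more than luck
```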
I was going to suggest that you use alpha, which is the measure of a manager's excess return beyond their benchmark. But here is an alternative view which is quite interesting.
Check out the last chapter in Grinold's classic Active Portfolio Management (2nd Ed) for a discussion on separating luck from skill
I remember an article from graduate school that describes a methodology for measuring the true timing ability of a money manager. I don't remember the name of the article nor the name of the author, however, I do remember some of the details of the article. Maybe someone else has run across it and would be kind enough to post the appropriate reference.
Let's assume that a manager has the ability to be either in cash, earning the risk-free rate, or a long position in a basket of stocks. If the money manager had superior timing ability, he would be in the basket when the basket was returning greater than the risk-free rate, and he would be in a cash position when the basket is returning less than the risk-free rate. What you basically have is a return profile that looks a lot like the payoff of call option.
If you plot market return on the x-axis and manager return on the y-axis, the return should be flat at the risk-free rate for everything to the left of the risk-free rate on the x-axis. At the risk-free rate on the x-axis, the return should follow a 45-degree line up and to the right of the diagram. Over time, you measure manager return against market return, and if he is any good, you should see the call option payoff diagram being roughly drawn out.
Martijn Cremers and Antti Petajisto have a series of papers using the concept of "Active Share," a new measure of active portfolio management which represents the share of portfolio holdings that differ from the benchmark index holdings, to evaluate mutual fund managers. They find that the most active stock pickers have outperformed their benchmark indices even after fees and transaction costs. In contrast, closet indexers or funds focusing on factor bets have lost to their benchmarks after fees.
Bottom line: when separating skill from luck, concentrate on those managers that actually try to differentiate themselves from the crowd. Also, the more fine grained your strategy is, the more likely it is to represent skill over luck.
My 2c worth.
Experience tells me that the better ways to get a feel for whether their strategy is based on something more than luck are amongst:
1) 'getting to know your traders' -- have a chat, pick their brains, try to get some insight into their methods;
2) see how hard the market has been -- check whether you have just been part of a bull market which basically made pretty-much every strategy a winner, and discount performance accordingly.
I would be skeptical of anything too mathematical for measuring outperformance, since even a badly-designed strategy may have performed well.
I see an analogy with driving: many drivers will successfully get from A to B, but you'll get a much clearer picture of the sorts of risks a driver takes by sitting in the car with them, rather than trying some mathematical analysis of the race. Make sense?
A very well thought through exposition on the matter is given in this paper:
A Consultant’s Perspective on Distinguishing Alpha from Noise by John R. Minahan
It combines a lot of wisdom and common sense that sometimes seems to get lost in the process... |
I want to know the hybridization of the central atom in $\ce{(SiH3)3N}$.
I think it should be $\mathrm{sp^3}$, because $\ce{N}$ is attached to three silicon atoms and one lone pair. But actually it is supposedly $\mathrm{sp^2}$.
How is this so?
Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry.
Ordinarily and according to Bent’s rule, we would expect nitrogen’s lone pair to be in an $\mathrm s$ orbital and nitrogen using its three $\mathrm p$ orbitals to form three bonds to the three silicon atoms. This configuration would allow for the greatest stabilisation.
However, due to nitrogen’s small size, this perfect world already falls apart for ammonia ($\ce{NH3}$), where nitrogen is bound to three otherwise tiny hydrogen atoms. Because an electronically perfect angle of $90^\circ$ would generate much too much steric strain between the hydrogen atoms, $\mathrm s$ contribution is mixed into the bonding $\mathrm p$ orbitals to a certain extent; for ammonia, this extent happens to be almost perfect $\mathrm{sp^3}$ — results for other amines will vary. This electronic situation is not ideal, however it is clearly better than having $\mathrm{sp^2}$ hybridisation and the lone pair in a $\mathrm p$ type orbital. An $\mathrm{sp^2}$ hybridisation of nitrogen in ammonia can be reached, but only as the transition state of nitrogen inversion.
Carrying on to the compound $\ce{N(SiH3)3}$, we would be inclined to again assume a hybridisation of $\mathrm{sp^3}$ in line with the previous paragraph. However, Beagley and Conrad performed electron diffraction studies on $\ce{N(SiH3)3}$ and found the molecule to be practically planar within experimental error. [1,2] A planar molecule without doubt means that nitrogen is $\mathrm{sp^2}$-configured in $\ce{N(SiH3)3}$.
The question remains why. There must be some kind of stabilising interaction of nitrogen’s remaining $\mathrm p$ orbital with something else to keep the molecule planar. Beagley and Conrad suggest — in line with what was thought at the time — that this is due to π bonds with silicon’s remote $\mathrm d$ orbitals. [1] Numerous pieces of evidence, much of which is collected on this site, point to the opposite (namely that $\mathrm d$ orbitals do not play any role in the bonding situation of main group elements).
Instead, I think we are dealing with something you may call ‘inverse hyperconjugation’. Remember that $\chi(\ce{Si}) = 1.9$, which is less than that of hydrogen, meaning that the $\ce{Si-H}$ bonds are polarised towards hydrogen. This in turn means that $\sigma^*_{\ce{Si-H}}$ is a silicon-centred orbital with its primary lobe pointing towards nitrogen. Therefore, nitrogen’s $\mathrm p$ orbital can favourably interact with the antibonding $\sigma_{\ce{Si-H}}^*$ orbital, increasing the $\ce{Si-N}$ bond order and decreasing the $\ce{Si-H}$ bond order. The effects are the same as with hyperconjugative stabilisation of secondary or tertiary carbocations, but the electronic demand is reversed. We could attempt to draw the following resonance structures in Lewis formalism to explain this:
$$\ce{H-SiH2-N(SiH3)2 <-> \overset{-}{H}\bond{...}SiH2=\overset{+}{N}(SiH3)2}\tag{1}$$
In this Lewis formalism, the double bond would be generated from a $\mathrm p$ orbital on both silicon and nitrogen.
Notes and references:
[1]: B. Beagley, A. R. Conrad, Trans. Faraday Soc. 1970, 66, 2740–2744. DOI: 10.1039/TF9706602740.
[2]: Actually, $\angle(\ce{Si-N-Si}) \approx 119.5^\circ < 120^\circ$. The authors state: [1]

"The apparent slight deviation from planarity is associated with a shrinkage effect [11] on the $\ce{Si\dots Si}$ distance of about $\require{mediawiki-texvc}\pu{0.007 \AA}$ (see [table]). Spectroscopic results [12] are entirely in agreement that the molecule is planar."
[11] A. Allmenningen, O. Bastiansen and T. Munthe-Kaas, Acta Chem. Scand., 1956, 10, 261. [sic!]

[12] E. A. V. Ebsworth, J. R. Hall, M. J. Mackillop, D. C. McKean, N. Sheppard and L. A. Woodward, Spectrochim. Acta, 1958, 13, 202. [sic!]
In order to test Jan's argument, I did an NBO analysis of your structure (optimised at PBE-D3/def2-SVP with NWChem 6.6, using a conformational search with MMFF94s and Avogadro as the starting point; a frequency calculation determined it was a true minimum).
Figure 1: optimised geometry (angles in degrees and distances in angstrom)
The obtained geometry is in perfect agreement with Jan's answer, showing a $\ce{Si-N-Si}$ angle of 120°.
The most significant NBO second-order stabilisation energies are:
                |                   |   E(2)   | E(j)-E(i) | F(i,j)
 Donor NBO (i)  | Acceptor NBO (j)  | kcal/mol |   a.u.    |  a.u.
================|===================|==========|===========|========
 LP ( 1) N 1    | BD*( 1)Si 2- H 6  |   5.08   |   0.43    | 0.043
 LP ( 1) N 1    | BD*( 1)Si 2- H 7  |   5.08   |   0.43    | 0.043
 LP ( 1) N 1    | BD*( 1)Si 3- H 8  |   5.08   |   0.43    | 0.043
 LP ( 1) N 1    | BD*( 1)Si 3- H 9  |   5.08   |   0.43    | 0.043
 LP ( 1) N 1    | BD*( 1)Si 4- H12  |   5.08   |   0.43    | 0.043
 LP ( 1) N 1    | BD*( 1)Si 4- H13  |   5.08   |   0.43    | 0.043
That is, a six-fold $\ce{n_\ce{N}} \rightarrow \sigma^*(\ce{Si-H})$ donation, worth 5.08 kcal/mol each, seems to be the most significant delocalisation.
Figure 2: $\ce{n_\ce{N}} \rightarrow \sigma^*(\ce{Si-H})$ delocalisation scheme
On the other hand, the natural electron configurations are as follows:
 Atom | Natural Electron Configuration
------|----------------------------------
  N   | [core]2s( 1.53)2p( 5.12)
  Si  | [core]3s( 1.01)3p( 1.96)3d( 0.02)
  H   | 1s( 1.15)
Thus, the electron configuration of $\ce{N}$, according to NBO analysis, is $\ce{1s^{2} 2s^{1.53} 2p^{5.12}}$. Furthermore, the nitrogen lone pair is of pure $\pi$ character. Looking closer at the $\ce{N-Si}$ bond we see:
 (Occupancy)   Bond orbital / Coefficients / Hybrids
-------------------------------------------------------------------------------
 1. (1.97614) BD ( 1) N 1-Si 2
     ( 81.00%)   0.9000* N 1  s( 33.32%)p 2.00( 66.66%)d 0.00(  0.02%)
     ( 19.00%)   0.4359*Si 2  s( 23.84%)p 3.16( 75.42%)d 0.03(  0.74%)
Thus,
In agreement with Jan's answer.
We call this weird thing back bonding. The lone pair kind of delocalises, or seeks refuge, in the empty d orbital of Si, basically providing each N-Si bond, on average, with a third of an extra bond.
I have an optimization problem with the following objective function.
$\max_{a^{l}_{n,k},\, b^l_{n,k}} \sum_{n=1}^{\overline{N_l}} b_{n,k} \frac{C_1}{C_2} \log_2 \bigg(1 +\frac{a_{n,k} h_{n,k}} {b_{n,k} c_3}\bigg) $
The constraints are linear.
The objective is concave if I keep all the constants as 1; for simplicity, the objective function is:
$f = b \cdot \log_2(1+a/b)$
which is concave, right? Or does it depend on the actual values of the constants?
Also, if I add another parameter $X$ in the denominator of the log term, as:
$\max_{a^{l}_{n,k},\, b^l_{n,k}} \sum_{n=1}^{\overline{N_l}} b_{n,k} \frac{C_1}{C_2} \log_2 \bigg(1 +\frac{a_{n,k} h_{n,k}} {b_{n,k} c_3 + X}\bigg) $ does it still remain concave?
$f= b \cdot \log (1+\frac{a}{b+1})$ is not concave, right? Or does it depend on the value of $X$ and the other constants (which I kept at one)?
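One way to probe these claims numerically is to sample a finite-difference Hessian of $f(a,b) = b\log_2(1+a/b)$ over the positive orthant and check that its eigenvalues are nonpositive ($f$ is the perspective of $\log_2(1+a)$, hence concave for $a,b>0$). This sketch does not prove concavity; it only fails to falsify it:

```python
import numpy as np

def f(v):
    a, b = v
    return b * np.log2(1.0 + a / b)   # perspective of log2(1 + a)

def num_hessian(fn, v, h=1e-4):
    """Central finite-difference Hessian of fn at point v."""
    n = len(v)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (fn(v + ei + ej) - fn(v + ei - ej)
                       - fn(v - ei + ej) + fn(v - ei - ej)) / (4.0 * h * h)
    return H

rng = np.random.default_rng(0)
max_eig = max(np.linalg.eigvalsh(num_hessian(f, rng.uniform(0.5, 10.0, 2))).max()
              for _ in range(100))
print(max_eig)   # ~ 0 up to finite-difference error: consistent with concavity
```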
I have the following problem:
Input: two sets of intervals $S$ and $T$ (all endpoints are integers).
Query: is there a monotone bijection $f:S \to T$?
The bijection is monotone w.r.t. the set inclusion order on $S$ and $T$. $$\forall X\subseteq Y \in S, \ f(X) \subseteq f(Y)$$
[I am not requiring the reverse condition here. If the reverse condition were required, i.e., $\forall X, Y,\ X\subseteq Y \Leftrightarrow f(X) \subseteq f(Y)$, then this would be in PTIME because it amounts to isomorphism testing of the corresponding inclusion posets (which have order dimension 2 by construction), which is in PTIME by Möhring, Computationally Tractable Classes of Ordered Sets, Theorem 5.10, p. 61.]

Update: The problem is in $\mathsf{NP}$: we can check efficiently whether a given $f$ is a monotone bijection.
Is there a polynomial-time algorithm for this problem? Or is it $\mathsf{NP}$-hard?
The question can be stated more generally as existence of a monotone bijection between two given posets of order dimension 2.
Using a reduction inspired by the answers to this question, I know that the problem is $\mathsf{NP}$-hard when dimensions are not restricted. However, it is not clear if the reduction would also work when dimensions are restricted.
I am also interested to know about tractability when the dimension is just bounded by some arbitrary constant (not just 2). |
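For very small instances the interval formulation can be checked by brute force over all bijections, which at least makes the (one-way) monotonicity condition concrete. This is exponential, of course, not the polynomial-time algorithm asked about; the example intervals are illustrative:

```python
from itertools import permutations

def contains(X, Y):
    """X is a subset of Y, for intervals given as (lo, hi) pairs."""
    return Y[0] <= X[0] and X[1] <= Y[1]

def monotone_bijection_exists(S, T):
    """Brute force: does some bijection f: S -> T satisfy
    X subset of Y implies f(X) subset of f(Y)? (One-way condition only.)"""
    if len(S) != len(T):
        return False
    for perm in permutations(T):
        f = dict(zip(S, perm))
        if all(contains(f[X], f[Y]) for X in S for Y in S if contains(X, Y)):
            return True
    return False

S = [(0, 3), (1, 2), (0, 1)]   # a root interval with two incomparable children
T = [(0, 4), (2, 3), (0, 2)]   # the same inclusion shape
print(monotone_bijection_exists(S, T))   # True
```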
If we define Transpassing in the following manner:

$\phi$ is Transpassing $\iff \exists x,z\, (x=\{y \in V \mid \phi(y)\} \wedge \phi(z) \wedge x \subset TC(z))$
Here $\phi$ is a predicate symbol, and $TC(x)$ stands for "the transitive closure class of $x$", defined in the customary manner as the minimal transitive superclass of $x$, where a transitive class is a class all of whose elements are subsets of it.
Now define Reflective as:

$\phi$ is reflective $\iff \forall x\, (\phi(x) \to x \in V)$
Now $V$ is a primitive constant symbol denoting the class of all sets, as in Ackermann set theory.
Now let's work in a first order set theory that has the following axioms:
Extensionality: as in Ackermann's set theory.

Class comprehension scheme: if $\phi(y)$ is a formula, then all closures of $(\phi$ is reflective $\to \exists x (x=\{y|\phi(y)\}))$ are axioms.

Transitivity: $V$ is a transitive class.

Acyclicity: no class is an element of its transitive closure.

Acyclic set construction scheme: if $\phi(y)$ is a formula in which $V$ does not occur, and where $y,z_1,..,z_n$ are all of its free variables, then:
$\forall z_1,..,z_n \in V $$( \phi$ is not transpassing $\to \exists x \in V (x=\{y| \phi(y)\} ))$ is an axiom.
Now this theory would clearly prove all axioms of Ackermann's set theory (except the full consequence of Regularity), including the second completeness axiom for $V$. So the above acyclic set construction principle is indeed stronger than the reflection scheme of Ackermann. To write the latter:
Ackermann's reflection scheme for set construction: if $\phi(y)$ is a formula in which $V$ does not occur, and where $y,z_1,..,z_n$ are all of its free variables, then:
$\forall z_1,..,z_n \in V$ $( \phi$ is reflective $\to \exists x \in V (x=\{y| \phi(y)\} ))$ is an axiom.
It is easy to prove that in this theory every reflective predicate (given the conditions of not using the symbol $V$ and parameters standing just for sets) is non-transpassing. But does that hold in the opposite direction?
My question is: Does Ackermann's set theory prove all axioms of this theory? In other words, can we prove in Ackermann's set theory that every non-transpassing predicate (with the above qualifications) is a reflective predicate?
The idea is that the theory presented here is based intuitively on a separate notion, that of defining classes of acyclically constructed sets, though it overlaps with Ackermann's in being essentially a class theory with set-hood taken to be primitive, and it shares with it all of its first four axioms. However, the notion of acyclicity is intuitively different from that of reflection: here all axioms pivot around the theme of acyclic construction, while in Ackermann's, two axioms seem deliberately fixed to suit proving the axioms of union and power; even Regularity doesn't seem to be necessary for Ackermann's. However, should the answer to my question be in the positive, then the above theory would only be a rather long reformulation of Ackermann's set theory, though one reflecting a more unified theme of axiomatization.
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$.
Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$, and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is $$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx = \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right)\phantom. dt = \int_0^\infty \frac{dt}{t^2+1},$$ which is the arctangent integral for $\pi/2$, QED.
The theorem is correct, and usually obtained as an application of contour integration, or of Fourier inversion ($\sin x / x$ is a multiple of the Fourier transform of the characteristic function of an interval). The poof, which is the first one I saw (given in a footnote in an introductory textbook on quantum physics), is not correct, because the integral does not converge absolutely. One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$ as a double integral in the same way, obtaining $$\int_0^M \sin x \frac{dx}{x} = \int_0^\infty \frac{dt}{t^2+1} - \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$ and showing that the second integral approaches $0$ as $M \rightarrow \infty$; but this detour makes for a much less appealing alternative to the usual proof by complex or Fourier analysis.
Still the double-integral trick can be used legitimately to evaluate $\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such that the integral converges absolutely (that is, with $2 \leq n \leq m$; NB unlike the contour or Fourier approach this technique applies also when $m \not\equiv n \bmod 2$). Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain $$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right)\phantom. dt,$$ in which the inner integral is a rational function of $t$, and then the integral with respect to $t$ is elementary. For example, when $m=n=2$ we find $$\int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty t \frac2{t^3+4t} dt = 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$ As a bonus, we recover a correct proof of our starting theorem by integration by parts:
$$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$ since $2 \sin x \cos x = \sin 2x$, the desired $\int_0^\infty \sin x \phantom. dx/x = \pi/2$ follows by a linear change of variable.
Exercise: Use this technique to prove that $\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$, and more generally $$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$ when the integral converges. [Both are in Gradshteyn and Ryzhik, page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited to D. Bierens de Haan, Nouvelles tables d'intégrales définies, Amsterdam 1867; the general case is 3.827#1, from Gröbner and Hofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.]
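The two absolutely convergent integrals above are easy to sanity-check numerically; a sketch using composite Simpson's rule with a truncated upper limit (the integrands decay like $1/x^2$, so the tail beyond $10^4$ contributes less than about $10^{-4}$):

```python
import numpy as np

def simpson(f, a, b, n=2_000_001):
    """Composite Simpson's rule with n (odd) equally spaced sample points."""
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-2:2].sum())

# sin(x)/x written via np.sinc (np.sinc(t) = sin(pi t)/(pi t), with sinc(0) = 1)
f2 = lambda x: np.sinc(x / np.pi) ** 2                # sin^2 x / x^2
f3 = lambda x: np.sinc(x / np.pi) ** 2 * np.sin(x)    # sin^3 x / x^2

I2 = simpson(f2, 0.0, 1.0e4)
I3 = simpson(f3, 0.0, 1.0e4)
print(I2, np.pi / 2)            # both approximately 1.5707...
print(I3, 0.75 * np.log(3))     # both approximately 0.8239...
```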
Rajeeva L Karandikar
Articles written in Proceedings – Mathematical Sciences
Volume 124 Issue 3 August 2014 pp 457-469
We give a construction of an explicit mapping
$$\Psi: D([0,\infty),\mathbb{R})\to D([0,\infty),\mathbb{R}),$$
where $D([0,\infty), \mathbb{R})$ denotes the class of real valued r.c.l.l. functions on $[0,\infty)$ such that for a locally square integrable martingale $(M_t)$ with r.c.l.l. paths,
$$\Psi(M.(\omega))=A.(\omega)$$
gives the quadratic variation process (written usually as $[M,M]_t$) of $(M_t)$. We also show that this process $(A_t)$ is the unique increasing process $(B_t)$ such that $M_t^2-B_t$ is a local martingale, $B_0=0$ and
$$\mathbb{P}((\Delta B)_t=[(\Delta M)_t]^2, \; 0 < t < \infty)=1.$$
Apart from elementary properties of martingales, the only result used is the Doob’s maximal inequality. This result can be the starting point of the development of the stochastic integral with respect to r.c.l.l. martingales.
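As a quick numerical illustration (not from the paper): for Brownian motion the quadratic variation over $[0,T]$ is $T$, and the sum of squared increments over a fine grid approximates it:

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 1_000_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)  # increments of M = W on a grid
qv = np.sum(dW ** 2)                          # realized quadratic variation
print(qv)   # close to [W, W]_T = T = 1 for large n
```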
Determine Whether a Set of Functions $f(x)$ such that $f(x)=f(1-x)$ is a Subspace

Problem 285
Let $V$ be the vector space over $\R$ of all real valued function on the interval $[0, 1]$ and let
\[W=\{ f(x)\in V \mid f(x)=f(1-x) \text{ for } x\in [0,1]\}\] be a subset of $V$. Determine whether the subset $W$ is a subspace of the vector space $V$.

Proof.
We claim that $W$ is a subspace of $V$.
To show the claim, we check the following subspace criteria: (1) the zero vector of $V$ is in $W$; (2) $W$ is closed under addition; (3) $W$ is closed under scalar multiplication.
The zero vector of $V$ is the zero function $\theta(x)=0$.
Since we have \[\theta(x)=0=\theta(1-x)\] for any $x\in [0, 1]$, the zero vector $\theta$ is in $W$, hence condition 1 is met.
Let $f(x), g(x)$ be arbitrary elements in $W$. Then these functions satisfy
\[f(x)=f(1-x) \text{ and } g(x)=g(1-x)\] for any $x\in [0,1]$. We want to show that the sum $h(x):=f(x)+g(x)$ is in $W$. This follows since we have \[h(x)=f(x)+g(x)=f(1-x)+g(1-x)=h(1-x).\] Thus, condition 2 is satisfied.
Finally, we check condition 3. Let $c$ be a scalar and let $f(x)$ be an element in $W$.
Then we have \[f(x)=f(1-x).\] It follows from this that \[cf(x)=cf(1-x),\] and this shows that the scalar product $cf(x)$ is in $W$.
Therefore condition 3 holds, and we have proved the subspace criteria for $W$. Thus $W$ is a subspace of the vector space $V$.
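The three conditions can also be checked numerically for sample members of $W$ by discretising $[0,1]$; reversing a symmetric grid implements $x \mapsto 1-x$ (the particular functions are my own examples):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)

f = np.cos(2 * np.pi * x)   # cos(2*pi*x) = cos(2*pi*(1-x)), so f is in W
g = (x - 0.5) ** 2          # symmetric about x = 1/2, so g is in W

def in_W(h):
    # h(x) == h(1-x) on the grid; reversing the symmetric grid maps x to 1-x
    return np.allclose(h, h[::-1])

print(in_W(f), in_W(g))            # True True
print(in_W(f + g), in_W(3.7 * f))  # True True: closed under + and scalar mult.
```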