J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s = 7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Summary: I am wondering at the striking similarity between the expressions for creation/annihilation operators in terms of position and momentum operators and the expressions for sine and cosine in terms of the exponential.

Hello everyone, I have noticed a striking similarity between expressions for creation/annihilation operators in terms of position and momentum operators and trigonometric expressions in terms of exponentials. In the treatment by T. Lancaster and S. Blundell, "Quantum Field Theory for the Gifted Amateur", Chapter 2, eqns. 2.9-2.13, the creation/annihilation operators for energy levels of the simple harmonic oscillator are given as ## \hat{a} = \sqrt{\dfrac{m\omega}{2\hbar}} \left(\hat{x} + \dfrac{i}{m\omega}\hat{p}\right) ## ## \hat{a}^\dagger = \sqrt{\dfrac{m\omega}{2\hbar}} \left(\hat{x} - \dfrac{i}{m\omega}\hat{p}\right) ## and the inverse formulae are ## \hat{x} =\sqrt{\dfrac{\hbar}{2m\omega}} (\hat{a} +\hat{a}^\dagger ) =\dfrac{1}{2}\sqrt{\dfrac{2\hbar}{m\omega}} (\hat{a} +\hat{a}^\dagger ) ## ## \hat{p} =-i\sqrt{\dfrac{m\omega\hbar}{2}} (\hat{a} -\hat{a}^\dagger ) =\dfrac{-i}{2}\,m\omega\sqrt{\dfrac{2\hbar}{m\omega}} (\hat{a} -\hat{a}^\dagger ) ## Now, my observation is that the first pair of expressions has the same structure as Euler's formula ## e^{iz} = \cos(z)+i\sin(z), \;e^{-iz} = \cos(z)-i\sin(z)##, upon the substitution ##e^{iz} \rightarrow \sqrt{\dfrac{2\hbar}{m\omega}}\hat{a}, \;e^{-iz} \rightarrow \sqrt{\dfrac{2\hbar}{m\omega}}\hat{a}^{\dagger},## ## \cos(z) \rightarrow \hat{x},\; \sin(z) \rightarrow \dfrac{1}{m\omega}\hat{p}##, and the second pair of equations is recovered with the same substitution from the inverse formulae ## \cos(z) = \dfrac{1}{2}(e^{iz}+e^{-iz}), ## ## \sin(z)=\dfrac{-i}{2}(e^{iz}-e^{-iz}).## Now, I realize that the structural similarity stems from the definition ## \hat{a} \propto \hat{x}+\dfrac{i}{m\omega}\hat{p}##, but there seems to be a geometrical meaning to this.
Can we indeed interpret the interplay between the position and momentum as the connection between trigonometric functions? What is the meaning of the commutation relations then? Are you familiar with any textbook treating this aspect of quantization? Thank you in advance!
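One standard way to make the geometric picture concrete (this is not from the question or the book, but is consistent with the formulae above): in the Heisenberg picture the ladder operators evolve by pure phases, so $\hat{x}$ and $\hat{p}$ literally rotate into each other like cosine and sine.

```latex
% Heisenberg equation with H = \hbar\omega(\hat{a}^\dagger\hat{a} + 1/2)
% and [\hat{a},\hat{a}^\dagger] = 1:
\frac{d\hat{a}}{dt} = \frac{i}{\hbar}[\hat{H},\hat{a}] = -i\omega\,\hat{a}
\quad\Longrightarrow\quad
\hat{a}(t) = \hat{a}(0)\,e^{-i\omega t},\qquad
\hat{a}^\dagger(t) = \hat{a}^\dagger(0)\,e^{+i\omega t}.
% Substituting into the inverse formulae gives a rotation in phase space:
\hat{x}(t) = \hat{x}(0)\cos\omega t + \frac{1}{m\omega}\,\hat{p}(0)\sin\omega t,
\qquad
\hat{p}(t) = \hat{p}(0)\cos\omega t - m\omega\,\hat{x}(0)\sin\omega t.
```

So the analogy is the classical phase-space rotation of the oscillator; what the trigonometric picture misses is precisely the commutator, since $\cos$ and $\sin$ commute while $[\hat{x},\hat{p}]=i\hbar$.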
Let $\alpha$ and $\beta$ be two distinct eigenvalues of a $2\times2$ matrix $A$. Then which of the following statements must be true? 1 - $A^n$ is not a scalar multiple of the identity matrix for any positive integer $n$. 2 - $ A^3 = \dfrac{\alpha^3-\beta^3}{\alpha-\beta}A-\alpha\beta(\alpha+\beta)I$ For statement 1 I picked a diagonal matrix with diagonal entries 1 and -1, whose square comes out to be the identity matrix. Thus statement 1 may be false. But for the second statement I am not able to figure out a way to start. This is probably easy, but I am not able to get it. Please post a small hint so that I may proceed further.
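A possible hint (my sketch via the Cayley–Hamilton theorem, which the question itself does not name): every $2\times2$ matrix satisfies its characteristic polynomial, whose coefficients are the trace $\alpha+\beta$ and determinant $\alpha\beta$.

```latex
% Cayley–Hamilton for a 2x2 matrix with eigenvalues \alpha, \beta:
A^2 = (\alpha+\beta)A - \alpha\beta I
% Multiply by A and eliminate A^2:
A^3 = (\alpha+\beta)A^2 - \alpha\beta A
    = \bigl((\alpha+\beta)^2 - \alpha\beta\bigr)A - \alpha\beta(\alpha+\beta)I
% and the coefficient of A simplifies:
(\alpha+\beta)^2 - \alpha\beta = \alpha^2+\alpha\beta+\beta^2
                               = \frac{\alpha^3-\beta^3}{\alpha-\beta},
```

which is exactly statement 2.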
EDIT: As a consequence of reading the question too quickly, everything I've written below is about functions $\mathbb{R}\rightarrow\mathbb{R}$, not $[0, 1]\rightarrow\mathbb{R}$ - as an exercise, show that this doesn't affect anything. Easiest (if least illuminating) way: count them. There are $2^{2^{\aleph_0}}$-many functions from $\mathbb{R}$ to $\mathbb{R}$, but only $2^{\aleph_0}$-many of those are continuous (exercise - as well as the worst proof imaginable that there exist discontinuous functions). And the number of sequences of continuous functions is no bigger: $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$. Note that this proves a stronger result: the Baire hierarchy is the hierarchy of functions you get by starting with the continuous functions and iteratively taking pointwise limits. Baire class 1 is continuous, and for $\alpha>1$, Baire class $\alpha$ is the set of functions which are the limit of a sequence of functions each individually in some $<\alpha$-level of the Baire hierarchy. The Baire hierarchy goes on for $\omega_1$-many levels, and then you stop getting any new functions. The counting argument shows that there are functions which are not Baire class $\alpha$, for any fixed countable $\alpha$. And, if the continuum hypothesis fails - that is, if $2^{\aleph_0}>\aleph_1$ - then this argument shows there are functions which aren't in any level of the Baire hierarchy! (By the way, there's a similar hierarchy, the Borel hierarchy, and everything I've written about the Baire hierarchy holds of the Borel hierarchy too.) We can actually show that there are some functions not in the Baire hierarchy, without any assumptions on cardinal arithmetic. But this is a bit more complicated. It goes as follows: Fix a bijection $f$ from $\omega_1\times\mathbb{R}$ to $\mathbb{R}$. Basically, to each countable ordinal $\alpha$, $f$ associates continuum-many reals. 
Separately, for each $\alpha\in\omega_1$, fix a bijection $g_\alpha$ between $\mathbb{R}$ and the set of functions of Baire class $\alpha$. (Such a bijection exists, by the argument above; this uses transfinite induction.) Now we combine these! Let $\mathbb{B}$ be the set of all functions in the Baire hierarchy. We can get a function $h:\mathbb{R}\rightarrow \mathbb{B}$ as follows: given $r$, let $f^{-1}(r)=(\alpha, s)$ - we let $h(r)$ be $g_\alpha(s)$. At this point, check that $h$ is in fact a surjection from $\mathbb{R}$ to $\mathbb{B}$. And now we diagonalize! Let $F(r)=h(r)(r)+1$. Then $F\not\in\mathbb{B}$. Done! Note that this can be made explicit: there are lots of easily-describable (if a bit messy) bijections between $\mathbb{R}$ and the set of continuous functions. And there are also lots of reasonably natural injections of $\mathbb{R}^\omega$ into $\mathbb{R}$. Combining these, we get an explicit bijection $\beta$ from $\mathbb{R}$ to the set $\mathcal{S}$ of sequences of continuous functions. Now, we can use this to define a function $F$ which is not a pointwise limit of continuous functions as follows. If $r$ is a real, we let $F(r)$ be $1+\lim_{n\rightarrow\infty} \beta(r)(n)(r)$, if that limit exists, and $0$, if that limit doesn't exist. This $F$ has a perfectly explicit, if annoyingly messy, definition. And it diagonalizes against the sequences of continuous functions, so it's not Baire class 2. Similarly, we can find explicit-if-messy functions not in Baire class $\alpha$, for any fixed countable $\alpha$. Where this breaks down is in trying to get a function which isn't in the Baire hierarchy at all: it is consistent with ZF that every function is in the Baire hierarchy (this involves killing choice to a stupidly extreme degree, however - $\omega_1$ winds up being a countable union of countable sets!).
I’m going to comment only on what you wrote for (i): Assume for all $x_n \in X$ with $\| x_n \|_X = 1$ that $$x_n^\ast (x_n) \lt \frac{\| x_n^\ast \|}{2} \iff 2 x_n^\ast (x_n) \lt \sup_{x_n; \| x_n \|_X = 1 } \{| x_n^\ast(x_n)|\}$$ Claim: Then $\exists r \in \mathbb{R}: x_n^\ast(x_n) \lt r \lt \sup_{\dots}\{\dots \}$: (i) $x_n^\ast (x_n) \neq 0$: because $\| x_n \| = 1$ and $x_n^\ast$ linear (ii) if $x_n^\ast (x_n) \lt 0$ then $2 x_n^\ast (x_n) \lt x_n^\ast (x_n) \lt x_n^\ast(- x_n) \lt 2 x_n^\ast(-x_n) \lt \sup_{\dots} \{ \dots \}$ (iii) $x_n^\ast(x_n) \gt 0$ then $\forall x_n: x_n^\ast (x_n) \lt 2 x_n^\ast (x_n) \lt \sup$ $\implies $ the $\sup$ is not the l.u.b., contradiction, so the claim $x_n^\ast (x_n) \geq \frac{\| x_n^\ast \|}{2}$ is true. The first problem, and it’s a major one, is that right off the bat you’re using $n$ for two completely different things. On the one hand it’s apparently supposed to be the index of a particular, fixed member of some countable norm-dense subset of $X^*$, though you never actually said that. On the other hand it’s a dummy index picking out members of $X$ of norm $1$; this would be a bad idea even if you weren’t already using $n$ for something else, since there’s no reason to suppose that there are only countably many elements of $X$ of norm $1$. You should have begun something like this: Let $D=\{x_n^*:n\in\mathbb{N}\}$ be a countable norm-dense subset of $X^*$. Fix $x_n^*\in D$, and assume for each $x\in X$ with $\|x\|=1$ that $$x_n^*(x)<\frac{\|x_n^*\|}2.$$ Note that I stopped before your $\iff$ symbol: that’s because what follows it is not part of your assumption, but rather an inference from your assumption, so it does not belong in the same clause with assume that. Once you’ve clearly stated your assumption, then you can go on and draw conclusions: This implies that $$2x_n^*(x)<\sup_{\|y\;\|=1}\|x_n^*(y)\|$$ for each $x\in X$ with $\|x\|=1$. 
Note that I had to use a different letter for the dummy variable ($y$) in the supremum from the one used for the specific $x$ of norm $1$ in the surrounding statement: they refer to different objects. The line that begins Claim makes no sense even after the ellipsis at the end is properly filled in. (Note, by the way, that this is something that you should have done yourself, so that the reader needn’t guess; with cut-and-paste it’s completely trivial.) You still have $n$ meaning two different things, and it’s not clear whether you’re talking about a specific $x$ of norm $1$ or all of them together. I suspect that you meant this: Claim: There is an $r\in\mathbb{R}$ such that $$x_n^*(x)<r<\sup_{\|y\;\|=1}|x_n^*(y)|$$ for each $x\in X$ of norm $1$. Presumably what follows is supposed to be a proof of the claim. Say so. Proof of Claim: Fix $x\in X$ of norm $1$. Since $x_n^*$ is linear, $x_n^*(x)\ne 0$. This doesn’t appear to follow. In fact, there seems to be nothing in your argument to here to preclude the possibility that $x_n^*$ is the zero functional. If $x_n^*(x)>0$, then $$x_n^*(x)<2x_n^*(x)<\sup_{\|y\;\|=1}|x_n^*(y)|\;,$$ and if $x_n^*(x)<0$, then $$2x_n^*(x)<x_n^*(x)<x_n^*(-x)<2x_n^*(-x)<\sup_{\|y\;\|=1}|x_n^*(y)|\;.$$ This does not in fact prove the claim; you’ve merely shown that for each $x\in X$ of norm $1$, $$x_n^*(x)<\|x_n^*\|=\sup_{\|y\;\|=1}|x_n^*(y)|\;,$$ i.e., that $x_n^*$ does not attain its norm on $\{y\in X:\|y\|=1\}$. Had you made the effort to write it intelligibly, thinking about what you were actually saying, you might have noticed that much of it makes no sense and that it does not in fact do what you wanted it to do. At the very least you would have made it possible for others to spot the problems and help with them much more easily. Remember: A proof is just a particular kind of expository prose. It should consist of paragraphs of sentences. 
Yes, it will often contain special symbols, but DON’T use symbols just for the sake of using them. The object is to convince the reader that something is true, and you can’t do that if you can’t make yourself understood.
First, let's speak about perceptrons in general: their input $X_0$ is a $K$-dimensional vector. So if you want to use $(P_{bid}(t),P_{ask}(t), Q_{bid}(t),Q_{ask}(t))$, it would mean that without any effort (but later we will see that it would be better to make some effort, as usual): $$X_0(t)=(P_{bid}(t),P_{ask}(t), Q_{bid}(t),Q_{ask}(t))'\in\mathbb{R}^4$$... Two aspects of statistical learning are useful for trading. First, the ones mentioned earlier: some statistical methods are focused on working on live datasets. It means that you know you are observing only a sample of data and you want to extrapolate. You thus have to deal with in-sample and out-of-sample issues, overfitting and so on... From this viewpoint, ... Deutsche Bank's Quantitative Strategy (US) team put together the following piece on this topic (note: their research is available for clients, but I found that somebody uploaded the piece to a sketchy web site). In case the link dies, some of the academic papers they cite are: Akbras, F., E. Kocatulum, and S. Sorescu, 2008, “Mispricing following public ... The bias comes from the paper Stambaugh (1999) and has nothing to do with small sample bias. It has to do with point (1) below. The argument goes as follows: typical lagged explanatory variables for stock-return regressions are correlated with contemporaneous stock returns, and this contemporaneous correlation biases forecasting regressions. First review OLS ... A cautionary tale on all these approaches is told by Tim Loughran and Bill McDonald in the Journal of Finance, 2011 (When Is a Liability Not a Liability? Textual Analysis, Dictionaries, and 10-Ks, here). In their analysis they show that the commonly used Harvard Psychosociological Dictionary is inadequate for sentiment classification in a financial ... Without seeing your trading desk's P&L it's impossible to say whether it is predictable or not. But here are a few thoughts - There's no reason to think that it isn't predictable. 
In general, financial time series are hardest to predict when they represent the return stream of an investible asset. A trading desk's P&L isn't really investible, so ... Let me start with a simple example. Suppose you have a dividend strip that pays an unknown dividend $D_T$. The gross return (something like 1.05 and NOT 5%!) on this security is, by definition, $$R_{t\to T} = \frac{D_T}{P_t}$$ where $P_t$ is the current price of this security. If we use lowercase letters to denote logs (i.e., $\log D_T = d_T$, etc.) we can ... People seem to think that using ML is going to circumvent the process of actually learning to trade; it doesn't. ML can be used to refine trading ideas, but it doesn't generate them; you need to use your brain for that. I'm currently working on this task, to apply machine learning to stock trading. However, the concerns raised in other answers are major obstacles. So, I'm taking a different tack. My strategy is more akin to teaching a car to drive - the machine learning is not based on the underlying data, but rather on the driver's reaction to the data. So based on what ... No, I believe there is no directional predictive value derived from looking at divergences between futures and their underlying price value. The reason for divergences is of the no-arbitrage argument type. Futures could be arbitraged (and are immediately if such arbitrage opportunities surface, even though those opportunities may only fill the stomach of a single ... My favorite tool is Sornette's own Financial Crisis Observatory: http://tasmania.ethz.ch/pubfco/fco.html If you are interested, I have developed my own tool in Java and JavaCL which can be found here: https://thebubbleindex.codeplex.com/ Update: Code moved to github: https://github.com/thebubbleindex/thebubbleindex The mean could be the long-run variance, which is sig2 = fit.Constant/(1-fit.GARCH{1}-fit.ARCH{1}); I hope this explains. If not, note that when I ran this model through Matlab, I got different values. 
You can paste your m1 and m2 values and some other intermediate results so I can see why Matlab differs. EDIT: The question refers to forecasting the returns. ... Sorry, but despite being used as a popular example in machine learning, no one has ever achieved a stock market prediction. It does not work for several reasons (check random walk by Fama and quite a few others, the rational decision-making fallacy, wrong assumptions ...), but the most compelling one is that if it would work, someone would be able to become ... One possibility worth exploring is to use the support vector machine learning tool on the Metatrader 5 platform. Firstly, if you're not familiar with it, Metatrader 5 is a platform developed for users to implement algorithmic trading in forex and CFD markets (I'm not sure if the platform can be extended to stocks and other markets). It is typically used for ... I have written an entire paper on this approach at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2828744 As to your specifics: 1) "Volatility" as defined by variance does not exist, which is why it is changing. The first moment is undefined so the second cannot exist. See the paper as to why. Your fitted pdf will treat the outcomes as having a ... There is a large literature on MIDAS (mixed-frequency data sampling) models, the leading scholars being Eric Ghysels and Rossen Valkanov — google their research for references. However, the motivation for these models has mostly been to forecast low-frequency stuff with high-frequency variables, updating, say, quarterly GDP predictions as weekly ... Although not directly related to financial modeling, I've found the following quotation to be very instructive: "I remember my friend Johnny von Neumann used to say, 'with four parameters I can fit an elephant and with five I can make him wiggle his trunk.'" -- E. Fermi. You may also read this: http://mahalanobis.twoday.net/stories/264091/ Hopefully these ideas open up some solution strategies. A. 
Calibration approach: In the case of a volatility model such as Axioma's above, you could perform an instantaneous volatility adjustment. Procedure: You build your usual T+H volatility model. You measure the realized volatility and implied volatility of the training set. You measure the out-of-... I can offer three suggestions: (a) Since any model, however sophisticated, will miss tail cases (such as Oct 2008), I would increase the number of high-frequency factors (e.g. weekly jobless claims - I don't know if that is a relevant example in your case - but just to give you an idea) in the model. Not only does that make the model more responsive to current ... I think this one has a clear answer (I am solely talking about equities here): the change magnitude is much more predictable than the direction. The reason being that equity volatility is much more predictable than equity risk premiums. Volatility is nothing else but change magnitude, and due to the stylized facts of volatility clustering together with mean ... There are two excellent choices for implementing prediction markets: (1) Use book orders that stand until filled, just as intrade.com does. (2) Use an automated market maker (like Robin Hanson's) that stands ready to make trades. The book orders model is very simple to implement, but can suffer from very wide Bid/Ask spreads. And, it can be tough to bet ... “Make things as simple as possible, but not simpler.” The problem you want to avoid is (near) multicollinearity. The tip-off will be that adding/removing a regressor will significantly change the coefficients on the other regressors. In practice (well, in the research that I read) I rarely see this explicitly tested. If you think that you have ... The graph you attached suggests that you were trying to find swings between major highs and lows. This can be done by simply finding local extrema in the price series. 
The concept is: find local extrema (minima in Low prices, maxima in High prices); find local extrema in the results, if swings are too short; repeat #2 until satisfied with the results. This ... Another way of saying "time-varying risk premium" is saying that the risk premium is predictable. However, the fact that the risk premium is predictable does not mean that you can make money out of this. The best two references to understand this are: Cochrane (2008) - The dog that did not bark; Goyal and Welch (2007). The first tells you what ... The point of confusion may be in thinking that a predictable price process is synonymous with a mean-reverting process, while using the definitions in these papers it's actually the opposite! In the context of these papers, a random walk would be 100% predictable: the unpredictable component of a random walk (i.e. the period-specific shock which has finite ... There are a few exclusions that I have commonly seen: Excluding thinly traded stocks. The price that shows up in your data feed may not relate to actual tradable prices. Filtering for ADR/Pink locals. You can find stocks listed in multiple places in ways that would lead you to think that they are great for pairs trades when actually they are the same ... There's no rule to answer this question for you. You need some combination of: Judgment: Are the parameters you're including reasonable? Sniff test: Is there theory to justify your parameter choices, or are you just hunting for chance associations? Hold-outs: You correctly mention that the problem is "in sample performance." The solution is therefore to ... The two components you refer to in your questions are: market direction (the sign of the return) and change magnitude (the absolute value of the return). First, I'm sure you realize that neither of these is predictable at 100%, otherwise there would be no way to make profit (you make profit by seeing things others didn't). To answer the question, I would say ... 
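The swing-detection recipe described earlier (find local extrema in highs/lows, then extrema of the extrema) can be sketched minimally. The function name and the strict-neighbor comparison are my assumptions, not from the original answer.

```c
#include <assert.h>
#include <stddef.h>

/* Mark strict local maxima of a price series: out[i] = 1 when p[i] is
   greater than both neighbors. Endpoints are never marked. Returns the
   number of maxima found. */
static size_t local_maxima(const double *p, size_t n, int *out)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        out[i] = 0;
    for (size_t i = 1; i + 1 < n; i++) {
        if (p[i] > p[i - 1] && p[i] > p[i + 1]) {
            out[i] = 1;
            count++;
        }
    }
    return count;
}
```

Run this once on High prices for swing highs (and with the comparison reversed on Low prices for swing lows), then re-run on the surviving extrema to merge swings that are too short, repeating until satisfied.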
The renowned CXO Advisory Group has a section "What Works Best?". Here some general information is given, along with many links to their research articles which e.g. summarize lots of current academic research (although most of the linked articles are behind a paywall, the links to the original papers are normally provided). The article closes with "In summary, ...
First, I do not see why this is a Bayesian construction. Indeed, if you do mean that the density is known, it cannot possibly be a Bayesian construction. Consider the case where $f(x)=280x^3(1-x)^4$, $x\in[0,1]$. There is no unknown parameter here. This is both your prior and your posterior, as no amount of data will alter anything. If this were a Bayesian construction then there would have to be some uncertain parameter, but there is no uncertain parameter. This is a beta distribution with $\alpha=4$ and $\beta=5$. The prior is forced to be $\Pr(\alpha=4,\beta=5)=1$. The question is "what is x?" The only uncertainty is in the valuation. This is a Frequentist problem. EDIT In the case where it is drawn from an unknown distribution, you are facing two options, even if the distribution is known with certainty to the actors. The first is to use Bayesian non-parametric methods, the second is to use Frequentist non-parametric methods. Depending on what I wanted to accomplish, I would choose one or the other. The Bayesian method will be coherent and so you could place gambles on it. It will also likely be very difficult to implement. There cannot be a Bayesian solution that is free of its prior. Such a thing does not exist. It might be that it is uninformative, but it must exist. The alternative is to use Fisher's failed method of fiducial statistics. The Frequentist method will minimize the maximum loss you could experience from making a choice based on the data by using an incorrect inference. It will also allow you to control for power. It will usually be far simpler to implement. Bayesian non-parametric methods are potentially infinite dimensional constructions and you would need to do a bit of reading on them. A simple approximation though would be to use the beta distribution because of its incredible flexibility, although you could use any high degree polynomial that stays above the axis since your bounding guarantees that a constant of integration exists. 
You would then perform model selection. As long as you believe it is unimodal, the bounding on both sides guarantees that a mean exists. Even though your distribution is unknown, it is guaranteed to have moments. The t-test is probably inappropriate because the bounding is so tight, but you could use the empirical quantiles to test significance. If you felt you needed the higher moments, the method of moments is always available. Finally, in either case, you have kernel methods available to you. You cannot avoid a prior using Bayesian methods, but the greatest advantage of Frequentist statistics is to be able to solve problems when you cannot form a prior.
It's 0x11B. Basically, you shift left by 1 bit, and if the upper bit is 1 (and only in that case), you XOR the value with 0x11B. In C terms, it will look like this:

static inline unsigned mul2(unsigned x) {
    x <<= 1;
    if (x & 0x100) {
        x ^= 0x11B;
    }
    return x;
}

Now this can be done without a conditional (which, on some architectures, will result in a conditional jump) with some trickery like this:

static inline unsigned mul2(unsigned x) {
    x <<= 1;
    x ^= (-(x >> 8)) & 0x11B;
    return x;
}

This relies on the fact that -1, cast into an unsigned type, yields an all-ones pattern. It really pays off, in these matters, to take the time to understand what is going on at an algebraic level. What really happens here is that we work in the finite field $GF(256)$. A value is a polynomial of degree less than 8 with coefficients in $GF(2)$. For instance, 0xB2 (in binary 10110010) really represents: $$ v = X^7 + X^5 + X^4 + X $$ The coefficients are always 0 or 1. Addition is done for each coefficient independently, and in $GF(2)$, so $1 + 1 = 0$. In practice, this means that addition of two elements in $GF(256)$ really is bitwise XOR. For multiplication, this is again done with polynomials, and a modular reduction, using a specific degree-8 polynomial, which happens (in the case of AES) to be: $$ P = X^8 + X^4 + X^3 + X + 1 $$ which corresponds to 0x11B (100011011 in binary). This is defined in section 4.2 of FIPS-197. Thus, multiplication by 2 (0x02, i.e. 00000010 in binary) really is multiplication by the polynomial $X$, followed by reduction modulo $P$. If we take our value $v$ above (0xB2), then multiplication by $X$ "bumps up" the monomials: $$ v\times X = (X^7 + X^5 + X^4 + X)X = X^8 + X^6 + X^5 + X^2 $$ In the binary representation, this is a left-shift by 1. However, the result must be reduced modulo polynomial $P$. 
In this specific case, $v\times X$ is a degree-8 polynomial, which is one too many (since $P$ has degree 8, all polynomials modulo $P$ must have degree 7 or less). Modular reduction really is subtracting a multiple of $P$ such that the result will have degree 7 or less (note that subtraction and addition are the same thing: a bitwise XOR). In this case, it suffices to subtract $P$ once: $$ \begin{eqnarray*}v\times X \pmod P &=& (X^8 + X^6 + X^5 + X^2) + (X^8 + X^4 + X^3 + X + 1) \\ &=& X^6 + X^5 + X^4 + X^3 + X^2 + X + 1\end{eqnarray*} $$ so the result will be 01111111 in binary, 0x7F in hexadecimal. You can check that, indeed, left-shifting 0xB2 by 1 yields 0x164, and XORing that with 0x11B results in 0x07F. Multiplication by 3 in $GF(256)$ is polynomial multiplication by $X + 1$, followed by modular reduction. To implement that easily, reuse the multiplication by 2: static inline unsigned mul3(unsigned x) { return mul2(x) ^ x; } because for a polynomial $Q$, you have $(X + 1)Q = XQ + Q$, so really it is "multiply by $X$ (i.e. "2") then add (XOR) the operand one more time". Similarly, multiplication by 9 will be: static inline unsigned mul9(unsigned x) { return x ^ mul2(mul2(mul2(x))); } because "9" is really $X^3 + 1$ so that is "multiply by $X$ three times, and add once more the operand". This whole polynomial business, and in particular the "addition is XOR", takes some effort wrapping your mind around it; but once you get it, the rest is "easy".
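Putting the three helpers together with the worked example from the text ($0x02 \cdot 0xB2 = 0x7F$) gives a self-contained sketch that can be checked directly:

```c
#include <assert.h>

/* Multiply by X (i.e. 0x02) in GF(256), reducing with the AES
   polynomial 0x11B; branch-free variant from the text. */
static inline unsigned mul2(unsigned x)
{
    x <<= 1;
    x ^= (-(x >> 8)) & 0x11B;  /* subtract P iff degree 8 appeared */
    return x;
}

/* 3 = X + 1 and 9 = X^3 + 1, built from mul2 exactly as described. */
static inline unsigned mul3(unsigned x) { return mul2(x) ^ x; }
static inline unsigned mul9(unsigned x) { return x ^ mul2(mul2(mul2(x))); }
```

For example, mul2(0xB2) reproduces the 0x7F computed by hand above, and mul3(0xB2) is simply that result XORed with 0xB2 again.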
I am building a Neural Network for a binary classification problem where the Bayes error (lowest possible error rate) is probably close to 50%. What makes the task easier is that I don't need to make a prediction for each observation of the test sample. I only want to make a prediction for the observations where the model has fairly high confidence. However, a high rate at which predictions are made is better than a low one. So far, I have used a standard neural network (feed-forward, cross-entropy loss, L2 regularization and sigmoid activation on the final node). In the testing sample, I only take into account the observations for which the final node's value $(\hat{Y}_i)$ is outside of an interval of low confidence: $$\text{predicted class}_i = \begin{cases} 1 &\text{ if } \hat{Y}_i > 0.5 + a \\ 0 &\text{ if } \hat{Y}_i < 0.5 - a \\ \text{NA} &\text{else} \end{cases} \\ \text{where } a\in [0, 0.5] \text{ indicates the level of confidence required}$$ To tune the hyperparameters (including $a$), I have designed a metric that depends positively on: Test-sample accuracy (only counting predictions different from NA) Percentage of predictions that are different from NA. I am not yet satisfied with the performance achieved with this approach, and I am sure that there are smarter ways to approach this, for example a custom loss function. Advice, links to articles, or even related search keywords are welcome.
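The thresholding scheme above can be sketched in a few lines. This is a minimal illustration (mine, not from the question), with the band half-width `a` and the coverage/accuracy metric computed on made-up probabilities:

```python
# Reject-option classification: predict 1 or 0 only when the sigmoid output
# lies outside the low-confidence band [0.5 - a, 0.5 + a], else abstain.

def selective_predict(p_hat, a=0.3):
    """Map predicted probabilities to 1 / 0 / None (abstain = NA)."""
    out = []
    for p in p_hat:
        if p > 0.5 + a:
            out.append(1)
        elif p < 0.5 - a:
            out.append(0)
        else:
            out.append(None)
    return out

def coverage_and_accuracy(preds, labels):
    """Coverage = fraction of non-abstentions; accuracy over those only."""
    answered = [(p, y) for p, y in zip(preds, labels) if p is not None]
    coverage = len(answered) / len(preds)
    accuracy = (sum(p == y for p, y in answered) / len(answered)
                if answered else float("nan"))
    return coverage, accuracy

probs  = [0.95, 0.10, 0.55, 0.48, 0.85]   # hypothetical network outputs
labels = [1,    0,    1,    0,    0   ]
preds = selective_predict(probs, a=0.3)
cov, acc = coverage_and_accuracy(preds, labels)
assert preds == [1, 0, None, None, 1]
assert cov == 0.6
```

A tuning metric of the kind described (rewarding both accuracy on answered cases and coverage) can then be any increasing combination of `cov` and `acc`, e.g. their product.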
Invertible and Singular Elements in an Algebra Definition: Let $\mathfrak{A}$ be an algebra. A point $e \in \mathfrak{A}$ is said to be a Unit Element or a (Multiplicative) Identity Element of $\mathfrak{A}$ if $e \neq 0$ and if for every $a \in \mathfrak{A}$ we have that $ea = a = ae$. An Algebra with Unit (or Unital Algebra) is an algebra $\mathfrak{A}$ that has a unit. The following proposition tells us that if $\mathfrak{A}$ is an algebra with unit then the unit element is unique. Proposition 1 (Uniqueness of Units): Let $\mathfrak{A}$ be an algebra. If $e, e' \in \mathfrak{A}$ are both units then $e = e'$. Proof: Suppose that $e, e' \in \mathfrak{A}$ are both units. Then: \begin{align} \quad e = ee' = e' \end{align} where the first equality comes from the fact that $e'$ is a unit, and the second equality comes from the fact that $e$ is a unit. $\blacksquare$ Since units in an algebra are unique, it is conventional to use the symbol $1$ to denote the unit in an algebra with unit. Definition: Let $\mathfrak{A}$ be an algebra with unit $1$. a) A point $a \in \mathfrak{A}$ is said to be a Left (Multiplicative) Inverse of $b \in \mathfrak{A}$ if $ab = 1$. b) A point $b \in \mathfrak{A}$ is said to be a Right (Multiplicative) Inverse of $a \in \mathfrak{A}$ if $ab = 1$. c) A point $c \in \mathfrak{A}$ is said to be a (Multiplicative) Inverse of $a \in \mathfrak{A}$ if it is both a left and right inverse of $a$. Observe that if the operation of multiplication on the algebra $\mathfrak{A}$ is commutative then the existence of a left multiplicative inverse implies the existence of a right multiplicative inverse (and vice versa). In general though, multiplication on $\mathfrak{A}$ is NOT assumed to be commutative. Definition: Let $\mathfrak{A}$ be an algebra with unit.
A point $a \in \mathfrak{A}$ is said to be Invertible in $\mathfrak{A}$ if $a$ has an inverse in $\mathfrak{A}$, and $\mathrm{Inv}(\mathfrak{A})$ is the set of all invertible elements in $\mathfrak{A}$. A point $a \in \mathfrak{A}$ is said to be Singular in $\mathfrak{A}$ if it has no inverse in $\mathfrak{A}$, and $\mathrm{Sing}(\mathfrak{A})$ is the set of all singular elements in $\mathfrak{A}$. The following proposition tells us that if $a \in \mathfrak{A}$ is invertible then its inverse is unique. Proposition 2 (Uniqueness of Inverses): Let $\mathfrak{A}$ be an algebra with unit and let $a \in \mathfrak{A}$. If $a$ is invertible and $x, y$ are both inverses of $a$ then $x = y$. Proof: Suppose that $x$ and $y$ are both inverses of $a$. Then in particular, $1 = ay$ and $xa = 1$. So: \begin{align} \quad x = x1 = x(ay) = (xa)y = 1y = y \end{align} Therefore $x = y$. $\blacksquare$
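As a standard illustration (not from the text) of why one-sided inverses need not be two-sided when multiplication is noncommutative, consider the algebra of bounded operators on the sequence space $\ell^2$, with the right-shift operator $S$ and the left-shift operator $T$:

```latex
\[
  S(x_1, x_2, x_3, \dots) = (0, x_1, x_2, \dots), \qquad
  T(x_1, x_2, x_3, \dots) = (x_2, x_3, x_4, \dots)
\]
\[
  TS = 1 \quad \text{while} \quad ST \neq 1 .
\]
```

Here $T$ is a left inverse of $S$ (shifting right and then left recovers the sequence), but $S$ has no right inverse since $S$ is not surjective; so $S$ is singular even though it has a left inverse.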
Topological Subspaces Recall from the Initial Topologies page that if $X$ is a set, $\{ Y_i : i \in I \}$ is a collection of topological spaces, and $\{ f_i : X \to Y_i : i \in I \}$ is a collection of maps, then the initial topology induced by $\{ f_i : i \in I \}$ on $X$ is the coarsest topology $\tau$ which makes $f_i : X \to Y_i$ continuous for all $i \in I$. We will now look at a very important type of topology known as a subspace topology. Definition: Let $(X, \tau)$ be a topological space and let $A \subseteq X$. The Topological Subspace or simply Subspace topology on $A$ is the topology $\tau_A = \{ A \cap U : U \in \tau \}$. From the definition above it is not immediately clear that $\tau_A$ indeed forms a topological space $(A, \tau_A)$. The following theorem shows that the subspace topology $\tau_A$ is in fact a topology on $A$. Theorem 1: Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then the collection $\tau_A = \{ A \cap U : U \in \tau \}$ is a topology on $A$. Proof: Let $A$ be a subset of the topological space $(X, \tau)$ and consider the inclusion function $i : A \to X$ defined for all $a \in A$ by $i(a) = a$. That is, for all $a \in A$ we have that $i(a) = a \in X$. The initial topology on $A$ induced by $i$ is the coarsest topology on $A$ that makes $i : A \to X$ continuous. This topology has subbasis $\{ i^{-1}(U) : U \in \tau \}$. Since each $U$ is an open set in $X$ and $A \subseteq X$, we have that $i^{-1}(U) = A \cap U$ for all $U \in \tau$. So $\tau_A = \{ A \cap U : U \in \tau \}$, i.e., the subspace topology $\tau_A$ is indeed a topology since it equals the initial topology on $A$ induced by the inclusion map $i$. $\blacksquare$ Proposition 2 (Transitivity of Subspaces): Let $(X, \tau)$ be a topological space. If $Y$ is a topological subspace of $X$ and $Z$ is a topological subspace of $Y$ then $Z$ is a topological subspace of $X$.
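For a finite space the theorem can also be checked by brute force. The sketch below (illustrative, not from the text) builds $\tau_A = \{ A \cap U : U \in \tau \}$ for a small example and verifies the topology axioms directly:

```python
# Brute-force check that the subspace topology on A is a topology, for a
# small finite space (X, tau) and a subset A.
from itertools import combinations

X   = frozenset({"a", "b", "c"})
tau = {frozenset(), frozenset({"a"}), frozenset({"a", "b"}), X}
A   = frozenset({"a", "c"})

tau_A = {A & U for U in tau}   # { A ∩ U : U in tau }

def is_topology(space, opens):
    if frozenset() not in opens or space not in opens:
        return False
    # finite case: closure under pairwise unions and intersections suffices
    for U, V in combinations(opens, 2):
        if U | V not in opens or U & V not in opens:
            return False
    return True

assert is_topology(A, tau_A)
assert tau_A == {frozenset(), frozenset({"a"}), A}
```

Here $A \cap \{a\} = A \cap \{a, b\} = \{a\}$, so the subspace topology collapses to just three open sets.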
LaTeX JS Convert LaTeX math notation to functions in JavaScript! Royalty: 24 credits. Language: Python 3.x. Metrics: 940 API calls, 1.00 sec average call duration. Permissions: The Algorithm Platform License is the set of terms that are stated in the Software License section of the Algorithmia Application Developer and API License Agreement. It is intended to allow users to reserve as many rights as possible without limiting Algorithmia's ability to run it as a service. Example output:
{ "func": "(n,α,β)=>{q=1;for(i=1;i<=n;i+=1){q*=(Math.asin(α)*Math.sin(β))/(Math.sqrt(α*β))};return q};", "params": [ "n", "α", "β" ] }
Install and use: install the Algorithmia CLI client by running:
curl -sSLf https://algorithmia.com/install.sh | sh
Then authenticate by running:
$ algo auth
# When prompted for api endpoint, hit enter
# When prompted for API key, enter your key: YOUR_API_KEY
Then run the algorithm:
algo run Jeffro/latexjs/0.1.1 -d '"\\prod_{i=1}^{n}\\frac{\\arcsin{\\alpha}*\\sin{\\beta}}{\\sqrt{\\alpha*\\beta}}"' --timeout 300
AliPhysics 5eaf189
#include <AliFMDCorrNoiseGain.h>
AliFMDCorrNoiseGain ()
AliFMDCorrNoiseGain (const AliFMDFloatMap &map)
Float_t Get (UShort_t d, Char_t r, UShort_t s, UShort_t t) const
void Set (UShort_t d, Char_t r, UShort_t s, UShort_t t, Float_t x) const
AliFMDFloatMap & Values ()
AliFMDFloatMap fValues
Get the noise calibration. That is, the ratio \[ \frac{\sigma_{i}}{g_{i}k} \] where \( k\) is a constant determined by the electronics of units DAC/MIP, and \( \sigma_i, g_i\) are the noise and gain of the \( i \)'th strip respectively. This correction is needed because some of the reconstructed data (that which has an AliESDFMD class version less than or equal to 3) used the wrong zero-suppression factor. The zero-suppression factor used by the on-line electronics was 4, but due to a coding error in the AliFMDRawReader a zero-suppression factor of 1 was assumed during the reconstruction. This shifts the zero of the energy loss distribution artificially towards the left (lower-valued signals). So let's assume the real zero-suppression factor is \( f\) while the zero-suppression factor \( f'\) assumed in the reconstruction was (wrongly) lower. The number of ADC counts \( c_i'\) used in the reconstruction can be calculated from the reconstructed signal \( m_i'\) by \[ c_i' = m_i' \times g_i \times k / \cos\theta_i \] where \(\theta_i\) is the incident angle of the \( i\)'th strip. This number of counts used the wrong zero-suppression factor \( f'\), so to correct to the on-line value we need to do \[ c_i = c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor \] which gives the correct number of ADC counts over the pedestal.
To convert back to the scaled energy loss signal we then need to calculate (noting that \( f,f'\) are integers) \begin{eqnarray} m_i &=& \frac{c_i \times \cos\theta_i}{g_i \times k}\\ &=& \left(c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right)\frac{\cos\theta_i}{g_i \times k}\\ &=& \left(\frac{m_i'\times g_i\times k}{\cos\theta_i} - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right) \frac{\cos\theta_i}{g_i \times k}\\ &=& m_i' + \frac{1}{g_i \times k} \left(\lfloor f\times n_i\rfloor- \lfloor f'\times n_i\rfloor\right)\cos\theta_i\\ &\approx& m_i' + \frac{\lfloor n_i\rfloor}{g_i \times k} \left(f-f'\right)\cos\theta_i \end{eqnarray}
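The exact form of this correction (the line before the final approximation) can be verified numerically. The sketch below is illustrative and not part of AliPhysics; all calibration values are made-up stand-ins:

```python
# Check that fixing the ADC counts and converting back to a signal equals the
# closed-form correction
#   m_i = m_i' + (floor(f*n_i) - floor(f'*n_i)) * cos(theta_i) / (g_i * k).
import math

g_i, k, n_i, theta_i = 2.2, 1.8, 1.3, 0.4   # hypothetical gain, constant, noise, angle
f_true, f_used = 4, 1                        # on-line vs wrongly assumed factor
m_prime = 0.75                               # reconstructed (wrong) signal

# Route 1: go back to ADC counts, fix them, convert to a signal again.
c_prime = m_prime * g_i * k / math.cos(theta_i)
c_fixed = c_prime - math.floor(f_used * n_i) + math.floor(f_true * n_i)
m_route1 = c_fixed * math.cos(theta_i) / (g_i * k)

# Route 2: the closed-form correction from the derivation.
m_route2 = m_prime + (math.floor(f_true * n_i) - math.floor(f_used * n_i)) \
                     * math.cos(theta_i) / (g_i * k)

assert math.isclose(m_route1, m_route2)
```

Both routes agree exactly (up to floating-point rounding), since route 1 is just the algebra of the derivation carried out step by step.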
Bounded Linear Operators from X to X Recall that if $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ are normed linear spaces then a linear operator $T : X \to Y$ is said to be bounded if there exists an $M > 0$ such that for every $x \in X$: \begin{align} \quad \| T(x) \|_Y \leq M \| x \|_X \end{align} The "smallest" such $M$ is denoted $\| T \|$ and is defined by: \begin{align} \quad \| T \| = \inf \{ M > 0 : \| T(x) \|_Y \leq M \| x \|_X \: \text{for all} \: x \in X \} \end{align} We will now briefly discuss bounded linear operators from $X$ to $X$. Proposition 1: Let $(X, \| \cdot \|_X)$ be a normed linear space. If $S, T : X \to X$ are bounded linear operators then $ST = S \circ T : X \to X$ is a bounded linear operator. Moreover, $\| ST \| \leq \| S \| \| T \|$. Proof: Since $S$ and $T$ are bounded linear operators there exist $M_1, M_2 > 0$ such that $\| S(x) \|_X \leq M_1 \| x \|_X$ for all $x \in X$ and $\| T(x) \|_X \leq M_2 \| x \|_X$ for all $x \in X$. Hence: \begin{align} \quad \| (ST)(x) \|_X = \| S(T(x)) \|_X \leq M_1 \| T(x) \|_X \leq M_1 M_2 \| x \|_X \end{align} So $ST$ is a bounded linear operator, and by taking $M_1 = \| S \|$ and $M_2 = \| T \|$ we have that: \begin{align} \quad \| ST \| \leq \| S \| \| T \| \end{align} Proposition 2: Let $(X, \| \cdot \|_X)$ be a normed linear space. If $X$ is a Banach space and if $T : X \to X$ is a bounded linear operator and $\| T \| < 1$ then $I - T$ is invertible and $(I - T)^{-1} = \sum_{n=0}^{\infty} T^n$. Proof: Since $X$ is a Banach space, so is $\mathcal B(X, X)$. From the Absolute Summability Criterion for Completeness we have that every absolutely summable series in $\mathcal B(X, X)$ converges in $\mathcal B(X, X)$. Consider the series $\sum_{n=0}^{\infty} T^n$. We have that: \begin{align} \quad \sum_{n=0}^{\infty} \| T^n \| \leq \sum_{n=0}^{\infty} \| T \|^n \end{align} And the righthand numerical (geometric) series converges since $\| T \| < 1$. Therefore $\sum_{n=0}^{\infty} T^n$ converges to some $S \in \mathcal B(X, X)$. We have that: \begin{align} \quad (I - T)S = \lim_{N \to \infty} (I - T) \sum_{n=0}^{N} T^n = \lim_{N \to \infty} \left( I - T^{N+1} \right) = I \end{align} and similarly $S(I - T) = I$, since $\| T^{N+1} \| \leq \| T \|^{N+1} \to 0$. Therefore $(I - T)$ is invertible and $(I - T)^{-1} = \sum_{n=0}^{\infty} T^n$. $\blacksquare$ Corollary 3: Let $(X, \| \cdot \|_X)$ be a normed linear space. If $X$ is a Banach space then the set of all invertible bounded linear operators in $\mathcal B(X, X)$ is open in $\mathcal B(X, X)$. Proof: Let $O$ be the set of all invertible bounded linear operators in $\mathcal B(X, X)$. Let $T \in O$.
We will show that there exists an $\epsilon > 0$ such that for all bounded linear operators $S$ with $\| T - S \| < \epsilon$ we have $S \in O$, which shows that the open ball centered at $T$ with radius $\epsilon$ is fully contained in $O$. Let $\epsilon = \frac{1}{\| T^{-1} \|}$. Note that $\| T^{-1} \| \neq 0$ since $T$ is invertible and so $T^{-1} \neq 0$. Let $S \in \mathcal B(X, X)$ be such that $\| S - T \| < \frac{1}{\| T^{-1} \|}$. Then: \begin{align} \quad \| T^{-1}(S - T) \| \leq \| T^{-1} \| \| S - T \| < \| T^{-1} \| \cdot \frac{1}{\| T^{-1} \|} = 1 \end{align} So by Proposition 2 (applied to the operator $-T^{-1}(S - T)$, which has the same norm) we have that $I + T^{-1} (S - T)$ is invertible. But note that: \begin{align} \quad T^{-1}S = T^{-1}(T + (S - T)) = I + T^{-1}(S - T) \end{align} Since $T$ is invertible and $T^{-1}S$ is invertible, so is the composition $S = T(T^{-1} S)$. So $S$ is invertible, and hence the open ball centered at $T$ with radius $\epsilon = \frac{1}{\| T^{-1} \|}$ is fully contained in $O$. So $O$ is open in $\mathcal B(X, X)$. $\blacksquare$
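Proposition 2 (the Neumann series) is easy to see numerically in finite dimensions. The sketch below (mine, not from the text) uses a small $2 \times 2$ matrix with norm less than 1 and plain Python lists, so no third-party libraries are needed:

```python
# For a 2x2 matrix T with ||T|| < 1, partial sums of sum_n T^n invert I - T.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
T = [[0.2, 0.1], [0.0, 0.3]]          # small entries, so ||T|| < 1

# S_N = sum_{n=0}^{N} T^n, built iteratively
S, power = I, I
for _ in range(60):
    power = matmul(power, T)
    S = matadd(S, power)

I_minus_T = [[I[i][j] - T[i][j] for j in range(2)] for i in range(2)]
product = matmul(I_minus_T, S)        # equals I - T^{61}, which is nearly I

assert all(abs(product[i][j] - I[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The telescoping identity $(I - T)\sum_{n=0}^{N} T^n = I - T^{N+1}$ from the proof is exactly why the product above lands within rounding error of the identity.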
Determinants for 2 x 2 Matrices We will now begin to look at an important value associated to every square matrix known as the determinant of that matrix. We will start by looking at determinants for $2 \times 2$ matrices, which conveniently have an easy formula for computational purposes. We will subsequently look at computing determinants for larger square matrices. Definition: Given a $2 \times 2$ matrix $A$ in the form $A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}$, the Determinant of the $2 \times 2$ Matrix $A$ is denoted $\det (A) = ad - bc$. Note that this definition applies only to determinants of matrices that have size $2 \times 2$. For example, consider the following $2 \times 2$ matrix: \begin{align} \quad A = \begin{bmatrix} 5 & 2\\ 1 & 3 \end{bmatrix} \end{align} We note that $a = 5$, $b = 2$, $c = 1$, and $d = 3$. Therefore using the formula in the definition, we get that $\det (A) = ad - bc = (5)(3) - (2)(1) = 13$. We will now look at an important theorem where we can use the determinant of a $2 \times 2$ matrix in order to quickly test if the matrix has an inverse and calculate it. Theorem 1: A $2 \times 2$ matrix $A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}$ is invertible if $\det(A) = ad - bc \neq 0$. The inverse of $A$ can be obtained with the following formula $A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b\\ -c & a \end{bmatrix} = \begin{bmatrix} \frac{d}{ad - bc} & - \frac{b}{ad - bc}\\ - \frac{c}{ad - bc} & \frac{a}{ad - bc} \end{bmatrix}$. If $ad - bc = 0$, then $A$ is not invertible. Proof of Theorem 1: Assume that $A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}$ and $A^{-1} = \begin{bmatrix} \frac{d}{ad - bc} & - \frac{b}{ad - bc}\\ - \frac{c}{ad - bc} & \frac{a}{ad - bc} \end{bmatrix}$. We will proceed to show that $AA^{-1} = I_2$. We first calculate the values of all entries in the product $AA^{-1}$, that is $(AA^{-1})_{11}$, $(AA^{-1})_{12}$, $(AA^{-1})_{21}$, and $(AA^{-1})_{22}$: \begin{align} \quad (AA^{-1})_{11} = \frac{ad - bc}{ad - bc} = 1, \quad (AA^{-1})_{12} = \frac{-ab + ba}{ad - bc} = 0, \quad (AA^{-1})_{21} = \frac{cd - dc}{ad - bc} = 0, \quad (AA^{-1})_{22} = \frac{-cb + da}{ad - bc} = 1 \end{align} When $A$ and $A^{-1}$ are multiplied through we obtain $I_2 = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$.
We also note that if a matrix $A$ is invertible, its inverse is unique, and it is the one we have found. Finally, note that if $ad - bc = 0$, then $\frac{1}{ad-bc} = \frac{1}{0}$, which is undefined. $\blacksquare$ Example 1 Given the following matrix, evaluate the inverse by first calculating its determinant: \begin{align} \quad A = \begin{bmatrix} 4 & -2\\ 3 & 6 \end{bmatrix} \end{align} We first calculate that $\det (A) = ad - bc = (4)(6) - (-2)(3) = 30$. By the formula in Theorem 1 then: \begin{align} \quad A^{-1} = \frac{1}{30} \begin{bmatrix} 6 & 2\\ -3 & 4 \end{bmatrix} = \begin{bmatrix} \frac{1}{5} & \frac{1}{15}\\ -\frac{1}{10} & \frac{2}{15} \end{bmatrix} \end{align} You can verify that this inverse is correct by checking that $AA^{-1} = I_2$.
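The verification suggested at the end of Example 1 takes only a few lines. This quick numeric check (illustrative) uses exact rational arithmetic so the product comes out exactly as the identity:

```python
# Check the 2x2 inverse formula of Theorem 1 on the matrix of Example 1.
from fractions import Fraction

a, b, c, d = 4, -2, 3, 6
det = a * d - b * c
assert det == 30

# A^{-1} = (1/det) * [[d, -b], [-c, a]]
inv = [[Fraction(d, det), Fraction(-b, det)],
       [Fraction(-c, det), Fraction(a, det)]]

A = [[a, b], [c, d]]
product = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert product == [[1, 0], [0, 1]]   # A * A^{-1} = I_2 exactly
```

Using `Fraction` instead of floats means no rounding error can creep into the check.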
Setup: Let $p$ be a large prime number and $F = \{0, \dots, p-1\}$ be the field of order $p$. Let $I$ denote the discrete interval $I = \{1, \dots, M\}$ for some $M < p$. Regard both $I$ and $F$ as subsets of the real line, and let $f : [0,p) \to \mathbb{R}$ be a smooth indicator function of $I$ with support contained in $[0, M+1]$. Extend $f$ to be periodic on the line with period $p$ and (abusing notation) call this function $f$ as well. Now sample $f$ at the integers to obtain a discrete function (abusing notation still) called $f$ with period $p$. This last $f$ has a finite Fourier series $$f(x) = \sum_{\xi=0}^{p-1} \hat f(\xi) \exp(2 \pi i \xi x / p)$$ with Fourier coefficients $$\hat f(\xi) = \frac1p \sum_{z=0}^{p-1} f(z) \exp(- 2 \pi i \xi z / p).$$ and since $f$ is smooth its Fourier coefficients decay quickly, $\sum_{\xi=0}^{p-1} |\hat f(\xi)| = O(1).$ Problem: One should be able to show (1) the coefficients $\hat f(\xi)$ can be dropped for $\xi > p/M$, and (2) for $\xi = 1, \dots, p/M$ the bound $|\hat f(\xi)| \leq M/p$ holds. I think this can be accomplished with an appropriate form of the Poisson summation formula.
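The size bound in (2) is easy to sanity-check numerically; in fact, for the sharp (unsmoothed) indicator the triangle inequality already gives $|\hat f(\xi)| \leq M/p$ for every $\xi$, and smoothing is what buys the additional decay in $\xi$. This is an illustrative check (not a proof) with arbitrary small parameters:

```python
# DFT of the indicator of I = {1, ..., M} modulo p; verify |f_hat(xi)| <= M/p.
import cmath

p, M = 101, 10
f = [1.0 if 1 <= z <= M else 0.0 for z in range(p)]

def f_hat(xi):
    return sum(f[z] * cmath.exp(-2j * cmath.pi * xi * z / p)
               for z in range(p)) / p

coeffs = [abs(f_hat(xi)) for xi in range(p)]
assert max(coeffs) <= M / p + 1e-12    # trivial bound |f_hat(xi)| <= M/p
assert abs(coeffs[0] - M / p) < 1e-12  # the mean value is exactly M/p
```

For the smoothed version one would additionally observe `coeffs[xi]` dropping off rapidly once `xi` exceeds roughly `p / M`, which is claim (1).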
The Interior Points of Sets in a Topological Space Examples 1 Recall from The Interior Points of Sets in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then a point $a \in A$ is called an interior point of $A$ if there exists an open set $U \in \tau$ such that: \begin{align} \quad a \in U \subseteq A \end{align} We also proved some important results for a topological space $(X, \tau)$ with $A \subseteq X$: $A$ is open if and only if every $a \in A$ is an interior point of $A$, i.e., $A = \mathrm{int} (A)$. If $U \in \tau$ is such that $U \subseteq A$ then $U \subseteq \mathrm{int} (A)$. $\mathrm{int} (A)$ is the largest open subset of $A$. We will now look at some examples regarding interior points of subsets of a topological space. Example 1 Consider the set $X = \{ a, b, c \}$ and the nested topology $\tau = \{ \emptyset, \{ a \}, \{a, b \}, X \}$. Let $A = \{ a, c \} \subset X$. What are the interior points of $A$? We note that all interior points of $A$ must be contained in $A$ by the definition of an interior point, so we need only check whether $a \in A$ is an interior point and whether $c \in A$ is an interior point. For $a \in A$, does there exist an open set $U \in \tau$ such that $a \in U \subseteq A$? Yes! The set $U = \{ a \} \in \tau$ and: \begin{align} \quad a \in \{ a \} \subseteq A \end{align} Therefore $a \in A$ is an interior point of $A$. For $c \in A$, does there exist an open set $U \in \tau$ such that $c \in U \subseteq A$? No! The only set in $\tau$ containing $c$ is the whole set $X = \{ a, b, c \}$, and $X \not \subseteq A$ since $b \in X$ and $b \not \in A$. Therefore $c$ is not an interior point of $A$. Example 2 Consider an arbitrary set $X$ with the discrete topology $\tau = \mathcal P (X)$. Let $S \subseteq X$. What are the interior points of $S$? Let $x \in S$. Since $S \subseteq X$, we have that $S \in \tau = \mathcal P(X)$. Let $U = S$. Then for each $x \in S$ we have that: \begin{align} \quad x \in U = S \subseteq S \end{align} Therefore every point $x \in S$ is an interior point of $S$.
Example 3 Consider an arbitrary set $X$ with the indiscrete topology $\tau = \{ \emptyset, X \}$. Let $S$ be a nontrivial subset of $X$ (so $S \neq \emptyset$ and $S \neq X$). What are the interior points of $S$? Then: \begin{align} \quad \emptyset \subset S \subset X \end{align} For any $x \in S$, we see from the nesting above that there exists no open set $U \in \tau$ such that $x \in U \subseteq S$: the only nonempty open set is $X$ itself, and $X \not \subseteq S$. Therefore no point $x \in S$ is an interior point of $S$.
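All three examples can be replayed mechanically using the fact (recalled above) that $\mathrm{int}(A)$ is the union of all open sets contained in $A$. This brute-force sketch is mine, not from the text:

```python
# int(A) = union of all open sets U with U ⊆ A, computed for each example.
from itertools import chain, combinations

def interior(opens, A):
    pts = set()
    for U in opens:
        if U <= A:
            pts |= U
    return pts

# Example 1: nested topology on X = {a, b, c}
X = frozenset("abc")
tau = [frozenset(), frozenset("a"), frozenset("ab"), X]
assert interior(tau, frozenset("ac")) == {"a"}          # only a is interior

# Example 2: discrete topology; every subset is open, so int(S) = S
discrete = [frozenset(c) for c in
            chain.from_iterable(combinations("abc", r) for r in range(4))]
assert interior(discrete, frozenset("ac")) == {"a", "c"}

# Example 3: indiscrete topology; a nontrivial S contains no nonempty open
# set, so int(S) is empty
indiscrete = [frozenset(), X]
assert interior(indiscrete, frozenset("ac")) == set()
```

The three asserts reproduce exactly the conclusions of Examples 1, 2, and 3.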
1. Homework Statement A rotating beacon is located 2 miles out in the water. Let A be the point on the shore that is closest to the beacon. As the beacon rotates at 10 rev/min, the beam of light sweeps down the shore once each time it revolves. Assume that the shore is straight. How fast is the point where the beam hits the shore moving at an instant when the beam is lighting up a point 2 miles along the shore from point A? 2. Homework Equations This is a self-study question that I took from the Ohio State University Coursera course (7.08 A Beacon Problem, if you want to see it for yourself). When I first saw it, it looked a lot like a physics question, so I tried to solve it using physics: turn rev/min into angular velocity, use that to calculate tangential velocity, then find the x-component of that tangential velocity [itex]\vec{v}_{tx}[/itex], which should be the speed at which the beam of light is moving along the shore at that instant. My (physics-derived) answer, 40π, is exactly half of the correct (calculus-derived) answer, 80π. That makes me think that somewhere along the way I must have made some mistake or misplaced a 2, but I can't tell where. Why is my physics answer different from the correct answer (calculated via related rates)? 3. The Attempt at a Solution Let's say that the beam of light hits the shore at point B, 2 miles to the right of point A. Since the beacon light has uniform circular motion, we should be able to calculate [itex]v_t[/itex] like so: [itex] \omega = \frac{10rev}{min} \cdot \frac{2 \pi rad}{rev} = \frac{20 \pi rad}{min} \\ r = \sqrt{2^2 + 2^2} = \sqrt{4 + 4} = \sqrt{8} = 2 \sqrt{2} mi \\ v_t = \omega \cdot r \\ v_t = \frac{20 \pi \, rad}{min} \cdot 2\sqrt{2}mi = 40 \pi \sqrt{2} \frac{mi}{min} \\ [/itex] I'm assuming that [itex]\vec{v_t}[/itex] is perpendicular to the beam of light, and that [itex]\vec{(v_t)}_x[/itex] runs along the x-axis.
I'm also assuming that the angle between [itex]\vec{v_t}[/itex] and [itex]\vec{(v_t)}_x[/itex] is [itex]45^\circ[/itex]: [itex]\vec{v_t}[/itex] is always perpendicular to the beam, and since both the x and y legs of the triangle are 2 miles, the triangle's acute angles must be [itex]45^\circ[/itex] each. Then... [itex]\cos{45^\circ} = \frac{(v_t)_x}{v_t} \\ (v_t)_x = v_t \cos{45^\circ} = 40 \sqrt{2} \pi \cdot \frac{\sqrt{2}}{2} = 40 \pi[/itex] But... the calculation via related rates gives this solution... [itex]\frac{d\theta}{dt} = 2\pi \cdot 10 = 20\pi \\ \tan{\theta} = \frac{x}{2} = \frac{1}{2}x \\ (\tan{\theta})' = (\frac{1}{2}x)' \\ \sec^2{\theta} \cdot \frac{d\theta}{dt} = \frac{1}{2} \cdot \frac{dx}{dt} \\ \frac{dx}{dt} = 2 \cdot \sec^2{\theta} \cdot 20\pi \\ = 2 \cdot (\frac{1}{\cos{45}})^2 \cdot 20\pi \\ = 2 \cdot (\frac{1}{\frac{\sqrt{2}}{2}})^2 \cdot 20\pi \\ = 2 \cdot \frac{1}{\frac{2}{4}} \cdot 20\pi \\ = 2 \cdot 2 \cdot 20\pi \\ = 80\pi[/itex] I totally accept the calculus solution and explanation. I just don't understand why I couldn't get the same answer through physics.
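As an independent cross-check of the related-rates answer (mine, not from the thread): with $\theta(t) = \omega t$ and the shore point at $x = 2\tan\theta$, a central finite difference at $\theta = 45^\circ$ should reproduce $dx/dt = 80\pi$ mi/min.

```python
# Numerical differentiation of x(t) = 2*tan(omega*t) at theta = 45 degrees.
import math

omega = 20 * math.pi                  # 10 rev/min converted to rad/min

def x(t):
    return 2 * math.tan(omega * t)    # beam position along the shore (miles)

t0 = (math.pi / 4) / omega            # instant when theta = 45 deg, i.e. x = 2
h = 1e-8
dx_dt = (x(t0 + h) - x(t0 - h)) / (2 * h)

assert abs(dx_dt - 80 * math.pi) < 1e-3   # matches the calculus answer
```

The finite difference lands on 80π, not 40π, confirming that the calculus result is the speed of the spot on the shore.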
Finite Topological Products of Topological Spaces If $X$ and $Y$ are topological spaces then we can define a special topology on the Cartesian product $X \times Y$ to obtain a new topological space. Definition: Let $X$ and $Y$ be topological spaces and let $X \times Y$ be the Cartesian product of these sets. The Topological Product of these two spaces is the set $X \times Y$ with the topology $\tau$ with basis $\displaystyle{\mathcal B = \{ U \times V : U \: \mathrm{is \: open \: in \:} X, V \: \mathrm{is \: open \: in \:} Y \}}$. If $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces then the topological product is the set $\displaystyle{\prod_{i=1}^{n} X_i = X_1 \times X_2 \times ... \times X_n}$ with the topology $\tau$ with basis $\displaystyle{\mathcal B = \left \{ \prod_{i=1}^{n} U_i : U_i \: \mathrm{is \: open \: in \:} X_i, \: \forall i \in \{ 1, 2, ..., n \} \right \}}$. In other words, the topological product of two topological spaces is the new topological space $(X \times Y, \tau)$ where $\tau$ is the topology generated by the basis of Cartesian products $U \times V$ with $U$ open in $X$ and $V$ open in $Y$. Alternatively, if $X$ and $Y$ are topological spaces then the topology on $X \times Y$ described above is induced as the initial topology from the projection maps $p_1 : X \times Y \to X$ and $p_2 : X \times Y \to Y$. (Recall that the initial topology induced by a collection of maps is the coarsest topology which makes each of the maps continuous.) We prove this on the subsequent Projection Mappings of Finite Topological Products page for the more general finite topological products. When it comes to two topological spaces $X$ and $Y$, we can look at two products, $X \times Y$ and $Y \times X$. Fortunately these two spaces are homeomorphic, as we prove in the following theorem. Theorem 1: Let $X$ and $Y$ be topological spaces.
Then the topological products $X \times Y$ and $Y \times X$ are homeomorphic, and an explicit homeomorphism is given by $f : X \times Y \to Y \times X$ defined by $f(x, y) = (y, x)$. Proof: To show that $f$ is a homeomorphism we must show that $f$ is a bijective, continuous, and open map. It should not be too hard to see that $f$ is bijective. Let $(x, y), (z, w) \in X \times Y$ and suppose that $f(x, y) = f(z, w)$. Then $(y, x) = (w, z)$, which implies that $y = w$ and $x = z$, so $(x, y) = (z, w)$ and $f$ is injective. Now let $(z, w) \in Y \times X$. Then $f(w, z) = (z, w)$, which shows that $f$ is surjective. Hence $f$ is bijective. We now show that $f$ is continuous; it suffices to check preimages of basis elements. Let $V \times U \subseteq Y \times X$ where $V$ is open in $Y$ and $U$ is open in $X$. Then: \begin{align} \quad f^{-1}(V \times U) = U \times V \end{align} But $U$ is open in $X$ and $V$ is open in $Y$, so $f^{-1}(V \times U)$ is open in $X \times Y$, so $f$ is continuous. We lastly show that $f$ is open by showing that $f^{-1}$ is continuous. Let $U \times V \subseteq X \times Y$ where $U$ is open in $X$ and $V$ is open in $Y$. Then: \begin{align} \quad (f^{-1})^{-1}(U \times V) = f(U \times V) = V \times U \end{align} But $V$ is open in $Y$ and $U$ is open in $X$, so $V \times U$ is open in $Y \times X$, and hence $f^{-1}$ is continuous. Therefore $f$ is a bijective, continuous, and open map, so $f$ is a homeomorphism between $X \times Y$ and $Y \times X$, so these spaces are homeomorphic. $\blacksquare$
Suppose I have a projectile, fired at angle $\theta$ with velocity $v_0$, assuming it's only acted upon by gravity. How can I find an equation to enable me to plot the motion (vertical height against time) on a graph? (I'll be doing this computationally.) You have to find the parametric equations of the trajectory. With units in feet and seconds (so that $\tfrac{1}{2}g = 16 \ \text{ft/s}^2$) we get the following equations: $$x(t)=(v_0 \cos \theta )t$$ and $$y(t)= -16 t^2+(v_0 \sin \theta )t +y_0$$ where $y_0$ is the initial height.
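A minimal sketch (mine, not from the answer) that evaluates $y(t)$ so the height-vs-time curve can be plotted; the values of `v0`, `theta`, and `y0` are arbitrary examples:

```python
# Sample the height y(t) = -16 t^2 + v0*sin(theta)*t + y0 for plotting.
import math

def height(t, v0=64.0, theta=math.radians(30), y0=0.0):
    """Vertical position in feet, t in seconds, with (1/2)g = 16 ft/s^2."""
    return -16.0 * t**2 + v0 * math.sin(theta) * t + y0

# With v0 = 64 ft/s and theta = 30 deg, v0*sin(theta) = 32 ft/s, so the
# projectile lands (y = 0) at t = 2 s and peaks at t = 1 s with y = 16 ft.
samples = [(t / 10, height(t / 10)) for t in range(21)]  # points to plot

assert abs(height(0.0)) < 1e-9
assert abs(height(2.0)) < 1e-9
assert abs(height(1.0) - 16.0) < 1e-9
```

The `samples` list is exactly the (time, height) data one would feed to any plotting library.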
Imagine we are in the following situation: we have a nonlinear system of functions $f$ with coordinates $x_i$, and unknown parameters $\delta_i$'s and $t$. Let this system be defined as, \begin{eqnarray*} f_1(x_{1},x_{2},...,x_{n})=y_1 \\ f_2(x'_{1},x'_{2},...,x'_{n})=y_2 \\ f_3(x''_{1},x''_{2},...,x''_{n})=y_3 \end{eqnarray*} Now for every data entry (measurement of the functions) we collect the known coordinates $x_i,x'_i$ and $x''_i$, and the known function values $y_1, y_2$ and $y_3$. The goal is to find values for the $\delta_i$'s and $t$'s that best fit the data. Note that the values of $t$ are specific to a measurement; thus for a dataset containing $n$ measurements one would need to find a $t$ for each measurement ($n$ in total). Let me elaborate via the following example. We have a function $f$ as below, where $A$ and $\xi$ are unknown parameters, where $A$ plays the role of the $\delta_i$'s and $\xi$ that of the parameter $t$. Furthermore, $\mu_i$ and $\sigma_i$ are the given coordinates.

f[A_, μ_, σ_, x_] := A^2 E^(-((x - μ)^2/(2 σ^2)))
A = 2;
{μ1, μ2, μ3} = RandomReal[{-5, 5}, 3];
{σ1, σ2, σ3} = RandomReal[{2, 5}, 3];
ξ = RandomReal[{-5, 5}];
y1 = f[A, μ1, σ1, ξ] + .1 RandomReal[{-1, 1}];
y2 = f[A, μ2, σ2, ξ] + .1 RandomReal[{-1, 1}];
y3 = f[A, μ3, σ3, ξ] + .1 RandomReal[{-1, 1}];
Show[
 Plot[{f[A, μ1, σ1, x], f[A, μ2, σ2, x], f[A, μ3, σ3, x]}, {x, -7.5, 7.5}, PlotRange -> All],
 ListPlot[{{{ξ, y1}}, {{ξ, y2}}, {{ξ, y3}}}]
]

A data entry would look like this,

{{1, id, μ1, σ1, μ2, σ2, μ3, σ3, y1},
 {2, id, μ1, σ1, μ2, σ2, μ3, σ3, y2},
 {3, id, μ1, σ1, μ2, σ2, μ3, σ3, y3},
 ... }

where the first element identifies the function $f_1, f_2$ or $f_3$, and the second element is used as an ID for a specific measurement, because the values of $y_i$ are found simultaneously for a measurement (they share the same $t$) and thus should be grouped, in this case by an ID for the measurement.
Now make some test data,

make[n_] := Module[{l1, l2, id, ξ, A, f, μ1, σ1, μ2, σ2, μ3, σ3, y1, y2, y3},
  f[A_, μ_, σ_, x_] := A^2 E^(-((x - μ)^2/(2 σ^2)));
  A = 2;
  l1 = {};
  l2 = {};
  Do[
   {μ1, μ2, μ3} = RandomReal[{-5, 5}, 3];
   {σ1, σ2, σ3} = RandomReal[{2, 5}, 3];
   ξ[id] = RandomReal[{-5, 5}];
   y1 = f[A, μ1, σ1, ξ[id]] + .1 RandomReal[{-1, 1}];
   y2 = f[A, μ2, σ2, ξ[id]] + .1 RandomReal[{-1, 1}];
   y3 = f[A, μ3, σ3, ξ[id]] + .1 RandomReal[{-1, 1}];
   AppendTo[l1, {
     {1, id, μ1, σ1, μ2, σ2, μ3, σ3, y1},
     {2, id, μ1, σ1, μ2, σ2, μ3, σ3, y2},
     {3, id, μ1, σ1, μ2, σ2, μ3, σ3, y3}}];
   AppendTo[l2, {id, ξ[id]}];,
   {id, n}];
  {Flatten[l1, 1], l2}]

{data, ξlist} = make[100];

The Question Given this example, one would like to find all values of ξ[id] given the measurement ID (so recreate the ξlist list), and find $A\approx2$ by means of a fit to the function f used to create the data. I have been trying to use NonlinearModelFit[] with this answer, which dealt with fitting multiple functions simultaneously. This would work if ξ[id] were constant over all measurements, but it is not. EDIT Apparently I solved my problem without knowing it: I had used the "ParametersTable" option in the nlm, but that does not seem to work, which got me confused for a while. Anyway, here is the code that worked for me,

fitmodel[set_, id_, μ1_, σ1_, μ2_, σ2_, μ3_, σ3_, A_, x_] := Which[
  set == 1, f[A, μ1, σ1, x],
  set == 2, f[A, μ2, σ2, x],
  set == 3, f[A, μ3, σ3, x]]

fitmodel2[set_, id_, μ1_, σ1_, μ2_, σ2_, μ3_, σ3_, A_] :=
  fitmodel[set, id, μ1, σ1, μ2, σ2, μ3, σ3, A, t[Round[id]]]

parm = Flatten[Append[{A}, Table[t[i], {i, 100}]]];

nlm = NonlinearModelFit[data, fitmodel2[set, id, μ1, σ1, μ2, σ2, μ3, σ3, A], parm, {set, id, μ1, σ1, μ2, σ2, μ3, σ3}];

nlm["BestFitParameters"] // TableForm

The only question that remains is how to find the parameter errors, but that should be relatively easy. I'll update when I have a way of finding them.
Now, the problem I am trying to solve is a bit more complicated than this, where $f_1, f_2$ and $f_3$ are not of the same form and have more coordinates and parameters, but the general idea still applies. I have been stuck on this for a while now, and any help would be greatly appreciated.
Path Connected Topological Spaces Review We will now review some of the recent content regarding path connected topological spaces. Recall from the Path Connected Topological Spaces page that a topological space $X$ is said to be Path Connected if for every pair of distinct points $x, y \in X$ there exists a continuous function $\alpha : [0, 1] \to X$ such that $\alpha(0) = x$ and $\alpha(1) = y$. Such functions $\alpha$ are called Paths from $x$ to $y$. We saw that $\mathbb{R}^n$ with the usual topology is path connected, and for any pair of points $\mathbf{x} = (x_1, x_2, ..., x_n), \mathbf{y} = (y_1, y_2, ..., y_n) \in \mathbb{R}^n$ we can define a path $\alpha : [0, 1] \to \mathbb{R}^n$ by: \begin{align} \quad \alpha(t) = (1 - t) \mathbf{x} + t \mathbf{y} \end{align} On the Path Connectivity of Connected Topological Spaces page we saw that every path connected topological space is connected. The converse is not true in general: there exist connected topological spaces that are not path connected. In this sense, path connectivity is a "stronger" type of connectivity. We then began to investigate the path connectivity of unions of path connected sets. On the Path Connectivity of Countable Unions of Connected Sets page we saw that if $\{ A_i \}_{i=1}^{\infty}$ is a countably infinite collection of path connected sets in a topological space $X$ and if, further, $A_i \cap A_{i+1} \neq \emptyset$ for all $i \geq 1$ (i.e., successive sets overlap), then the union $\displaystyle{\bigcup_{i=1}^{\infty} A_i}$ is also path connected. On the Path Connectivity of the Range of a Path Connected Set under a Continuous Function page we saw that if $X$ is a path connected topological space and $f : X \to Y$ is a continuous function then the range $f(X)$ is a path connected subspace of $Y$.
On the Path Connectedness of Arbitrary Topological Products page we saw that if $\{ X_i \}_{i \in I}$ is an arbitrary collection of path connected topological spaces then the topological product $\displaystyle{\prod_{i \in I} X_i}$ is also path connected. We proved this by first taking any two distinct points $(x_i)_{i \in I}$ and $(y_i)_{i \in I}$ in $\displaystyle{\prod_{i \in I} X_i}$ and then considering the projection maps defined for all $j \in I$ by $p_j((x_i)_{i \in I}) = x_j$. Since each of the spaces $X_j$ is path connected, there exist continuous maps $\alpha_j : [0, 1] \to X_j$ such that $\alpha_j(0) = p_j((x_i)_{i \in I}) = x_j$ and $\alpha_j(1) = p_j((y_i)_{i \in I}) = y_j$, and so we can define a function $\displaystyle{\alpha : [0, 1] \to \prod_{i \in I} X_i}$ for all $x \in [0, 1]$ by: \begin{align} \quad \alpha(x) = (\alpha_j(x))_{j \in I} \end{align} Since each component of $\alpha$ is continuous, $\alpha$ is continuous, which shows that $\displaystyle{\prod_{i \in I} X_i}$ is path connected. On the Path Connectedness of Open and Connected Sets in Euclidean Space page we looked at a very nice theorem regarding the path connectivity of open and connected sets in Euclidean space $\mathbb{R}^n$. We saw that if $A$ is an open and connected set in $\mathbb{R}^n$ then $A$ is path connected. On the Locally Connected and Locally Path Connected Topological Spaces page we then began to define local connectivity and local path connectivity of a topological space. We said that a topological space $X$ is Locally Connected at $x$ if for every neighbourhood $U$ of $x$ there exists a connected neighbourhood $V$ of $x$ with $x \in V \subseteq U$. Furthermore, we said that $X$ is locally connected (in general) if $X$ is locally connected at every point in $X$.
Similarly, we said that a topological space $X$ is Locally Path Connected at $x$ if for every neighbourhood $U$ of $x$ there exists a path connected neighbourhood $V$ of $x$ with $x \in V \subseteq U$. Furthermore, we said that $X$ is locally path connected (in general) if $X$ is locally path connected at every point in $X$.
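The countable-union result reviewed above rests on gluing paths together through the overlap points. Here is a minimal numerical sketch of straight-line paths and path concatenation in $\mathbb{R}^n$; the helper names are ours, not from the pages being reviewed:

```python
import numpy as np

# Straight-line paths in R^n and their concatenation, as used implicitly when
# chaining paths through the overlaps A_i ∩ A_{i+1}.
def straight_line_path(x, y):
    """The path t -> (1 - t) x + t y from x to y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return lambda t: (1 - t) * x + t * y

def concatenate(alpha, beta):
    """Given alpha from x to y and beta from y to z (alpha(1) == beta(0)),
    run alpha on [0, 1/2] and beta on [1/2, 1]; the result is continuous."""
    return lambda t: alpha(2 * t) if t <= 0.5 else beta(2 * t - 1)

alpha = straight_line_path([0, 0], [1, 2])
beta = straight_line_path([1, 2], [3, 3])
gamma = concatenate(alpha, beta)  # a path from (0, 0) to (3, 3) through (1, 2)
```

Concatenating a path from $\mathbf{x}$ to $\mathbf{y}$ with one from $\mathbf{y}$ to $\mathbf{z}$ this way is exactly how a single path across two successive overlapping sets is produced.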
Table of Contents The Adjoint of a Bounded Linear Operator Between Banach Spaces Definition: Let $X$ and $Y$ be Banach spaces and let $T : X \to Y$ be a bounded linear operator. The Adjoint of $T$ is the linear operator $T^* : Y^* \to X^*$ defined for all $f \in Y^*$ by $T^*(f) = f \circ T$. Note that for every $f \in Y^*$, the map $f \circ T : X \to \mathbb{R}$ is a bounded linear functional on $X$ since for every $x \in X$ we have that: \begin{align} \quad |(f \circ T)(x)| = |f(T(x))| \leq \| f \| \| T(x) \| \leq \| f \| \| T \| \| x \| \end{align} So indeed, $T^*$ is a map with domain $Y^*$ and codomain $X^*$. It is also easy to verify that $T^*$ is a linear operator. The following proposition tells us that if $T$ is a bounded linear operator from $X$ to $Y$ then $T^*$ is a bounded linear operator from $Y^*$ to $X^*$ and that moreover, $\| T^* \| = \| T \|$. Proposition 1: Let $X$ and $Y$ be Banach spaces and let $T : X \to Y$ be a bounded linear operator. Then $T^* : Y^* \to X^*$ is a bounded linear operator and $\| T^* \| = \| T \|$. Proof: For each $f \in Y^*$ we have that: \begin{align} \quad \| T^*(f) \| = \| f \circ T \| = \sup_{\| x \| \leq 1} |f(T(x))| \overset{(*)}{\leq} \sup_{\| x \| \leq 1} \| f \| \| T \| \| x \| = \| f \| \| T \| \end{align} Where the inequality at $(*)$ comes from the fact that $f$ is a bounded linear functional on $Y$ and $T$ is a bounded linear operator from $X$ to $Y$. Thus $\| T^* \| \leq \| T \|$. For the reverse inequality, let $x_0 \in X$ with $\| x_0 \| = 1$. By one of the corollaries on the Corollaries to the Hahn-Banach Theorem page we have that for the point $T(x_0) \in Y$ there exists an $f_0 \in Y^*$ with $\| f_0 \| = 1$ such that $f_0(T(x_0)) = \| T(x_0) \|$. Therefore: \begin{align} \quad \| T(x_0) \| = f_0(T(x_0)) = |(T^*(f_0))(x_0)| \leq \| T^*(f_0) \| \| x_0 \| = \| T^*(f_0) \| \leq \| T^* \| \| f_0 \| = \| T^* \| \end{align} So for every $x_0 \in X$ with $\| x_0 \| = 1$ we have that $\| T(x_0) \| \leq \| T^* \|$. So given any $x \in X$ with $x \neq 0$, consider the point $\frac{x}{\| x \|}$. It has norm $1$ and so: \begin{align} \quad \left \| T \left ( \frac{x}{\| x \|} \right ) \right \| \leq \| T^* \| \end{align} Hence for all $x \in X$ with $x \neq 0$: \begin{align} \quad \| T(x) \| \leq \| T^* \| \| x \| \end{align} And of course the above inequality also holds if $x = 0$. Thus $\| T \| \leq \| T^* \|$. Hence $\| T \| = \| T^* \|$. $\blacksquare$
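As a concrete sanity check (not part of the proof above): in finite dimensions with Euclidean norms, a bounded operator $T$ is a matrix $A$, its adjoint acts as the transpose $A^T$, and $\| T^* \| = \| T \|$ becomes the familiar fact that a matrix and its transpose have the same largest singular value. A quick sketch:

```python
import numpy as np

# Finite-dimensional illustration of ||T*|| = ||T||: the operator norm of a
# matrix (with Euclidean norms on domain and codomain) is its largest singular
# value, and A and A^T share the same singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # T : R^3 -> R^4

norm_T = np.linalg.norm(A, 2)         # spectral norm of A
norm_T_star = np.linalg.norm(A.T, 2)  # spectral norm of A^T

assert np.isclose(norm_T, norm_T_star)
```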
I believe the only way you can do this is to assume you have fixed-length inputs to the hash function $f$. Otherwise, it is problematic what probability distribution you'd want to impose on the input set $\{0,1\}^{\ast}$, which is the collection of all finite input strings. In practice, hash functions do have an upper limit on the input length, but that's astronomical in terms of testing all input strings. So, let's assume the hash function has a security parameter of $k$ bits. This corresponds to the function acting like a random function with outputs of length $n=2k$ bits. Your testing would generate a large number of random values from a uniform distribution on $\{0,1\}^{m},$ thus treating the hash function as a random function $f:\{0,1\}^m\rightarrow \{0,1\}^n.$ Let this random set of inputs be denoted by $X$. Now define$$a_{ij}=\#\{x \in X:[f(x\oplus e_i)]_j \neq [f(x)]_j\}$$for $1\leq i\leq m,1\leq j\leq n,$ where $e_i$ is the vector with a one in the $i^{th}$ position and zeroes everywhere else, and $[u]_j$ denotes the $j^{th}$ component of the vector $u$. $a_{ij}$ counts the number of inputs from $X$ which differ in the $j^{th}$ output bit when the $i^{th}$ input bit is flipped. You can now define a degree of strict avalanche criterion $D_{SAC}(f)$ as$$D_{SAC}(f):=1-\frac{\sum_{i=1}^m \sum_{j=1}^n \left|\frac{2a_{ij}}{\#X}-1\right|}{nm},$$with the expectation that $D_{SAC}(f)$ should be approximately 1, i.e., the sum of the absolute differences $$\left|\frac{2a_{ij}}{\#X}-1\right|$$ over $i$ and $j$ should be small. One way of expressing this may be to say $E[A_{ij}] \approx (\#X/2)$ if $A_{ij}$ is a random variable representing the $a_{ij}$, with $A_{ij}$ distributed as the binomial variable $Bin(\#X,1/2).$ From here you can then model the overall experiment in terms of chi-squared variables, by using the Gaussian approximation to the binomial. So take the properly scaled $nm$ variables as being from a chi-squared distribution with $nm-1$ degrees of freedom.
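Here is a rough sketch of the test described above. Truncated SHA-256 stands in for the hash under test, and the parameter choices ($m$, $n$, sample size) are illustrative, not values prescribed by the answer:

```python
import hashlib
import random

# Empirical SAC test along the lines described above. Truncated SHA-256 plays
# the role of f : {0,1}^m -> {0,1}^n; m, n and the sample size are arbitrary
# illustrative choices.
m, n, num_samples = 16, 16, 200

def f(x):
    """Hash an m-bit integer, keep the first n output bits as an integer."""
    digest = hashlib.sha256(x.to_bytes(m // 8, "big")).digest()
    return int.from_bytes(digest[: n // 8], "big")

random.seed(1)
X = [random.getrandbits(m) for _ in range(num_samples)]

# a[i][j] = #{x in X : flipping input bit i changes output bit j}
a = [[0] * n for _ in range(m)]
for x in X:
    fx = f(x)
    for i in range(m):
        diff = f(x ^ (1 << i)) ^ fx
        for j in range(n):
            if (diff >> j) & 1:
                a[i][j] += 1

d_sac = 1 - sum(abs(2 * a[i][j] / num_samples - 1)
                for i in range(m) for j in range(n)) / (n * m)
print(round(d_sac, 3))  # should come out close to 1 for a well-behaved hash
```

Each $a_{ij}$ here is roughly $Bin(\#X, 1/2)$, so $|2a_{ij}/\#X - 1|$ is small and $D_{SAC}$ lands near 1, as the answer predicts.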
On the Wikipedia entry of Darcy's law, a derivation of Darcy's law from the Stokes equation is provided. The derivation starts at the Stokes equation, which reads: $$ \mu \nabla^2 u_i + \rho g_i - \partial_i p = 0 $$ where $\mu$ is the viscosity, $u$ the flow velocity, $\rho$ the fluid density, $g$ the acceleration due to gravity, $p$ the fluid pressure, and $\partial$ denotes the partial derivative, all taken in the $i$-th direction ($x$, $y$, $z$, etc.). It is then said that: Assuming the viscous resisting force is linear with the velocity we may write: $$ - \left(k_{ij} \right)^{-1} \mu \phi u_j + \rho g_i - \partial_i p = 0 $$ I fail to see how this assumption leads to $\nabla^2 u_i = - \left(k_{ij} \right)^{-1} \phi u_j$. Would someone be so kind as to explain this step in more detail?
Topologies on a Finite 3-Element Set Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$, together denoted $(X, \tau)$, is called a topological space if: $\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$. If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., for any arbitrary collection of subsets from $\tau$, their union is contained in $\tau$. If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., for any finite collection of subsets from $\tau$, their intersection is contained in $\tau$. We will now look more into some of the topologies that can be placed on a finite $3$-element set, $X = \{a, b, c \}$. There are many different topologies that can be constructed on $X$. For example, the indiscrete topology containing only the empty set and $X$ itself, $\tau_1 = \{ \emptyset, X \}$, is illustrated below: For another example, $\tau_2 = \{ \emptyset, \{ a \}, X \}$ is a topology and is illustrated below: We can also construct a topology $\tau_3 = \{ \emptyset, \{b, c \}, X \}$ which looks like: Or even a topology $\tau_4 = \{ \emptyset, \{ a \}, \{ b, c \}, X \}$ looking like: There are many other topologies that can be constructed with this small finite set $X$ - so you may be wondering, is every collection of subsets of $X$ a topology? The answer is no. For example, the collection $\rho = \{ \emptyset, \{ a \}, \{ b \}, X \}$ is not a topology on $X$ because the union $\{ a \} \cup \{ b \} = \{ a, b \} \not \in \rho$, so the second condition for $\rho$ to be a topology is not satisfied. The collection $\rho$ is depicted below:
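The three axioms are easy to check by brute force on a set this small. A short script (the bit-mask encoding of subsets is our own choice) counts all collections of subsets of $X$ that form a topology and confirms that $\rho$ fails:

```python
from itertools import combinations

# Brute-force search over all collections of subsets of X = {a, b, c}.
# Subsets are encoded as 3-bit masks ({a} = 0b001, {b} = 0b010, {c} = 0b100),
# so union is bitwise OR and intersection is bitwise AND.
WHOLE = 0b111
proper = [s for s in range(8) if s not in (0, WHOLE)]

def is_topology(tau):
    if 0 not in tau or WHOLE not in tau:
        return False
    # closure under pairwise (hence, on a finite set, all) unions/intersections
    return all(u | v in tau and u & v in tau for u in tau for v in tau)

count = 0
for r in range(len(proper) + 1):
    for extra in combinations(proper, r):
        if is_topology({0, WHOLE, *extra}):
            count += 1
print(count)  # the number of distinct topologies on a labeled 3-element set

# tau_4 = {∅, {a}, {b,c}, X} is a topology, while rho = {∅, {a}, {b}, X} is not
assert is_topology({0b000, 0b001, 0b110, 0b111})
assert not is_topology({0b000, 0b001, 0b010, 0b111})  # {a} ∪ {b} is missing
```

The printed count should be 29, the known number of topologies on a labeled 3-element set.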
Let's work it out through the conversion algorithm given in Wikipedia. input: $S \to abC \mid babS \mid de$ $C\to aCa \mid b$ Introduce $S_0$: $S_0 \to S$ $S \to abC \mid babS \mid de$ $C\to aCa \mid b$ remove $\epsilon$ rules: there are no $\epsilon$ rules, so nothing changes. eliminate unit rules: Originally, there are none, but we added one, namely $S_0\to S$. Since it's the only one, we'll deal with it later (after $S$ is already in CNF). This will be done by adding, for any rule $S\to V_iV_j$, the rule $S_0 \to V_iV_j$. But let's first complete the conversion. convert all other rules into normal form: we take each rule which is not in the correct form and replace it with rules of the form $N\to V_iV_j$, introducing new non-terminals $V_1, V_2,...$ as needed: $S\to ab C$ $\Longrightarrow$ $S\to V_1 C$ setting $V_1 \to ab$. The new $V_1$ is not in CNF, but can easily be converted to CNF by re-defining it as $V_1\to AB$ with $A\to a$, $B\to b$. $S\to babS$ $\Longrightarrow$ $S\to V_2S$ adding $V_2 \to bab$. Now $V_2$ is not in CNF, so we change it to $V_2 \to V_3B$ adding $V_3\to BA$. (skipping a trivial step here) $S\to de$ is almost CNF. We change it to $S\to DE$ and add $E\to e$, and $D\to d$. $C\to aCa$ $\Longrightarrow$ $C\to AV_4$ where $V_4 \to CA$. $C \to b$ is already in CNF. so we end up with: $S_0 \to S$ $S\to V_1 C$ $S\to V_2S$ $S\to DE$ $C\to AV_4$ $C \to b$ $V_1 \to AB$ $V_2 \to V_3B$ $V_3 \to BA$ $V_4 \to CA$ and $A\to a$, $B\to b$, $D\to d$, $E\to e$. Finally, we need to deal with the unit rule $S_0 \to S$. As said, we will replace the $S$ in the right-hand-side with the "content" of $S$. That is, we remove that unit rule and add instead: $S_0 \to V_1C \mid V_2S \mid DE$.
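As a sanity check on the conversion, one can run the CYK algorithm with the resulting CNF grammar and confirm that it accepts strings generated by the original grammar. This little script is only a check, not part of the conversion:

```python
# CYK membership test for the CNF grammar derived above.
BINARY = {
    "S0": {("V1", "C"), ("V2", "S"), ("D", "E")},
    "S":  {("V1", "C"), ("V2", "S"), ("D", "E")},
    "C":  {("A", "V4")},
    "V1": {("A", "B")},
    "V2": {("V3", "B")},
    "V3": {("B", "A")},
    "V4": {("C", "A")},
}
TERMINAL = {"A": "a", "B": "b", "C": "b", "D": "d", "E": "e"}

def cyk(w, start="S0"):
    n = len(w)
    if n == 0:
        return False  # the grammar has no epsilon rules
    # table[i][l] = set of variables deriving w[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(w):
        table[i][1] = {v for v, t in TERMINAL.items() if t == ch}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left, right = table[i][split], table[i + split][length - split]
                for v, prods in BINARY.items():
                    if any(x in left and y in right for x, y in prods):
                        table[i][length].add(v)
    return start in table[0][n]
```

Here `cyk("de")`, `cyk("abb")` and `cyk("babde")` come out true while `cyk("ab")` comes out false, matching the original grammar.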
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A \) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all these facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
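Since these adjunctions involve only finitely many comparisons on a finite poset, they can be checked exhaustively. A throwaway script over the powerset of \(\{0,1,2\}\) ordered by inclusion verifies both \( \vee \dashv \Delta \) and \( \Delta \dashv \wedge \):

```python
from itertools import chain, combinations

# Powerset of {0, 1, 2} ordered by inclusion; join = union, meet = intersection.
ground = {0, 1, 2}
P = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(ground), r) for r in range(len(ground) + 1))]

for x in P:
    for y in P:
        for z in P:
            # join is left adjoint to the diagonal:  x ∨ y ≤ z  iff  x ≤ z and y ≤ z
            assert ((x | y) <= z) == (x <= z and y <= z)
            # meet is right adjoint to the diagonal: x ≤ y ∧ z  iff  x ≤ y and x ≤ z
            assert (x <= (y & z)) == (x <= y and x <= z)

print("both adjunctions hold on all", len(P) ** 3, "triples")
```

Here `frozenset`'s `<=` is the subset order, so each assertion is exactly one instance of the unravelled adjunction condition from the lecture.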
As we have seen, the Fourier expansion of $f : \{-1,1\}^n \to {\mathbb R}$ can be thought of as the representation of $f$ over the orthonormal basis of parity functions $(\chi_S)_{S \subseteq [n]}$. In this basis, $f$ has $2^n$ “coordinates”, and these are precisely the Fourier coefficients of $f$. The “coordinate” of $f$ in the $\chi_S$ “direction” is $\langle f, \chi_S\rangle$; i.e., we have the following formula for Fourier coefficients: Proposition 8: For $f : \{-1,1\}^n \to {\mathbb R}$ and $S \subseteq [n]$, the Fourier coefficient of $f$ on $S$ is given by \begin{equation*} \widehat{f}(S) = \langle f, \chi_S \rangle = \mathop{\bf E}_{\boldsymbol{x} \sim \{-1,1\}^n}[f(\boldsymbol{x}) \chi_S(\boldsymbol{x})]. \end{equation*} We can verify this formula explicitly: \begin{equation} \label{eqn:fourier-coeff-verification} \langle f, \chi_S \rangle = \left\langle \sum_{T \subseteq [n]} \widehat{f}(T)\,\chi_T, \chi_S \right\rangle = \sum_{T \subseteq [n]} \widehat{f}(T) \langle \chi_T, \chi_S \rangle = \widehat{f}(S), \end{equation} where we used the Fourier expansion of $f$, the linearity of $\langle \cdot, \cdot \rangle$ and finally Theorem 5. This formula is the simplest way to calculate the Fourier coefficients of a given function; it can also be viewed as a streamlined version of the interpolation method illustrated in equation (3) here. Alternatively, this formula can be taken as the definition of Fourier coefficients. The orthonormal basis of parities also lets us measure the squared “length” ($2$-norm) of $f : \{-1,1\}^n \to {\mathbb R}$ efficiently: it’s just the sum of the squares of $f$’s “coordinates” — i.e., Fourier coefficients. This simple but crucial fact is called Parseval’s Theorem. Parseval’s Theorem: For any $f : \{-1,1\}^n \to {\mathbb R}$, \[ \langle f, f \rangle = \mathop{\bf E}_{\boldsymbol{x} \sim \{-1,1\}^n}[f(\boldsymbol{x})^2] = \sum_{S \subseteq [n]} \widehat{f}(S)^2.
\] In particular, if $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued then \[ \sum_{S \subseteq [n]} \widehat{f}(S)^2 = 1. \] As examples we can recall the Fourier expansions of ${\textstyle \min_2}$ and $\mathrm{Maj}_3$: \[ {\textstyle \min_2}(x) = -\tfrac12 + \tfrac12 x_1 + \tfrac12 x_2 + \tfrac12 x_1 x_2, \qquad \mathrm{Maj}_3(x) = \tfrac{1}{2} x_1 + \tfrac{1}{2} x_2 + \tfrac{1}{2} x_3 - \tfrac{1}{2} x_1x_2x_3. \] In both cases the sum of squares of Fourier coefficients is $4 \times (1/4) = 1$. More generally, given two functions $f, g : \{-1,1\}^n \to {\mathbb R}$, we can compute $\langle f, g \rangle$ by taking the “dot product” of their coordinates in the orthonormal basis of parities. The resulting formula is called Plancherel’s Theorem. Plancherel’s Theorem: For any $f, g : \{-1,1\}^n \to {\mathbb R}$, \[ \langle f, g \rangle = \mathop{\bf E}_{\boldsymbol{x} \sim \{-1,1\}^n}[f(\boldsymbol{x})g(\boldsymbol{x})] = \sum_{S \subseteq [n]} \widehat{f}(S) \widehat{g}(S). \] We can verify this formula explicitly as we did in \eqref{eqn:fourier-coeff-verification}: \[ \langle f, g \rangle = \Bigl\langle \sum_{S \subseteq [n]} \widehat{f}(S)\,\chi_S, \sum_{T \subseteq [n]} \widehat{g}(T)\,\chi_T \Bigr\rangle = \sum_{S, T \subseteq [n]} \widehat{f}(S)\widehat{g}(T) \langle \chi_S, \chi_T \rangle = \sum_{S\subseteq [n]} \widehat{f}(S)\widehat{g}(S). \] Now is a good time to remark that for boolean-valued functions $f, g : \{-1,1\}^n \to \{-1,1\}$, the inner product $\langle f, g \rangle$ can be interpreted as a kind of “correlation” between $f$ and $g$, measuring how similar they are. Since $f(x)g(x) = 1$ if $f(x) = g(x)$ and $f(x)g(x) = -1$ if $f(x) \neq g(x)$, we have: Proposition 9: If $f, g : \{-1,1\}^n \to \{-1,1\}$, \[ \langle f, g \rangle = \mathop{\bf Pr}[f(\boldsymbol{x}) = g(\boldsymbol{x})] - \mathop{\bf Pr}[f(\boldsymbol{x}) \neq g(\boldsymbol{x})] = 1 - 2\mathrm{dist}(f,g).
\] Here we are using the following definition: Definition 10: Given $f, g : \{-1,1\}^n \to \{-1,1\}$, we define their (relative Hamming) distance to be \[ \mathrm{dist}(f,g) = \mathop{\bf Pr}_{\boldsymbol{x}}[f(\boldsymbol{x}) \neq g(\boldsymbol{x})], \] the fraction of inputs on which they disagree. With a number of Fourier formulas now in hand we can begin to illustrate a basic theme in the analysis of boolean functions: interesting combinatorial properties of a boolean function $f$ can be “read off” from its Fourier coefficients. Let’s start by looking at one way to measure the “bias” of $f$: Definition 11: The mean of $f : \{-1,1\}^n \to {\mathbb R}$ is $\mathop{\bf E}[f]$. When $f$ has mean $0$ we say that it is unbiased, or balanced. In the particular case that $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued, its mean is \[ \mathop{\bf E}[f] = \mathop{\bf Pr}[f = 1] - \mathop{\bf Pr}[f = -1]; \] thus $f$ is unbiased if and only if it takes value $1$ on exactly half of the points of the Hamming cube. This formula holds simply because $\mathop{\bf E}[f] = \langle f, 1 \rangle = \widehat{f}(\emptyset)$ (taking $S = \emptyset$ in Proposition 8). In particular, a boolean function is unbiased if and only if its empty-set Fourier coefficient is $0$. Next we obtain a formula for the variance of a real-valued boolean function (thinking of $f(\boldsymbol{x})$ as a real-valued random variable): Proposition 13: The variance of $f : \{-1,1\}^n \to {\mathbb R}$ is \[ \mathop{\bf Var}[f] = \langle f - \mathop{\bf E}[f], f - \mathop{\bf E}[f] \rangle = \mathop{\bf E}[f^2] - \mathop{\bf E}[f]^2 = \sum_{S \neq \emptyset} \widehat{f}(S)^2. \] This Fourier formula follows immediately from Parseval’s Theorem and Fact 12. In particular, a boolean-valued function $f$ has variance $1$ if it’s unbiased and variance $0$ if it’s constant. More generally, the variance of a boolean-valued function is proportional to its “distance from being constant”. The proof of Proposition 15 is an exercise.
By using Plancherel in place of Parseval, we get a generalization of Proposition 13 for covariance: Proposition 16: The covariance of $f, g : \{-1,1\}^n \to {\mathbb R}$ is \[ \mathop{\bf Cov}[f,g] = \langle f - \mathop{\bf E}[f], g - \mathop{\bf E}[g] \rangle = \mathop{\bf E}[fg] - \mathop{\bf E}[f]\mathop{\bf E}[g] = \sum_{S \neq \emptyset} \widehat{f}(S)\widehat{g}(S). \] We end this section by discussing the Fourier weight distribution of boolean functions. Definition 17: The (Fourier) weight of $f : \{-1,1\}^n \to {\mathbb R}$ on set $S$ is defined to be the squared Fourier coefficient, $\widehat{f}(S)^2$. Although we lose some information about the Fourier coefficients when we square them, many Fourier formulas only depend on the weights of $f$. For example, Proposition 13 says that the variance of $f$ equals its Fourier weight on nonempty sets. Studying Fourier weights is particularly pleasant for boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ since Parseval’s Theorem says that they always have total weight $1$. In particular, they define a probability distribution on subsets of $[n]$. Definition 18: Given $f : \{-1,1\}^n \to \{-1,1\}$, the spectral sample for $f$, denoted $\mathscr{S}_{f}$, is the probability distribution on subsets of $[n]$ in which the set $S$ has probability $\widehat{f}(S)^2$. We write $\boldsymbol{S} \sim \mathscr{S}_{f}$ for a draw from this distribution. For example, the spectral sample for the ${\textstyle \min_2}$ function is the uniform distribution on all four subsets of $[2]$; the spectral sample for $\mathrm{Maj}_3$ is the uniform distribution on the four subsets of $[3]$ with odd cardinality. Given a boolean function it can be helpful to try to keep a mental picture of its weight distribution on the subsets of $[n]$, partially ordered by inclusion.
Here is an example for the $\mathrm{Maj}_3$ function, with the white circles indicating weight $0$ and the shaded circles indicating weight $1/4$: Finally, as suggested by the diagram we often stratify the subsets $S \subseteq [n]$ according to their cardinality (also called “height” or “level”). Equivalently, this is the degree of the associated monomial $x^S$. Definition 19: For $f : \{-1,1\}^n \to {\mathbb R}$ and $0 \leq k \leq n$, the (Fourier) weight of $f$ at degree $k$ is \[ \mathbf{W}^{k}[f] = \sum_{\substack{S \subseteq [n] \\ |S| = k}} \widehat{f}(S)^2. \] If $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued, an equivalent definition is \[ \mathbf{W}^{k}[f] = \mathop{\bf Pr}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|S| = k]. \] By Parseval’s Theorem, $\mathbf{W}^{k}[f] = \|f^{=k}\|_2^2$ where \[ f^{=k} = \sum_{|S| = k} \widehat{f}(S)\,\chi_S \] is called the degree $k$ part of $f$. We will also sometimes use notation like $\mathbf{W}^{> k}[f] = \sum_{|S| > k} \widehat{f}(S)^2$ and $f^{\leq k} = \sum_{|S| \leq k} \widehat{f}(S)\,\chi_S$.
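All of the formulas in this section can be verified mechanically for a small function such as $\mathrm{Maj}_3$. The following computes its Fourier coefficients via Proposition 8 and checks Parseval and the degree weights:

```python
from itertools import chain, combinations, product

# Fourier coefficients of Maj3 computed straight from Proposition 8:
# f^(S) = E_x[f(x) chi_S(x)], averaging over all of {-1,1}^3.
n = 3
points = list(product([-1, 1], repeat=n))
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(range(n), r) for r in range(n + 1))]

def maj3(x):
    return 1 if sum(x) > 0 else -1

def chi(S, x):
    out = 1
    for i in S:
        out *= x[i]
    return out

fhat = {S: sum(maj3(x) * chi(S, x) for x in points) / len(points)
        for S in subsets}

# matches the expansion Maj3(x) = x1/2 + x2/2 + x3/2 - x1 x2 x3 / 2
assert all(fhat[frozenset({i})] == 0.5 for i in range(n))
assert fhat[frozenset({0, 1, 2})] == -0.5

# Parseval: a boolean-valued function has total Fourier weight 1
assert sum(c * c for c in fhat.values()) == 1.0

# weights by degree: W^1 = 3/4, W^3 = 1/4, all other levels 0
W = {k: sum(fhat[S] ** 2 for S in subsets if len(S) == k) for k in range(n + 1)}
assert W == {0: 0.0, 1: 0.75, 2: 0.0, 3: 0.25}
```

The degree-weight check also confirms the spectral sample description above: all the weight of $\mathrm{Maj}_3$ sits uniformly on the four odd-cardinality subsets of $[3]$.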
Say I'm attempting to improve click-through rates on videos on my website. I've been reading the literature on contextual bandits and came across the Microsoft MWT white paper. I believe this is the right method for this case. However, I'm a bit confused on the details of the policy exploration and policy training (Section 3.2 and 3.3). My first question: Is there a difference between a policy and exploration policy? I want to say no, but want confirmation. My second question pertains to learning a policy. As an example, say that I initially used an $\epsilon$-greedy exploration policy and collected user click-log data. The paper states that (offline) I want to find a policy out of all allowed policies that approximately maximizes the estimated expected reward (Inverse Propensity Scoring estimator) $\mu_{ips}(\pi)$ for policy $\pi$ -- Eqn. 4 in Section 3.3. The policy chooses an action $a\in A$ (e.g. a video to serve to the user) given a context $x$ (e.g. user attributes and perhaps attributes of the article/video currently being watched) and an outcome (click or no-click) is observed. The authors propose to reduce the policy training to a cost-sensitive classification problem (e.g. logistic regression, decision trees, or neural nets) where each policy $\pi$ is viewed as a classifier (i.e. for context $x_{i}$, a given policy $\pi$ chooses action $a_{i}$). However, it's not clear to me what the classification task is. Does it mean that I can use something like a neural net whose output is the policy that minimizes the cost? Is this already performed in ML packages like VW? If so, how?
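For what it's worth, here is a minimal sketch of the IPS estimator that Eqn. 4 describes, with entirely made-up logged data and a hypothetical deterministic policy; this is only a reading of the estimator, not code from MWT or VW:

```python
# Sketch of the Inverse Propensity Scoring (IPS) estimator. Each log entry is
# (context, action_taken, observed_reward, propensity), where the propensity
# is the probability the exploration policy (e.g. epsilon-greedy) assigned to
# the logged action. All names and data below are made up for illustration.
logs = [
    ("user1", "videoA", 1.0, 0.5),
    ("user1", "videoB", 0.0, 0.5),
    ("user2", "videoA", 0.0, 0.25),
    ("user2", "videoB", 1.0, 0.75),
]

def ips_value(policy, logs):
    """Estimate the expected reward of a deterministic policy from off-policy
    logs: count reward / propensity whenever the policy agrees with the
    logged action, then average over all log entries."""
    total = 0.0
    for context, action, reward, propensity in logs:
        if policy(context) == action:
            total += reward / propensity
    return total / len(logs)

always_b = lambda context: "videoB"
print(ips_value(always_b, logs))  # (0/0.5 + 1/0.75) / 4 ≈ 0.333
```

Policy training then searches the allowed policy class for the $\pi$ maximizing this estimate; the cost-sensitive classification reduction the paper mentions is one way to organize that search, with contexts as examples and per-action IPS-weighted losses as costs.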
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?

1. Reflexivity holds: for any \( a \in \mathbb{R} \), \( a \le a \)
2. For any \( a, b, c \in \mathbb{R} \), \( a \le b \) and \( b \le c \) implies \( a \le c \)
3. For any \( a, b \in \mathbb{R} \), \( a \le b \) and \( b \le a \) implies \( a = b \)
4. For any \( a, b \in \mathbb{R} \), we have either \( a \le b \) or \( b \le a \)

So, yes.

Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) seeing the order of the (extended) real line as the Dedekind-MacNeille completion of the rationals. Matthew has told us interesting things about it before. Hausdorff, on his part, in the book I mentioned here, says that any total order which is dense and without \( (\omega,\omega^*) \) gaps has the real line embedded in it. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

That's an interesting question, Jonathan.

Jonathan Castello wrote:

> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited. First, we can observe that \(|\mathbb{R}| = |^\ast \mathbb{R}|\). This is because \(^\ast \mathbb{R}\) embeds \(\mathbb{R}\) and is constructed from countably infinitely many copies of \(\mathbb{R}\) by taking a quotient algebra modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple of other threads. Next, observe that all unbounded dense linear orders of cardinality \(\aleph_0\) are isomorphic. This is due to a rather old theorem credited to Georg Cantor. Next, apply the Morley categoricity theorem. From this we have that all unbounded dense linear orders with cardinality \(\kappa \geq \aleph_0\) are isomorphic. This is referred to in model theory as \(\kappa\)-categoricity.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.

Puzzle MD 1: Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.

Hi Matthew, nice application of the categoricity theorem! One question if I may.
You said:

> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.

But in my understanding the lattice and poset structure is inter-translatable as in here. Can two lattices be isomorphic and their associated posets not?

(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)

> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving infima and suprema, i.e. limits, then they are order isomorphic. The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive. From model theory we have two maps \(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \) and \(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \) such that:

- if \(x \leq_{\mathbb{R}} y\) then \(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\)
- if \(p \leq_{^\ast \mathbb{R}} q\) then \(\psi(p) \leq_{\mathbb{R}} \psi(q)\)
- \(\psi(\phi(x)) = x\) and \(\phi(\psi(p)) = p\)

Now consider \(\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\). The hyperreals famously violate the Archimedean property. Because of this \(\bigwedge_{^\ast \mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\) does not exist. On the other hand, if we consider \( \bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\), that does exist by the completeness of the real numbers (as it is bounded below by \(\psi(0)\)).
> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving *infima* and *suprema*, i.e. *limits*, then they are order isomorphic.

The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive.

From model theory we have two maps \\(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \\) and \\(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \\) such that:

- if \\(x \leq_{\mathbb{R}} y\\) then \\(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\\)
- if \\(p \leq_{^\ast \mathbb{R}} q\\) then \\(\psi(p) \leq_{\mathbb{R}} \psi(q)\\)
- \\(\psi(\phi(x)) = x\\) and \\(\phi(\psi(p)) = p\\)

Now consider \\(\\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\). The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this, \\(\bigwedge_{^\ast \mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\) does not exist. On the other hand, if we consider \\( \bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\\), that *does* exist by the completeness of the real numbers (as it is bounded below by \\(\psi(0)\\)).

Hence

$$ \bigwedge_{\mathbb{R}} \\{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \\} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \\}\right) $$

So \\(\psi\\) *cannot* be a complete lattice homomorphism, even though it is part of an order isomorphism.
However, just to complicate matters, I believe that \\(\phi\\) and \\(\psi\\) form a mere *lattice* isomorphism, preserving finite meets and joins.
The Euro swaption market is changing from cash to physical settlement quotation in July 2018 $-$ see e.g. "Euro swaptions market prepares for pricing revamp" (Risk, 2018). When describing the issues around cash-settled swaptions valuation and trading, at one point the aforementioned article states the following (my emphasis):

> Valuation of in-the-money [cash] swaptions was split between market participants that used models, and those that used the principle of put-call parity to infer the price of swaptions from so-called zero-wide collars $-$ a receiver and a payer swaption both struck at-the-money. As volatility rose and rates fell [after the ECB lowered rates at the end of 2014], swaptions valuation became more difficult, also making it harder to obtain reliable prices for zero-wide collars.

The payoff of a cash-settled (payer) swaption is a function $h$ of the swap rate $S_{\tau}(T)$: $$h\left(S_{\tau}(T)\right)=A^c(S_{\tau}(T))(S_{\tau}(T)-K)^+$$ where the cash annuity is defined as: $$A^c(S_{\tau}(T))=\sum_{i=1}^n\frac{\delta_i}{\prod_{j=1}^i(1+\delta_j S_{\tau}(T))}$$ I am assuming the model valuation method consists of Black's approximation: $$\text{Swaption}_{\ \tau}^{\text{Pay}}(t)\approx A^c(S_{\tau}(t))E_t^{A^{\phi}}\left[(S_{\tau}(T)-K)^+\right]$$ Is anyone familiar with the zero-wide collar pricing method mentioned in the article? Is the parity relationship related to the physical annuity: $$A^{\phi}(S_{\tau}(T))=\sum_{i=1}^n\delta_iP(T,T_i), \quad T \leq T_1, \dots, T_n \text{ ?}$$
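For concreteness, here is a minimal Python sketch (my own, not from the article) of the cash annuity and the Black-style approximation above. The lognormal swap-rate dynamics, the flat volatility, and all parameter values are illustrative assumptions only.

```python
import math

def cash_annuity(S, deltas):
    # A^c(S) = sum_i delta_i / prod_{j<=i} (1 + delta_j * S),
    # i.e. discounting at the swap rate itself rather than off the curve.
    total, disc = 0.0, 1.0
    for d in deltas:
        disc /= 1.0 + d * S
        total += d * disc
    return total

def black_payer_swaption(S0, K, sigma, T, deltas):
    # Black-style approximation: A^c(S0) * E[(S_T - K)^+] with lognormal S_T.
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return cash_annuity(S0, deltas) * (S0 * N(d1) - K * N(d2))
```

At $S = 0$ the annuity degenerates to $\sum_i \delta_i$, which gives a quick sanity check on the implementation.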
Homework Statement

A yo-yo is placed on a conveyor belt accelerating ##a_C = 1 m/s^2## to the left. The end of the rope of the yo-yo is fixed to a wall on the right. The moment of inertia is ##I = 200 kg \cdot m^2##. Its mass is ##m = 100 kg##. The radius of the outer circle is ##R = 2m## and the radius of the inner circle is ##r = 1m##. The coefficient of static friction is ##0.4## and the coefficient of kinetic friction is ##0.3##. Find the initial tension in the rope and the angular acceleration of the yo-yo.

Homework Equations

##T - f = ma##
##\tau_P = -fr##
##\tau_G = Tr##
##I_P = I + mr^2##
##I_G = I + mR^2##
##a = \alpha R##

First off, I was wondering if the acceleration of the conveyor belt can be considered a force. And I'm not exactly sure how to use Newton's second law if the object of the forces is itself on an accelerating surface.

Also, I don't know whether it rolls with or without slipping. I thought I could use ##a_C = \alpha R## for the angular acceleration, but the acceleration of the conveyor belt is not the only source of acceleration, since the friction and the tension also play a role. I can't find a way to combine these equations to get the answer.
So the moon is full of helium-3. Since it's a gas in the moon's vacuum... Why doesn't it escape?

The Moon is not "full" of helium-3. 3He is at most fifty parts per billion of the lunar regolith 1 and that "high" concentration pertains only to permanently shadowed craters. The Moon is bombarded by a steady stream of helium-3 while sunlit. Some of this incoming helium-3 is temporarily embedded in the lunar regolith. Without this steady supply, the helium-3 content would dissipate at a temperature-dependent rate proportional to the amount of helium-3 in the lunar regolith. The quantity $q(t)$ of helium-3 in a cubic meter of lunar regolith is thus dictated by a simple differential equation, $\dot q(t) = \alpha(t) - \beta(T)q(t)$. Time averaging the bombardment and escape rates yields $\dot {\bar q}(t) = \bar{\alpha} - \beta(\bar T)\bar q(t)$. This differential equation yields a steady state value of $\bar q = \bar{\alpha}/\beta(\bar T)$.

1 Cocks, " 3He in permanently shadowed lunar polar surfaces", Icarus, 206:2 778-779 (2010).
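A quick numerical sanity check on that steady state (with made-up rate constants chosen purely for illustration): integrating $\dot{\bar q} = \bar\alpha - \beta\bar q$ forward in time settles at $\bar\alpha/\beta$ regardless of the initial helium-3 content.

```python
# Forward-Euler integration of q' = alpha - beta*q; the solution decays
# toward the steady state q_ss = alpha/beta at rate beta.
alpha, beta = 1.0e-3, 2.0e-4   # illustrative bombardment and escape rates, made-up units
q, dt = 0.0, 1.0               # start with no helium-3 at all
for _ in range(200_000):       # integrate far past the 1/beta relaxation time
    q += (alpha - beta * q) * dt

q_ss = alpha / beta
assert abs(q - q_ss) / q_ss < 1e-3
```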
Idonknow

> A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs; and the best mathematician can notice analogies between theories. And one can imagine that the ultimate mathematician is one who can see analogies between analogies.
>
> -- S. Banach

Member for 6 months. Last seen Aug 2 at 9:36.

Communities (32): Mathematics (2.8k rep; 11 gold, 55 silver, 129 bronze badges), TeX - LaTeX (357 rep; 2 gold, 4 silver, 10 bronze), Academia (320 rep; 1 gold, 6 silver, 18 bronze), Physics (280 rep; 3 silver, 15 bronze), Cryptography (224 rep; 1 silver, 20 bronze).

Top network posts:

- 40 Prove that there is a real number $a$ such that $\frac{1}{3} \leq \{ a^n \} \leq \frac{2}{3}$ for all $n=1,2,3,...$
- 34 What is the difference between LyX and LaTeX?
- 28 Given $y_n=(1+\frac{1}{n})^{n+1}$ show that $\lbrace y_n \rbrace$ is a decreasing sequence
- 25 Let $p_1, p_2,\dots,p_n$ be polynomials of $k$ variables $x_1,\dots,x_k$ and $p_1^2 + \dots +p_n^2=x_1^2 + \dots + x_k^2$. Prove that $n \geq k$.
- 20 Is it possible to earn a PhD in mathematics with emphasis in teaching?
- 18 product of two uniformly continuous functions is uniformly continuous
- 17 If $|\lbrace g \in G: \pi (g)=g^{-1} \rbrace|>\frac{3|G|}{4}$, then $G$ is an abelian group.
Search Now showing items 1-1 of 1 Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Search Now showing items 1-1 of 1 Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Given a string comprised solely of the characters $\text{a, b, c}$ we want to know the number of different $(i, j, k)$ triples such that $0 \leq i,j, k < n \text{ (here n denotes the length of the string)}$ satisfying the following two conditions: $\bullet \text{ }s[i] = "a", s[j] ="b" \text{ and } s[k] = "c"$ $\bullet \text{ } (j + 1)^2 = (i+1)(k + 1) $ We consider two tuples, $(i, j, k)$ and $(x, y, z)$, to be different only if $i \neq x$ or $j \neq y$ or $k \neq z$. Example case and solution: The string $"ccaccbbbaccccca"$ with $n = 15$. There are two triplets that satisfy the given constraints, namely $(2, 5, 11)$ and $(8, 5, 3)$. The second solution shows that $i$ is not necessarily less than $k$. I know this is a programming question, but I'm more inclined to look for a mathematical approach that I currently seem to be missing. Since $n$ could be as large as $5 \cdot 10^5$ brute force is not feasible. I've attempted coming by the solution by noticing that in order for $\sqrt{(i + 1)(k + 1) } = (j +1 ) \in \mathbb{N}$, $k +1$ should be of the form $(i + 1) \cdot \alpha^2$ or of the form $\frac{\alpha^2}{i + 1}$. Hence, we would have to iterate and test in order to find the $ \alpha$ values that would satisfy our conditions. Later on, I found out by inspecting the process that the latter form covers all the solutions, multiple times, so I would only need to look for answers by using the second form. This still led to a time complexity of $O(\sqrt{n} \cdot \sum_{k=1}^{n}\sqrt{k})$. I've been at this for too many hours and I have no idea how to proceed. How would you solve this? I'm guessing $O(n)$ isn't possible but $O(n \log n)$ should probably do the trick.
Hilbert's hotel (HH) is only a metaphor, and when pushed too far it can lead to confusions. I think this is one of those situations: the key point is "we can't obviously compose infinitely many functions," which is pretty clear, but it's obscured by the additional language. The point of HH is to illustrate how an infinite set (the set of rooms) can have lots of maps from itself to itself ("person in room $n$ goes to room $f(n)$") which are injective ("no two different rooms send their occupants to the same room") but not surjective ("some rooms wind up empty"). Note that already we can see an added complexity in the metaphor: the statement

> There is a set $X$ and a map $f:X\rightarrow X$ which is an injection but not a surjection

has only one type of "individual," namely the elements of $X$, but HH has two types of "individual," namely the rooms and the people. Now let's look at the next level of HH: getting an injection which is far from a surjection. Throwing aside the metaphor at this point, all that's happening is composition. Suppose $f:X\rightarrow X$ is an injection but not a surjection. Pick $x\in X\setminus ran(f)$. Then it's a good exercise to check that $x\not\in ran(f\circ f)$, $f(x)\not\in ran(f\circ f)$, and $x\not=f(x)$. What does this mean? Well, when we composed $f$ with itself we got a new "missed element," so that while $ran(f)$ need only miss one element of $X$ we know that $ran(f\circ f)$ is missing two elements of $X$. Similarly, by composing $n$ times we get a self-injection of $X$ whose range misses at least $n$ elements of $X$. At this point it should be clear why we can't proceed this way to miss an infinite set: how do we define "infinite-fold" compositions? This is what the question "where should the guest in room $1$ go?" is ultimately getting at. It's worth pointing out that there are situations where infinite composition makes sense.
Certainly if $f:X\rightarrow X$ is such that for each $x\in X$ the sequence $$x,f(x),f(f(x)), f(f(f(x))),...$$ is eventually constant with eventual value $l_x$, then it makes some amount of sense to define the "infinite composition" as $$f^\infty:X\rightarrow X: x\mapsto l_x.$$ And if $X$ has some additional structure we might be able to be even more broad: for example, when $X=\mathbb{R}$ we can use the metric structure (really, the topology) and make sense of $f^\infty$ under the weaker assumption that the sequence $$x,f(x),f(f(x)), f(f(f(x))), ...$$ converges (in the usual calculus-y sense) for each $x\in \mathbb{R}$. For example, the function $f(x)={x\over 2}$ would yield $f^\infty(x)=0$ under this interpretation (even though only one of the "iterating $f$" sequences is eventually constant - namely, the $x=0$ one). But this is not something we can do in all circumstances, and you should regard the idea of infinite composition with serious suspicion at best. (Although again, there are situations where it's a perfectly nice and useful idea!)
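As a concrete illustration of the halving example (a throwaway Python sketch): iterating $f(x) = x/2$ from any starting point converges to $0$, so under the convergence interpretation the "infinite composition" $f^\infty$ is the constant map $x \mapsto 0$.

```python
def f(x):
    return x / 2

def f_inf(x, tol=1e-12):
    # Iterate x, f(x), f(f(x)), ... until successive values are within tol;
    # this only makes sense because the iteration sequence converges.
    while abs(f(x) - x) > tol:
        x = f(x)
    return x

# Every starting point is driven to (numerically) the same limit, 0.
assert abs(f_inf(5.0)) < 1e-11
assert abs(f_inf(-3.0)) < 1e-11
```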
Mac Lane - Moerdijk's "Sheaves" gives this cryptic hint in page 91 that the equivalence between etale spaces and sheaves on a space $X$ can be cooked up using formal methods. More precisely, we are in the following nerve-realization situation: $$\begin{matrix} \mathcal{O}(X)\xrightarrow{A}&\mathbf{Top}/X\\ \downarrow^y&\\ \mathbf{Set}^{\mathcal{O}(X)^{op}} \end{matrix}$$ where the left Kan extension $\text{Lan}_yA$ has a right adjoint $N_A\colon \mathbf{Top}/X\to \mathbf{Set}^{\mathcal O(X)^\text{op}}$, which is defined precisely taking the (pre)sheaf of sections of $(p\colon E\to X)\in \mathbf{Top}/X$: $$N_A(p)\colon U\mapsto \mathbf{Top}/X\left(AU, p \right) = \mathbf{Top}/X\left(\left[\begin{smallmatrix} U \\ \downarrow \\ X\end{smallmatrix}\right], \left[\begin{smallmatrix} E \\ \downarrow \\ X\end{smallmatrix}\right] \right) = \{s\colon U\to E\mid ps\colon U\subseteq X\}$$ I am trying to work out the details of this construction, in particular I would like to

1. "Prove formally" by the Kan formula for $\text{Lan}_yA(F)$ that it is precisely the etale space of the sheaf $F$;
2. "Prove formally" that this adjunction restricts to an equivalence $\mathbf{Sh}(X)\cong \mathbf{Et}(X)$ (this can be done by appealing to Lemma 4 right before page 91).

A nice consequence of adjoint nonsense would be that the reflection obtained in this way is also exact ($A$ commutes with finite limits, which exist in $\mathcal O(X)$). I'm stuck in trying to make $\text{Lan}_yA(F)$ an explicit object; one can appeal to the Kan formula to obtain $$ \text{Lan}_yA(F)\cong\int^{U\colon \mathcal O(X)} FU\otimes AU $$ where $\otimes$ denotes the canonical $\bf Set$-tensoring of ${\bf Top}/X$ which acts like $S\otimes \left[\begin{smallmatrix} E \\ \downarrow \\ X\end{smallmatrix}\right] = \left[\begin{smallmatrix} \coprod_SE \\ \downarrow \\ X\end{smallmatrix}\right]$.
The shape of colimits in $\mathbf{Top}/X$ gives that this space consists of $\left[\begin{smallmatrix} \big(\coprod_U FU\times U\big)/\simeq \\ \downarrow \\ X\end{smallmatrix}\right]$ where I am modding out by a suitable equivalence relation. It would be nice to deduce that $\big(\coprod_U FU\times U\big)/\simeq = \coprod_{x\in X}F_x$, with the topology... Well, I'm beginning to suspect this is a too-painful alternative to the old explicit method. This is why I'm asking you if this can really be done.
Table of Contents

The Set of Accumulation Points in Finite Topological Products

On The Interior of Sets in Finite Topological Products and The Closure of Sets in Finite Topological Products pages we saw that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces and $A_i \subseteq X_i$ for all $i \in \{1, 2, ..., n \}$ then the interior/closure of the product of these sets equals the product of the interiors/closures of these sets. We will now see that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces then the product of the set of accumulation points of sets is contained in the set of accumulation points of the product of sets.

Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of topological spaces and let $A_i \subseteq X_i$ for all $i \in \{1, 2, ..., n \}$. Then $\displaystyle{\left ( \prod_{i=1}^{n} A_i \right )' \supseteq \prod_{i=1}^{n} A_i'}$.

Proof: Let $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \prod_{i=1}^{n} A_i'}$. Then $x_i \in A_i'$ for all $i \in \{ 1, 2, ..., n \}$. Let $\displaystyle{U = \prod_{i=1}^{n} U_i}$ be a basic open neighbourhood of $\mathbf{x}$ in $\displaystyle{\prod_{i=1}^{n} X_i}$. Then each $U_i$ is an open neighbourhood of $x_i$, and hence:

$$U_i \cap \left ( A_i \setminus \{ x_i \} \right ) \neq \emptyset$$

Taking the product from $i = 1$ to $n$ gives us that:

$$\prod_{i=1}^{n} \left [ U_i \cap \left ( A_i \setminus \{ x_i \} \right ) \right ] \neq \emptyset$$

Therefore:

$$U \cap \left ( \prod_{i=1}^{n} A_i \setminus \{ \mathbf{x} \} \right ) \supseteq \prod_{i=1}^{n} \left [ U_i \cap \left ( A_i \setminus \{ x_i \} \right ) \right ] \neq \emptyset$$

Hence $\mathbf{x}$ is an accumulation point of $\displaystyle{\prod_{i=1}^{n} A_i}$, i.e., $\displaystyle{\mathbf{x} \in \left ( \prod_{i=1}^{n} A_i \right )'}$, and so:

$$\left ( \prod_{i=1}^{n} A_i \right )' \supseteq \prod_{i=1}^{n} A_i'$$
I am given the series: $$\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$$ and I am asked to determine whether it is convergent or not. I know I need to use the comparison test to determine this. I can make a comparison with a $p$-series ($a_n=\frac{1}{n^p}$; for $p > 1$ the series converges). I argue that as the denominator grows more rapidly than the numerator, I need only look at the denominators: $$\frac{1}{n^2+5}\le\frac{1}{n^2}$$ $\frac{1}{n^2}$ is a $p$-series with $p>1$, which converges. As $\frac{\sqrt{n}+\sin(n)}{n^2+5}$ is less than that, by the comparison test, $\sum_{n=1}^{\infty} \frac{\sqrt{n}+\sin(n)}{n^2+5}$ is convergent. Is this a valid argument for this question?
This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment. Your statement that

> A co-ordinate transformation is linear map from a vector to itself with a change of basis.

is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is not a linear map from $V$ to itself. Instead, it is the map$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$and takes coordinates to coordinates. Now, to go to the heart of your confusion, it should be stressed that covectors are not members of $V$; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the dual space $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.) The dual space is the vector space of all linear functionals from $V$ into its scalar field:$$V^\ast=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$This has the same dimension as $V$, and any basis $\beta$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since it is a different basis to $\beta$, it is not surprising that the corresponding representation map is different. To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map. As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow.
Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$ If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors. To get the transformation rule for covectors between two bases, you need to string two of these together:$$\left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n,$$which is very different to the one for vectors, (1). Still think that vectors and covectors are the same thing?

Addendum

Let me, finally, address another misconception in your question:

> An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined.

Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simply the action of the former on the latter:$$\langle\varphi,v\rangle=\varphi(v).$$This bilinear form is always guaranteed and presupposes strictly less structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation. Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one.
Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional$$\begin{align}i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\w&\mapsto \langle v,w\rangle_\text{I.P.}.\end{align}$$By construction, both bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $v\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $i^{-1}(v)\in V$ and $w\in V$. That use of language is perfectly justified.

Addendum 2, on your question about the gradient.

I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two bases, the representation maps give the forms $r_{\beta^*}(f_\beta)=r_{\gamma^*}(f_\gamma)\in V^*$, and the two are equal because of the transformation laws.) However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation$$df=\nabla f\cdot dx$$does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$. To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that$$T(x)=T(x_0)+dT_{x_0}(x-x_0)+O(|x-x_0|^2),$$if it exists.
The gradient is exactly this map; it was born as a linear functional, whose coordinates over any basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,$$df=\sum_j \frac{\partial f}{\partial x_j}d x_j,$$is satisfied. To make things easier to understand for undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$.

Addendum 3.

OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread:

> the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened.

I will also address, then, this question: given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different? The main reason for this is well addressed by Christoph in his answer, but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says what the object in question varies with or against. In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix.
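A small numerical illustration of these transformation laws (my own sketch, using an arbitrary invertible matrix as the change of basis): vector components transform through $P^{-1}$, covector components through $P^{T}$, and the pairing $\varphi_j v^j$ comes out the same in both bases.

```python
import numpy as np

# Change-of-basis matrix P: its columns are the new basis vectors
# expressed in the old basis. Any invertible matrix will do.
P = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert abs(np.linalg.det(P)) > 1e-12  # genuinely a basis

v_old = np.array([1.0, 2.0, 3.0])     # vector components in the old basis
f_old = np.array([0.5, -1.0, 2.0])    # covector components in the old dual basis

# Vector components transform "against" the basis (contravariantly)...
v_new = np.linalg.inv(P) @ v_old
# ...while covector components transform "with" it (covariantly).
f_new = P.T @ f_old

# The pairing <f, v> = f_j v^j is basis-independent.
assert np.isclose(f_old @ v_old, f_new @ v_new)
```

The invariance of the pairing is exactly the cancellation $f^{T} P P^{-1} v = f^{T} v$, which is why the two transformation rules must be inverse-transposes of each other.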
One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$. In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces. However, as is the case anywhere in mathematics where two fully dual spaces are considered (example, example, example, example, example), one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages:

1. Anything one proves for one set of objects has a dual fact which is automatically proved. Therefore, one need only ever prove one version of the statement.
2. When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects.

However, since the content of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state some version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible. However, this dual version is not the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content. Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same.
This is why we call them dual, instead of simply saying that they're the same! As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually-related, but also different. This is made precise in the statement that there is no natural isomorphism between a vector space and its dual, which is phrased, and proved in, the language of category theory. The notion of 'natural' isomorphism is tricky, but it would imply the following: For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints. That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\rightarrow W$, you would want the diagram to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$. This is provably not possible to do consistently. The reason for it is that if $V=W$ and $T$ is an isomorphism, then $T$ and $T^*$ are different, but for a simple counter-example you can just take any real multiple of the identity as $T$. This is precisely the formal statement of the intuition in garyp's great answer. In apples-and-pears language, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified. I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that just because dual objects are equivalent it doesn't mean they are the same. This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different.
Trisection of an angle

The problem of dividing an angle $\phi$ into three equal parts; one of the classical problems of Antiquity on ruler-and-compass construction. The solution of the problem of trisecting an angle reduces to solving the cubic equation $4x^3-3x-\cos\phi=0$, where $x=\cos(\phi/3)$, which, in general, is not solvable by quadratic radicals. Thus, the problem of trisecting an angle cannot be solved by means of ruler and compass, as was proved in 1837 by P. Wantzel. However, such a construction is possible for angles $m\cdot 90^\circ/2^n$, where $n,m$ are integers, as well as by using other means and instruments of construction (for example, the Dinostratus quadratrix or the conchoid).

References

[1] Yu.I. Manin, "Ueber die Lösbarkeit von Konstruktionsaufgaben mit Zirkel und Lineal" , Enzyklopaedie der Elementarmathematik , 4. Geometrie , Deutsch. Verlag Wissenschaft. (1969) pp. 205–230 (Translated from Russian)

Comments

A remarkable result on trisection of the angles of a triangle is F. Morley's theorem (1899), stating that the three points of intersection of the adjacent trisectors of the angles of an arbitrary triangle form an equilateral triangle (cf. [a1]).

References

[a1] H.S.M. Coxeter, "Introduction to geometry" , Wiley (1961)
[a2] W.W.R. Ball, H.S.M. Coxeter, "Mathematical recreations and essays" , Dover, reprint (1987)
[a3] I. Stewart, "Galois theory" , Chapman & Hall (1973) pp. Chapt. 5

How to Cite This Entry: Trisection of an angle. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Trisection_of_an_angle&oldid=16472
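The reduction rests on the triple-angle identity $\cos\phi = 4\cos^3(\phi/3) - 3\cos(\phi/3)$, so $x = \cos(\phi/3)$ is indeed a root of the stated cubic; a quick numerical check (illustrative only):

```python
import math

# Triple-angle identity: x = cos(phi/3) satisfies 4x^3 - 3x - cos(phi) = 0.
for phi in (0.3, 1.0, 2.0):
    x = math.cos(phi / 3)
    assert abs(4 * x**3 - 3 * x - math.cos(phi)) < 1e-12

# For phi = 60 degrees the equation becomes 8x^3 - 6x - 1 = 0; Wantzel's
# argument shows this cubic has no constructible root, so cos(20 degrees)
# cannot be built with ruler and compass.
x = math.cos(math.radians(20))
assert abs(8 * x**3 - 6 * x - 1) < 1e-12
```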
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the average sensitivity of $f$ because of the following proposition:

Proposition 27 For $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$.

Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}

The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its edge boundary; from Fact 14 we deduce:

Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
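These values are easy to confirm by brute force for small $n$; the following Python sketch (not part of the text) computes $\mathbf{I}[f]$ as the average sensitivity from Proposition 27:

```python
from itertools import product

def total_influence(f, n):
    # I[f] = E[sens_f(x)]: average number of pivotal coordinates (Proposition 27).
    total = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] = -y[i]                 # flip coordinate i
            if f(x) != f(tuple(y)):      # coordinate i is pivotal at x
                total += 1
    return total / 2**n

parity = lambda x: x[0] * x[1] * x[2]                  # chi_[3]
and3 = lambda x: 1 if all(b == 1 for b in x) else -1   # AND_3 in a +/-1 convention
maj3 = lambda x: 1 if sum(x) > 0 else -1               # Maj_3

assert total_influence(parity, 3) == 3                 # every coordinate always pivotal
assert total_influence(and3, 3) == 3 * 2**(1 - 3)      # n * 2^(1-n) = 0.75
assert total_influence(maj3, 3) == 1.5
```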
By virtue of Proposition 20 we have another interpretation for the total influence of monotone functions: Proposition 30 If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone, then $\mathbf{I}[f] = \sum_{i=1}^n \widehat{f}(i)$. This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice: Proposition 31 Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes that agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$ Rousseau [Rou62] suggested that the ideal voting rule is one that maximizes the number of votes agreeing with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32 The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$.
Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$ Let's now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition: Definition 33 The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \] Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce that $\mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\|\nabla f({\boldsymbol{x}})\|_2^2]$. An alternative analytic definition involves introducing the Laplacian: Definition 35 The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following: $\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$. We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$, the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence: Theorem 37 For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \] Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average "height", or degree, of its Fourier weight. Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the Poincaré inequality. Poincaré Inequality For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$. Equality holds in the Poincaré inequality if and only if all of $f$'s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$. For boolean-valued $f$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or (edge-)expansion bound, for the Hamming cube.
If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets. For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$: This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”.
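For small $n$ these identities can be verified numerically. The following sketch (illustrative Python, brute force over all $2^n$ points; helper names are mine) computes the Fourier coefficients of a function, evaluates Theorem 37's formula $\mathbf{I}[f]=\sum_S |S|\,\widehat{f}(S)^2$, and checks the Poincaré inequality for $\mathrm{Maj}_3$.

```python
import math
from itertools import product, combinations

def fourier_coeffs(f, n):
    """Brute-force Fourier coefficients: f_hat(S) = E[f(x) * chi_S(x)]."""
    pts = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * math.prod(x[i] for i in S) for x in pts) / len(pts)
            for k in range(n + 1) for S in combinations(range(n), k)}

def total_influence(f, n):
    """I[f] = sum over S of |S| * f_hat(S)^2  (Theorem 37)."""
    return sum(len(S) * c**2 for S, c in fourier_coeffs(f, n).items())

def variance(f, n):
    """Var[f] = sum over nonempty S of f_hat(S)^2."""
    return sum(c**2 for S, c in fourier_coeffs(f, n).items() if S)

maj3 = lambda x: 1 if sum(x) > 0 else -1
```

Here $\mathrm{Maj}_3 = \tfrac12(x_1+x_2+x_3) - \tfrac12 x_1x_2x_3$, so $\widehat{f}(\{i\}) = 1/2$ and $\widehat{f}([3]) = -1/2$, giving $\mathbf{I}[f] = 3\cdot\tfrac14 + 3\cdot\tfrac14 = \tfrac32$ and $\mathop{\bf Var}[f] = 1 \leq \tfrac32$, consistent with the Poincaré inequality.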
Question: A 0.76 kg block is shot horizontally from a spring, as in the example above, and travels 0.516 m up a long frictionless ramp before coming to rest and sliding back down. If the ramp makes an angle of 45.0{eq}^{\circ} {/eq} with respect to the horizontal, and the spring originally was compressed by 0.16 m, find the spring constant (in N/m). Energy Conservation Principle in a Spring-Block System: The loss in elastic potential energy of the spring is equal to the gain in gravitational potential energy of the block, provided the change in kinetic energy is zero. Answer and Explanation: The elastic potential energy of a spring of constant k compressed by x is {eq}\displaystyle \frac{1}{2}kx^{2} {/eq}. Given: Mass of block (m) = 0.76 kg. Distance traveled by the block along the ramp (d) = 0.516 m. Angle of inclination {eq}\displaystyle (\theta)= 45.0^{\circ} {/eq}. Compression of the spring (x) = 0.16 m. Acceleration due to gravity (g) = 9.8 {eq}\displaystyle \frac{m}{s^{2}} {/eq}. Now, according to the energy conservation principle, the loss in elastic potential energy of the spring equals the gain in gravitational potential energy of the block: {eq}\displaystyle \frac{1}{2}kx^{2}=mgh {/eq} -------(1) Here h is the vertical height reached by the block: {eq}\displaystyle h = d \sin 45.0^{\circ} = 0.516\times 0.7071 \approx 0.3649\ \mathrm{m} {/eq} Now from equation (1): {eq}\displaystyle kx^{2}=2mgh {/eq} {eq}\displaystyle k=\frac{2mgh}{x^{2}} {/eq} {eq}\displaystyle k= \frac{2\times 0.76\times 9.8\times 0.3649}{(0.16)^{2}} = \frac{5.435}{0.0256} \approx \boxed{212.3\ \mathrm{N/m}} {/eq}
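The arithmetic can be reproduced in a few lines. A minimal sketch, using $\sin 45.0^{\circ} \approx 0.7071$ rather than the rounded value $0.7$, which gives $k \approx 212$ N/m:

```python
import math

# Energy conservation: (1/2) k x^2 = m g d sin(theta)  =>  k = 2 m g d sin(theta) / x^2
m = 0.76                    # block mass, kg
d = 0.516                   # distance travelled along the ramp, m
theta = math.radians(45.0)  # ramp angle
x = 0.16                    # spring compression, m
g = 9.8                     # gravitational acceleration, m/s^2

h = d * math.sin(theta)     # vertical rise of the block, ~0.3649 m
k = 2 * m * g * h / x**2    # spring constant, ~212 N/m
```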
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$ where $$h\rightarrow0$$ Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a... @Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well @Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0 \pmod{3}$. However, you have two cases to consider. The first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (in which case what does $\left(\frac{6}{p}\right)$ equal?) and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals 1. Also, probably something useful for congruences, if you didn't already know: if $a_1\equiv b_1 \pmod{p}$ and $a_2\equiv b_2 \pmod{p}$, then $a_1a_2\equiv b_1b_2 \pmod{p}$. Is there any book or article that explains the motivations of the definitions of group, ring, field, ideal, etc. of abstract algebra and/or gives a geometric or visual representation of Galois theory? Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician. == Life and work == Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760, about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son... I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc.
Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton. hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix, but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is $(24x_2 + 5\cos(x_1)\,x_2)/|x_2|$. The fixed point is $x_1=0$, $x_2=0$. Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,du$. Why does it matter if we put the constant $1/4$ in front of the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan(u/2)+C$. Or am I overlooking something? Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction. Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \mathrm{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.) Ignore my question.
I'm coming to the realization that it's just not working how I would've hoped, so I'll just go with what I had before.
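As a quick sanity check of the antiderivative quoted above: the sketch below (Python, standard library only) numerically verifies that $\frac{d}{du}\bigl[\frac12\arctan(u/2)\bigr] = \frac{1/4}{1+(u/2)^2}$ at a few sample points, so pulling the constant $1/4$ in front of the integral changes nothing about the result.

```python
import math

# F is the claimed antiderivative, f the integrand (with the 1/4 kept inside)
F = lambda u: 0.5 * math.atan(u / 2)
f = lambda u: 0.25 / (1 + (u / 2) ** 2)

h = 1e-6  # step for a central finite difference
for u in (-3.0, 0.0, 1.5, 10.0):
    deriv = (F(u + h) - F(u - h)) / (2 * h)
    assert abs(deriv - f(u)) < 1e-8  # F'(u) matches f(u)
```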
In set-builder notation, it's just $E\cup \{vw\mid v\in\Sigma^*\text{ and }w\in H\}$. So, why is this the same thing as "Languages that can be recognized by the last $k$ characters of the string" One direction is fairly simple. If $L$ can be recognized by looking at the last $k$ characters, then there must be some set $H$ of length-$k$ strings such that, if the last $k$ characters form a string in $H$, you accept the string, and if they form a string not in $H$, you reject. Furthermore, there may be some strings that you accept even though they have fewer than $k$ characters: this set is $E$. The other direction is a little more complex. Suppose that $L = E\cup\Sigma^*H$ for some finite $E$ and $H$. We need to show that there is some $k$ and sets $X\subseteq\Sigma^{<k}$ and $Y\subseteq \Sigma^k$ such that $L=X \cup\Sigma^*Y$. Note that this is different from the hypothesis that $L=E\cup\Sigma^*H$ because it's more specific: $E$ and $H$ can be any finite sets of strings, whereas $X$ contains only strings of length less than $k$ and $Y$ contains only strings of length exactly $k$. The solution is to take $k$ to be whichever is the larger of: the length of the longest string in $H$; one plus the length of the longest string in $E$. (Since both $E$ and $H$ are finite, each has a longest string, or several longest strings of the same length.) We start by taking $X=E$, which we're allowed to do because every string in $E$ has length strictly less than $k$. Now, consider some string $h=h_1\dots h_\ell\in H$. We have $\ell\leq k$ by the choice of $k$. If $\ell=k$, we're happy: just add $h$ to $Y$. Now suppose that $\ell<k$. If a string ends with $h$, then either it has length less than $k$ or it has length at least $k$ and its last $k$ characters are $a_1\dots a_{k-\ell}h_1\dots h_\ell$ for some $a_1, \dots, a_{k-\ell}\in\Sigma$. 
Therefore, we add to $X$ all strings of length less than $k$ that end with $h$, and we add to $Y$ all strings of length exactly $k$ that end with $h$. And we repeat this for every $h\in H$. I've explained how to construct $X$ and $Y$. I won't write out their formal definitions as sets because those definitions are so full of notation that they're not enlightening.
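The construction of $X$ and $Y$ described above can be written out directly. The sketch below (illustrative Python; the names `build_XY`, `in_L`, `in_XY` are mine) pads each $h \in H$ on the left with every possible prefix, then checks over all short strings that $X \cup \Sigma^* Y$ defines the same language as $E \cup \Sigma^* H$.

```python
from itertools import product

def build_XY(E, H, alphabet):
    """Turn L = E ∪ Σ*H (E, H finite) into X ∪ Σ*Y with X ⊆ Σ^{<k}, Y ⊆ Σ^k."""
    k = max(max((len(h) for h in H), default=0),
            1 + max((len(e) for e in E), default=0))
    X, Y = set(E), set()
    for h in H:
        # strings of length < k ending in h belong to X ...
        for pad in range(k - len(h)):
            for p in product(alphabet, repeat=pad):
                X.add(''.join(p) + h)
        # ... and strings of length exactly k ending in h belong to Y
        for p in product(alphabet, repeat=k - len(h)):
            Y.add(''.join(p) + h)
    return k, X, Y

E, H, alphabet = {'aaa'}, {'b'}, 'ab'
k, X, Y = build_XY(E, H, alphabet)

def in_L(w):   # membership via the original description
    return w in E or any(w.endswith(h) for h in H)

def in_XY(w):  # membership via the normalized sets
    return w in X or (len(w) >= k and w[-k:] in Y)

words = [''.join(p) for n in range(6) for p in product(alphabet, repeat=n)]
```

Here $k = \max(1,\, 1+3) = 4$, so every string in $Y$ has length exactly $4$ and every string in $X$ has length less than $4$, as required.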
The asset-or-nothing European option pays at t = T the value of the stock when at time T that value is greater than or equal to the exercise price E, and nothing if the value of the stock is below E. So, in mathematical terms: $$V(S,T) = \left\{ \begin{array}{lr} S & \text{if}\quad S \ge E,\\ 0 & \text{if}\quad S < E. \end{array} \right. $$ The cash-or-nothing European option pays at t = T a fixed amount B when at time T the value of the stock is greater than or equal to the exercise price E, and nothing if the value of the stock is below E. So, in mathematical terms: $$V(S,T) = \left\{ \begin{array}{lr} B & \text{if}\quad S \ge E,\\ 0 & \text{if}\quad S < E. \end{array} \right. $$ We know that the formulas for these options are the following: \begin{align} &\text{Cash-or-nothing call:}\quad c_{cn}=Be^{-r(T-t)}N(d_2),\\ &\text{Cash-or-nothing put:}\quad p_{cn}=Be^{-r(T-t)}N(-d_2),\\ &\text{Asset-or-nothing call:}\quad c_{an}=Se^{-q(T-t)}N(d_1),\\ &\text{Asset-or-nothing put:}\quad p_{an}=Se^{-q(T-t)}N(-d_1),\\ \end{align} where $$ d_1=\dfrac{\ln(S/E)+(r-q+\sigma^2/2)(T-t)}{\sigma\sqrt{T-t}} $$ and $$ d_2=d_1-\sigma\sqrt{T-t}. $$ We also know that we are supposed to follow the derivation of Black-Scholes in order to derive these formulas, but we are having trouble understanding how it differs from the derivation of Black-Scholes itself.
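One helpful observation is that the binary payoffs are the two pieces into which a vanilla call's payoff splits: $\max(S-E,0) = S\cdot\mathbf{1}_{S\ge E} - E\cdot\mathbf{1}_{S\ge E}$. A minimal sketch of the four pricing formulas quoted above (Python, standard library only; parameter names are mine, with $\tau = T-t$):

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binary_prices(S, E, r, q, sigma, tau, B=1.0):
    """Prices of the four European binary options; tau = T - t."""
    d1 = (log(S / E) + (r - q + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return {
        'cash_call':  B * exp(-r * tau) * N(d2),
        'cash_put':   B * exp(-r * tau) * N(-d2),
        'asset_call': S * exp(-q * tau) * N(d1),
        'asset_put':  S * exp(-q * tau) * N(-d1),
    }
```

Since $N(d_2)+N(-d_2)=1$, a cash-or-nothing call plus put must price to $Be^{-r\tau}$ (and the asset-or-nothing pair to $Se^{-q\tau}$), which is a useful consistency check: one of the two options always pays.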
I have a set of values $x_i,\ i=1, \dots ,N$ of which I calculate the median $M$. I was wondering how I could calculate the error on this estimate. On the net I found that it can be calculated as $1.2533\frac{\sigma}{\sqrt{N}}$, where $\sigma$ is the standard deviation, but I did not find references for it, so I do not understand where it comes from. Could someone explain it to me? I was thinking that I could use the bootstrap to estimate the error, but I would like to avoid that because it would slow down my analysis a lot. I was also thinking of calculating the error on the median in this way: $$\delta M = \sqrt{ \frac{\sum_i(x_i - M)^2}{N-1} } $$ Does that make sense?
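For context, the factor $1.2533 \approx \sqrt{\pi/2}$ is the asymptotic standard error of the sample median for normally distributed data: $\mathrm{Var}(M) \approx 1/(4Nf(m)^2)$, where $f$ is the density at the true median $m$, and for a Gaussian $f(m) = 1/(\sigma\sqrt{2\pi})$. A quick simulation sketch (Python, standard library only; note the normality assumption is essential):

```python
import math
import random
import statistics

random.seed(0)
N, reps, sigma = 501, 2000, 1.0

# Spread of the sample median across many synthetic normal data sets
medians = [statistics.median(random.gauss(0.0, sigma) for _ in range(N))
           for _ in range(reps)]
empirical = statistics.stdev(medians)

# Asymptotic formula: sqrt(pi/2) * sigma / sqrt(N)  ≈  1.2533 * sigma / sqrt(N)
theoretical = math.sqrt(math.pi / 2.0) * sigma / math.sqrt(N)
```

By contrast, the $\delta M$ proposed in the question is essentially the sample standard deviation, i.e. the spread of the data themselves; as an error on the median it misses the $1/\sqrt{N}$ factor entirely.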
Topological susceptibility and chiral condensate with maximally twisted mass fermions by Dr Elena García Ramos (DESY) Europe/Rome Aula Conversi (LNF), Via Enrico Fermi, 40, 00044 Frascati (Roma) Description We study the 'spectral projector' method for the computation of the chiral condensate and the topological susceptibility, using maximally twisted mass Wilson fermions. In particular, we perform a study of the quark mass dependence of the chiral condensate $\Sigma$ and topological susceptibility $\chi_{top}$ in the range $270\;\mathrm{MeV} < m_{\pi} < 500\;\mathrm{MeV}$ and compare our data with the analytical predictions. In addition, we compute $\chi_{top}$ in the quenched approximation, where we matched the lattice spacing to the dynamical simulations. Using the kaon, $\eta$ and $\eta'$ meson masses computed on the dynamical ensembles, we then perform a preliminary test of the Witten-Veneziano relation.
3.6. Spectral solver load definition Purpose Defines the load applied to the volume element (VE). The load file is a plain text file with default file extension *.load. A load file may contain a number of consecutive load cases; each load case corresponds to one line in the load file. Valid keywords All valid keywords are given in the following table. Keywords are case-insensitive. Overview Keyword | Meaning | Arguments | Comments Fdot, dotF | deformation gradient rate ($\dot{\bar{\tnsr F}}$) | 9 real numbers or asterisks | instead of L or F; component-wise exclusive with P F | deformation gradient aim ($\bar{\tnsr F}$) | 9 real numbers or asterisks | instead of L or Fdot; component-wise exclusive with P L | velocity gradient ($\bar{\tnsr L}$) | 9 real numbers or asterisks | instead of Fdot or F; component-wise exclusive with P P | Piola–Kirchhoff stress ($\bar{\tnsr P}$) | 9 real numbers or asterisks | component-wise exclusive with Fdot, F, and L t, time, delta | total time increment | 1 real number | incs, N | number of increments; linear time scaling | 1 integer | instead of logIncs logIncs, logIncrements, logSteps | number of increments; logarithmic time scaling | 1 integer | instead of incs freq, frequency, outputfreq | frequency of results output | 1 integer | default value is 1, i.e. every increment is written out euler | rotation of load case frame by z-x-z Euler angles | optional unit keyword; 3 real values | unit keywords: deg, degree (default), radian; instead of rot rot, rotation | rotation of load case frame by rotation matrix | 9 real values | instead of euler dropguessing, guessreset | reset guessing | none | r, restart, restartwrite | frequency of saving restart information | 1 integer | default value of 0 disables writing of restart information Deformation gradient rate (Fdot) Specifies the rate of deformation gradient evolution. See the example "Mixed Boundary Conditions" for more information about applying a deformation gradient rate in combination with stress boundary conditions.
Deformation gradient aim (F) Specifies the deformation gradient at the end of the load case. A deformation gradient rate between the initial and final deformation gradient is linearly interpolated. See the example "Mixed Boundary Conditions" for more information about applying a deformation gradient rate in combination with stress boundary conditions. Velocity gradient (L) Specifies the velocity gradient applying deformation to the VE. See the example "Mixed Boundary Conditions" for more information about applying a velocity gradient in combination with stress boundary conditions. Piola–Kirchhoff stress (P) Specifies the stress boundary conditions. See the example "Mixed Boundary Conditions" for more information about using stress boundary conditions in combination with deformation boundary conditions. Total time increment (t) Specifies the total increment $\Delta t$ of time in seconds for the load case. Thus, the load case runs from $t_0$ to $t_0 + \Delta t$. With linear time stepping (keyword incs), the time at increment $n$ out of the total $N$ is given by \[ t(n) = t_0 + \frac{n}{N} \Delta t \] If time scaling is switched to logarithmic (keyword logIncs), then the time of increment $n$ is given by \[ t(n) =\begin{cases}2^{n-N} \Delta t & \text{in the first load case};\\ t_0 \left(\displaystyle\frac{t_0 + \Delta t}{t_0}\right)^{n/N} & \text{in subsequent load cases}.\end{cases}\] Number of increments ([log]incs) Specifies the number $N$ of increments the load case is subdivided into. If prefixed by »log«, a logarithmic time scaling is used. Otherwise, linear time scaling is used. See total time increment for details on the time step calculation. Frequency of results output (freq) Specifies the frequency at which results are written to the output file SolverJobName.spectralOut. By default, the results of every increment are written out, i.e. freq is set to 1.
Rotate load frame (euler, rot) The rotation of the load frame allows loading directions that are not aligned with the directions of the periodic expansion of the VE. z-x-z Euler angles (euler) Specifies the rotation between load frame and laboratory frame as z-x-z Euler angles. By default, or when using the keywords deg or degree, angles are given in degrees; the keyword radian switches to radians. Rotation matrix (rot) Specifies the rotation between load frame and laboratory frame as a rotation matrix in $SO(3)$. Rotation matrix requirements A rotation matrix must be orthogonal ($R^{\mathsf T} R = I$) and its determinant must be $+1.0$ (with some allowed numerical tolerances). The following three basic rotation matrices rotate three-dimensional vectors about the $x$, $y$, and $z$ axis, respectively: \begin{align}R_x(\theta) &=\begin{bmatrix}1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta\\ \end{bmatrix}\\[6pt]R_y(\theta) &=\begin{bmatrix}\cos \theta & 0 & \sin \theta\\ 0 & 1 & 0\\ -\sin \theta & 0 & \cos \theta\\ \end{bmatrix}\\[6pt]R_z(\theta) &=\begin{bmatrix}\cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1\\ \end{bmatrix}\end{align} Reset guessing (dropguessing) Turns off guessing along the former trajectory at the start of a consecutive load case; the calculation then starts with a homogeneous guess. By default, guessing is on for each load case except the first one, where only a homogeneous guess is possible. Frequency of saving restart information (restart) Specifies the frequency at which restart information is written. By default, restart information is never saved. Examples Basic load case (Fdot, L, time, steps) The most simple load case is defined by prescribing a deformation $\bar{\tnsr F}$ resulting e.g. from a constant technical deformation rate $\dot{\bar{\tnsr F}}$, a loading time $t$, and the number of steps $n$ in which the problem should be solved.
Fdot followed by 9 floating point values lists the deformation rate tensor components in the order 11, 12, 13, 21, 22, 23, 31, 32, 33. t followed by a positive floating point value specifies the total time of the load case. incs followed by an integer value larger than one indicates the number of steps to use. Thus, a load case describing tension in the 11 and compression in the 22 direction is given by: Fdot 1.0e-4 0.0 0.0 0.0 -1.0e-4 0.0 0.0 0.0 0.0 t 10.0 incs 10 Each increment has a duration of 1.0 second because the total deformation time of 10.0 seconds is divided into 10 increments. The resulting deformation in the 11-direction is $\approx 10^{-3}$ and in the 22-direction $\approx -10^{-3}$. Instead of prescribing a constant technical strain rate by defining $\dot{\bar{\tnsr F}}$, it is possible to prescribe a velocity gradient $\bar{\tnsr L}$ to get a constant true strain rate. The velocity gradient is indicated by the keyword L and is also followed by 9 floating point values: L 1.0e-4 0.0 0.0 0.0 -1.0e-4 0.0 0.0 0.0 0.0 t 10.0 incs 10 Mixed boundary conditions (P, Fdot, L) The sample load cases above force the VE to significantly change its volume at large deformations. Except for special cases (simple shear, rotation, etc.), a load case prescribing all components of $\bar{\tnsr F}$ will lead to a non-volume-preserving load. Therefore, at least one component of the 3x3 tensor $\bar{\tnsr F}$ should be left undefined, and a stress must be prescribed for those components to get a unique solution. To leave a component of the deformation undefined, use an asterisk at the corresponding position. In the following example, a deformation is prescribed in the 11 direction, and the deformation in the 22 direction will be adjusted to a value such that the average Piola–Kirchhoff stress (keyword P) in that direction is 0.0. All other components have 0 deformation (potentially resulting in stress).
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 P * * * * 0.0 * * * * t 10.0 incs 10 Mixed boundary conditions need to fulfill the following restrictions: 1. Stress and deformation BCs are mutually exclusive. 2. The stress boundary conditions must not allow for rotation, i.e. opposite off-diagonal elements cannot both carry stress components. 3. If a velocity gradient is prescribed, each row of the tensors must contain either only stress or only velocity gradient components. Load cases not possible due to restriction 1: Fdot 1.0e-4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 L 1.0e-4 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 Fdot * * * * * * * * * P * * * * * * * * * t 10.0 incs 40 L * * * * * * * * * P * * * * * * * * * t 10.0 incs 40 Fdot 1.0e-4 * * * * * * * * P 0.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 0.0 t 10.0 incs 40 Fdot 0.0 0.0 * 0.0 0.0 0.0 0.0 0.0 0.0 P * * * 0.0 * * * * * t 10.0 incs 40 L 0.0 0.0 0.0 * * * * * * P * * * * * * 0.0 0.0 0.0 t 10.0 incs 40 Load cases not possible due to restriction 2: Fdot 1.0e-4 * 0.0 * 0.0 0.0 0.0 0.0 0.0 P * 0.0 * 0.0 * * * * * t 10.0 incs 40 L * * * * * * 0.0 0.0 0.0 P 0.0 0.0 0.0 0.0 0.0 * * * * t 10.0 incs 40 Fdot 1.0e-4 0.0 * 0.0 0.0 0.0 * 0.0 0.0 P * * 0.0 * * * 0.0 * * t 10.0 incs 40 Fdot 1.0e-4 * * * 0.0 0.0 * 0.0 0.0 P * 0.0 0.0 0.0 * * 0.0 * * t 10.0 incs 40 Load cases not possible due to restriction 3: L 1.0e-4 * 0.0 0.0 * 0.0 0.0 0.0 0.0 P * 0.0 * * 0.0 * * * * t 10.0 incs 40 L 1.0e-4 * 0.0 0.0 0.0 0.0 0.0 0.0 0.0 P * 0.0 * * * * * * * t 10.0 incs 40 L * 1.0 * 0.0 0.0 0.0 0.0 0.0 0.0 P 1.0 * 0.0 * * * * * * t 10.0 incs 40 L * 1.0 * 0.0 0.0 0.0 * * * P 1.0 * 0.0 * * * 0.0 0.0 0.0 t 10.0 incs 40 The following load cases do not violate any of the restrictions and are allowed: Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 P * * * * 0.0 * * * * t 10.0 incs 40 Fdot * 0.0 0.0 0.0 10.0e-4 0.0 0.0 0.0 0.0 P 0.0 * * * * * * * * t 10.0 incs 40 Change of loading direction
(dropguessing) Each line in a load file specifies one load case; the load cases are applied to the VE one after another. In the following example, uniaxial tension in the 11 direction is applied at the same rate with increasing time increments. Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 3.0 incs 2 Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 5.0 incs 2 To increase the performance of the iterative scheme, the predicted deformation at the beginning of each new load case (and also during subsequent increments of the same load case) follows the rate of the last increment. However, when changing the loading direction, this strategy can lead to longer calculation times or even prevent convergence. The keyword dropguessing disables the guessing at the beginning of a new load case, as shown in the following example where the deformation direction changes from the 11 to the 22 component: Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 dropguessing Naturally, no guessing is possible for the first increment of the first load case. Rotation of load frame (rotation, euler) The rotation of the load frame allows loading the VE in arbitrary directions, i.e. not necessarily along the sample x-y-z coordinate system. Equivalent load cases (rotation by 180°) By rotating the VE by 180°, the sample x-y-z coordinate axes are aligned with the lab x-y-z coordinate axes; thus the load Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 can also be applied using the following load cases (rotation of 180° around z) when using an isotropic material. For crystalline material, the orientation definition also needs to be rotated, i.e.
using the keyword rotation in the texture part of material.config.

Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 rot -1.0 0.0 0.0 0.0 -1.0 0.0 0.0 0.0 1.0
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 euler 180.0 0.0 0.0
Fdot * 0.0 0.0 0.0 1.0e-4 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 euler radian 3.14159265359 0.0 0.0

Rotation by 45°

The following load cases apply the same load, but for the second one the load and lab frames are rotated relative to each other by 45°.

Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2
Fdot 1.0e-4 0.0 0.0 0.0 * 0.0 0.0 0.0 0.0 p * * * * 0.0 * * * * t 2.0 incs 2 rot 0.70710678 -0.70710678 0.0 0.70710678 0.70710678 0.0 0.0 0.0 1.0

The shape change for the original loading direction and for the rotated one are shown in figure 1 in blue and red, respectively.

(a) not rotated (b) rotated by 45°
Figure 1: Shape changes of VE for unrotated and rotated load frame

Rotating the load case about the z-axis in 10° increments
Figure 2: Shape changes of VE under rotating load frame
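The nine values after the rot keyword form a 3×3 rotation matrix, read row by row (as in the 45° example above). As a convenience sketch (my own helper, not part of the load file syntax; plain Python assumed), the values for an arbitrary rotation about the z-axis can be generated like this:

```python
import math

def z_rotation(angle_deg):
    """Row-major 3x3 rotation matrix about the z-axis,
    in the same ordering as the nine values after `rot`."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [c, -s, 0.0,
            s,  c, 0.0,
            0.0, 0.0, 1.0]

# 45 degrees reproduces the values from the load case above
print(" ".join(f"{v:.8f}" for v in z_rotation(45.0)))
```

For 45° this prints 0.70710678 -0.70710678 0.00000000 0.70710678 0.70710678 0.00000000 0.00000000 0.00000000 1.00000000, matching the rot line in the example.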
Nonheredity of Separability on Topological Subspaces

Recall from the Hereditary Properties of Topological Spaces page that if $(X, \tau)$ is a topological space then a property of $X$ is said to be hereditary if for all subsets $A \subseteq X$ we have that the topological subspace $(A, \tau_A)$ also has that property (where $\tau_A$ is the subspace topology on $A$). On the Heredity of First Countability on Topological Subspaces and Heredity of Second Countability on Topological Subspaces pages we saw that if $(X, \tau)$ is a first (or second) countable topological space then for any subset $A \subseteq X$ we have that $(A, \tau_A)$ is also first (or second) countable. In other words, first and second countability are hereditary. On the Heredity of the Hausdorff Property on Topological Subspaces page we also saw that the Hausdorff property is hereditary.

We will now look at an example of a nonhereditary property, namely the separability of a topological space. Recall from the Separable Topological Spaces page that a topological space $(X, \tau)$ is said to be separable if it contains a countable dense subset, say $A$, and that $A$ is said to be dense if for every nonempty open set $U \in \tau$ we have that $A \cap U \neq \emptyset$. We will show that separability is not hereditary by providing a counterexample.

Recall from The Lower and Upper Limit Topologies on the Real Numbers page that the lower limit topology (or Sorgenfrey line) on $\mathbb{R}$ is the topological space generated by the basis: $$\{ [a, b) : a, b \in \mathbb{R}, \, a < b \} \quad (1)$$ Now consider the product of this topological space with itself.
We will have the whole set $\mathbb{R}^2$ and the open sets will be generated by the basis of half-open rectangles: $$\{ [a, b) \times [c, d) : a < b, \, c < d \} \quad (2)$$ In other words, if $\tau$ is the topology generated by sets of the form described above, the open sets of $(\mathbb{R}^2, \tau)$ will be formed by such half-open rectangles and their unions. Furthermore, we should note that $(\mathbb{R}^2, \tau)$ is a separable topological space because the subset $\mathbb{Q}^2 \subseteq \mathbb{R}^2$ is countable and dense.

Now consider the line $y = -x$ in $\mathbb{R}^2$. This line can be nicely described in set notation as $L = \{ (x, -x) : x \in \mathbb{R} \}$. So, the subspace topology on $L$ will be: $$\tau_L = \{ L \cap U : U \in \tau \} \quad (3)$$ For each point $(x, -x) \in L$ we note that the open set $U = [x, x+\epsilon) \times [-x, -x+\epsilon) \in \tau$ where $\epsilon > 0$ intersects $L$ at only $(x, -x)$. So in fact every singleton in the topological space $(L, \tau_L)$ is open with respect to the subspace topology $\tau_L$, and we hence see that $\tau_L$ is actually the discrete topology on $L$! So every singleton set $\{ (x, -x) \} \subset L$ where $x \in \mathbb{R}$ is an open set.

Any dense subset $A$ of $L$ must be such that $A \cap \{ (x, -x) \} \neq \emptyset$ for all $x \in \mathbb{R}$, i.e., $(x, -x) \in A$ for all $x \in \mathbb{R}$. So the only dense subset of $L$ is $L$ itself, and since the set of real numbers is uncountable, $L$ has no countable dense subset. Therefore $(L, \tau_L)$ is not a separable topological space, and so separability is not hereditary.
The Cauchy-Riemann Theorem Examples 1

Recall from The Cauchy-Riemann Theorem page that if $A \subseteq \mathbb{C}$ is open, $f : A \to \mathbb{C}$ with $f = u + iv$, and $z_0 \in A$ then $f$ is analytic at $z_0$ if and only if there exists a neighbourhood $\mathcal N$ of $z_0$ with the following properties:

1) $\displaystyle{\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}}$ all exist and are continuous on $\mathcal N$.
2) The Cauchy-Riemann equations $\displaystyle{\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}}$ and $\displaystyle{\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}}$ hold on $\mathcal N$.

We also stated an important result that can be proved using the Cauchy-Riemann theorem called the complex Inverse Function theorem, which says that if $f'(z_0) \neq 0$ then there exist open neighbourhoods $U$ of $z_0$ and $V$ of $f(z_0)$ such that $f : U \to V$ is a bijection and such that $\displaystyle{\frac{d}{dw} f^{-1}(w) = \frac{1}{f'(z)}}$ where $w = f(z)$. We will now look at some example problems in applying the Cauchy-Riemann theorem.

Example 1

Determine whether the function $f(z) = \overline{z}$ is analytic or not.

Let $f(z) = f(x + yi) = x - yi = \overline{z}$. Then $u(x, y) = x$ and $v(x, y) = -y$. The first order partial derivatives of $u$ and $v$ clearly exist and are continuous. They are: $$\frac{\partial u}{\partial x} = 1, \quad \frac{\partial u}{\partial y} = 0, \quad \frac{\partial v}{\partial x} = 0, \quad \frac{\partial v}{\partial y} = -1 \quad (1)$$ So the first condition of the Cauchy-Riemann theorem is satisfied. However, note that $\displaystyle{1 = \frac{\partial u}{\partial x} \neq \frac{\partial v}{\partial y} = -1}$ anywhere. So one of the Cauchy-Riemann equations is not satisfied anywhere, and so $f(z) = \overline{z}$ is analytic nowhere.

Example 2

Determine whether the function $f(z) = e^{z^2}$ is analytic or not using the Cauchy-Riemann theorem.

Let: $$f(z) = e^{(x+yi)^2} = e^{x^2-y^2+2xyi} = e^{x^2-y^2}\cos(2xy) + i e^{x^2-y^2}\sin(2xy) \quad (2)$$ Then $u(x, y) = e^{x^2 - y^2} \cos (2xy)$ and $v(x, y) = e^{x^2 - y^2} \sin (2xy)$. The partial derivatives of these functions exist and are continuous.
They are given by: $$\frac{\partial u}{\partial x} = e^{x^2-y^2}\left( 2x\cos(2xy) - 2y\sin(2xy) \right), \quad \frac{\partial v}{\partial y} = e^{x^2-y^2}\left( 2x\cos(2xy) - 2y\sin(2xy) \right) \quad (3)$$ So $\displaystyle{\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}}$ everywhere. Also: $$\frac{\partial u}{\partial y} = -e^{x^2-y^2}\left( 2y\cos(2xy) + 2x\sin(2xy) \right), \quad \frac{\partial v}{\partial x} = e^{x^2-y^2}\left( 2x\sin(2xy) + 2y\cos(2xy) \right) \quad (5)$$ So $\displaystyle{\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}}$ everywhere as well. Thus by the Cauchy-Riemann theorem, $f(z) = e^{z^2}$ is analytic everywhere. This should intuitively be clear since $f$ is a composition of two analytic functions.

Example 3

Prove that if $f$ is analytic at $z$ then $\displaystyle{\mid f'(z) \mid^2 = \left (\frac{\partial u}{\partial x} \right )^2 + \left ( \frac{\partial v}{\partial x} \right )^2}$ and $\displaystyle{\mid f'(z) \mid^2 = \left (\frac{\partial u}{\partial y} \right )^2 + \left ( \frac{\partial v}{\partial y} \right )^2}$.

Suppose that $f$ is analytic. Then from the proof of the Cauchy-Riemann theorem we have that: $$f'(z) = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \quad (7)$$ Therefore: $$\mid f'(z) \mid^2 = \left| \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \right|^2 \quad (8)$$ Hence: $$\mid f'(z) \mid^2 = \left( \frac{\partial u}{\partial x} \right)^2 + \left( \frac{\partial v}{\partial x} \right)^2 \quad (9)$$ The other formula can be derived by using the Cauchy-Riemann equations or by the fact that in the proof of the Cauchy-Riemann theorem we also have that: $$f'(z) = \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y} \quad (10)$$
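The two examples can also be sanity-checked numerically (a sketch of my own, not from the original page): central-difference estimates of the four partials confirm that the Cauchy-Riemann equations hold for $e^{z^2}$ and fail for $\overline{z}$.

```python
import cmath

def partials(g, x, y, h=1e-6):
    """Central-difference estimates of u_x, u_y, v_x, v_y for g(x + yi) = u + iv."""
    dx = (g(complex(x + h, y)) - g(complex(x - h, y))) / (2 * h)
    dy = (g(complex(x, y + h)) - g(complex(x, y - h))) / (2 * h)
    return dx.real, dy.real, dx.imag, dy.imag  # u_x, u_y, v_x, v_y

f = lambda z: cmath.exp(z * z)

# Cauchy-Riemann holds for e^{z^2} at a few sample points ...
for (x, y) in [(0.3, -0.7), (1.1, 0.4), (-0.5, 0.2)]:
    ux, uy, vx, vy = partials(f, x, y)
    assert abs(ux - vy) < 1e-4 and abs(uy + vx) < 1e-4

# ... but fails everywhere for conj(z): u_x = 1 while v_y = -1
ux, uy, vx, vy = partials(lambda z: z.conjugate(), 0.3, -0.7)
assert abs(ux - 1) < 1e-4 and abs(vy + 1) < 1e-4
```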
Note: While writing this answer, I discovered what seems to be a gap in the proof given in the cited lecture notes. I'll thus present a slightly modified version of the proof below, and discuss the discrepancy a bit at the end. Let's start with a quick recap, since your quote and summary of the lecture notes leave out some important bits. The formal definition of a one-way function, slightly expanded from Definition 5 in your lecture notes, is: A function (family) $f: \{0,1\}^n \to \{0,1\}^m$ is one-way if and only if it can be computed by a polynomial-time algorithm, and if there is no probabilistic polynomial-time algorithm capable of finding preimages for it with non-negligible probability. In other words, for $f$ to be one-way, there can be no probabilistic algorithm $A$ that could, given the output $y = f(x)$ for some randomly chosen input $x \in \{0,1\}^n$ and a maximum run-time polynomial in $n+m$, find an input $x'$ (possibly, but not necessarily, equal to the original input $x$) such that $f(x') = y$ with a probability that is more than a negligible function of $n$. An informal summary of this, dispensing with all the formalism of asymptotic complexity theory, would simply be that $f$ is one-way if there is no practical way, given a random output of $f$, to find an input that yields that output when given to $f$. Based on this definition, we can show that: Padding the output of $f$ with, say, a bunch of zeroes doesn't affect whether it is one-way. (By definition, the adversary will always receive a valid output, so they can just strip away the zeros and then proceed as if they were attacking the original, unpadded function.) Also, adding a bunch of extra dummy bits to the inputs of $f$, which don't affect the output, doesn't change whether $f$ is one-way.
(Since the dummy input bits don't affect the output, the adversary can choose those dummy bits any way it likes; but finding the correct values for the other, non-dummy input bits is still exactly as hard as finding a preimage for the original, unmodified function.) (These technically hold only if the amount of padding / ignored bits added is a polynomial function of the original input + output length, but that's plenty enough for our purposes: $n \mapsto 2n$ is certainly a polynomial function.) So, given an (arbitrary) one-way function $f$ with $n$-bit inputs and outputs, we can construct another one-way function $h$ with twice the input and output length like this: Let $x^*$ be the first $n$ bits of the input $x$ to $h$. Ignore the rest of the input. Compute $y^* = f(x^*)$. Prepend an arbitrary constant $n$-bit string $c$ (e.g. $c = 000...0$) to $y^*$, and output the resulting $2n$-bit string $y = c \,\|\, y^*$ as $h(x)$. Now, by construction, this function $h$ is one-way, since finding preimages for it is at least as hard as finding preimages for $f$. (Of course, the security parameters for $f$ and $h$ differ by a factor of 2, but that makes no difference asymptotically; a polynomial function of $2n$ is a polynomial function of $n$.) But also by construction, the first $n$ bits of the output of $h$ are always constant, while the remaining output bits depend only on the first $n$ bits of the input. Thus, $h(h(x)) = c \,\|\, f(c)$ for all $x$, and so finding preimages for $h(h(x))$ is trivial (since literally any input will do). Now, the construction given in the lecture notes you cite goes a little bit further, explicitly defining $h$ to yield an all-zero output whenever the first $n$ input bits are zero (and always setting the first $n$ bits of the output to zero otherwise). While not strictly necessary (we'll get constant output from $h(h(x))$ anyway), this doesn't actually harm the one-wayness of $h$ either. 
In fact, we can show that modifying $h$ so that it always outputs a constant value for a negligibly small fraction of the total input space doesn't affect its one-wayness (and that $1/2^n$ is, indeed, a negligibly small fraction as $n$ tends to infinity). However, where the lecture notes go wrong is when they try to justify this by claiming that: "A generalization of the previous theorem (fixing values in a one-way function) shows that $h$ is also a one-way function. (In short, we are only fixing the values of $\frac{2^n}{2^{2n}} = \frac1{2^n}$ of all of the possible values of $x$. Since we are only fixing a negligible fraction of the possible values of $x$, the same proof with slight modifications still applies.)" In fact, this claim is false. As a simple counterexample, consider the modified function $h'$ defined as: Split the $2n$-bit input $x$ into two $n$-bit strings $x_1$ and $x_2$. If $x_1 = c$, return $h'(x) = c \,\|\, x_2 = x$. Otherwise, return $h'(x) = c \,\|\, f(x_1)$. Clearly, $h'(x) = h(x)$ for all but a negligibly small fraction of the inputs (namely, those that begin with the $n$-bit constant string $c$). Yet $h'$ is obviously not a one-way function, since any valid output $y = h'(x)$ always begins with $c$, and so is its own preimage! Of course, this doesn't invalidate the actual claim, since the function $h$ actually constructed in the notes is in fact one-way (provided that $f$ is one-way). Still, if these notes are from a course you're studying in, you might want to mention this gap in the proof to your instructor.
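To make the construction concrete, here is a small sketch of my own (not from the notes), using SHA-256 truncated to $n$ bytes as a stand-in for a conjectured one-way function $f$. It shows that $h(h(x)) = c \,\|\, f(c)$ is the same constant for every input $x$:

```python
import hashlib

N = 16  # toy "security parameter", in bytes rather than bits

def f(x: bytes) -> bytes:
    """Stand-in for a one-way function with N-byte inputs and outputs."""
    return hashlib.sha256(x).digest()[:N]

C = bytes(N)  # the constant prefix c (all zeros)

def h(x: bytes) -> bytes:
    """2N-byte input/output: ignore the last N input bytes, prefix the output with c."""
    assert len(x) == 2 * N
    return C + f(x[:N])

# h(h(x)) = c || f(c), independent of x
expected = C + f(C)
for seed in (b"a", b"b", b"c"):
    x = hashlib.sha256(seed).digest()  # arbitrary 32-byte (= 2N) input
    assert h(h(x)) == expected
```

Finding a preimage of a given $h$-output is still as hard as inverting $f$, but finding a preimage of an $h(h(\cdot))$-output is trivial: any $2N$-byte string works.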
You could use Excel (see below) or you could solve the equation $(2)$ below numerically, e.g. using the secant method. We have a so-called uniform series of $n=60$ constant installments $m=400$. Let $i$ be the nominal annual interest rate. The interest is compounded monthly, which means that the number of compounding periods per year is $12$. Consequently, the monthly installments $m$ are compounded at the interest rate per month $i/12$. The value of $m$ in the month $k$ is equivalent to the present value $m/(1+i/12)^{k}$. Summing in $k$, from $1$ to $n$, we get a sum that should be equal to $$P=26000-\frac{26000}{4}=19500.$$ This sum is the sum of a geometric progression of $n$ terms, with ratio $1/(1+i/12)$ and first term $m/(1+i/12)$. So $$\begin{equation*}P=\sum_{k=1}^{n}\frac{m}{\left( 1+\frac{i}{12}\right) ^{k}}=\frac{m}{1+\frac{i}{12}}\frac{\left( \frac{1}{1+i/12}\right) ^{n}-1}{\frac{1}{1+i/12}-1}=m\frac{\left( 1+\frac{i}{12}\right) ^{n}-1}{\frac{i}{12}\left( 1+\frac{i}{12}\right) ^{n}}.\tag{1}\end{equation*}$$ The ratio $P/m$ is called the series present-worth factor (uniform series)$^1$. For $P=19500$, $m=400$ and $n=5\times 12=60$ we have: $$\begin{equation*}19500=400\frac{\left( 1+\frac{i}{12}\right) ^{60}-1}{\frac{i}{12}\left( 1+\frac{i}{12}\right) ^{60}}.\tag{2}\end{equation*}$$ I solved $(2)$ numerically for $i$ using SWP and got $$\begin{equation*}i\approx 0.084923\approx 8.49\%.\tag{3}\end{equation*}$$ ADDED. Computation in Excel for the principal $P=19500$ and the interest rate $i=0.084923$ computed above. I used a Portuguese version, which is why the decimal values show a comma instead of the decimal point. The column $k$ is the month ($1\le k\le 60$). The 2nd column is the amount $P_k$ still to be paid at the beginning of month $k$. The 3rd column is the interest $P_ki/12$ due to month $k$. The 4th column is the sum $P_k+P_ki/12$. The 5th column is the installment paid at the end of month $k$.
The amount $P_k$ satisfies $$P_{k+1}=P_k+P_ki/12-m.$$ We see that at the end of month $k=60$, $P_{60}+P_{60}i/12=400=m$. The last installment $m=400$ at the end of month $k=60$ balances the remaining debt entirely, which is also $400$. We could find $i$ by trial and error: start with $i=0.01$ and let the spreadsheet compute the table values, until the last row gives exactly $P_{60}+P_{60}i/12=400$. -- $^1$ James Riggs, David Bedworth and Sabah Randhawa, Engineering Economics, McGraw-Hill, 4th ed., 1996, p. 43.
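The numerical solve in $(3)$ is easy to reproduce without SWP. A minimal sketch (my own code, not the answerer's) using plain bisection on equation $(2)$, relying on the fact that the present worth is decreasing in $i$:

```python
def present_worth(i, m=400.0, n=60, P=19500.0):
    """LHS minus RHS of equation (2): uniform-series present worth minus principal."""
    r = i / 12.0  # monthly rate
    return m * ((1 + r) ** n - 1) / (r * (1 + r) ** n) - P

# bisection: present_worth is positive at tiny i, negative at large i
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if present_worth(mid) > 0:
        lo = mid
    else:
        hi = mid

i = (lo + hi) / 2
print(round(i, 4))  # ≈ 0.0849, i.e. about 8.49 % nominal annual rate
```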
Consider the following statement: (S): If $G$ is not a complete graph, then there is a minor $M$ of $G$ such that $M \not \cong G$, and there is a graph homomorphism $f:G\to M$. Hadwiger's conjecture states that for every finite simple undirected graph $G$, the complete graph $K_{\chi(G)}$ is a minor of $G$. Since proper colorings of $G$ are homomorphisms from $G$ to complete graphs, it is not hard to see that in the finite case, Hadwiger's conjecture is equivalent to (S). Now, the statement of Hadwiger's conjecture is false for graphs with infinite chromatic number (see for example the disjoint union of all $K_n, n\in\mathbb{N}$). Question. Does statement (S) hold for graphs with infinite chromatic number? (This is a follow-up to this question.)
I'm stuck pretty much at the first hurdle trying to follow the derivation of the geodesic equations from the Lagrangian $L\left(\dot{x}^{c},x^{c}\right)\equiv\frac{1}{2}g_{ab}\left(x_{c}\right)\dot{x}^{a}\dot{x}^{b}$ in Foster and Nightingale's A Short Course in General Relativity. Differentiating the Lagrangian they give $$\frac{\partial L}{\partial\dot{x}^{c}}=\frac{1}{2}g_{ab}\delta_{c}^{a}\dot{x}^{b}+\frac{1}{2}g_{ab}\dot{x}^{a}\delta_{c}^{b},$$ which I can see, because $\frac{\partial\dot{x}^{a}}{\partial\dot{x}^{c}}$ is 1 only when $a=c$ (otherwise equals zero), and $\frac{\partial\dot{x}^{b}}{\partial\dot{x}^{c}}$ is 1 only when $b=c$ (otherwise equals zero). However, they then go on to say that the above simplifies to $$\frac{\partial L}{\partial\dot{x}^{c}}=g_{cb}\dot{x}^{b}.$$ How do they do that? Why do the indices change on the metric? I'm a self-taught plodder, so please don't worry about making your answers too simple.
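For reference, the step in question is just index bookkeeping; a sketch of the missing intermediate lines (mine, not from Foster and Nightingale):

```latex
\frac{\partial L}{\partial\dot{x}^{c}}
  = \frac{1}{2}g_{ab}\delta_{c}^{a}\dot{x}^{b}+\frac{1}{2}g_{ab}\dot{x}^{a}\delta_{c}^{b}
  % contract the deltas: g_{ab}\delta_{c}^{a} = g_{cb},\; g_{ab}\delta_{c}^{b} = g_{ac}
  = \frac{1}{2}g_{cb}\dot{x}^{b}+\frac{1}{2}g_{ac}\dot{x}^{a}
  % rename the dummy index a \to b in the second term, then use g_{bc} = g_{cb}
  = \frac{1}{2}g_{cb}\dot{x}^{b}+\frac{1}{2}g_{cb}\dot{x}^{b}
  = g_{cb}\dot{x}^{b}
```

The indices on the metric change because the Kronecker deltas are summed out (e.g. $g_{ab}\delta_{c}^{a}$ is nonzero only for $a=c$, leaving $g_{cb}$), and because a summed "dummy" index can be renamed freely.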
The Cauchy-Riemann Theorem Examples 2

Recall from The Cauchy-Riemann Theorem page that if $A \subseteq \mathbb{C}$ is open, $f : A \to \mathbb{C}$ with $f = u + iv$, and $z_0 \in A$ then $f$ is analytic at $z_0$ if and only if there exists a neighbourhood $\mathcal N$ of $z_0$ with the following properties:

1) $\displaystyle{\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}}$ all exist and are continuous on $\mathcal N$.
2) The Cauchy-Riemann equations $\displaystyle{\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}}$ and $\displaystyle{\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}}$ hold on $\mathcal N$.

We also stated an important result that can be proved using the Cauchy-Riemann theorem called the complex Inverse Function theorem, which says that if $f'(z_0) \neq 0$ then there exist open neighbourhoods $U$ of $z_0$ and $V$ of $f(z_0)$ such that $f : U \to V$ is a bijection and such that $\displaystyle{\frac{d}{dw} f^{-1}(w) = \frac{1}{f'(z)}}$ where $w = f(z)$. We will now look at some more example problems in applying the Cauchy-Riemann theorem.

Example 1

Let $A \subseteq \mathbb{C}$ be open and $f : A \to \mathbb{C}$ be analytic. Let $A^* = \{ z : \overline{z} \in A \}$ and define $g : A^* \to \mathbb{C}$ by $g(z) = \overline{f(\overline{z})}$. Prove that $g$ is then analytic on $A^*$.

Let $f = u + iv$. Then $f(x, y) = u(x, y) + iv(x, y)$. Then by the definition of $g$ we have: $$g(z) = \overline{f(\overline{z})} = \overline{u(x, -y) + iv(x, -y)} = u(x, -y) - iv(x, -y) \quad (1)$$ So writing $g = U + iV$ we have $U(x, y) = u(x, -y)$ and $V(x, y) = -v(x, -y)$. We have that: $$\frac{\partial U}{\partial x}(x, y) = \frac{\partial u}{\partial x}(x, -y) = \frac{\partial v}{\partial y}(x, -y) = \frac{\partial V}{\partial y}(x, y) \quad (2)$$ And also: $$\frac{\partial U}{\partial y}(x, y) = -\frac{\partial u}{\partial y}(x, -y) = \frac{\partial v}{\partial x}(x, -y) = -\frac{\partial V}{\partial x}(x, y) \quad (4)$$ So the Cauchy-Riemann equations are satisfied for $g$ (and the partial derivatives are continuous), which means that $g$ is analytic on $A^*$.

Example 2

Use the Cauchy-Riemann theorem to prove that $f(z) = \mid z \mid$ is not analytic.

We have that: $$f(x + yi) = \sqrt{x^2 + y^2} + 0i \quad (6)$$ So $u(x, y) = \sqrt{x^2 + y^2}$ and $v(x, y) = 0$. So the partial derivatives of these functions are (for $(x, y) \neq (0, 0)$): $$\frac{\partial u}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}}, \quad \frac{\partial u}{\partial y} = \frac{y}{\sqrt{x^2 + y^2}}, \quad \frac{\partial v}{\partial x} = 0, \quad \frac{\partial v}{\partial y} = 0 \quad (7)$$ The Cauchy-Riemann equations hold nowhere for this function, so $f$ is not analytic.
Example 3

Let $f$ be analytic on an open connected set $A$ and such that $\mathrm{Re} f(z) = C$ where $C \in \mathbb{R}$ is a constant. Prove that $f$ is constant on $A$.

We have that $\mathrm{Re} f(z) = u(x, y) = C$ and that $f$ is analytic on $A$, so by the Cauchy-Riemann equations we must have that: $$\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x} = 0 \quad (8)$$ And also: $$\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} = 0 \quad (9)$$ In particular, $\frac{\partial v}{\partial x} = 0$ and $\frac{\partial v}{\partial y} = 0$ on all of $A$. Thus $v$ must be constant on $A$ (since $A$ is open and connected), i.e., there exists a $D \in \mathbb{R}$ such that $v(x, y) = D$ on $A$. But $v(x, y) = \mathrm{Im} f(z) = D$, so on all of $A$: $$f(z) = u(x, y) + iv(x, y) = C + iD \quad (10)$$ In other words, $f$ is constant on $A$.
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero). I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} - 2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$ I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ... Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between the 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!) On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case @Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself. but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed, no it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60mph. How long until Train B overtakes Train A? @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying But 240 miles seems waaay too short to cross two time zones So my inclination is to say the answer key is nonsense You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they look. I have a question regarding the linked answer: where does the term $e^{(r_1-r_2)x}$ come from? It seems like it is taken out of the blue, but it yields the desired result.
I'm looking for interesting, difficult, or otherwise clever multivariable integral problems that are more difficult than usual textbook problems (which, in the textbook I'm reading at least, usually involve either reordering an iterated integral or making a fairly standard substitution like polar or spherical coordinates). Namely, I'm interested in problems that involve tricky uses of multivariable substitution, interesting interpretations of the problem (i.e. to solve the problem you need to use a multivariable integral, but how?), clever partitioning of the domain of integration, or other interesting maneuvers. To give a sense of the level I'm after, a problem that would be too easy is to solve the following: $$ \int_0^3\int_{x^2}^9 x^3e^{y^3}\text{d}y\text{d}x $$ And for completeness, here's a problem that would probably be too hard. Edit: I'm also interested in more obscure and unusual problems. Edit 2: Here are some more problems I'd consider "too easy": $$ \iiint_S\sin\sqrt{x^2+y^2+z^2}\text{d}V $$ where $S$ is the region bounded by $x^2+y^2+z^2 = 49$ and $z^2=x^2+y^2$. $$ \int_0^2\int_0^1\int_y^1\sinh(z^2)\text{d}z\text{d}y\text{d}x $$ One thing that would be nice is questions that require nonstandard substitutions. Every source I find exercises only the cylindrical and spherical coordinate transformations, but Wikipedia has an expansive list of other interesting coordinate systems. What about integration over a torus? Or the intersection of a torus and a hyperbolic paraboloid? What about integrals that require bizarre transformations to complete, ones that don't even have names? I want problems that really exercise one's ability to decipher the best solution to the integral, to understand a difficult region of integration geometrically, and/or to call from different areas of mathematics to solve the integral in unique ways. By "decipher", I mean "see the trick to dig into a problem and make it easier".
For example, the following integral looks ridiculous: $$ \int_0^1\int_{2\sqrt x}^{1+x}\frac{x}{y+1}\frac{\text{d}y\text{d}x}{\sqrt{y^2-4x}} $$ If you take the time to experiment, you might find that the substitution $x = uv$, $y = u + v$ turns the integral into $$ \int_0^1\int_u^1\frac{uv}{u+v+1}\text{d}v\text{d}u $$ which can be solved more easily. Disclaimer: this is a very contrived problem, I made it up by starting with the end and working in reverse, but it gives you some idea as to the standard of non-triviality I'm hoping for. In actuality, I would deem this integral uninteresting because there's no "simple but clever/difficult to find" trick.
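Incidentally, the substitution $x = uv$, $y = u + v$ can be sanity-checked pointwise: the Jacobian is $|\partial(x,y)/\partial(u,v)| = |v - u|$, and $y^2 - 4x = (u - v)^2$, so the square-root factor cancels the Jacobian exactly. A quick numerical check of that identity (my sketch, not part of the question):

```python
import math
import random

random.seed(1)
for _ in range(1000):
    u, v = random.random(), random.random()
    if abs(u - v) < 1e-9:
        continue  # avoid the (measure-zero) line u = v where the sqrt vanishes
    x, y = u * v, u + v
    jac = abs(v - u)  # |det [[v, u], [1, 1]]|
    lhs = x / (y + 1) / math.sqrt(y * y - 4 * x) * jac  # original integrand * |J|
    rhs = u * v / (u + v + 1)                           # transformed integrand
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```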
In my functional analysis class we defined a map $x:\Omega \to F,$ where $\Omega\subset \mathbb{C}$ is open and $F$ is a complex Banach space, to be differentiable in $z_0\in \Omega$ if the limit \begin{equation}\lim_{z\to z_0}\frac{1}{z-z_0}\left( x(z)-x(z_0)\right) \end{equation} exists in $F$, and to be holomorphic in $z_0$ if there exists a neighbourhood of $z_0$ in which $x$ is differentiable in every point. Then the following remark was made: $x$ is holomorphic in $z_0$ if and only if $x$ has a power series expansion \begin{equation}x(z)=\sum_{k=0}^\infty a_k (z-z_0)^k, \end{equation} where $a_k \in F,$ near $z_0.$ She mentioned that this is proven exactly as in standard complex analysis. In my complex analysis class, however, we proved the fact that every holomorphic function has a power series expansion via the Cauchy integral formula \begin{equation}f(z)=\frac{1}{2\pi i}\int_{\partial B_{z_0}(R)}\frac{f(w)}{w-z} dw. \end{equation} I, however, can't see how this could be generalised to Banach spaces. When I asked my professor how one could do that, she replied that it is possible to prove the equivalence without the Cauchy integral theorem. In her complex analysis script, however, she does it using the Cauchy integral formula. So my question is how to prove this equivalence in general Banach spaces? Is there some kind of Cauchy formula too? Thanks in advance
I'm currently trying to understand how the different incarnations of homology with local coefficients relate to one another. Let $X$ be a semi-locally simply connected space, and let $\pi_1 = \pi_1(X,x_0)$. Homology with local coefficients is usually built from one of the following three objects:

1. A $\mathbb{Z}\pi_1$-module.
2. A bundle of discrete abelian groups $p:E\to X$. In other words, these are fiber bundles $G\hookrightarrow E\to X$ with fibers discrete abelian groups isomorphic to $G$, and whose structure group is some subgroup of $\operatorname{Aut}(G)$, so that the local trivializations $\phi_U:p^{-1}(U)\to U\times G$ restrict to homomorphisms on the fibers.
3. A functor $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$, where $\Pi_1(X)$ is the fundamental groupoid and $\mathcal{L}(x)$ is always discrete abelian.

Given a $\mathbb{Z}\pi_1$-module $M$, one can construct a bundle $\widetilde{X}\times_{\pi_1}M\to X$ of discrete abelian groups using the Borel construction. Conversely, given a bundle of discrete abelian groups $p:E\to X$, this is really a covering space, and so there is an action of $\pi_1$ on the fiber $G$, giving it the structure of a $\mathbb{Z}\pi_1$-module. This brings me to my questions:

A. How does a bundle $p:E\to X$ of discrete abelian groups give rise to a functor $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$? Edit: So that the resulting homology groups $H_*(X;E)$ and $H_*(X;\mathcal{L})$ are isomorphic?

Here is my guess: for a bundle $p:E\to X$ we set $\mathcal{L}(x) = p^{-1}(x)$, and for a homotopy class of paths $[\omega:I\to X]$ (a morphism from $\omega(0)=x_0$ to $\omega(1)=x_1$) we set $\mathcal{L}[\omega]$ to be the map $p^{-1}(x_0)\to p^{-1}(x_1)$ built from using the homotopy lifting property on $$h:p^{-1}(x_0)\times I\to X,\quad h(e,t) = \omega(t).$$ (Lift this to $H:p^{-1}(x_0)\times I\to E$; then $H(-,1):p^{-1}(x_0)\to p^{-1}(x_1)$ is the map I'm referring to.) However, I'm having a hard time showing that this is a homomorphism.
This is probably not a good approach since there is no canonical identification of each fiber with $G$.

B. How does a local system $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$ give rise to a bundle $p:E\to X$ of discrete abelian groups? (Or a $\mathbb{Z}\pi_1$-module?) Edit: So that the resulting homology groups $H_*(X;E)$ and $H_*(X;\mathcal{L})$ are isomorphic? I gather from this discussion that it is possible in this case but I'm not sure how that would work.

References
[1] Hatcher, Algebraic Topology, p. 330
[2] Whitehead, Elements of Homotopy Theory, p. 257 (note: he calls functors $\mathcal{L}:\Pi_1(X)\to\textbf{Ab}$ "bundles of groups")
In a right triangle, find the measure of the angle $\theta$ between the median and the angle bisector drawn from the vertex of the acute angle equal to $\alpha$. Now, if $\tan\left(\dfrac{\alpha}{2}\right) = \dfrac{1}{\sqrt[3]{2}}$, enter $\tan(\theta)$ as your answer.
First, we have the equation:$$x^2 - y^2 = 2^k$$We define $\nu_2(n)$ to be the power of $2$ that divides $n$. Now, assume that $\nu_2(x) \neq \nu_2(y)$. We divide our equation by $2^{2\min(\nu_2(x),\nu_2(y))}$. Then the LHS is odd, as one term would be even and the other odd. Thus the RHS would be an odd number that is a power of $2$, forcing $2^{k-2\min(\nu_2(x),\nu_2(y))} = 1$. This would mean that the difference of two positive perfect squares is $1$. Contradiction. Let $\nu_2(x) = \nu_2(y) = t$. Then, let $x = 2^t \cdot p$ and $y = 2^t \cdot q$. Here, $p$ and $q$ are odd. Let $l = k-2t$. By dividing by $2^{2t}$:$$p^2-q^2 = 2^l \implies (p-q)(p+q) = 2^l \implies p-q = 2^{l_1} \space , \space p+q = 2^{l_2}$$Again, we can note that if $4 \mid p-q$ and $4 \mid p+q$, then $4 \mid (p-q) + (p+q) \implies 4 \mid 2p \implies 2 \mid p$. However, this is wrong as $p$ is odd. Thus, it is not possible for both of $p+q$ and $p-q$ to be divisible by $4$. Since they are both even and powers of $2$, and $p-q < p+q$, we have $p-q = 2$. Solving $p-q = 2$ and $p+q = 2^{l-1}$, we get $p = 2^{l-2}+1$ and $q = 2^{l-2}-1$. We are given the condition that $x$ and $y$ have no prime factors greater than $5$. Then, we can note that the only prime factors of $p$ and $q$ are $3$ and $5$. Moreover, as $p-q = 2$, we can have $3$ and $5$ only dividing one of $p$ and $q$. Thus, one of $p$ and $q$ is a power of $3$ and the other is a power of $5$. Case 1: The power of $5$ is equal to $1$. Here, we have $p > q$ and as the power of $5$ is equal to $1$, we have $q=1$, which gives us $p=3$. Then, we have the solution:$$ (x,y,k) = (3 \cdot 2^t , 2^t , 2t+3)$$ Case 2: The power of $5$ is more than $1$. Here, we can note that $p=2^{l-2}+1$ and $q = 2^{l-2}-1$. We have:$$5 \mid 2^n \pm 1 \implies 2 \mid n$$Thus, we have $2 \mid l-2$. Then, $3 \mid 2^{l-2}-1$. Now, we get $p= 2^{l-2}+1 = 5^m \implies 2^{l-2} = 5^m-1$.
By the lifting-the-exponent lemma (for the prime $2$): $$\nu_2(5^m-1) = \nu_2(m) + 2$$Since $\nu_2(5^m-1) = l-2$, this forces $\nu_2(m) = l-4$, so $2^{l-4} \mid m$. This shows that $m \geqslant 2^{l-4}$. Hence:$$2^{l-2} = 5^m-1 \geqslant 5^{2^{l-4}}-1$$which holds only when $l=4$. In that case, we get $p=5$ and $q=3$, which gives:$$(x,y,k) = (5 \cdot 2^t , 3 \cdot 2^t , 2t+4)$$ Therefore, the only solutions are $(x,y,k) = (3 \cdot 2^t , 2^t , 2t+3)$ and $(x,y,k) = (5 \cdot 2^t , 3 \cdot 2^t , 2t+4)$.
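As a sanity check (not part of the proof), a short brute-force search over small $x, y$ confirms that every solution of $x^2-y^2=2^k$ in which $x$ and $y$ have no prime factor greater than $5$ falls into one of the two families above; the search bound of $600$ is an arbitrary choice for illustration:

```python
def smooth5(n):
    # True if n has no prime factor greater than 5
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

solutions = []
for x in range(2, 600):
    for y in range(1, x):
        d = x * x - y * y
        if d & (d - 1) == 0 and smooth5(x) and smooth5(y):  # d is a power of 2
            solutions.append((x, y))

# every solution is of the form (3*2^t, 2^t) or (5*2^t, 3*2^t)
assert all(x == 3 * y or 3 * x == 5 * y for x, y in solutions)
assert (3, 1) in solutions and (5, 3) in solutions
```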
I've heard that the EM algorithm ensures that the true likelihood is non-decreasing at each iteration of the algorithm, but I'm not sure why this is the case. I've provided a basic plot which I believe illustrates the difficulty I have in understanding this. To frame the question, let's first consider the EM algorithm decomposition, where $x$ is the observed data, $z$ is the missing data, and $\theta$ represents the parameter set. $$ L(\theta) \equiv \ln P(x|\theta) = \ln\left[\sum_{z}P(x|z,\theta)P(z|\theta)\right] $$ Multiplying the summand by $\frac{P(z|x,\theta')}{P(z|x,\theta')}$, applying Jensen's inequality, and then adding and subtracting $L(\theta') \equiv \ln P(x|\theta')$ (using $P(z|x,\theta') = P(x,z|\theta')/P(x|\theta')$) yields $$ \ln\left[\sum_{z}P(z|x,\theta')\frac{P(x|z,\theta)P(z|\theta)}{P(z|x,\theta')}\right] \\ \geq \sum_{z}P(z|x,\theta')\ln\left[\frac{P(x|z,\theta)P(z|\theta)}{P(z|x,\theta')}\right] \quad (\text{by Jensen's inequality}) \\ = \sum_{z}P(z|x,\theta')\ln\left[\frac{P(x,z|\theta)}{P(x,z|\theta')}\right] + L(\theta') \equiv B(\theta,\theta') $$ This represents the expression we want to maximize in the EM process; from this, we can see that $B(\theta,\theta')$ is a lower bound for $L(\theta)$ (and the bound is attained when $\theta$ is equal to $\theta'$, since then the log-ratio vanishes and $B(\theta',\theta') = L(\theta')$). Now assume two possible entry points ($\theta'_1$ and $\theta'_2$) to the EM process, exhibited in the plot below. In this plot, the true likelihood is given by the $L(\theta)$ (blue) line whilst the function to be maximised is given by the $B(\theta,\theta')$ (orange) line. The local maximum corresponding to $\theta'_1$ (which corresponds to the lower-bound scenario mentioned above, $B(\theta'_1,\theta'_1) = L(\theta'_1)$) does indeed increase $L(\theta)$; however, the local maximum corresponding to $\theta'_2$ doesn't increase $L(\theta)$.
What's to prevent the $\theta'_2$ scenario from happening? Also, I've heard that the role of the expectation step is to equate $P(x,z|\theta)$ and $P(x,z|\theta')$, which would achieve the lower-bound scenario above, but I'm not sure how this works. I suspect that this property is linked to the answer to the above question. Perhaps somebody could elaborate on this.
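For anyone who wants to experiment, here is a minimal toy illustration of my own (a two-component 1D Gaussian mixture, with synthetic data and arbitrary starting values) in which the observed-data log-likelihood can be checked to be non-decreasing at every EM iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])

pi = np.array([0.5, 0.5])       # mixing weights
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component standard deviations

def component_densities(x, pi, mu, sigma):
    # P(x, z | theta) for each data point and each component, shape (n, 2)
    return pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
        / (sigma * np.sqrt(2 * np.pi))

log_likelihoods = []
for _ in range(30):
    dens = component_densities(x, pi, mu, sigma)
    log_likelihoods.append(np.log(dens.sum(axis=1)).sum())   # L(theta')
    r = dens / dens.sum(axis=1, keepdims=True)               # E-step: P(z | x, theta')
    nk = r.sum(axis=0)                                       # M-step: maximize B(theta, theta')
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

# the observed-data log-likelihood never decreases (up to float rounding)
assert all(b >= a - 1e-9 for a, b in zip(log_likelihoods, log_likelihoods[1:]))
```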
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all these facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory; I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
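If you like checking facts like these by computer, here is a small unofficial sketch (my own choice of example, not from the book) verifying the adjunction \( \vee \dashv \Delta \) on the poset of subsets of \(\{0,1,2\}\) ordered by inclusion, where the join is union:

```python
from itertools import combinations, product

# the poset A: subsets of {0,1,2} ordered by inclusion, with join = union
base = [0, 1, 2]
elems = [frozenset(c) for r in range(len(base) + 1)
         for c in combinations(base, r)]

def join(a, a2):
    # binary join: least upper bound under inclusion = union
    return a | a2

# adjunction between join : A x A -> A and the diagonal Delta(b) = (b, b):
#   join(a, a') <= b   iff   (a, a') <= (b, b)   iff   a <= b and a' <= b
for a, a2, b in product(elems, repeat=3):
    assert (join(a, a2) <= b) == (a <= b and a2 <= b)
```

Python's `frozenset` comparison `<=` is exactly the inclusion order, which is what makes this check a one-liner per triple.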
Error Estimation and Error Verification of Newton's Method Let $f$ be a function that satisfies the conditions to apply Newton's Method, with iterates converging to the root $\alpha$. We will now develop a way to estimate the error of the approximation iterates $x_n$ (obtained from Newton's Method) from the root $\alpha$. Note that since $\alpha$ is a root of $f$, then $f(\alpha) = 0$ and thus by the Mean Value Theorem, for some $\xi_n$ between $x_n$ and $\alpha$, we have that:(1) $f(x_n) = f(x_n) - f(\alpha) = f'(\xi_n)(x_n - \alpha)$, that is, $\alpha - x_n = -\frac{f(x_n)}{f'(\xi_n)}$. The exact error is given above; however, we don't necessarily know what $\xi_n$ is. Instead, let's find a suitable value to replace $\xi_n$ with. Suppose that $x_n$ is close to $\alpha$. Then $\xi_n$ is close to $x_n$, since $\xi_n$ is between $x_n$ and $\alpha$, and thus the error is approximated as:(2) $\alpha - x_n \approx -\frac{f(x_n)}{f'(x_n)}$. Now recall that the general iteration formula for Newton's Method is $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ and so $-\frac{f(x_n)}{f'(x_n)} = x_{n+1} - x_n$. Substituting this into our error approximation above, we get that:(3) $\alpha - x_n \approx x_{n+1} - x_n$. Therefore if we are given an allowable error of $\epsilon$, then if we can ensure that $|x_{n+1} - x_n| < \epsilon$, it's likely that the error between $x_n$ and $\alpha$ is less than $\epsilon$ or almost less than $\epsilon$. Error Verification We saw above that the error between the root $\alpha$ and our approximation $x_n$ is approximately equal to $x_{n+1} - x_n$; however, even if $|x_{n+1} - x_n| < \epsilon$, it is still possible that $|\alpha - x_n| > \epsilon$. To verify that $|\alpha - x_n| < \epsilon$, we can test whether $f(x_n - \epsilon) \cdot f(x_n + \epsilon) < 0$. If so, then by the Intermediate Value Theorem a root of $f$ exists between $x_n - \epsilon$ and $x_n + \epsilon$, and is therefore within $\epsilon$ of our approximation $x_n$.
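To make the estimate and the verification test concrete, here is a small illustrative implementation (the function $f(x)=x^2-2$ and the tolerances are arbitrary choices): it iterates until the estimated error $|x_{n+1}-x_n|$ drops below $\epsilon$, then applies the sign-change test:

```python
import math

def newton_with_check(f, fprime, x0, eps=1e-10, max_iter=50):
    """Newton iteration using |x_{n+1} - x_n| as the error estimate,
    then verifying the root via the sign test f(x-eps)*f(x+eps) < 0."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        done = abs(x_new - x) < eps      # estimated error below tolerance
        x = x_new
        if done:
            break
    verified = f(x - eps) * f(x + eps) < 0   # sign change => root within eps
    return x, verified

f = lambda x: x * x - 2
fp = lambda x: 2 * x
root, ok = newton_with_check(f, fp, 1.0)
assert ok and abs(root - math.sqrt(2)) < 1e-10
```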
So I'm working a problem that states: A function $f$ is analytic in an open set $U$. Define $g$ by $g(z)=\overline{f(\overline{z})}$ (just because the notation can be hard to read, this is the complex conjugate of the function $f$ evaluated at the complex conjugate of $z$). Show that $g$ is analytic in the open set $U^{\star}=\{z:\overline{z}\in{U}\}$ and that $g^{\prime}(z)=\overline{f^{\prime}(\overline{z})}$ (this is the complex conjugate of the derivative of $f$ evaluated at the complex conjugate of $z$) for $z\in{U^{\star}}$. Now, my book defines an analytic function as one that is defined over an open set on which it is differentiable at every point, and whose derivatives are themselves analytic on that domain as well. So, I know that being differentiable at a point means that a function is continuous at that point, so I see straight away that this means $\forall{\epsilon}>0$ there exists a $\delta>0$ such that if $|z-z_0|<\delta$ then $|f(z)-f(z_0)|<\epsilon$. Since $U$ is an open set, it contains a neighbourhood of $z_0$. Because $f$ is analytic in $U$, I can choose two elements $z$ and $z_0$ of $U$ that satisfy $|z-z_0|=|\overline{z}-\overline{z_0}|<\delta$ and $|f(z)-f(z_0)|<\epsilon$. Since $|\overline{z}-\overline{z_0}|<\delta$, this means that $|f(\overline{z})-f(\overline{z_0})|<\epsilon$. Observing that $|f(\overline{z})-f(\overline{z_0})|=|\overline{f(\overline{z})-f(\overline{z_0})}|$, I conclude that $\overline{f(\overline{z})}$ is continuous.
This being the case, then (and I think I've done this part correctly) I can compute the derivative by using the limit definition as: $g'(z)=\lim\limits_{h\rightarrow{0}}\frac{\overline{f(\overline{z}+\overline{h})}-\overline{f(\overline{z})}}{h}=\lim\limits_{\overline{h}\rightarrow{0}}\overline{\left(\frac{f(\overline{z}+\overline{h})-f(\overline{z})}{\overline{h}}\right)}$, as I know that as $h\rightarrow{0}$, so does $\overline{h}\rightarrow{0}$. Since the function $f$ is analytic for all complex numbers in $U$, the limit inside the conjugation exists, and the derivative above is the complex conjugate of that limit, which is $\overline{f'(\overline{z})}$. That's what I'm concerned about, however, because I haven't actually shown yet that $\overline{f(\overline{z})}$ is, in fact, analytic on its domain; all I know is that it is continuous on its domain. My thought is that it might only be analytic for the set $U$ if $U$ is a subset of the real numbers, since that is the only place that $\overline{z}$ is analytic, and thus the composition $(a\circ{b})(z)$ for $b(z)=f(\overline{z})$ (analytic for $\overline{z}\in{U}$) and $a(z)=\overline{z}$ would be analytic on the set given by the intersection of the real numbers with $U$, but I don't believe that is right. I've tried considering an equivalent case given by the conjugation $\overline{g(z)}=\overline{\overline{f(\overline{z})}}=f(\overline{z})$ to no avail. Essentially, I'm asking to see if anyone can point me in the right direction and tell me if what I've done so far is on the right track. I should add that these problems are from a section in the book preceding the discussion of the Cauchy-Riemann equations, so while I'm not sure if they will help, I don't want to use them. I'm tagging this as homework although it's not 'homework' in the sense that I'm taking a class, but the problem is from a book, so I feel it is appropriate to tag it as such. I invite any admins to remove the tag if they feel it is unnecessary.
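Not a proof, but as a quick numerical check of the claimed identity $g'(z) = \overline{f'(\overline{z})}$, one can pick an entire $f$ with non-real Taylor coefficients (so that $g \neq f$) and compare difference quotients of $g$ along several directions of $h$ against $\overline{f'(\overline{z})}$; agreement for all directions is exactly the complex differentiability in question (the choice $f(z) = e^z + iz^2$ and the test point are arbitrary):

```python
import cmath

def f(z):
    # entire function with a non-real Taylor coefficient
    return cmath.exp(z) + 1j * z * z

def fprime(z):
    return cmath.exp(z) + 2j * z

def g(z):
    # g(z) = conjugate of f evaluated at the conjugate of z
    return f(z.conjugate()).conjugate()

z = 0.7 - 0.3j
expected = fprime(z.conjugate()).conjugate()   # the claimed derivative of g

h = 1e-6
for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    quotient = (g(z + h * direction) - g(z)) / (h * direction)
    # same limit in every direction, matching conj(f'(conj(z)))
    assert abs(quotient - expected) < 1e-4
```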
Group non-membership problem: Input: Group elements $g_1,..., g_k$ and $h$ of $G$. Yes: $h \not\in \langle g_1, ..., g_k\rangle$ No: $h\in \langle g_1, ..., g_k\rangle$ Notation: $\langle g_1, ..., g_k\rangle$ is the subgroup generated by $g_1,...,g_k$. Quantum proof: The group non-membership problem is in $\mathsf{QMA}$. The idea is simple: for $\mathcal{H} = \langle g_1, ..., g_k\rangle$, the quantum proof that $h\not\in \mathcal{H}$ will be the state $$|\mathcal H\rangle = \frac{1}{\sqrt{|\mathcal H|}}\sum_{a\in \mathcal{H}} |a\rangle.$$ Questions: I think the idea of the proof is that if $|h\rangle$ can be shown to be orthogonal to $|\mathcal H\rangle$ then it would imply that $h \not\in \mathcal{H}$. Otherwise, $h\in \mathcal{H}$. But how exactly are we supposed to assign quantum states (i.e. the $|a\rangle$'s) corresponding to the elements of $\mathcal{H}$? Do we need to assign separate binary strings to all the elements of the group generated by the elements of $G$, such that they can be represented by qubit systems? And if we do assign such binary strings a priori, wouldn't it be much simpler to directly (classically) check whether the string assigned to $h$ matches with any of the strings corresponding to the elements of $\mathcal{H}$? I can't really see the speed advantage here. Could someone clarify this "quantum" proof? Note: All quotes are from John Watrous - Quantum Complexity Theory (Part 2) - CSSQI 2012 (timestamp included).
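As a purely classical illustration of the orthogonality fact in the question (ignoring how the verifier actually tests it, and using an ad hoc encoding of group elements as computational basis states), take $G = \mathbb{Z}_8$ and the subgroup $\mathcal{H} = \langle 2 \rangle$:

```python
import numpy as np

n = 8                       # encode the elements of Z_8 as basis states |0>, ..., |7>
H = [0, 2, 4, 6]            # the subgroup generated by 2

# the witness |H>: uniform superposition over the subgroup
psi = np.zeros(n)
psi[H] = 1 / np.sqrt(len(H))

def overlap_sq(h):
    # squared overlap |<h|H>|^2 of the basis state |h> with the witness
    return abs(psi[h]) ** 2

assert overlap_sq(4) == 1 / len(H)   # h in H: overlap 1/|H|, never orthogonal
assert overlap_sq(3) == 0.0          # h not in H: exactly orthogonal
```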
Title: Multiple positive solutions of a Sturm-Liouville boundary value problem with conflicting nonlinearities Publication Type: Journal Article Year of Publication: 2017 Authors: Feltrin, G Journal: Communications on Pure & Applied Analysis Volume: 16 Pagination: 1083 ISSN: 1534-0392 Keywords: Leray-Schauder topological degree; positive solutions; Sturm-Liouville boundary conditions; superlinear indefinite problems Abstract: We study the second order nonlinear differential equation $$u'' + \sum_{i=1}^{m} \alpha_i a_i(x) g_i(u) - \sum_{j=1}^{m+1} \beta_j b_j(x) k_j(u) = 0,$$ where $\alpha_i, \beta_j>0$, $a_i(x), b_j(x)$ are non-negative Lebesgue integrable functions defined in $[0, L]$, and the nonlinearities $g_i(s), k_j(s)$ are continuous, positive and satisfy suitable growth conditions, so as to cover the classical superlinear equation $u''+a(x)u^{p} = 0$, with $p>1$. When the positive parameters $\beta_j$ are sufficiently large, we prove the existence of at least $2^{m}-1$ positive solutions for the Sturm-Liouville boundary value problems associated with the equation. The proof is based on the Leray-Schauder topological degree for locally compact operators on open and possibly unbounded sets. Finally, we deal with radially symmetric positive solutions for the Dirichlet problems associated with elliptic PDEs. URL: http://aimsciences.org//article/id/1163b042-0c64-4597-b25c-3494b268e5a1 DOI: 10.3934/cpaa.2017052
Author Message Lonely-Star Tux's lil' helper Joined: 12 Jul 2003 Posts: 82 Posted: Wed Apr 27, 2005 9:12 am Post subject: Mathematical Symbols in Xfig and gnuplot Hi everybody, Studying physics I am starting to use Xfig and gnuplot. Now I wonder how I can use symbols like omega or phi in the text in xfig or labels in gnuplot. Any help? Thanks! furanku l33t Joined: 08 May 2003 Posts: 902 Location: Hamburg, Germany Posted: Wed Apr 27, 2005 2:40 pm Post subject: Hi! The good old "make-my-graphs-pretty" question You have several possibilities. gnuplot 1) The enhanced postscript driver Start gnuplot. Try Code: gnuplot> plot sin(x) title "{/Symbol F}(x)" You will get a window with your graph labeled verbatim as "{/Symbol F}(x)". But now try Code: gnuplot> set term post enh Terminal type set to 'postscript' Options are 'landscape enhanced monochrome blacktext \ dashed dashlength 1.0 linewidth 1.0 defaultplex \ palfuncparam 2000,0.003 \ butt "Helvetica" 14' gnuplot> set output "test.ps" gnuplot> plot sin(x) title "{/Symbol F}(x)" gnuplot> exit View the resulting file "test.ps" in your favorite postscript viewer: Now you have a Greek capital Phi as the label. To learn more about the enhanced possibilities of the postscript driver read the file "/usr/share/doc/gnuplot-4.0-r1/psdoc/ps_guide.ps" (or whatever gnuplot version you use). Advantages: Easy to use, output file easily included in almost every document Disadvantages: Limited possibilities, looks ugly, wrong fonts when included in other documents (esp. LaTeX) 2) The LaTeX drivers I guess you want to include your graph in a LaTeX file (I hope you have learned LaTeX, if not do so, quickly, it's essential for all physics publications!) Again several possibilities...
2a) The "latex" driver, which uses the pictex environment Code: gnuplot> set term latex Options are '(document specific font)' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi(x)$" gnuplot> exit Now gnuplot produced a file "test.tex" which you can include in your LaTeX document with \input{test}. Process your LaTeX document and you'll see the graph labeled with the TeX fonts and all the glory you can use to typeset formulas in LaTeX: fractions, integrals, ... all you can do in LaTeX can be used. Advantage: beautiful output, fonts fitting to the rest of your document Disadvantage: more complicated to use, limited capabilities of the LaTeX picture environment 2b) Combined LaTeX and Postscript. Almost like 2a) but now the graph is in Postscript, just the labels are set by LaTeX: Code: gnuplot> set term pslatex Terminal type set to 'pslatex' Options are 'monochrome dashed rotate' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Now the file "test.tex" will contain postscript specials to draw the graph; the label is still set by LaTeX. Use it in your LaTeX document like before. Advantage: almost unlimited graphics possibilities due to postscript Disadvantage: using postscript means converting the whole document to postscript afterwards (but that's normal anyway), pdflatex isn't able to process postscript (well, VTeX's version can, but it's not open source) 3) The fig driver You export your graph in gnuplot into xfig's "fig" file format, which can be useful if you want to modify your graph afterwards, for example add some text and arrows (which can also be done in gnuplot but is a pain in the ass...)
Code: gnuplot> set term fig textspecial Terminal type set to 'fig' Options are 'monochrome small pointsmax 1000 landscape inches dashed textspecial fontsize 10 thickness 1 depth 10 version 3.2' gnuplot> set output "test.fig" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Open your file "test.fig" in xfig and go on as described below. XFig Xfig offers you, almost like gnuplot, the possibility to add Greek symbols as postscript fonts, and also has a "special flag" which is meant for using LaTeX code in your illustration, which is set by LaTeX later when compiling your document. 4) The symbol postscript font Click in xfig on the large "T" to get the text tool. Click on "Text font (Default)" in the lower right corner and select "Symbol (Greek)". Click somewhere in the image. Now you can type Greek letters. Unlike in gnuplot they will appear on the screen. Export your file to postscript and you can use it in your documents like the files generated by gnuplot described in 1) above. 5) The special text flag Note the option "textspecial" in the "set term fig textspecial" command in 3) above. This tells xfig that text set with this flag has a special meaning in some exported formats. You can set it manually in xfig with the button "Text flags" in the lower bar. Set "Special flag" to "Special" in the appearing dialog. Now click somewhere and type something like "$\int_{-\infty}^\infty e^{-x^2} dx$". Now go to "File -> Export" and select one of "Latex picture" (which is like gnuplot's "latex" terminal described above) or "Combined PS/LaTeX (both parts)" (which is like gnuplot's "pslatex" driver, with the only exception that the LaTeX and Postscript code are stored in two separate files. Don't worry, you will just have to include the file ending with "_t" into your LaTeX document; this will automatically include the other file).
[Edit:] It may be necessary to set the "hidden" flag in newer versions of xfig to avoid getting both labels, the one set by xfig and the one from LaTeX, on top of each other. You will see that there is also a "Combined PDF/LaTeX (both parts)" export option which is useful if you want to generate pdf files from your LaTeX sources directly using pdflatex, since that can't include postscript graphics. On the other hand you can still make a dvi file from your LaTeX sources and convert that to pdf using dvipdf, or convert your postscript files to pdf by epstopdf, or ... You see there are a lot of possibilities, and I just mentioned the ones I used, which did a good job for me during my diploma thesis in physics, and still do. Feel free to ask if you still have questions, Frank Last edited by furanku on Thu Feb 14, 2008 10:31 am; edited 2 times in total Lonely-Star Tux's lil' helper Joined: 12 Jul 2003 Posts: 82 Posted: Wed Apr 27, 2005 5:14 pm Post subject: Thanks a lot for your help! (it worked) incognito n00b Joined: 15 Jan 2004 Posts: 3 Posted: Wed Apr 27, 2005 11:30 pm Post subject: lurkers thank you furanku, Great post - hopefully the moderators would consider putting it in the Documents, Tips, and Tricks section. incognito adsmith Veteran Joined: 26 Sep 2004 Posts: 1386 Location: NC, USA Posted: Thu Apr 28, 2005 1:11 am Post subject: There's a script which does all that very nicely and automatically... google for "texfig" furanku l33t Joined: 08 May 2003 Posts: 902 Location: Hamburg, Germany Posted: Thu Apr 28, 2005 8:29 am Post subject: Thanks, incognito, but I guess it's not Gentoo-related enough for the tips and tricks section. But almost all of the new students in our workgroup come up with this question after a while, so I thought this was a nice occasion to write down what I learned about it and simply give them the URL (I guess, at least I have to spellcheck it on the weekend; sorry, I'm not a native English speaker and wrote it in a hurry yesterday...)
adsmith, that's a nice little script. It includes your exported fig file in a skeleton LaTeX document and processes and previews that. For larger documents I prefer the method using a makefile which does the necessary conversions (I didn't mention that xfig comes with a separate program called "fig2dev" which can do all the exports xfig can do on the command line). That, combined with the preview-latex mode (which displays all your math and graphics inline in [x]emacs; it's now part of auctex), gives me for my taste the most effective document writing environment. But as far as I can see new users are more attracted by kile (a KDE TeX environment) or lyx, and in that case texfig is a good help to get the LaTeX labels in your figures right. Thanks for your tip! nixnut Bodhisattva Joined: 09 Apr 2004 Posts: 10974 Location: the dutch mountains Posted: Thu Feb 14, 2008 7:59 pm Post subject: Moved from Other Things Gentoo to Documentation, Tips & Tricks. Tip/trick, so moved here
I will prove below that your bound $\frac{\exp\left(C\frac{\log N}{\log \log N}\right)}{A^3}$ (which follows from $\sum_{d\mid N, d > A} \frac{1}{d^3} \le \frac{d(N)}{A^3}$) is optimal, at least in the regime $A = N^c$, where $0 < c < 1$ is fixed (note that we can't take $A$ very small, since $\sum_{d\in \mathbb{N}} \frac{1}{d^3} < \infty$). Put $N = p_1p_2\ldots p_k$ where $k$ is some natural number and the $p_j$ are prime numbers. From, say, the prime number theorem we have $k = \Theta \left(\frac{\log N}{\log \log N}\right)$. I will construct $\Theta\left(\exp\left(C\frac{\log N}{\log \log N}\right)\right)$ divisors of $N$ in the interval $(A, 2A]$, from which the desired estimate follows. The construction goes as follows: we choose a random subset of the primes $p_j$ with $j > [k\left(1 - \frac{1}{100}\min(c^5, (1-c)^5)\right)] = m$ and call their product $d_1$ (note that there are already the required number of $d_1$'s). It is also easy to see that $d_1 < A$. Then we run the following greedy algorithm: initialize $d := d_1$. Look at the $p_j$, starting with $p_m$, in decreasing order, and multiply $d$ by $p_j$ until one more multiplication would make $d$ greater than $A$. Such a moment exists since $p_1p_2\ldots p_m > A$. Call this moment $j$. We now have $d \le A < p_jd$. If $p_j d \le 2A$ then let $d:=p_j d$ and finish the algorithm. Otherwise, by Bertrand's postulate there is some prime $p$ in the interval $(\frac{A}{d}, \frac{2A}{d}]$ with $p < p_j$. Let $d := pd$ and finish the algorithm. In any case we find a divisor $d\in (A, 2A]$ of $N$, and all of them are obviously different. Thus our claim is proved.
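A quick numerical illustration of the claim (with arbitrarily chosen small parameters, far from the asymptotic regime): taking $N$ to be the product of the first ten primes and $A = N^{1/2}$, a brute-force enumeration over subsets of the prime factors finds divisors of $N$ in $(A, 2A]$:

```python
from itertools import combinations

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # N is squarefree
N = 1
for p in primes:
    N *= p

A = int(N ** 0.5)      # the regime A = N^c with c = 1/2
count = 0
for r in range(len(primes) + 1):
    for subset in combinations(primes, r):
        d = 1
        for p in subset:
            d *= p
        if A < d <= 2 * A:     # a divisor of N in (A, 2A]
            count += 1

assert count > 0
```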
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Linear transformations map lines to lines, this is something you may already know or otherwise can show easily. Slightly more general than your statement: linear transformations also map parallelograms to parallelograms. Note that a parallelogram is completely determined by three (non-collinear) points (vertices), call them $\vec a$, $\vec b$ and $\vec c$. The parallelogram is then given by the set of all points $\vec x$:$$\vec x = \vec a + \lambda \bigl( \vec b - \vec a \bigr)+ \mu \bigl( \vec c - \vec a \bigr) \quad \quad \left( 0 \le \lambda, \mu \le 1 \right)$$A linear transformation $L$ maps these points to:$$\begin{align}L(\vec x) & = L\left(\vec a + \lambda \bigl( \vec b - \vec a \bigr)+ \mu \bigl( \vec c - \vec a \bigr)\right) \quad \quad\quad \left( 0 \le \lambda, \mu \le 1 \right)\\ & = L(\vec a) + \lambda \bigl( L(\vec b) - L(\vec a) \bigr)+ \mu \bigl( L(\vec c) - L(\vec a) \bigr)\end{align}$$ And the set of all points $L(\vec x)$ is now a parallelogram determined by the points $L(\vec a)$, $L(\vec b)$ and $L(\vec c)$. Note that these vertices are not collinear since $\vec a$, $\vec b$ and $\vec c$ weren't (at least when $L$ is invertible). I would greatly appreciate it if people could please take the time to explain why this phenomenon occurs and what the benefits of it are to us. In the context of integration, you can try to use this to map an ugly parallelogram to a nice(r) parallelogram, at least as a region of integration. If you can map an arbitrary parallelogram to a rectangle in the coordinate space, the integral limits will become simple constants.
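If you want to see the computation above numerically, here is a small check (the matrix and the vertices are arbitrary choices): linearity guarantees that $L$ sends each point of the parallelogram to the corresponding point of the image parallelogram.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # an arbitrary invertible linear map
a = np.array([1.0, 1.0])         # three non-collinear vertices
b = np.array([3.0, 2.0])
c = np.array([2.0, 4.0])

rng = np.random.default_rng(1)
for _ in range(100):
    lam, mu = rng.uniform(size=2)            # a point of the parallelogram
    x = a + lam * (b - a) + mu * (c - a)
    # L(x) equals the corresponding point of the image parallelogram
    image = M @ a + lam * (M @ b - M @ a) + mu * (M @ c - M @ a)
    assert np.allclose(M @ x, image)
```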
With recent discussions and active attempts to handle the declining riddle quality on the site, I thought I'd also resurrect this question too, as I think it's related (since broad low quality riddles => lots of guesses => HNQ)... As GentlePurpleRain points out, the algorithm to select posts for HNQ is as follows: $$\frac{(min(\text{AnswerCount}, 10) \times \text{QuestionScore}) \div 5 + sum(\text{AnswerScores})}{max(\text{QuestionAgeInHours} + 1, 6) ^ {1.4}}$$ Issue: As noted, this tends to promote exactly the wrong puzzles from PSE, as it's usually broad, low quality questions that gather many responses in a very short space of time. Assumption: The SE devs won't be willing/able to change this algorithm on a per-site basis. Challenge: How could the algorithm be modified across the entire network such that it doesn't make it worse for other sites, whilst still helping to somewhat mitigate the issues we see here on PSE? Suggestion: Ask the powers-that-be to modify the algorithm slightly, such that it has little to no impact on other existing sites*, but gives us a little more control over keeping low quality puzzling content out of the HNQ. Specifically, I suggest that we modify $\text{QuestionScore}$, in the algorithm above, such that it is calculated, not as a simple score, but as: $\text{QuestionScore}$ = $\text{QuestionUpVotes} - 3 \times \text{QuestionDownVotes} - 5 \times \text{QuestionVTC} $ Reasoning: All this change does is to allow downvotes to have a higher impact in preventing a question from qualifying for the HNQ list, and to allow our experienced, 3000+ rep users to have their close votes help even further. As it currently stands, there's a feedback loop where a broad/low quality puzzle hits the HNQ quickly, gaining it more drive-by upvotes than can be balanced out by downvotes from the community, which in turn keeps it "hot". 
This proposal, I think, would help to slow or reverse that feedback loop on poor-quality posts, whilst neither blocking "good" content nor impacting what qualifies as "hot" on other sites in the network. * In fact, it would arguably have a slightly positive effect, as it would help filter out more controversial posts in favour of universally praised ones.
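For concreteness, the quoted formula and the suggested tweak can be sketched in code. This is a rough model of mine, not SE's actual implementation; the names simply mirror the formula above:

```python
def hnq_score(answer_count, question_score, answer_scores, age_hours):
    """The HNQ ranking formula as quoted above."""
    return ((min(answer_count, 10) * question_score) / 5 + sum(answer_scores)) \
           / max(age_hours + 1, 6) ** 1.4

def proposed_question_score(up_votes, down_votes, close_votes):
    """The suggested replacement for QuestionScore."""
    return up_votes - 3 * down_votes - 5 * close_votes

# A broad puzzle at +20/-5 with 3 close votes: net score +15 today,
# but 20 - 15 - 15 = -10 under the proposal, pushing its HNQ score negative.
print(hnq_score(8, 20 - 5, [4, 3, 2, 1], 3))
print(hnq_score(8, proposed_question_score(20, 5, 3), [4, 3, 2, 1], 3))
```

Note how the proposal only bites when downvotes and close votes pile up; a question at +20/-0 with no close votes scores identically under both versions.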
It turns out that recovering $N$ from $e, d$ is a hard problem; in particular, if you can, you can factor values that are currently believed to be intractable (!).

To start with, a necessary and sufficient condition on $e, d$ being valid RSA exponents for a square-free modulus $N$ is that, for every prime factor $p$ of $N$, we have: $$ed - 1 = k(p-1)$$ for some integer $k$.

Now, let us assume that we have an Oracle that, given $e, d$, will recover a value $N$ for which $e, d$ are valid RSA exponents (assuming there is such an $N$); we further assume that it gives a reasonably large value $N$, specifically, one in the range $\ell \sqrt{ed} < N < 5ed$ (for a modest constant $\ell$).

Now, suppose that we have a value $N = pq$, where $p, q$ are both unknown Sophie Germain primes (that is, $2p+1$ and $2q+1$ are also prime), and are approximately the same size; that is, $q < p < 2q$. We will also assume that the values $2pq+1$ and $4pq+1$ both happen to be composite (which they will be for a majority of the possible $p, q$ pairs). Assuming $N$ is sufficiently large, there is no known way to factor it.

We note that $p \equiv q \equiv 2 \pmod 3$ (for a Sophie Germain prime $p > 3$, $p \equiv 1 \pmod 3$ would make $2p+1$ divisible by 3), and hence $2N + 1$ is a multiple of 3. So, we set $e = 3$ and $d = (2N + 1)/e$, and give $d, e$ to our Oracle. What the Oracle will do is return a value $N' = p_1' p_2' \cdots p_n'$ (where $p_1', p_2', \dots, p_n'$ is the prime factorization of $N'$). Such an $N'$ will always exist, as $N' = (2p+1)(2q+1)$ is such a valid modulus (hence the Oracle must return some value, if not necessarily $(2p+1)(2q+1)$). Because of the condition on RSA exponents, we have $ed - 1 = 2pq = k_i(p_i' - 1)$ for every prime $p_i'$.
Because of the range limitation on $N'$ (that is, $\ell \sqrt{ed} < N'$), we must have $p$ as one of the factors of $p_i' - 1$ (for some $i$), and similarly $q$ as a factor of $p_j'-1$ (for some different $j$; it must be different, otherwise this prime factor would be of the form $2kpq+1$; we assumed that $k=1$ and $k=2$ didn't yield a prime, and $k>2$ would give a value outside the $5ed$ range we assumed). Hence, we have $N' = k''(k'''p + 1)(k''''q + 1)$, for modest $k'', k''', k''''$. Given that, and $N = pq$, it is easy to factor $N$. This is much more of a sketch than I originally intended; there are a number of missing details, but it should not be hard to fill them in.
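To check the arithmetic on a toy example (the primes below are of course far too small to say anything about security; this is just my own sanity check of the relations used above):

```python
p, q = 29, 23                   # toy Sophie Germain primes: 2*29+1 = 59 and 2*23+1 = 47 are prime
N = p * q                       # 667
assert p % 3 == 2 and q % 3 == 2

e = 3
d = (2 * N + 1) // e            # 2N+1 = 1335 is divisible by 3, so d = 445
assert e * d - 1 == 2 * p * q   # ed - 1 = 2pq, as claimed

# N' = (2p+1)(2q+1) is a valid modulus for the very same pair (e, d):
N_prime = (2 * p + 1) * (2 * q + 1)    # 59 * 47 = 2773
for r in (2 * p + 1, 2 * q + 1):       # the prime factors of N'
    assert (e * d - 1) % (r - 1) == 0  # ed ≡ 1 (mod r-1) for every prime factor
print(N, N_prime)
```

So the pair $(e, d)$ built from $N = 667$ is simultaneously a valid exponent pair for $N' = 2773$, which is exactly the ambiguity the sketch exploits.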
MENSURATION

PERIMETER AND AREA

PERIMETER: Perimeter is the total boundary length of a closed figure. The commonly used units of perimeter are kilometre, metre and centimetre.

1. Square: 4 × length of a side
2. Rectangle: 2 × length + 2 × width = 2 (length + width)
3. Triangle (equilateral): 3 × length of a side
4. Triangle (isosceles): 2 × length of an equal side + length of the unequal side
5. Triangle (scalene): sum of all three sides

Example: Find the perimeter of a square of side 2.5 m.
Solution: Perimeter = 4 × side = 4 × 2.5 = 10 m

Example: Find the perimeter of a rectangle 30 cm long and 20 cm wide.
Solution: Perimeter = 2(ℓ + w) = 2(30 + 20) = 2 × 50 = 100 cm

Example: Find the perimeter of a triangle with sides 2 cm, 3 cm and 5 cm.
Solution: Perimeter = 2 + 3 + 5 = 10 cm

AREA: Area is the amount of surface covered by a shape.

1. Square: side × side
2. Rectangle: length × breadth
3. Triangle: (1/2) × base × height
4. Equilateral triangle: (√3/4) × side²
5. Parallelogram: base × height

Example:
(i) Find the area of a square with side 4 cm.
Solution: Area = side × side = 4 × 4 = 16 cm²
(ii) Find the area of a rectangle with ℓ = 4 cm, b = 6 cm.
Solution: Area = ℓ × b = 4 × 6 = 24 cm²
(iii) Find the area of a triangle with b = 5 cm, h = 2.5 cm.
Solution: Area = (1/2) × b × h = (1/2) × 5 × 2.5 = 6.25 cm²
(iv) Find the area of a parallelogram with base 7 cm and height 5 cm.
Solution: Area of parallelogram = base × height = 7 × 5 = 35 cm²

Areas of Rectangular Paths:
Type 1: Paths running around (inside/outside) a rectangular shape.
Rule 1: When the path runs outside, twice the width of the path should be added to the length and breadth of the inner rectangle.
Measures of Area:

1 cm = 10 mm ⇒ 1 cm² = 10 mm × 10 mm = 100 mm²
1 dm = 10 cm ⇒ 1 dm² = 10 cm × 10 cm = 100 cm²
1 m = 100 cm ⇒ 1 m² = 100 cm × 100 cm = 10,000 cm²
1 dam = 10 m ⇒ 1 dam² = 10 m × 10 m = 100 m²
1 km = 1000 m ⇒ 1 km² = 1000 m × 1000 m = 10,00,000 m²
1 are = 10 m × 10 m = 100 m²
1 hectare (1 ha) = 100 m × 100 m = 10,000 m²

Example: A garden is 80 m long and 65 m broad. A path 5 m wide is to be built outside all around it along its border. Find the area of the path.
Solution: Let ABCD represent the garden and the dotted region the 5 m path around it.
Area of path = Area of EFGH − Area of ABCD
HG = (65 + 5 + 5) m = 75 m
HE = (80 + 5 + 5) m = 90 m
Area of EFGH = 90 × 75 = 6750 m²
Area of ABCD = 80 × 65 = 5200 m²
Area of path = 6750 − 5200 = 1550 m²

Rule 2: When the path runs inside, twice the width of the path should be subtracted from the length and breadth of the outer rectangle.

Example: A path 2 m wide is built along the border inside a square park of side 20 m. Find the cost of covering the remaining portion of the park with grass at the rate of Rs. 2 per sq. m.
Solution: Let ABCD be the park; the dotted portion is the 2 m wide path, and the plain portion EFGH is the remaining part of the park.
EF = (20 − 2 − 2) m = 16 m
EH = (20 − 2 − 2) m = 16 m
Area of remaining portion EFGH = 16 × 16 = 256 m²
Area of park ABCD = 20 × 20 = 400 m²
Area of dotted portion = Area of ABCD − Area of EFGH = 400 − 256 = 144 m²
Cost of covering 144 m² with grass = 144 × 2 = Rs. 288

Type 2: Central paths, i.e. paths constructed through the centre of the field.
Example: A grassy plot is 80 m × 50 m. Two cross paths, each 4 m wide, are constructed at right angles through the centre of the field, such that each path is parallel to one of the sides of the rectangle. Find the total area used as path.
Solution: From the figure:
Area of EFGH = (80 × 4) m² = 320 m²
Area of PQRS = (50 × 4) m² = 200 m²
Area of the common square = (4 × 4) m² = 16 m²
Area of the cross paths = (320 + 200 − 16) m² = 504 m²
Note: From the figure it is clear that the darker region has been counted twice, because it is included in both EFGH and PQRS. So, to get the total area of the cross paths, we subtract this area once from the sum of the areas of the two paths.

AREA OF A PARALLELOGRAM:
Example: Find the area of a parallelogram with base 7 cm and altitude 4.3 cm.
Solution: Area of parallelogram = base × altitude = 7 cm × 4.3 cm = 30.1 cm²

Example: A field in the form of a parallelogram has one of its diagonals 42 m long, and the perpendicular distance of this diagonal from either of the outlying vertices is 10 m. Find the area of the field.
Solution: Diagonal AC = 42 m; BE = DF = 10 m.
Area of parallelogram = Area of ΔABC + Area of ΔADC
= (1/2) × AC × BE + (1/2) × AC × DF
= (1/2) × 42 × (10 + 10)
= (1/2) × 42 × 20
= 420 m²

AREA OF A TRIANGLE:
Example: Find the area of a triangle whose base is 9.6 cm and altitude is 5 cm.
Solution: Area of triangle = (1/2) × base × height = (1/2) × 9.6 × 5 = 24 cm²

AREA OF AN EQUILATERAL TRIANGLE:
Example: Find the area of an equilateral triangle whose side is 5 cm.
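The two path rules reduce to one-line formulas; a quick sketch (function names are mine) that reproduces the outside-path garden example and the cross-path example:

```python
def outer_path_area(length, breadth, w):
    """Area of a path of width w running outside a length x breadth rectangle."""
    return (length + 2 * w) * (breadth + 2 * w) - length * breadth

def cross_path_area(length, breadth, w):
    """Two central paths of width w at right angles; the doubly counted
    w x w square in the middle is subtracted once."""
    return length * w + breadth * w - w * w

print(outer_path_area(80, 65, 5))   # garden example: 1550
print(cross_path_area(80, 50, 4))   # grassy plot example: 504
```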
Solution: Area of equilateral triangle = (√3/4) × side² = (√3/4) × 5 × 5 ≈ 10.83 cm²

CIRCUMFERENCE AND AREA OF A CIRCLE:
The distance around a circle is called the circumference of the circle. Circumference = 2πr.

Example: Find the circumference of a circle of radius 14 cm. (Take π = 22/7.)
Solution: Circumference = 2πr = 2 × (22/7) × 14 = 88 cm

Example: How many times will the wheel of a car rotate in a journey of 66 km if the diameter of the wheel is 50 cm?
Solution: Diameter of wheel D = 50 cm, so r = D/2 = 50/2 = 25 cm.
∴ Circumference of the wheel = 2πr = 2 × (22/7) × 25 = 1100/7 ≈ 157.14 cm
Length of journey = 66 km = 66 × 1000 × 100 cm = 66,00,000 cm
Number of rotations needed to cover the journey
= (length of journey) ÷ (circumference of wheel)
= 66,00,000 ÷ 157.14 ≈ 42,000

AREA OF A CIRCLE: Area of circle = πr² (r = radius)
Example: Find the area of a circle of radius 7 cm.
Solution: Area of circle = πr² = (22/7) × 7 × 7 = 154 cm²

AREA BETWEEN TWO CONCENTRIC CIRCLES:
If two concentric circles have radii R and r (R > r):
Area between the circles = Area of outer circle − Area of inner circle
= πR² − πr² = π(R² − r²) = π(R + r)(R − r)

Example: A 7 m wide path is to be constructed all around, and outside, a circular garden of diameter 112 m. Find the cost of constructing the path at Rs. 10 per square metre.
Solution: Radius of inner circle = 112/2 = 56 m
Radius of outer circle = (112 + 7 + 7)/2 = 126/2 = 63 m
Area of path = Area of outer circle − Area of inner circle
= π(63²) − π(56²)
= π(63 + 56)(63 − 56)
= π × 119 × 7
= (22/7) × 119 × 7
= 2618 m²
Cost of constructing the path = 10 × 2618 = Rs. 26,180

QUADRILATERAL
Quadrilateral ABCD is shown in the figure. Its diagonal BD divides it into two triangles. AL and CM are the perpendiculars to BD from A and C respectively. The area A of quadrilateral ABCD is given by:
A = (area of ΔABD) + (area of ΔBCD)
= (1/2) × BD × AL + (1/2) × BD × CM
= (1/2) × BD × (AL + CM)

RHOMBUS
Area = (1/2) × d₁ × d₂, where d₁ and d₂ are the lengths of the diagonals.

TRAPEZIUM
Area = (1/2) × (sum of parallel sides) × height = (1/2)(b₁ + b₂)h

CUBE
If a is the length of an edge of a cube: Volume = a³ cubic units; total surface area = 6a² sq. units.

CUBOID
Let l, b, h be the edges of the cuboid. Then:
Volume of cuboid = lbh cubic units
Total surface area = 2(lb + bh + hl) sq. units
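As a quick check of the circular-path example, using the same π = 22/7 approximation as the worked solutions (the function name is mine):

```python
def ring_area(R, r, pi_val=22/7):
    """Area between concentric circles of radii R > r: pi * (R + r) * (R - r)."""
    return pi_val * (R + r) * (R - r)

area = ring_area(63, 56)   # garden of diameter 112 m with a 7 m path outside
cost = 10 * area           # Rs. 10 per square metre
print(round(area), round(cost))   # 2618 26180
```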
Every Group of Order p^2q is Solvable

Proposition 1: Let $G$ be a group of order $p^2q$ where $p$ and $q$ are primes. Then $G$ is solvable.

Proof: Let $G$ be a group of order $n := p^2q$, let $n_p$ denote the number of Sylow $p$-subgroups of $G$, and let $n_q$ denote the number of Sylow $q$-subgroups of $G$. There are three cases to consider.

Case 1: Suppose that $p = q$. Then $G$ is a group of order $p^3$, so $G$ is a $p$-group. From the result on the Every p-Group is Solvable page we have that $G$ is solvable.

Case 2: Suppose that $p > q$. Since $G$ is a finite group and $p$ is a prime such that $p \mid n = p^2q$ and $p \nmid q$, we have by The Third Sylow Theorem on The Sylow Theorems page that: \begin{align} \quad n_p &\equiv 1 \pmod p \\ \quad n_p &\mid q \end{align} Since $n_p \mid q$ and $q$ is prime, either $n_p = 1$ or $n_p = q$. If $n_p = q$ then the congruence above implies that $q \equiv 1 \pmod p$, i.e., $p \mid q - 1$, which is a contradiction since $0 < q - 1 < p$. Thus $n_p = 1$, so there is only one Sylow $p$-subgroup of $G$; call it $G_1$. By Lagrange's Theorem the possible orders of a subgroup of $G$ are $1$, $p$, $p^2$, $q$, $pq$, and $p^2q$, and $G_1$ has order $p^2$. Since $G$ has only one subgroup of order $p^2$ (as $n_p = 1$), we have by the result on the Subgroups of Finite Groups with Unique Order are Normal Subgroups page that $G_1$ is normal in $G$. So $\{ e \} = G_0 \leq G_1 \leq G_2 = G$ is such that $G_0$ is (trivially) normal in $G_1$ and $G_1$ is normal in $G_2$, while $|G_1/G_0| = p^2$, and from the result on the Every Group of Order p^2 is Abelian page we have that $G_1/G_0$ is abelian. Also, $|G_2/G_1| = q$, and since $q$ is prime, $G_2/G_1$ is cyclic and thus abelian. So $G$ is solvable.

Case 3: Suppose that $p < q$.
Since $G$ is a finite group and $q$ is a prime such that $q \mid n = p^2q$ and $q \nmid p^2$, we have by The Third Sylow Theorem that: \begin{align} \quad n_q & \equiv 1 \pmod q \\ \quad n_q & \mid p^2 \end{align} The only positive divisors of $p^2$ are $1$, $p$, and $p^2$, so $n_q$ is $1$, $p$, or $p^2$. If $n_q = p$ then the congruence above gives $p \equiv 1 \pmod q$, so $q \mid p - 1$, which cannot happen since $0 < p - 1 < q$. Thus either $n_q = 1$ or $n_q = p^2$.

If $n_q = 1$, then, as in Case 2, let $G_1$ be the unique Sylow $q$-subgroup of $G$. By Lagrange's Theorem it has order $q$, and since it is the only subgroup of $G$ of order $q$, it is normal in $G$. So the chain $\{ e \} = G_0 \leq G_1 \leq G_2 = G$ is such that $G_0$ is normal in $G_1$ and $G_1$ is normal in $G_2$. The factors are $G_1/G_0 \cong G_1$, which is abelian since it has prime order $q$, and $G_2/G_1$, which has order $p^2$ and is therefore abelian, since every group of order $p^2$ is abelian. So $G$ is solvable.

If $n_q = p^2$, then $G$ has $p^2$ Sylow $q$-subgroups, i.e., $p^2$ subgroups of order $q$. Any two distinct subgroups of prime order $q$ intersect trivially, and every element except the identity in each such subgroup has order $q$, so $G$ has $p^2 \cdot (q - 1)$ elements of order $q$. But $p^2 \cdot (q - 1) = p^2q - p^2 = n - p^2$, so $G$ has exactly $p^2$ elements not of order $q$. Since the intersection of a Sylow $p$-subgroup and a Sylow $q$-subgroup is trivial (as $p \neq q$), and a Sylow $p$-subgroup contains exactly $p^2$ elements, none of order $q$, these $p^2$ remaining elements form the unique Sylow $p$-subgroup of $G$. Let $G_1$ denote this unique subgroup of order $p^2$. Then $G_1$ is normal in $G$, and $\{ e \} = G_0 \leq G_1 \leq G_2 = G$ is a chain of subgroups with $G_0$ (trivially) normal in $G_1$ and $G_1$ normal in $G_2$, where $|G_1/G_0| = p^2$ and $|G_2/G_1| = q$, so both quotients are abelian. So $G$ is solvable. $\blacksquare$
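The counting in the $n_q = p^2$ case can be watched happening in a concrete group: for $p = 2$, $q = 3$, the alternating group $A_4$ (order 12) has $n_3 = 4$. A short brute-force check (my own illustration, not part of the cited pages):

```python
from itertools import permutations

# The n_q = p^2 case, concretely: G = A4 has order 12 = 2^2 * 3 (p = 2, q = 3)
# and four Sylow 3-subgroups, so 4 * (3 - 1) = 8 elements of order 3, and the
# remaining 4 elements form the unique Sylow 2-subgroup.

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]  # even permutations

def order(p):
    e, k, q = tuple(range(4)), 1, p
    while q != e:
        q = tuple(p[i] for i in q)  # q := p composed with q
        k += 1
    return k

assert len(A4) == 12
assert sum(order(p) == 3 for p in A4) == 8   # p^2 * (q - 1) elements of order q

# The remaining p^2 = 4 elements are closed under composition, i.e. a subgroup:
V = [p for p in A4 if order(p) != 3]
compose = lambda a, b: tuple(a[b[i]] for i in range(4))
assert len(V) == 4 and all(compose(a, b) in V for a in V for b in V)
print("A4 counting checks out")
```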
The Lemma to the Uniform Boundedness Principle

Lemma 1: Let $X$ be a complete metric space and let $\mathcal F$ be a collection of continuous functions on $X$. If for each $x \in X$ we have that $\displaystyle{\sup_{f \in \mathcal F} |f(x)| < \infty}$, then there is a nonempty open subset $U \subseteq X$ such that $\displaystyle{\sup_{x \in U, \: f \in \mathcal F} |f(x)| < \infty}$.

Proof: For each $n \in \mathbb{N}$ let: \begin{align} \quad U_n = \{ x \in X : |f(x)| \leq n, \forall f \in \mathcal F \} \end{align} Then $U_n$ can alternatively be represented as: \begin{align} \quad U_n = \bigcap_{f \in \mathcal F} \{ x \in X : |f(x)| \leq n \} \end{align} Observe that for each $f \in \mathcal F$, the set $\{ x \in X : |f(x)| \leq n \}$ is closed since $f$ is continuous. Therefore each $U_n$ is an intersection of closed sets and hence closed. Now since for each $x \in X$, $\displaystyle{\sup_{f \in \mathcal F} |f(x)| < \infty}$, there must exist an $N_x \in \mathbb{N}$ such that $x \in U_{N_x}$. Therefore: \begin{align} \quad X = \bigcup_{n=1}^{\infty} U_n \end{align} Since $X$ is a complete metric space, the Baire Category Theorem implies that $X$ cannot be a countable union of nowhere dense sets; hence there exists an $n^* \in \mathbb{N}$ such that the closed set $U_{n^*}$ has nonempty interior. Let $U = \mathrm{int} (U_{n^*})$. Then for all $x \in U$ we have that $|f(x)| \leq n^*$ for all $f \in \mathcal F$, that is: \begin{align} \quad \sup_{x \in U, \: f \in \mathcal F} |f(x)| < \infty \quad \blacksquare \end{align}
TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blue is basically 100%. Matthew Drury already gave the right answer, but I'd just like to add to it with some numerical examples, because you chose your numbers such that you actually get pretty similar answers for a wide range of different parameter settings. For example, let's assume, as you said in one of your comments, that the probability that people judge the color of a car correctly is 0.9. That is: $$p(\text{say it's blue}|\text{car is blue})=0.9=1-p(\text{say it isn't blue}|\text{car is blue})$$and also$$p(\text{say it isn't blue}|\text{car isn't blue})=0.9=1-p(\text{say it is blue}|\text{car isn't blue})$$ Having defined that, the remaining thing we have to decide is: what is the prior probability that the car is blue? Let's pick a very low probability just to see what happens, and say that $p(\text{car is blue})=0.001$, i.e. only 0.1% of all cars are blue. Then the posterior probability that the car is blue can be calculated as: \begin{align*}&p(\text{car is blue}|\text{answers})\\&=\frac{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})}{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})+p(\text{answers}|\text{car isn't blue})\,p(\text{car isn't blue})}\\&=\frac{0.9^{900}\times 0.1^{100}\times0.001}{0.9^{900}\times 0.1^{100}\times0.001+0.1^{900}\times0.9^{100}\times0.999}\end{align*} If you look at the denominator, it's pretty clear that the second term in that sum will be negligible, since the relative size of the terms in the sum is dominated by the ratio of $0.9^{900}$ to $0.1^{900}$, which is on the order of $10^{858}$. And indeed, if you do this calculation on a computer (taking care to avoid numerical underflow issues) you get an answer that is equal to 1 (within machine precision).
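The calculation above, done "taking care to avoid numerical underflow issues", is easiest in log space; here is a small sketch of how I would do it:

```python
from math import exp, log, log10

acc, prior = 0.9, 0.001          # accuracy of each witness; prior P(car is blue)
n_blue, n_not = 900, 100         # how many said "blue" vs. "not blue"

# log-likelihoods of the 1000 answers under each hypothesis
ll_blue = n_blue * log(acc) + n_not * log(1 - acc)
ll_not  = n_blue * log(1 - acc) + n_not * log(acc)

# posterior odds = likelihood ratio * prior odds, all handled in logs
log_odds = (ll_blue - ll_not) + (log(prior) - log(1 - prior))
posterior = 1 / (1 + exp(-log_odds))  # log_odds is hugely positive, so this is 1.0

# the likelihood ratio alone, in decimal orders of magnitude: (900-100)*log10(9)
print(posterior, (n_blue - n_not) * log10(acc / (1 - acc)))
```

Working in logs sidesteps the $0.9^{900}$-style underflow entirely, and the second printed number recovers the roughly $10^{763}$ likelihood ratio directly from the 800-vote margin.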
The reason the prior probabilities don't really matter much here is because you have so much evidence for one possibility (the car is blue) versus another. This can be quantified by the likelihood ratio, which we can calculate as:$$\frac{p(\text{answers}|\text{car is blue})}{p(\text{answers}|\text{car isn't blue})}=\frac{0.9^{900}\times 0.1^{100}}{0.1^{900}\times 0.9^{100}}\approx 10^{763}$$ So before even considering the prior probabilities, the evidence suggests that one option is already astronomically more likely than the other, and for the prior to make any difference, blue cars would have to be unreasonably, stupidly rare (so rare that we would expect to find 0 blue cars on earth). So what if we change how accurate people are in their descriptions of car color? Of course, we could push this to the extreme and say they get it right only 50% of the time, which is no better than flipping a coin. In this case, the posterior probability that the car is blue is simply equal to the prior probability, because the people's answers told us nothing. But surely people do at least a little better than that, and even if we say that people are accurate only 51% of the time, the likelihood ratio still works out such that it is roughly $10^{13}$ times more likely for the car to be blue. This is all a result of the rather large numbers you chose in your example. If it had been 9/10 people saying the car was blue, it would have been a very different story, even though the same ratio of people were in one camp vs. the other, because statistical evidence doesn't depend on this ratio, but rather on the numerical difference between the opposing factions. In fact, in the likelihood ratio (which quantifies the evidence), the 100 people who say the car isn't blue exactly cancel 100 of the 900 people who say it is blue, so it's the same as if you had 800 people all agreeing it was blue. And that's obviously pretty clear evidence.
(Edit: As Silverfish pointed out, the assumptions I made here actually implied that whenever a person describes a non-blue car incorrectly, they will default to saying it's blue. This isn't realistic of course, because they could really say any color, and will say blue only some of the time. This makes no difference to the conclusions though, since the less likely people are to mistake a non-blue car for a blue one, the stronger the evidence that it is blue when they say it is. So if anything, the numbers given above are actually only a lower bound on the pro-blue evidence.)
I've been given the following problem to solve: Let $a$ be an integer and $m$ and $d$ natural numbers. Assume $a^d \equiv 1 \pmod{m}$, and that $d$ is the smallest possible number for which this holds. In that case, show that $a^i \equiv a^j \pmod{m}$ if and only if $i \equiv j \pmod{d}$ for all natural numbers $i$ and $j$. If we assume that $a^d \equiv 1 \pmod{m}$ and $a^i \equiv a^j \pmod{m}$, then we can write\begin{align}mx = (a^d - 1) \\my = (a^i - a^j)\end{align} for some integers $x$ and $y$. But I don't seem to get anywhere by comparing these expressions for $m$. If we assume $i \equiv j \pmod{d}$, then we may write \begin{align} dz = (i-j) \end{align} for some integer $z$. I can use this to write $i = dz + j$ and substitute into $a^i$, but I don't seem to get anywhere there either. I'm really confused by this problem. Any help is much appreciated.
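A brute-force check on small numbers is not a proof, but it can be useful for convincing yourself the statement is true before trying to prove it. Take $a = 2$, $m = 7$, where the smallest such $d$ is 3:

```python
# Sanity check (not a proof) of: a^i ≡ a^j (mod m)  iff  i ≡ j (mod d),
# with a = 2, m = 7, where d = 3 is the smallest d with a^d ≡ 1 (mod m).
a, m = 2, 7
d = next(k for k in range(1, m) if pow(a, k, m) == 1)
assert d == 3   # 2^3 = 8 ≡ 1 (mod 7), and no smaller power works

for i in range(1, 50):
    for j in range(1, 50):
        assert (pow(a, i, m) == pow(a, j, m)) == (i % d == j % d)
print("statement holds for all 1 <= i, j < 50")
```

The powers of 2 mod 7 cycle as 2, 4, 1, 2, 4, 1, ..., which is exactly the periodicity the problem asks you to prove in general.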
Well, yes and no. Triple DES using 3 different keys is still considered secure because there are no known attacks which completely break its security to a point where it is feasible nowadays to crack it. The Triple DES algorithm provides around 112 bits of security against brute-force attacks (when taking into account the meet-in-the-middle attack). For ...

The main difference is that with two 56-bit keys the maximal security level is 112 bits, and thus an attack that has a cost of $2^{112}$ operations is no attack, whereas for three 56-bit keys the maximal security level is 168 bits, and an attack that has a cost of $2^{112}$ operations counts as an attack. This means that two-key 3DES is still a bit weaker ...

Well, the standard answer is to preserve compatibility with DES; a hardware circuit that implemented 3DES (with EDE) could also be used to do DES as well (by, say, making all three subkeys the same). Now, there is one slight problem with this straightforward argument; 3DES (EEE, that is, with three encrypt operations) would have this property as well; if we ...

In my opinion, there is no reason to choose 3DES over AES, ever. Especially if it is in software, since 3DES performance has always been terrible. Furthermore, most CPUs ship with AES accelerators nowadays, which means that AES is even faster. But, sadly, change management is hard, and certain smart cards or hardware modules do not support AES, but support ...

This claim is bogus. DES itself has a 13-round differential with probability around $2^{-47}$, so Triple DES with its 48 rounds is resistant to any sort of differential attack. The paper's authors are not really competent in the subject.

There is none. All cryptography involves the number 2, which is prime, whenever dealing with information in strings of bits; or in esoteric cases like ROT13, well, there's a prime number right there, 13, not to mention that 26, the size of the alphabet on which ROT13 works, is the product of primes 2 and 13.
If you use a key for close to $2^{n/2}$ blocks in CBC mode, then the chance of getting a collision in the ciphertext gets rather high because of the birthday paradox. As the ciphertext is used as a vector for the next calculation, and since that vector should be unpredictable, you would likely lose confidentiality. Note that the author seems to have ...

The triple DES (3DES) block cipher works by essentially running the block through DES three times. Triple DES is also known as "DES EDE" (encrypt-decrypt-encrypt) and under the name given by the standard document: "TDEA". The TDEA algorithm is described in NIST Special Publication 800-67 Revision 1, where paragraph 3.2 describes the TDEA keying options. ...

The answer is: Why do the encrypted files always start with "Salted__" ("U2FsdGVkX1" in base64)? Isn't giving away information like this insecure? The encrypted files must always start with "Salted__" to interoperate with OpenSSL. OpenSSL expects this. The 8 bytes that spell "Salted__" are always immediately followed by another random 8 bytes of salt. ...

DES has a block size of 8 bytes. Two blocks therefore come to 16 bytes. It looks like Adobe were encrypting passwords using two blocks of 3DES in ECB mode. Because all these passwords are eight bytes long, the second block is empty and is just filled with zeros. The second block gets started at all because of the string-terminating NUL character at the ...

NIST just recently (11/27/2017) put out a bulletin that Triple DES will be deprecated in the future, and will be disallowed in protocols like TLS and IPsec, with a future deprecation timeline to be released. NIST is urging vendors to transition TLS implementations to use AES as soon as possible. It will soon be removed from the set of FIPS-approved ...

The article mentions that 3DES was used to encrypt these passwords in ECB mode. DES has a 64-bit/8-byte block. So let's say you use ECB to encrypt a nine-byte password.
The first 8 bytes are encrypted using ECB. So far so good. But what happens when we come to the ninth byte? Well, we're now in a new block but only the first byte is populated with any ...

As far as I know your attack is the best attack known, unless something better has very recently been published. Please note that for DES as the basic cipher the chosen $A$ may not work, but you can choose another $A$ and try again. Also, for a generic cipher with $k$-bit key, the complexity is $$2^{k+1}=2\times 2^k=O(2^k),$$ as $k$ increases.

The computational complexity of the attack you describe is $2^{112}$, since that's how much work it takes to build the look-up table. In fact, for standard 2-key 3DES like you describe, an attacker capable of building such a look-up table could just as well store $C = E_{K_1}( D_{K_2}( E_{K_1}( P )))$ instead of just $D_{K_2}( E_{K_1}( P ))$ in the table, ...

Note: I'll disregard the base64 encoding in the following text; the base64 encoding does not change the properties of the generated ciphertext. What you are running into is padding together with ECB mode. This padding can be any static padding. Most common is PKCS#5 padding, but zero padding is also possible. It is not possible to test which padding is used, ...

There is a very interesting paper that relates to this exact question (but you wouldn't guess it from the title). The paper is titled Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems. In Section 3, the paper considers the multiple encryption problem and gives novel attacks that are ...

Three problems here: The online tool used expects a 24-byte (48 hex-character) key; thus you should enter E6F1081FEA4C402CC192B65DE367EC3EE6F1081FEA4C402C as the key, duplicating the first 8 bytes; this is the customary way to extend a two-block triple DES key of 16 bytes to a three-block triple DES key of 24 bytes. You gave 16 bytes (32 hex chars) as input, ...
EEE with $K_1$=$K_2$=$K_3$ is measurably less insecure than EDE with $K_1$=$K_2$=$K_3$, because the former has 48 rounds, but the latter reduces to just one encryption E, thus 16 rounds. Two consequences: this makes brute force require 3 times more rounds, thus adds about $\log_2(3)\approx 1.58$ bits of practical security against brute force (security in ...

Well, whether $AES'$ is as secure as $AES$ depends on the length of $k_1, k_2$. If they are both 128 bits, then what you effectively have is a standard 128-bit AES, except that prior to round 6, you replace the running key with an independent key (and you tweaked the last round, but that's cryptographically harmless). Now, it is never a good idea to do ...

3DES is a block cipher which processes "blocks" of 64 bits. A block cipher is not sufficient to encrypt a message, defined as a sequence of potentially many bytes. Hence the use of a mode of operation, which organizes things; this may imply some padding, and an initialization vector. TripleDESCryptoServiceProvider can do all that: you specify the key, the ...

I do not understand how we can decrypt a cipher which was encrypted with $K_1$, with $K_2$. Triple DES essentially involves three encryptions on the plain text: first using $K_1$, second using $K_2$, and third using $K_3$. Now one may argue that $K_2$ is not being used for encryption but decryption. Well, technically speaking, encryption and decryption ...

In two-key 3DES, two of the keys are equal, so the key size is only 112 bits, compared to the 168 bits of full 3DES. The advantage is a smaller key size without a correspondingly large loss in security: both two- and three-key 3DES can be attacked in about $2^{112}$ time. With the encrypt-decrypt-encrypt construction it clearly must be the first and last key that ...

Definitely a mistake. The text clearly contradicts itself. ...
2DES has an effective key length of 57. And later ... There does not appear to be a meet-in-the-middle attack on 3DES2, however, so that its key length of 112 is also its effective key length. ... which clearly contradicts: 2DES, although having the same effective key length as 3DES2 and ...

I am learning the meet-in-the-middle attack on DES. I don't know of any meet-in-the-middle attack on DES; I'll assume you're talking about 2DES (where you apply DES with one key $k_1$, and then apply another iteration of DES (possibly in decrypt mode) with another key $k_2$). Why can we guarantee to find one and only one pair of $k_1$ and $k_2$? We don't. In ...

Yes. The following papers should be exactly what you are looking for. The following paper shows that the answer is "Yes" and provides evidence that 3-key Triple DES is more secure than single DES: Code-Based Game-Playing Proofs and the Security of Triple Encryption. Mihir Bellare, Phillip Rogaway. IACR ePrint 2004/331. (Full version of a paper published ...

Assuming the mod 11 check digit is among 0123456789X, disclosing it reduces the number of possible plaintexts among 8-digit numbers by a factor of about 11 (from 100000000 to about 9090909; exactly how much depends very slightly on the value of the check digit), and thus reveals about $\log_2(11)$ bits of information about the plaintext, that is, just a little ...

Yes. The keys are indeed used in a linear manner. In particular, they are used in $E$-$D$-$E$ mode: encrypt using the first 56 bits as the key, decrypt using the next 56 bits as the key, and then encrypt again using the final 56 bits. This way it's possible to use triple DES (which is officially called TDEA) for the DES, 2DES and 3DES variations. The first would use $K_1$-$...
If we talk about key-search attacks (rather than key compromise and/or side-channel attacks), the answer must be no, for the best known method is impractical. On the other hand, there have been numerous successful key-recovery attacks against devices using TDES, including on some that try hard to avoid it. One example here, another there.

My first thought was that I could set the IV to the first 8 bytes of the CT [and] decode the rest[.] This is exactly how CBC works. For all blocks but the first, encryption is defined by $C_n=E_K(C_{n-1}\oplus P_n)$ and, therefore, decryption is achieved by $P_n=C_{n-1}\oplus D_K(C_n)$. Since there is no previous ciphertext for the first plaintext block ($...
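Several of the excerpts above lean on the meet-in-the-middle attack, which is easy to demonstrate end to end on a toy cipher. The 16-bit "cipher" below is an invertible mixing function I made up purely for illustration (it is emphatically NOT DES); the point is only that recovering both keys of a double encryption costs on the order of $2 \cdot 2^{16}$ table operations rather than $2^{32}$:

```python
from collections import defaultdict

MASK = 0xFFFF

def enc(k, x):                      # toy 16-bit keyed permutation (NOT DES)
    return ((x ^ k) + k) & MASK

def dec(k, y):                      # its inverse
    return ((y - k) & MASK) ^ k

k1, k2 = 0x1234, 0xBEEF             # the "unknown" double-encryption keys
p1, p2 = 0x0042, 0x1337             # two known plaintexts
c1 = enc(k2, enc(k1, p1))
c2 = enc(k2, enc(k1, p2))

# Meet in the middle: tabulate E_a(p1) for every key a, then for every key b
# look up D_b(c1) in the table; a match means E_a(p1) == D_b(c1).
table = defaultdict(list)
for a in range(2**16):
    table[enc(a, p1)].append(a)

candidates = [(a, b) for b in range(2**16) for a in table.get(dec(b, c1), [])]

# Weed out false positives with the second known plaintext/ciphertext pair:
keys = [(a, b) for a, b in candidates if enc(b, enc(a, p2)) == c2]
assert (k1, k2) in keys
print(len(candidates), "middle matches,", len(keys), "surviving key pair(s)")
```

Scaled up to 56-bit keys, the same shape of attack is what collapses 2DES from a nominal 112-bit key to roughly 57-bit effective security, as the first excerpt above puts it.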
Background: This is the first unit in junior year for a course in theoretical computer science. Prior to this, the students have already had AP CS A and a mish-mash of other topics. They have, by this point, studied:

- Python (as their introduction to programming)
- Java for AP CS A (loops, arrays, object design, recursion, polymorphism)
- 6502 Assembly (for stack operation, including recursion again)
- C (largely for pointers)
- Additional things in Java (e.g. trees, linked lists, tries, HashMaps, generics) and some algorithms (such as BFS, DFS, Huffman encoding, Conway's Game of Life, Prim's, etc.)

This course, then, is a year of theoretical computer science. The first unit is background mathematics that the students will need during the rest of the course, and it focuses on boolean algebra and sets. The requested Unit Review here is for my boolean algebra opening. The general goals are to help the students gain serious familiarity with algebraic manipulations and with the symbols themselves. After these lessons, we will move on to Conjunctive and Disjunctive Normal Forms, fairly substantial algebraic manipulations involving 5-10 steps, and then into logic gates for a gentle introduction to computer circuitry, so it is important that they come out prepared for that.

Word of warning: The unit below has some serious weaknesses, as you will see. Please be gentle! This year will be my second time giving these lessons, and while I have made a series of improvements from what I did this last September, there is still a long way to go. There is a lot of lecture, and I would really like to make it more engaging when I do it again.

The Curriculum

Lesson one: We begin our foray into boolean algebra by converting our Java boolean symbols to formal math symbology: && to $\land$, || to $\lor$, and ! to $\lnot$.
We then discuss how != is essentially $\oplus$, and begin our discussion of $\Longleftrightarrow$ as being a near approximation of == (with a promise that we will come back to this again, because we are not done with it yet). We then introduce tautologies (such as $B \lor \lnot B$), and go over their truth tables. I finally provide them with an extremely short homework worksheet where they have to circle the boolean statements that are tautologies. (It also provides practice with all of the symbols we have discussed.) Lesson two: We review lesson 1, and then introduce the concept of contradictions as the opposite of tautologies, go over their truth tables, and look at the obvious contradiction $B \land \lnot B$. I ask them to create two tautologies with a partner, the more creative the better, walk around, and ask for a number of students to write interesting ones on the board. We move on to $A \Rightarrow B$, discuss its meaning and truth table, and then ask what in Java is similar? I first propose: if (A) B = true; We talk about this for a while, and talk about why, though it may seem like a reasonable choice at first, it is not really the same idea. We then spend some time on the hardest idea of implies: what we mean when we say that $False \Rightarrow True$ is $True$. At this point, I take a break to introduce the way I approach proofs in this class. Over the course of the year, certain very important proofs must be reproduced by a student chosen at random during the following period. This forces the students not to ignore these proofs, and helps them to internalize both the mathematical symbols that I need them to gain fluency with, and some very clever proof techniques. I spend a substantial chunk of time then doing a formal proof that $(False \Rightarrow True) \equiv True$. I finish the class with a second, informal, and intuitive proof of the same (which they will not need to reproduce).
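Since these students already know Python, one possible active-learning artifact (my sketch, not part of the unit as described) is to have pairs build a brute-force truth-table checker and then test each other's candidate tautologies and contradictions with it:

```python
from itertools import product

def truth_table(expr, n):
    """Evaluate a boolean expression (a Python function) on all 2**n assignments."""
    return [expr(*vals) for vals in product([False, True], repeat=n)]

def is_tautology(expr, n):
    return all(truth_table(expr, n))

def is_contradiction(expr, n):
    return not any(truth_table(expr, n))

def implies(a, b):
    return (not a) or b  # A => B; in particular, False => anything is True

assert is_tautology(lambda b: b or not b, 1)              # B v ~B
assert is_contradiction(lambda b: b and not b, 1)         # B ^ ~B
assert implies(False, True)                               # F => T is T
assert is_tautology(lambda a, b: implies(a and b, a), 2)  # (A ^ B) => A
```

Writing `implies` themselves, and discovering it must return `True` on `(False, True)` to make `(A ^ B) => A` a tautology, doubles as motivation for the vacuous-truth discussion.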
Lesson 3: We review from the prior day, and a student is called up to reproduce the proof. We then spend about 15 minutes going over necessary and sufficient in depth, and re-examine both $\Longleftrightarrow$ and $\Rightarrow$ through this new lens. During this process, I introduce $\Longleftrightarrow$ more properly as $(A \Rightarrow B) \land (A \Leftarrow B)$ and as iff, and we talk about the similarities that $\Longleftrightarrow$ shares with $\equiv$. I go over DeMorgan's Laws, and briefly cover 5 more symbols: ∀, ∃, |, ∈, and ∴. At this point, I give them English statements to translate to mathematical statements, and mathematical statements to translate to English statements, one at a time. We use the following format: I provide an exercise, they work on it with a buddy or on their own while I bounce around the room offering help, and then I go over how we do it. The final exercise is to translate Goldbach's Conjecture (which I do not identify to them until after the translation work is done) from boolean symbology into English: $\forall n|n \gt 2 \wedge (\frac{n}{2}) \in \mathbb{N}$, $\exists (m,ℓ)\,\,|\,\, m \in P \wedge ℓ \in P \wedge m+ℓ=n$ As they leave, I give them a homework assignment with practice problems in very simple 1- or 2-step boolean algebraic reductions (including applying DeMorgan's Law), translating statements from English back and forth into symbols, creating truth tables from algebraic statements, and creating algebraic statements from truth tables.

The Request

This is my opener for the year, and it is so. very. dry. There is also very little activity! I am already aware that I've got a real snoozer here. This material must come first in the year for large-scale organizational reasons, but I would really like ideas to make it more engaging. I am also particularly seeking out ideas for active learning and ways to utilize pair partners to improve both engagement and mastery.
Integrals of Complex Functions Along Piecewise Smooth Curves Examples 1 Recall from the Integrals of Complex Functions Along Piecewise Smooth Curves page that if $h : [a, b] \to \mathbb{C}$ is a single real-variable, complex-valued function where $u, v : [a, b] \to \mathbb{R}$ are such that $h(t) = u(t) + iv(t)$ then the integral of $h$ over $[a, b]$ is defined as:

(1) $\displaystyle \int_a^b h(t) \: dt = \int_a^b u(t) \: dt + i \int_a^b v(t) \: dt$

Furthermore, if $A \subseteq \mathbb{C}$ is open, $f : A \to \mathbb{C}$, and $\gamma : [a, b] \to \mathbb{C}$ is a piecewise smooth curve contained in $A$ (i.e., there exists a partition $a = a_0 < a_1 < ... < a_n = b$ for which $\gamma'$ exists on $(a_k, a_{k+1})$ and $\gamma$ is continuous on $[a_k, a_{k+1}]$ for all $k \in \{ 0, 1, ..., n - 1 \}$), then the integral of $f$ along $\gamma$ is defined as:

(2) $\displaystyle \int_{\gamma} f(z) \: dz = \sum_{k=0}^{n-1} \int_{a_k}^{a_{k+1}} f(\gamma(t)) \gamma'(t) \: dt$

We will now look at some examples of computing integrals of complex functions.

Example 1 Evaluate the integral $\displaystyle{\int_{\gamma} \mathrm{Re} (z) \: dz}$ where $\gamma$ is the line segment with initial point at the origin and with terminal point at $2 - i$. The curve $\gamma$ can be parameterized for $t \in [0, 1]$ by:

(3) $\gamma(t) = (2 - i)t$

The derivative of $\gamma$ is:

(4) $\gamma'(t) = 2 - i$

Therefore the integral of $\mathrm{Re} (z)$ along $\gamma$ is:

(5) $\displaystyle \int_{\gamma} \mathrm{Re}(z) \: dz = \int_0^1 \mathrm{Re}((2-i)t)(2-i) \: dt = (2-i) \int_0^1 2t \: dt = 2 - i$

Example 2 Evaluate the integral $\displaystyle{\int_{\gamma} \mathrm{Im} (z) \: dz}$ where $\gamma$ is the line segment with initial point at $1 + i$ and with terminal point at $-1 - i$. The curve $\gamma$ can be parameterized for $t \in [0, 1]$ by:

(6) $\gamma(t) = (1+i) + t\bigl((-1-i)-(1+i)\bigr) = (1+i)(1-2t)$

The derivative of $\gamma$ is:

(7) $\gamma'(t) = -2(1+i)$

Therefore the integral of $\mathrm{Im} (z)$ along $\gamma$ is:

(8) $\displaystyle \int_{\gamma} \mathrm{Im}(z) \: dz = \int_0^1 (1-2t)(-2)(1+i) \: dt = -2(1+i) \int_0^1 (1-2t) \: dt = 0$
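The value in Example 1 can be double-checked numerically. Here is a small sketch using Python's built-in complex numbers; a midpoint Riemann sum approximates the defining integral $\int_0^1 f(\gamma(t))\gamma'(t)\,dt$ and should land on $2 - i$:

```python
# Midpoint Riemann sum for Example 1: f(z) = Re(z) along gamma(t) = (2 - i) t.
N = 100_000
dt = 1.0 / N
total = 0 + 0j
for k in range(N):
    t = (k + 0.5) * dt        # midpoint of the k-th subinterval
    z = (2 - 1j) * t          # gamma(t)
    dz = (2 - 1j) * dt        # gamma'(t) dt
    total += z.real * dz      # f(gamma(t)) gamma'(t) dt
print(total)                  # ≈ (2 - 1j)
```

The same loop with `z.imag` and the Example 2 parameterization approximates 0.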
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason to. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
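Adjunction facts like these are finite enough to machine-check on a small example. Here is a brute-force sketch over the powerset of a two-element set (my choice of test poset), verifying both "\( \vee \) is left adjoint to \( \Delta \)" and "\( \wedge \) is right adjoint to \( \Delta \)" on every triple of elements:

```python
from itertools import product

# Test poset A: all subsets of {0, 1}, ordered by inclusion.
A = [frozenset(s) for s in ([], [0], [1], [0, 1])]

def leq(a, b):  return a <= b   # the order on A (subset inclusion)
def join(a, b): return a | b    # binary join = union
def meet(a, b): return a & b    # binary meet = intersection

for a, a2, b in product(A, repeat=3):
    # join is left adjoint to the diagonal:  a v a' <= b  iff  a <= b and a' <= b
    assert leq(join(a, a2), b) == (leq(a, b) and leq(a2, b))
    # meet is right adjoint to the diagonal: b <= a ^ a'  iff  b <= a and b <= a'
    assert leq(b, meet(a, a2)) == (leq(b, a) and leq(b, a2))

print("both adjunctions verified on the powerset of {0, 1}")
```

Replacing `leq`, `join`, and `meet` with the reversed order, meet, and join gives a hands-on check of the duality puzzles as well.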
If you really believed the CAPM's prediction that $\alpha=0$, then imposing $\alpha=0$ in your estimation would indeed lead to your 2nd formula. The problems? The CAPM doesn't work, so imposing a false restriction during estimation is problematic. More generally, taking factor models extremely seriously and imposing $\alpha=0$ in estimation to gain efficiency loses you some robustness because factor models are almost certainly at least somewhat misspecified. Empirical researchers generally don't restrict a constant to zero during estimation. Model 1 (without a constant): Let's assume we have the following regression model (without a constant): $$ r_{st} - r_{ft} = \beta_1 \left( r_{mt} - r_{ft} \right) + \epsilon_t$$ Assuming the orthogonality condition $\operatorname{E}\left[\epsilon_t \left( r_{mt} - r_{ft}\right)\right] = 0$, then $\beta_1$ would be given by: $$ \beta_1 = \frac{\operatorname{E}\left[\left( r_{st} - r_{ft} \right)\left(r_{mt} - r_{ft} \right) \right] }{\operatorname{E}\left[\left(r_{mt} - r_{ft}\right)^2\right]}$$ If you really take the CAPM theory seriously, then there is something principled to imposing the restriction $\alpha= 0$ in estimation (which is what we did above). Quoting Cochrane (2005) with regards to more general factor models with normally distributed errors, "The maximum likelihood estimate of $\beta$ is the OLS regression without a constant." As Cochrane describes though, researchers don't generally estimate without a constant because it sacrifices some robustness.
Model 2 (add a constant): $$ r_{st} - r_{ft} = \alpha_2 + \beta_2 \left( r_{mt} - r_{ft} \right) + \epsilon_t$$ Now with $\alpha_2$ there and assuming the orthogonality conditions $\operatorname{E}[\epsilon_t] = 0$ and $\operatorname{E}\left[\epsilon_t \left( r_{mt} - r_{ft}\right)\right] = 0$, you get: $$ \beta_2 = \frac{\operatorname{Cov}\left( r_{st} - r_{ft} , r_{mt} - r_{ft} \right) }{\operatorname{Var}\left( r_{mt} - r_{ft} \right)}$$ Model 1 is a special case of Model 2 where $\alpha$ is restricted to 0. Model 3 (if the risk free rate weren't random): If the risk free rate isn't random then it drops out: $$ \beta_3 = \frac{\operatorname{Cov}\left( r_{st}, r_{mt} \right) }{\operatorname{Var}\left( r_{mt} \right)}$$ In periods like the present where the risk free rate is consistently near 0, maybe this bogus assumption is innocuous. I think it's hand-wavy, intro MBA type stuff though. A comment on the CAPM Be aware that the CAPM is a zombie theory: long ago shot dead in academia because it doesn't work, the CAPM continues to skulk the earth. Quoting Fama and French (2004), "... the empirical record of the model is poor—poor enough to invalidate the way it is used in applications." References Cochrane, John. 2005. Asset Pricing, p. 273. Fama, Eugene F., and Kenneth R. French. 2004. "The Capital Asset Pricing Model: Theory and Evidence." Journal of Economic Perspectives, 18 (3): 25-46.
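To make the Model 1 / Model 2 distinction concrete, here is a small numpy sketch on simulated excess returns (the data-generating process, with a nonzero true alpha, is my own illustration). The moment formulas above and least squares with/without an intercept give identical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated excess returns (stand-ins for r_st - r_ft and r_mt - r_ft);
# the assumed data-generating process has alpha = 0.002 and beta = 1.3.
mkt = rng.normal(0.005, 0.04, size=2_000)
stock = 0.002 + 1.3 * mkt + rng.normal(0.0, 0.02, size=2_000)

# Model 1 (no constant): beta_1 = E[(r_s - r_f)(r_m - r_f)] / E[(r_m - r_f)^2]
beta1 = np.mean(stock * mkt) / np.mean(mkt**2)

# Model 2 (with constant): beta_2 = Cov / Var
beta2 = np.cov(stock, mkt, ddof=0)[0, 1] / np.var(mkt)
alpha2 = np.mean(stock) - beta2 * np.mean(mkt)

# The same numbers via least squares without / with an intercept column:
b1 = np.linalg.lstsq(mkt[:, None], stock, rcond=None)[0][0]
a2, b2 = np.linalg.lstsq(np.column_stack([np.ones_like(mkt), mkt]), stock,
                         rcond=None)[0]

print(beta1, b1)   # Model 1 imposes alpha = 0
print(beta2, b2)   # Model 2 estimates alpha freely
```

With a nonzero true alpha, Model 1's beta absorbs part of the mispricing, which is the robustness cost of the restriction.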
Homework Statement There is an infinite charged plate in the yz plane with surface charge density ##\sigma = 8\times10^{8}\ C/m^2## and a negatively charged particle at coordinate (4,0,0). Find the magnitude of the e-field at coordinate (4,4,0). Homework Equations E= E1+E2 So I figured to get the e-field at point (4,4,0), I need to find the resultant e-field from the negatively charged particle and the plate ##E_{resultant}=E_{particle}+E_{plate}## ##E_{particle}=\frac{kq}{d^2}=\frac{(9\times10^9)(-2\times10^{-6})}{4^2}=-1125\,N/C## Now the plate is where I'm confused. If this were a wire, it would have been okay for me since I only need to deal with one dimension. Since what they requested was a plate in the yz plane, does this mean that my ##\sigma=dy*dy*x##? where ##dy## is the 'slice' I take and x is the width of the plate? Is that accurate? If it is true, then to find the e-field created by that slice at the point, ##dE=\frac{kdq}{R^2}## ##dE=\frac{k\sigma *x*dy}{a^2+y^2}## I know that the vertical components of the resultant e-field will cancel out because there are the same number of segments above and below the point. So I need to find ##dE_{x}##, which = ##dE\cos\theta##, where ##\theta## is shown: So ##dE_{x} = dE\cos\theta = (\frac{k\sigma *x*dy}{a^2+y^2}) (\frac{a}{\sqrt{y^2+a^2}})##, Now the problem is I can't integrate this to find my resultant e-field because I do not know the value of x. If this were a wire in a plane it would have been solvable for me, but now I'm kind of stuck. Any clues/help? Thanks :)
Search Now showing items 1-10 of 27 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Real Convergent Sequence is Cauchy Sequence

Theorem

Let $\sequence {x_n}$ be a sequence in $\R$ which converges to a limit $l$. Then $\sequence {x_n}$ is a Cauchy sequence.

Proof 1

Let $\epsilon > 0$. Then also $\dfrac \epsilon 2 > 0$. Because $\sequence {x_n}$ converges to $l$, we have: $\exists N: \forall n > N: \size {x_n - l} < \dfrac \epsilon 2$ So if $m > N$ and $n > N$, then: $\size {x_n - x_m} = \size {x_n - l + l - x_m} \le \size {x_n - l} + \size {l - x_m}$ (by the Triangle Inequality) $< \dfrac \epsilon 2 + \dfrac \epsilon 2$ (by choice of $N$) $= \epsilon$ Thus $\sequence {x_n}$ is a Cauchy sequence. $\blacksquare$

Proof 2

The result also follows as a special case of Convergent Sequence is Cauchy Sequence, which holds in any metric space. $\blacksquare$
$$\sum\limits_{n = 1}^\infty {\frac{{\sin (n)}}{{\sqrt {{n^3} + {{\cos }^3}(n)} }}} $$ I tried to check with Maplesoft and Microsoft Excel and it seems this series is divergent. Is my conjecture true? How can I prove it? Thanks in advance.

For $n \geq 2$ we have $$ \frac{|\sin n|}{\sqrt{n^{3}+\cos^{3}(n)}} \leq \frac{1}{\sqrt{n^{3} + \cos^{3}(n)}} \leq \frac{1}{\sqrt{n^{3}-1}}. $$ We have $$ \frac{1}{\sqrt{n^{3}-1}} \sim \frac{1}{n^{3/2}} $$ as $n \to \infty$, so the series $\sum_{n \geq 2}\frac{1}{\sqrt{n^{3}-1}}$ converges by the limit comparison test; hence by the comparison test we conclude that the series $$ \sum_{n \geq 1}\frac{\sin n}{\sqrt{n^{3}+\cos^{3}(n)}} $$ converges absolutely, and the convergence follows.

Hint $$\left|\frac{\sin(n)}{\sqrt{n^3+\cos^3(n)}}\right|\leq \frac{1}{\sqrt{n^3-1}}$$ Observe that $\frac{1}{\sqrt{n^3-1}}\approx\frac{1}{n^{3/2}}$. Then apply the limit comparison test.
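A quick numeric sanity check (my sketch; partial sums only suggest, they don't prove anything) agrees with the answers rather than the divergence conjecture: since the series converges absolutely, the tail beyond $N$ is at most about $2/\sqrt{N}$, so the partial sums settle down.

```python
import math

def partial_sum(N):
    """Partial sum of sin(n) / sqrt(n^3 + cos(n)^3) up to n = N."""
    return sum(math.sin(n) / math.sqrt(n**3 + math.cos(n)**3)
               for n in range(1, N + 1))

# Successive partial sums should differ by at most roughly 2 / sqrt(N):
for N in (100, 1_000, 10_000):
    print(N, partial_sum(N))
```

Slow settling like this is easy to mistake for divergence in a spreadsheet.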
Section 11.2 Problem 10: Find the limit, if it exists, or show that the limit does not exist. $\lim_{(x,y)\to(0,0)}\frac{x^2\sin^2y}{x^2+2y^2}$. Comment: Notice that $0\leq\frac{x^2\sin^2y}{x^2+2y^2}=\left(\frac{x^2}{x^2+2y^2}\right)\sin^2y\leq\sin^2y$. Then apply the squeeze theorem. Section 11.2 Problem 26: Determine the set of points at which the function is continuous. $f(x,y,z)=\sqrt{y-x^2}\ln z$. Comment: As the function is a composition of several continuous functions, it is continuous on its domain. Section 11.2 Problem 28: Determine the set of points at which the function is continuous. $f(x,y)=\begin{cases}\frac{xy}{x^2+xy+y^2}&\text{if }(x,y)\neq(0,0)\\0&\text{if }(x,y)=(0,0)\end{cases}$. Comment: Apparently $f$ is continuous on $\mathbb{R}^2-\{(0,0)\}$. To see if it is continuous at $(0,0)$, we need to check whether $\lim_{(x,y)\to(0,0)}f(x,y)=0$ (note that along the line $y=x$ the function equals $\frac{1}{3}$). Section 11.2 Problem 30: Use polar coordinates to find the limit. $\lim_{(x,y)\to(0,0)}(x^2+y^2)\ln(x^2+y^2)$. Comment: Let $r=\sqrt{x^2+y^2}$. Then $\lim_{(x,y)\to(0,0)}(x^2+y^2)\ln(x^2+y^2)=\lim_{r\to 0^+}r^2\ln(r^2)$. Then apply l'Hospital's rule. Section 11.3 Problem 26: Find the first partial derivatives of the function. $u=x^{y/z}$. Comment: Recall that $(x^a)'=ax^{a-1}$; for the $y$ and $z$ derivatives, write $x^{y/z}=e^{(y/z)\ln x}$. Section 11.3 Problem 55: Find the indicated partial derivative. $f(x, y, z)=e^{xyz^2}$; $f_{xyz}$. Solution: First $f_x=yz^2f$. Second $f_{xy}=z^2f+yz^2f_y=z^2f+yz^2xz^2f=(z^2+xyz^4)f$. Last $f_{xyz}=(2z+4xyz^3)f+(z^2+xyz^4)f_z=(2z+4xyz^3)f+(z^2+xyz^4)2xyzf=2z(1+3xyz^2+x^2y^2z^4)e^{xyz^2}$. Section 11.3 Problem 64a: Show that each of the following functions is a solution of the wave equation $u_{tt} = a^2u_{xx}$. $u=\sin(kx)\sin(akt)$. Comment: Use the expression to get $u_{tt}$, $u_{xx}$. Check the equation holds.
Circuit Complexity I think the first issue is to really understand what is meant by 'controlling' a quantum system. For this, it might help to start thinking about the classical case. How many different $n$-bit input, 1-bit output classical computations are there? For each of the $2^n$ possible inputs, there are $2$ different possible outputs. Thus, there are $2^{2^n}$ different possible functions that you could be asked to build, if what you're talking about in terms of controllability is "build any of the possible functions". You might then go on to ask "what fraction of these functions can I create by using no more than $2^n/n$ two-bit gates?" (you could presumably generalise this to $k$-bit gates to get a relative complexity argument between two circuit sizes). There's a detailed calculation you can perform to get a good bound on this number, showing that it's small. This is something called Shannon's Theorem (but what isn't?), but there's at least an intuitive explanation: it requires a bit string of $2^n$ bits to specify which possible computation you're wanting to perform. This information must be incompressible, as there's no 'space' to be saved. But, if you could create all of these functions using shorter circuits, then describing that circuit would be a way of compressing the data. The equivalent statement in quantum computing is "build any $n$-qubit unitary to within some accuracy, $\epsilon$". But the classical answer is already horrific, even before we have to take into account the precision issues of specifying an arbitrary unitary. The point is that with both classical and quantum computations, we focus very specifically on the algorithms that we can implement 'easily', for some definition of 'easily', which is usually that the algorithms that we want to implement scale as some polynomial of the input size (with the possible exception of things like Grover's algorithm). 
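The counting argument can be made concrete for $n = 2$, where there are only $2^{2^2} = 16$ functions. The sketch below (my own illustration; NAND is my choice of gate set, not one fixed by the answer) brute-forces how many of the 16 truth tables are reachable as the gate budget grows:

```python
from itertools import combinations_with_replacement

# Truth tables over the 4 assignments of (x1, x2), packed into 4-bit ints.
MASK = 0b1111
X1, X2 = 0b0101, 0b0011

def nand(u, v):
    return ~(u & v) & MASK

def reachable(max_gates):
    """All 2-input truth tables buildable with at most `max_gates` NAND gates."""
    found = {X1, X2}
    frontier = {frozenset((X1, X2))}          # sets of wires built so far
    for _ in range(max_gates):
        nxt = set()
        for wires in frontier:
            for u, v in combinations_with_replacement(sorted(wires), 2):
                w = nand(u, v)
                found.add(w)
                nxt.add(wires | {w})
        frontier = nxt
    return found

for g in range(6):
    print(g, "gates:", len(reachable(g)), "of 16 functions")
```

Even at this toy size the counts creep up gate by gate (XNOR is the last holdout); for large $n$ the gap between $2^{2^n}$ functions and the circuits describable with few gates becomes astronomical, which is the content of the Shannon-style bound.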
So really the answer to the question depends on the algorithms you wish to run on the computer. If the algorithm scales as $O(n^2)$, then appropriately controlling a 1000-qubit machine is roughly 10,000 times harder than controlling a 10-qubit machine, in the sense that you need to protect it from decoherence for that much longer, implement that many more gates, etc. Decoherence Following up on the comments: Let's consider a specific algorithm or a specific kind of circuit. My question could be restated: is there any indication, theoretical or practical, of how the (engineering) problem of preventing decoherence scales as we scale the number of these circuits? This divides into two regimes. For small scale quantum devices, before error correction, you might say we're in the NISQ regime. This answer is probably most relevant to that regime. However, as your device gets larger, there will be diminishing returns; it gets harder and harder to accomplish the engineering task just to add a few more qubits. At that point, you have to transition to using error correction and, indeed, fault-tolerance (which is just a form of error correction which is capable of tolerating errors in the gates that implement the correction). Specifically, fault-tolerance says that there exists a threshold error probability $p$ such that, if you can perform every gate with an error probability $\leq p$, you can define some logical qubits (made up of multiple physical qubits) such that the result of any computation of arbitrary length can be accomplished with arbitrary precision. Whatever your physical hardware, by the time you've left the NISQ regime, you've done a lot of work eliminating decoherence as much as possible, and made sure you're as far below the $p$ threshold as possible. Current estimates place $p$ somewhere around the $1\%$ mark. The question becomes "what are the overheads for these fault-tolerant processes".
The precise details are scheme dependent, and much work continues into how to minimise these costs. The scaling argument, however, says that for each logical qubit, you require $O(-\log\epsilon)$ physical qubits to achieve an overall accuracy of $\epsilon$. There is also a time cost; most of your time is spent performing error correction rather than the logical gates. Again, this is an $O(-\log\epsilon)$ scale factor. For specific numbers, you might be interested in the sorts of calculations that Andrew Steane has performed: see here (although the numbers could probably be improved a bit now). What is really quite compelling is to see how the coefficients in these relations change as your gate error gets closer and closer to the error correcting threshold. I can't seem to lay my hands on a suitable calculation (I'm sure Andrew Steane did one at some point. Possibly it was a talk I went to.), but they blow up really badly, so you want to be operating with a decent margin below the threshold. That said, there are a few assumptions that have to be made about your architecture before these considerations are relevant. For example, there has to be sufficient parallelism; you have to be able to act on different parts of the computer simultaneously. If you only do one thing at a time, errors will always build up too quickly. You also want to be able to scale up your manufacturing process without things getting any worse. It seems that, for example, superconducting qubits will be quite good for this. Their performance mainly depends on how accurately you can make different parts of the circuit. You get it right for one, and you can "just" repeat many times to make many qubits.
Short answer I think the formula for the expected successes is this: \begin{align}E &= n \cdot \frac{d - t + 1}{e-1}, &\text{where } & 1 ≤ t ≤ e ≤ d \text{ and } e ≥ 2\end{align} While the variance could be this (not tested): \begin{align} V = n \cdot \left(\frac{d-t+1}{d-1} - \frac{(e-t)^2-(d-e+1)^2}{(d-1)^2}\right)\end{align} Here is what all the variables mean: \$d\$ ... number of sides a single die has (in Shadowrun \$d = 6\$ - we roll plain old six-sided dice) \$n\$ ... number of such dice in the pool (usually \$n = Attribute + Skill\$ in Shadowrun) \$e\$ ... minimum roll for a die to explode (\$e = 6\$ in Shadowrun - only the 6 explodes) \$t\$ ... minimum roll for a success (\$t = 5\$ in Shadowrun - 5 and 6 are successes) \$h\$ ... number of hits, i.e. dice with a result \$≥t\$ in the roll (not needed here) Knowing the average spread (from the variance) is nice too, because you'll also want to know if it is still a frequent occurrence to get, I don't know, 12 successes on a roll of just 16 dice, or if 8 hits is already very unlikely. I.e. with a lower explosion threshold, higher hit counts become more likely. However, the expectation value might be very similar to that of a lower hit-threshold \$t\$ at higher explosion-threshold \$e\$. The Math behind Exploding on 6 only: If you want formulae, I thought I might give a brief summary of my question about exploding die pools and its answers.
You can show the formulae below to be true for probabilities of exactly \$h\$ hits, the expectation values of hits \$E\$ and their variances \$V\$: \begin{align}p^\text{non-exp}_{d,n,t,h} &= \binom{n}{h}\left(\frac{d-t+1}{d}\right)^h\left(1-\frac{d-t+1}{d}\right)^{n-h}\\E^\text{non-exp}_{d,n,t} &= n\ \frac{d-t+1}{d}\\V^\text{non-exp}_{d,n,t} &= n\ \frac{(t-1)(d-t+1)}{d^2}\\E^\text{exp}_{d,n,t} &= n\ \frac{d-t+1}{d-1}\\V^\text{exp}_{d,n,t} &= n\ \frac{t\,(d-t+1)}{(d-1)^2}\\\end{align} The ideas for proofs can be found on math stackexchange. Now this assumes that dice only explode at the maximum roll of 6 in your case. So it can't tell you anything about rolls where dice explode e.g. on 5 and 6. Except, that it stands to reason that a roll of a six-sided die where 1 and 2 are no successes, 3 and 4 are successes without re-rolls and 5 and 6 are successes with explosion is equal to a roll of three-sided dice where 1 is not a success, 2 is a success without re-roll and 3 is an exploding success. I've put together a small web-page (useful for Shadowrun or the oWoD) for this and tested it with a simulation: Arbitrary explosion thresholds: The formulae should be fairly easy to modify for arbitrary explosion thresholds with the same reasoning used in my link. Let's call the explosion threshold \$e\$. So if the roll explodes on 5 and 6, then \$e = 5\$ in this case (for Shadowrun we'd have \$e = d = 6\$). The expectation value \$E_1\$ of a single die has to fulfill this equation: $$ E_1 = 0 \cdot \frac{t-1}{d} + 1 \cdot \frac{e-t}{d} + (E_1+1) \cdot \frac{d-e+1}{d}$$ Zero successes come with a probability of \$\frac{t-1}{d}\$ (rolls below \$t\$), one success without explosion with a probability of \$\frac{e-t}{d}\$ (rolls from \$t\$ to \$e-1\$), and in case of an exploding roll we have a probability of \$\frac{d-e+1}{d}\$ (rolls from \$e\$ to \$d\$) to get \$E_1+1\$ successes. This can be solved for \$E_1\$.
Now the expectation value for \$n\$ dice is just \$n\$ times that for one die (\$E = n E_1\$): \begin{align}E &= n \cdot \frac{d - t + 1}{e-1}, &\text{where }& 1 ≤ t ≤ e ≤ d\end{align} Note that while the formulae for exploding on the highest value are thoroughly tested, I did not test the above formula.
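A Monte Carlo check is straightforward, though. The sketch below (my own illustration; the function name and defaults are made up) rolls exploding pools directly and compares the sample mean and variance against \$E = n\frac{d-t+1}{e-1}\$ and \$V = n\frac{(d-t+1)(d+t-e)}{(e-1)^2}\$:

```python
import random

def roll_exploding_pool(n, d=6, t=5, e=5, rng=random):
    """Roll n d-sided dice: a face >= t is a hit; a face >= e is
    re-rolled (it 'explodes') and can keep adding hits."""
    hits = 0
    for _ in range(n):
        while True:
            r = rng.randint(1, d)
            if r >= t:
                hits += 1
            if r < e:
                break
    return hits

rng = random.Random(42)
d, n, t, e = 6, 10, 5, 5            # pool of 10; 5+ is a hit, 5+ also explodes
trials = 100_000
samples = [roll_exploding_pool(n, d, t, e, rng) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

expected_mean = n * (d - t + 1) / (e - 1)                    # = 5.0 here
expected_var = n * (d - t + 1) * (d + t - e) / (e - 1) ** 2  # = 7.5 here
```

With 100,000 trials the sample mean should land within a few hundredths of the prediction.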
Answer

$y=0.4 \times 4^x$

Work Step by Step

Here, we have $6.4 = ab^2$ and $409.6 = ab^5$. Dividing the second equation by the first eliminates $a$: $\dfrac{409.6}{6.4} = b^3 = 64$ This gives: $b=\sqrt [3] {64} =4$ Then, $a= \dfrac{6.4}{b^2} = \dfrac{6.4}{16} =0.4$ Hence, $y=0.4 \times 4^x$
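The arithmetic above can be checked in a couple of lines; this is just a quick sketch confirming that $a = 0.4$ and $b = 4$ satisfy both given points:

```python
# Fitting y = a * b**x through (2, 6.4) and (5, 409.6):
b = (409.6 / 6.4) ** (1 / 3)   # a*b**5 / (a*b**2) = b**3 = 64
a = 6.4 / b ** 2               # back-substitute: a = 6.4 / 16

assert abs(b - 4.0) < 1e-9
assert abs(a - 0.4) < 1e-9
assert abs(a * b ** 2 - 6.4) < 1e-9    # first point recovered
assert abs(a * b ** 5 - 409.6) < 1e-9  # second point recovered
```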
Pranesachar, CR (1999) A class of 'matching-equivalent' bipartite graphs. In: Discrete Mathematics, 203 (1-3). pp. 207-213.

Abstract

Two bipartite graphs $G_1 = (V_1 = S_1 \cup T_1, E_1)$ and $G_2 = (V_2 = S_2 \cup T_2, E_2)$, in which there are no isolated points and in which the cardinalities of the 'upper' sets are equal, that is, $|S_1| = |S_2| = n$ (say), are said to be matching-equivalent if and only if the number of $r$-matchings (i.e., the number of ways in which $r$ disjoint edges can be chosen) is the same for each of the graphs $G_1$ and $G_2$ for each $r$, $1 \leq r \leq n$. We show that the number of bipartite graphs that are matching-equivalent to $K_{n,n}$, the complete bipartite graph of order $(n,n)$, is $2^{n-1}$, subject to an inclusion condition on the sets of neighbors of the vertices of the 'upper' set. The proof involves adding an arbitrary number of vertices to the 'lower' set which are neighbors to all the vertices in the upper set and then analyzing the 'modified' rook polynomial that is specially defined for the purpose of the proof.

Item Type: Journal Article
Additional Information: The copyright of this article belongs to Elsevier Science.
Keywords: r-matchings; Modified rook polynomial; Matching-equivalent bipartite graphs
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
Depositing User: Mr Naveen Mathad
Date Deposited: 08 Feb 2008
Last Modified: 27 Aug 2008 13:10
URI: http://eprints.iisc.ac.in/id/eprint/12947
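To make the notion of an $r$-matching concrete: for $K_{n,n}$ itself the count has a simple closed form, $\binom{n}{r}^2 r!$ (choose $r$ upper vertices, $r$ lower vertices, pair them up). The sketch below is my own illustration, not from the paper, and checks the closed form by direct enumeration:

```python
from itertools import combinations, permutations
from math import comb, factorial

def r_matchings_knn(n, r):
    """Closed form: choose r 'upper' and r 'lower' vertices of K_{n,n},
    then pair them up in r! ways."""
    return comb(n, r) ** 2 * factorial(r)

def r_matchings_brute(n, r):
    """Count sets of r disjoint edges of K_{n,n} by enumeration:
    each (sorted upper r-subset, ordered lower r-injection) pair
    yields exactly one r-matching."""
    count = 0
    for uppers in combinations(range(n), r):
        for lowers in permutations(range(n), r):
            count += 1          # pairing uppers[i] -- lowers[i]
    return count

assert all(r_matchings_knn(n, r) == r_matchings_brute(n, r)
           for n in range(1, 5) for r in range(n + 1))
```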
Continuous Mappings on Topological Spaces Recall from the Local Bases of a Point in a Topological Space page that if $(X, \tau)$ is a topological space then a local basis of a point $x \in X$ is a collection $\mathcal B_x$ of open neighbourhoods of $x$ such that for each $U \in \tau$ with $x \in U$ there exists a $B \in \mathcal B_x$ such that $B$ is contained in $U$, that is, $x \in B \subseteq U$. Definition: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be a mapping from $X$ to $Y$. A mapping $f$ is said to be Continuous at the point $a \in X$ if there exist local bases of $a$ and $f(a)$, denote them $\mathcal B_a$ and $\mathcal B_{f(a)}$, such that for all $B \in \mathcal B_{f(a)}$ there exists a $B' \in \mathcal B_a$ such that $f(B') \subseteq B$. In other words, $f$ is continuous at $a \in X$ if there exist local bases $\mathcal B_a$ of $a$ and $\mathcal B_{f(a)}$ of $f(a)$ such that for every set $B \in \mathcal B_{f(a)}$ there exists a $B' \in \mathcal B_a$ whose image, $f(B')$, is contained in $B$, i.e., $f(B') \subseteq B$. Note that $f(a)$ denotes an element in $Y$, i.e., $f(a) \in Y$, while $f(B')$ denotes the image of the set $B'$ under $f$. We will later see that there are many equivalent definitions for a mapping $f : X \to Y$ to be continuous at a point $a \in X$. Let's look at an example. Let $X = Y = \mathbb{R}$ be the topological space with the usual topology $\tau$ of open intervals. Let $f : \mathbb{R} \to \mathbb{R}$ be the mapping defined for all $x \in X = \mathbb{R}$ by $f(x) = x + 1$. We claim that $f$ is continuous at the point $0 \in X = \mathbb{R}$. To verify this, we must show that there exist local bases of $a = 0 \in X = \mathbb{R}$ and $f(a) = 1 \in Y = \mathbb{R}$, denoted $\mathcal B_{a=0}$ and $\mathcal B_{f(a) = 1}$, such that for all $B \in \mathcal B_{f(a) = 1}$ there exists a $B' \in \mathcal B_{a=0}$ such that $f(B') \subseteq B$. Consider the following local bases of $a = 0$ and $f(a) = 1$: $\mathcal B_{a=0} = \{ (a, b) : a < 0 < b \}$ and $\mathcal B_{f(a)=1} = \{ (c, d) : c < 1 < d \}$. Let $B \in \mathcal B_{f(a) = 1}$.
Then $B = (c, d)$ where $c, d \in \mathbb{R}$ and $c < 1 < d$. Let $B' \in \mathcal B_{a=0}$ be defined by $B' = (a, b) = (c - 1, d - 1)$. Then we have that $f(B') = (c, d) \subseteq B$. Therefore $f$ is continuous at $0 \in \mathbb{R}$. In fact, it should not be too hard to see that $f$ is actually continuous for all $x \in X = \mathbb{R}$ using a similar argument for each $x \in X = \mathbb{R}$. Such mappings $f : X \to Y$ that are continuous for all $x \in X$ are said to be continuous (globally), which we define below. Definition: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be a mapping from $X$ to $Y$. Then $f$ is said to be Continuous or Continuous on All of $X$ if $f$ is continuous at every point $a \in X$.
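For finite spaces the definition can be checked mechanically. The following Python sketch is my own toy example (the spaces, topologies, and map are not from the text): it tests continuity at a point by searching for a suitable open set, using the full topologies themselves as local bases.

```python
# A tiny finite model: X = {1, 2, 3} and Y = {'a', 'b'} with explicit
# topologies, and a map f given as a dict. All names here are illustrative.
X = {1, 2, 3}
tau_X = [set(), {1}, {1, 2}, {1, 2, 3}]   # a topology on X
tau_Y = [set(), {'a'}, {'a', 'b'}]        # a topology on Y
f = {1: 'a', 2: 'a', 3: 'b'}

def continuous_at(f, tau_X, tau_Y, x):
    """f is continuous at x iff for every open U containing f(x)
    there is an open V containing x with f(V) contained in U.
    (The full topologies serve as local bases of every point.)"""
    fx = f[x]
    for U in tau_Y:
        if fx not in U:
            continue
        # search for an open V with x in V and f(V) a subset of U
        if not any(x in V and {f[v] for v in V} <= U for V in tau_X):
            return False
    return True

# This particular f is continuous at every point of X.
assert all(continuous_at(f, tau_X, tau_Y, x) for x in X)
```

The same predicate applied pointwise over all of $X$ is exactly the global-continuity definition above.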