Morgenstern first defines the notion of a linear algorithm. A linear algorithm gets as input $x_1,\ldots,x_n$ and its goal is to compute some $y_1,\ldots,y_m$, each of which is a (specific) linear combination of $x_i$s. The algorithm proceeds in steps, starting with step $n+1$. At step $t$, the algorithm computes $x_t = \lambda_t x_i + \mu_t x_j$ for some $i,j < t$. At the end of the computation, for each $i$, $y_i = x_j$ for some $j$.
For example, here is an algorithm computing the unnormalized DFT on 2 variables:$$x_3 \gets x_1 + x_2 \\x_4 \gets x_1 - x_2$$Similarly, the unnormalized two dimensional DFT on $2^2$ variables is computed by:$$x_5 \gets x_1 + x_2 \\x_6 \gets x_1 - x_2 \\x_7 \gets x_3 + x_4 \\x_8 \gets x_3 - x_4 \\x_9 \gets x_5 + x_7 \\x_{10} \gets x_5 - x_7 \\x_{11} \gets x_6 + x_8 \\x_{12} \gets x_6 - x_8$$
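As a sanity check (not part of Morgenstern's formulation), the eight steps above can be executed mechanically and compared against the 4×4 two-dimensional DFT matrix, which is the Kronecker product of the 2-point matrix with itself:

```python
import numpy as np

# 2-point unnormalized DFT matrix; the 2D transform on 2x2 points is its
# Kronecker product with itself
F2 = np.array([[1, 1], [1, -1]])
F2x2 = np.kron(F2, F2)

def two_dim_dft_steps(x1, x2, x3, x4):
    # the eight butterfly steps from the text
    x5, x6 = x1 + x2, x1 - x2
    x7, x8 = x3 + x4, x3 - x4
    x9, x10 = x5 + x7, x5 - x7
    x11, x12 = x6 + x8, x6 - x8
    # outputs ordered to match the rows of F2x2
    return np.array([x9, x11, x10, x12])

x = np.array([3.0, 1.0, 4.0, 1.0])
print(two_dim_dft_steps(*x))   # equals F2x2 @ x
```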
We can view each $x_t$ as an $n$-dimensional vector which gives the linear combination of $x_1,\ldots,x_n$ producing $x_t$; call this vector $v_t$. The vectors $v_1,\ldots,v_n$ are just the $n$ basis vectors.
Morgenstern defines the quantity $\Delta_t$, which is the maximum magnitude of the determinant of any square submatrix of the matrix $V_t$ whose rows are $v_1,\ldots,v_t$.
Lemma. Let $c \geq 1/2$. If $|\lambda_s|,|\mu_s| \leq c$ for all $s$ then $\Delta_{n+t} \leq (2c)^t$.
Proof. The proof is by induction on $t$. When $t = 0$, this is easy to verify directly since $V_n$ is just the identity matrix. Consider now any $t > 0$. Every square submatrix of $V_t$ is either a square submatrix of $V_{t-1}$, in which case the magnitude of its determinant is at most $(2c)^{t-1} \leq (2c)^t$ by induction, or it involves the new row $v_t = \lambda_t v_i + \mu_t v_j$. In the latter case, we can write the square submatrix $A$ as $A = \lambda_t B + \mu_t C$, where $B,C$ are square submatrices of $V_{t-1}$ (replace the relevant part of $v_t$ by the corresponding parts of $v_i$ and $v_j$). Since the determinant is a linear function of any of its rows, $\det(A) = \lambda_t \det(B) + \mu_t \det(C)$. By induction, $|\det(B)|,|\det(C)| \leq (2c)^{t-1}$, and so $$|\det(A)| \leq |\lambda_t| |\det(B)| + |\mu_t| |\det(C)| \leq c(2c)^{t-1} + c(2c)^{t-1} = (2c)^t.$$
Corollary. Computing the DFT on $n$ variables using a linear algorithm with bounded coefficients requires $\Omega(n\log n)$ steps.
Proof. The determinant of the DFT matrix is $n^{n/2}$. Hence any linear algorithm computing the DFT in $t$ steps satisfies $\Delta_t \geq n^{n/2}$. If the bound on the coefficients is $c$, then the lemma shows that $(2c)^t \geq n^{n/2}$ and so $t = \Omega(n\log n)$.
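Concretely, with coefficient bound $c = 1$ the lemma gives $2^t \geq n^{n/2}$, so $t \geq (n/2)\log_2 n$. A one-liner to evaluate this bound (an illustrative sketch, not from the paper):

```python
import math

def dft_lower_bound(n, c=1.0):
    # (2c)^t >= n^(n/2)  implies  t >= (n/2) * ln(n) / ln(2c)
    return (n / 2) * math.log(n) / math.log(2 * c)

print(dft_lower_bound(1024))  # about 5120 steps for n = 1024, c = 1
```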
Remark. Strassen has shown that any algebraic algorithm (algorithm involving $+,-,\cdot,/$) for computing the DFT can be transformed to a linear algorithm using the same number of steps. |
I need to solve this Partial Differential Equation for $\lambda(x,y)$,
$$\frac{\partial \lambda}{\partial x} + h(x,y)\frac{\partial \lambda}{\partial y} - \lambda \frac{\partial h}{\partial y} = 0$$ where $$\frac{dy}{dx} = h(x,y).$$
The additional information given is $\lambda$ is a bivariate polynomial in $x$ and $y$. My initial approach was to try using the method of characteristics, but I know I can't since $y$ is dependent on $x$.
So I guess I should use some sort of degree bound and find the coefficients by equating powers on both sides. However, I just wanted to know if there is actually a better method before I proceed. And even if it has to be done by equating powers, how do I get the degree bound?
Additional information: And this PDE is part of a Symmetry Solver to find the infinitesimals, $\xi$ and $\eta$ of the transformed canonical co-ordinates of a first order differential equation. |
AliFMDMultCuts ()
AliFMDMultCuts (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
AliFMDMultCuts (const AliFMDMultCuts &o)
AliFMDMultCuts & operator= (const AliFMDMultCuts &o)
void Reset ()
Double_t GetMultCut (UShort_t d, Char_t r, Double_t eta, Bool_t errors) const
Double_t GetMultCut (UShort_t d, Char_t r, Int_t etabin, Bool_t errors) const
void SetMultCuts (Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
void SetMPVFraction (Double_t frac=0)
void SetNXi (Double_t nXi)
void SetIncludeSigma (Bool_t in)
void SetProbability (Double_t cut=1e-5)
void Set (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1)
void Print (Option_t *option="") const
void FillHistogram (TH2 *h) const
void Output (TList *l, const char *name=0) const
Bool_t Input (TList *l, const char *name)
EMethod GetMethod () const
const char * GetMethodString (Bool_t latex=false) const
Cuts used when calculating the multiplicity.
We can define our cuts in several ways (in order of priority):

Using a fixed value \( v\) - AliFMDMultCuts::SetMultCuts
Using a fraction \( f\) of the most probable value ( \( \Delta_p\)) from the energy loss fits
Using some number \( n\) of widths ( \( \xi\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
Using some number \( n\) of widths ( \( \xi+\sigma\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
Using the \( x\) value for which \( P(x>p)\), given some cut value \( p\)
Using the lower fit range of the energy loss fits
The member function AliFMDMultCuts::Reset resets all cut values, meaning the lower bound on the fits will be used by default. This is useful to ensure a fresh start:
The member function AliFMDMultCuts::GetMethod will return the method identifier for the current method employed (AliFMDMultCuts::EMethod). Likewise, the method AliFMDMultCuts::GetMethodString gives a human-readable string describing the current method employed.
Definition at line 38 of file AliFMDMultCuts.h.
Set the cut for the specified method.

Note that if method is kFixed and only fmd1i is specified, then the outer rings' cut value is increased by 20% relative to fmd1i.

Also note that if method is kLandauWidth and cut2 is larger than zero, then \(\sigma\) of the fits is included in the cut value.

Parameters:
method Method to use
fmd1i Value for FMD1i
fmd2i Value for FMD2i (if < 0, use fmd1i)
fmd2o Value for FMD2o (if < 0, use fmd1i)
fmd3i Value for FMD3i (if < 0, use fmd1i)
fmd3o Value for FMD3o (if < 0, use fmd1i)
Definition at line 63 of file AliFMDMultCuts.cxx.
Referenced by AliFMDMultCuts(), AliFMDSharingFilter::AliFMDSharingFilter(), and DepSet(). |
First-Order Systems: The Happy Family Все счастли́вые се́мьи похо́жи друг на дру́га, ка́ждая несчастли́вая семья́ несчастли́ва по-сво́ему.
— Лев Николаевич Толстой,
Анна Каренина Happy families are all alike; every unhappy family is unhappy in its own way.
— Lev Nicholaevich Tolstoy,
Anna Karenina
I was going to write an article about second-order systems, but then realized that it would be premature to do so, without starting off on the subject of first-order systems.
Warning: this article isn't exciting. Sorry, it is what it is; that's the nature of first-order systems.
I'm sure you've run into first-order systems before. The RC filter is probably the most common one. It has a differential equation \( \frac{dV_{out}}{dt} = \frac{1}{RC}\left(V_{in}-V_{out}\right) \), and a transfer function of \( \frac{V_{out}}{V_{in}} = \frac{1}{RCs+1} \).
Time response
Here's what the unit step response of the RC filter looks like:
Some things to note:
The final value of the step response is 1 (i.e. \( V_{out} = V_{in} \))
The initial value right after the step is 0, so the output waveform is continuous
The step response is a decaying exponential with time constant τ = RC
The slope of the output changes instantaneously after the step to a value of 1/τ
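Those points can be checked with a minimal numeric sketch (component values are assumptions for illustration: R = 1 kΩ, C = 1 µF, so τ = 1 ms):

```python
import numpy as np

R, C = 1e3, 1e-6          # assumed: 1 kOhm, 1 uF  ->  tau = 1 ms
tau = R * C

t = np.linspace(0, 10 * tau, 100001)
v_out = 1 - np.exp(-t / tau)   # unit step response of 1/(RC s + 1)

final_value = v_out[-1]        # approaches 1 after many time constants
initial_value = v_out[0]       # 0: the output is continuous across the step
initial_slope = (v_out[1] - v_out[0]) / (t[1] - t[0])  # approximately 1/tau

print(final_value, initial_value, initial_slope * tau)
```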
And there's not much else to say. All first-order systems can be modeled in a general way as follows, for input u, output y, and internal state x:
$$ \begin{eqnarray} \frac{dx}{dt} &=& \frac{u-x}{\tau}\cr y &=& ax + bu \end{eqnarray}$$
This produces a system with transfer function \( H(s) = \frac{Y(s)}{U(s)} = b + \frac{a}{\tau s + 1} \), which has a time response after t=0 of \( b+a\left(1-e^{-t/\tau}\right) \) that looks like this:
Again, some things to note:
The final value of the step response is a+b
The initial value right after the step is b
The step response is a decaying exponential with time constant τ
The slope of the output changes instantaneously after the step to a value of a/τ
It's possible for either a or b to be negative, but that's about all that can change here.
If a = -b, then we have a high-pass filter, which returns to a final value of zero:

Frequency response
If we create a Bode plot of the result, we'll see this:
Things to note:
The term \( \frac{a}{\tau s+1} \) contains a pole at \( \omega = \frac{1}{\tau} \)
The constant term b forms a zero that, if present, counters the pole
The magnitude of the transfer function decreases at 20 dB/decade for frequencies ω > 1/τ, until it reaches a point where the constant term b is larger than the rest of the transfer function
The phase of the transfer function goes from 0° to -90° because of the pole, but then returns to 0° because of the zero.
If one of the terms is negative, it does not affect the magnitude plot, but it does affect the phase:
If b and a are not separated as much, the zero kicks in shortly after the pole:
For a high-pass filter with a = -b, it changes the waveform somewhat:

Following error of first-order systems
Back to the time domain....
The following error or tracking error of a first-order system measures how closely a particular first-order system is able to track its input. This really only makes sense for systems with unity gain and zero steady-state error, so we'll only consider first-order systems with b=0 and a=1, namely \( H(s) = \frac{1}{\tau s+1} \).

Step input
We've already seen how the first-order system tracks a step input:
There's an initial error, but it decays to zero steady-state error with time constant τ.
Ramp input
Here's what happens if you pass in a ramp input:
The steady-state error is no longer zero. This 1st-order system isn't good enough to follow a ramp with zero error. If we go back to the differential equation \( \frac{dV_{out}}{dt} = \frac{V_{in} - V_{out}}{\tau} \) and multiply both sides by τ we get \( V_{in}-V_{out} = \tau\frac{dV_{out}}{dt} \). In other words, the output can follow the ramp rate R of the input, but in order to do so, it has to have a steady-state error of \( \tau\frac{dV_{out}}{dt} = \tau R \). The slower the filter, the larger the steady-state error.
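A quick numeric check of the τR steady-state error, using simple forward-Euler integration (an illustrative sketch; τ and the ramp rate are arbitrary assumed values):

```python
import numpy as np

tau = 0.5        # filter time constant (assumed)
R_rate = 3.0     # ramp rate of the input (assumed)
dt = 1e-4

t = np.arange(0.0, 20 * tau, dt)
v_in = R_rate * t
v_out = np.zeros_like(t)
for k in range(1, len(t)):
    # dVout/dt = (Vin - Vout)/tau, forward Euler
    v_out[k] = v_out[k-1] + dt * (v_in[k-1] - v_out[k-1]) / tau

steady_state_error = v_in[-1] - v_out[-1]
print(steady_state_error, tau * R_rate)   # both close to 1.5
```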
Sinusoidal input
The case of a sinusoidal input is mildly interesting. Rather than give the closed-form solution at first, let's use trapezoidal integration to simulate the response.
At "steady state" (there really isn't a true steady-state here) the following error is sinusoidal. Here we can use complex exponentials to help us. If our input is constant frequency, e.g. \( s=j2\pi f \,\Rightarrow\, V_{in} = e^{j2\pi ft} \), then the output is \( \frac{1}{\tau s+1}V_{in} \, \Rightarrow\, V_{out}=\frac{1}{1+j2\pi f\tau}e^{j2\pi ft} \), and that means that the steady-state error \( V_{in}-V_{out} = \left(1 - \frac{1}{1+j2\pi f\tau}\right) V_{in} \):
Here the critical quantity is \( \alpha = 2\pi f \tau \). We can quantify the input-to-output gain, phase lag, and error magnitude as a function of α. The exact values are input-to-output gain \( |H| = \frac{1}{\sqrt{1+\alpha^2}} \), phase lag \( \phi_{lag} =\measuredangle H = -\arctan \alpha \), and error magnitude \( |\tilde{H}| = \frac{\alpha}{\sqrt{1+\alpha^2}} \).
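The trapezoidal-integration simulation mentioned above can be sketched in a few lines (frequency and τ are arbitrary assumed values); the steady-state amplitude of the simulated output should match the exact gain \( 1/\sqrt{1+\alpha^2} \):

```python
import numpy as np

tau = 1.0                     # assumed time constant
f = 0.25                      # assumed input frequency
alpha = 2 * np.pi * f * tau
dt = 1e-3

t = np.arange(0.0, 40.0, dt)
v_in = np.sin(2 * np.pi * f * t)
v_out = np.zeros_like(t)
a = dt / (2 * tau)
for k in range(1, len(t)):
    # trapezoidal (bilinear) update of  tau * dVout/dt = Vin - Vout
    v_out[k] = ((1 - a) * v_out[k-1] + a * (v_in[k] + v_in[k-1])) / (1 + a)

# measure the steady-state output amplitude over the last several cycles
measured_gain = v_out[t > 20.0].max()
exact_gain = 1 / np.sqrt(1 + alpha**2)
print(measured_gain, exact_gain)
```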
You can see the general behavior of these values in the graphs above.
For \( \alpha \ll 1 \), the input-to-output gain \( |H| \approx 1 - \frac{1}{2}\alpha^2 \); the phase lag in radians \( \phi_{lag} \approx -\alpha \), and the error magnitude \( |\tilde{H}| \approx \alpha \).
For \( \alpha \gg 1 \), the input-to-output gain \( |H| \approx \frac{1}{\alpha} \); the phase lag in radians \( \phi_{lag} \approx \frac{\pi}{2}-\frac{1}{\alpha} \), and the error magnitude \( |\tilde{H}| \approx 1 - \frac{1}{2 \alpha^2} \).
Wrapup
That's really it. All first-order systems are essentially alike. If you remove the constant term b, they are exactly alike and can be graphed with the same shape: the magnitude can be normalized by dividing by the gain term a, the time can be normalized by dividing by the time constant τ, and the frequency can be normalized by multiplying by the time constant τ.
The time response is exponential, and the frequency response contains one pole and, if the constant term b is present, one zero.
The steady-state error for a tracking first-order system \( H(s) = \frac{1}{\tau s + 1} \) is zero for a unit step, τ for a unit ramp, and has frequency-dependent behavior (see above) for sinusoidal inputs.
Not very interesting.
Things get more interesting when we get to higher-order systems. I'll talk about second-order systems in a future article.
© 2014 Jason M. Sachs, all rights reserved.
|
Answer
$a\approx13.6$ m $b\approx14.7$ m
Work Step by Step
$\sin43^{\circ}=\frac{a}{20}$ $a=20\times\sin43^{\circ}$ $a\approx13.6$ Using the Pythagorean Theorem to find $b$: $b=\sqrt {20^{2}-a^{2}}$ $b\approx\sqrt{20^{2}-13.6^{2}}$ $b\approx14.7$
|
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box again.
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
In a paper by Joos and Zeh, Z. Phys. B 59 (1985) 223, they say: "This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'" Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
Answer
Undefined or, $\infty $
Work Step by Step
Here, $ x= -1$; $ y=0$ and $ r=\sqrt {(-1)^2+(0)^2}=1$ The trigonometric ratios are as follows: $\csc \theta =\dfrac{r}{y}$ Then, we have $\csc (\pi) =\dfrac{1}{0}=\infty $ or, Undefined
|
Answer
Undefined
Work Step by Step
Here, $ y= -1$; $ x=0$ and $ r=\sqrt {(0)^2+(-1)^2}=1$ The trigonometric ratios are as follows: $\tan \theta =\dfrac{y}{x}$ Then, we have $\tan (3\pi/2) =\dfrac{-1}{0}=$ Undefined
|
--Asan 03:48, 14 June 2008 (EDT)The instructor has added some comments.
Determine if the following are:

Memoryless
Time Invariant
Linear
Causal
Stable

A) y(t)=x(t-2) + x(2-t)
Let x3(t)=ax1(t) + bx2(t); then the output of the system is y(t)=x3(t-2) + x3(2-t) = (ax1(t-2) + bx2(t-2)) + (ax1(2-t) + bx2(2-t)) --Asan 03:42, 14 June 2008 (EDT)(Check the details) Therefore the system is Linear. --Asan 03:42, 14 June 2008 (EDT)(It is the system... not the signal)
x(t-T)-->S-->x(t-T-2) + x(2-t-T), while the shifted output is y(t-T)=x(t-T-2) + x(2-t+T). Since 2-t-T $ \neq $ 2-t+T, the system is not TI. --Asan 03:42, 14 June 2008 (EDT)(Wrong. This system is not TI. Try again)
Assuming that x(t) is bounded, the output y(t) is also bounded because it is the sum of two bounded functions. Since there is a time shift in both the positive and negative direction, the system is neither memoryless nor causal.
This system is Linear and Stable, but not Memoryless, Causal, or Time Invariant.
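A quick numeric sanity check (an illustrative sketch, not part of the original homework) that y(t) = x(t-2) + x(2-t) is not time invariant:

```python
import numpy as np

def system_a(x, t):
    # y(t) = x(t-2) + x(2-t), with the input x given as a callable
    return x(t - 2) + x(2 - t)

x = np.cos           # a test input signal
t = np.linspace(-5, 5, 1001)
T = 1.0              # time shift

# response to the shifted input vs. the shifted response
y_shifted_input = system_a(lambda s: x(s - T), t)
y_shifted_output = system_a(x, t - T)
print(np.allclose(y_shifted_input, y_shifted_output))  # False: not TI
```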
B)y(t)=[cos(3t)]x(t)
Since there is no time shift in the output function it is both memoryless and causal.
Let x3(t)=ax1(t) + bx2(t)-->S-->y3(t)=cos(3t)x3(t)=cos(3t)[ax1(t) + bx2(t)]=ay1(t) + by2(t) Therefore the system is linear.
Let x2(t)=x1(t-T)-->S-->y2(t)=cos(3t)x2(t)=cos(3t)x1(t-T) This is not equal to y1(t-T), therefore the function is not Time Invariant.
Assuming that x(t) is bounded, the output y(t) is also bounded since it is the product of two bounded functions.
The function is Memoryless, Causal, Linear, Stable.
C) y(t)=$ \int_{-\infty}^{2t} x(\tau)\,d\tau $
Let x3(t)=ax1(t) + bx2(t)-->S-->$ \int_{-\infty}^{2t} x3(\tau)\,d\tau $=$ a\int_{-\infty}^{2t} x1(\tau)\,d\tau $ + $ b\int_{-\infty}^{2t} x2(\tau)\,d\tau $
Therefore the function is linear.
The system is not memoryless or causal since it takes into account past and future time, with the integration going from $ -\infty $ to $ 2t $.
Let x2(t)=x1(t-T)-->S-->$ \int_{-\infty}^{2t} x2(\tau)\,d\tau = \int_{-\infty}^{2t} x1(\tau-T)\,d\tau = \int_{-\infty}^{2t-T} x1(\tau)\,d\tau $, while $ y1(t-T) = \int_{-\infty}^{2t-2T} x1(\tau)\,d\tau $. The integration limits differ, so shifting the input does not simply shift the output by the same amount. --Asan 03:48, 14 June 2008 (EDT)(Wrong. Try again)
The system is not stable since the integral has no lower limit; the output can grow infinitely large without bound (for example, for the bounded input x(t)=1). |
Show that $\Gamma(y) = \int_0^{\infty}{e^{-x}x^{y-1}\,dx}$ is finite for $y>0$ both as an improper Riemann integral and as a Lebesgue integral.
Show $\Gamma'(y) = \int_0^{\infty}{e^{-x}x^{y-1}\ln{x}\,dx}$ for $y>0$.
For one: I've tried simply integrating it as an improper Riemann integral, but you always end up with another integral of the "same type" (which is how you eventually show $\Gamma(y+1)=y\Gamma(y)$ ). How do I get around this? As for the Lebesgue integral, I think it'd be easiest to compare the integrand to a larger function whose integral converges, but I haven't come up with a good candidate.
For two: Fix $y_0>0$. Write $$\Gamma'(y) = \lim_{y\to y_0}{\int_0^{\infty}{\frac{e^{-x}x^{y-1}-e^{-x}x^{y_0-1}}{y-y_0}\,dx}}\,.$$ By the MVT, there exists $\eta$ between $y$ and $y_0$ such that the above limit is equal to $$\lim_{y\to y_0}{\int_0^{\infty}{e^{-x}x^{\eta-1}\ln{x}\,dx}}\,.$$ But now I'm not sure what to do. This is similar to a previous question I posted; for that problem we knew the derivative of the original integrand was bounded, so we applied the bounded convergence theorem. Would it be enough to prove that the derivative of my integrand is bounded, and apply BCT? |
Modeling Electromagnetic Waves and Periodic Structures
We often want to model an electromagnetic wave (light, microwaves) incident upon periodic structures, such as diffraction gratings, metamaterials, or frequency selective surfaces. This can be done using the RF or Wave Optics modules from the COMSOL product suite. Both modules provide Floquet periodic boundary conditions and periodic ports and compute the reflected and transmitted diffraction orders as a function of incident angles and wavelength. This blog post introduces the concepts behind this type of analysis and walks through the set-up of such problems.
The Scenario
First, let’s consider a parallelepiped volume of free space representing a periodically repeating unit cell with a plane wave passing through it at an angle, as shown below:
The incident wavevector, \bf{k}, has component magnitudes: k_x = k_0 \sin(\alpha_1) \cos(\alpha_2), k_y = k_0 \sin(\alpha_1) \sin(\alpha_2), and k_z = k_0 \cos(\alpha_1) in the global coordinate system. This problem can be modeled by using Periodic boundary conditions on the sides of the domain and Port boundary conditions at the top and bottom. The most complex part of the problem set-up is defining the direction and polarization of the incoming and outgoing wave.
Defining the Wave Direction
Although the COMSOL software is flexible enough to allow any definition of base coordinate system, in this posting, we will pick one and use it throughout. The direction of the incident light is defined by two angles, \alpha_1 and \alpha_2; and two vectors, \bf{n}, the outward pointing normal of the modeling space and \bf{a_1}, a vector in the plane of incidence. The convention we choose here is to align \bf{a_1} to the global x-axis and align \bf{n} with the global z-axis. Thus, the angle between the wavevector of the incoming wave and the global z-axis is \alpha_1, the elevation angle of incidence, where -\pi/2 < \alpha_1 < \pi/2, with \alpha_1 = 0 meaning normal incidence. The angle between the incident wavevector and the global x-axis is the azimuthal angle of incidence, \alpha_2, which lies in the range -\pi/2 < \alpha_2 \leq \pi/2. As a consequence of this definition, positive values of both \alpha_1 and \alpha_2 imply that the wave is traveling in the positive x- and y-direction.
To use the above definition of direction of incidence, we need to specify the \bf{a_1} vector. This is done by picking a Periodic Port Reference Point, which must be one of the corner points of the incident port. The software uses the in-plane edges coming out of this point to define two vectors, \bf{a_1} and \bf{a_2}, such that \bf{a_1 \times a_2 = n}. In the figure below, we can see the four cases of \bf{a_1} and \bf{a_2} that satisfy this condition. Thus, the Periodic Port Reference Point on the incoming side port should be the point at the bottom left of the x-y plane, when looking down the z-axis at the surface. By choosing this point, the \bf{a_1} vector becomes aligned with the global x-axis.
Now that \bf{a_1} and \bf{a_2} have been defined on the incident side due to the choice of the Periodic Port Reference Point, the port on the outgoing side of the modeling domain must also be defined. The normal vector, \bf{n}, points in the opposite direction, hence the choice of the Periodic Port Reference Point must be adjusted. None of the four corner points will give a set of \bf{a_1} and \bf{a_2} that align with the vectors on the incident side, so we must choose one of the four points and adjust our definitions of \alpha_1 and \alpha_2. By choosing a periodic port reference point on the output side that is diametrically opposite the point chosen on the input side and applying a \pi/2 rotation to \alpha_2, the direction of \bf{a_1} is rotated to \bf{a_1'}, which points in the opposite direction of \bf{a_1} on the incident side. As a consequence of this rotation, \alpha_1 and \alpha_2 are switched in sign on the output side of the modeling domain.
Next, consider a modeling domain representing a dielectric half-space with a refractive index contrast between the input and output port sides that causes the wave to change direction, as shown below. From Snell’s law, we know that the angle of refraction is \beta=\arcsin \left( n_A\sin(\alpha_1)/n_B \right). This lets us compute the direction of the wavevector at the output port. Also, note that this relationship holds even if there are additional layers of dielectric sandwiched between the two half-spaces.
In summary, to define the direction of a plane wave traveling through a unit cell, we first need to choose two points, the Periodic Port Reference Points, which are diametrically opposite on the input and output sides. These points define the vectors \bf{a_1} and \bf{a_2}. As a consequence, \alpha_1 and \alpha_2 on the input side can be defined with respect to the global coordinate system. On the output side, the direction angles become: \alpha_{1,out} = -\arcsin \left( n_A\sin(\alpha_1)/n_B \right) and \alpha_{2,out}=-\alpha_2 + \pi/2.
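Those two output-side relations can be sketched numerically (a small helper for illustration only; the function name and example values are assumptions, not part of the COMSOL setup):

```python
import math

def output_angles(alpha1, alpha2, n_A, n_B):
    """Direction angles at the output port for refractive indices n_A -> n_B.

    Uses the sign/rotation conventions from the text:
      alpha1_out = -asin(n_A * sin(alpha1) / n_B)
      alpha2_out = -alpha2 + pi/2
    Angles are in radians.
    """
    alpha1_out = -math.asin(n_A * math.sin(alpha1) / n_B)
    alpha2_out = -alpha2 + math.pi / 2
    return alpha1_out, alpha2_out

# e.g. 30 degrees elevation, 10 degrees azimuth, from air (n=1) into glass (n=1.5)
a1o, a2o = output_angles(math.radians(30), math.radians(10), 1.0, 1.5)
print(math.degrees(a1o), math.degrees(a2o))   # about -19.47 and 80.0
```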
Defining the Polarization
The incoming plane wave can be in one of two polarizations, with either the electric or the magnetic field parallel to the x-y plane. All other polarizations, such as circular or elliptical, can be constructed from a linear combination of these two. The figure below shows the case of \alpha_2 = 0, with the magnetic field parallel to the x-y plane. For the case of \alpha_2 = 0, the magnetic field amplitude at the input and output ports is (0,1,0) in the global coordinate system. As the beam is rotated such that \alpha_2 \ne 0, the magnetic field amplitude becomes (-\sin(\alpha_2), \cos(\alpha_2),0). For the orthogonal polarization, the electric field magnitude at the input can be defined similarly. At the output port, the field components in the x-y plane can be defined in the same way.
So far, we’ve seen how to define the direction and polarization of a plane wave that is propagating through a unit cell around a dielectric interface. You can see an example model of this in the Model Gallery that demonstrates an agreement with the analytically derived Fresnel Equations.
Defining the Diffraction Orders
Next, let’s examine what happens when we introduce a structure with periodicity into the modeling domain. Consider a plane wave with \alpha_1, \alpha_2 \ne 0 incident upon a periodic structure as shown below. If the wavelength is sufficiently short compared to the grating spacing, one or several diffraction orders can be present. To understand these diffraction orders, we must look at the plane defined by the \bf{n} and \bf{k} vectors as well as the plane defined by the \bf{n} and \bf{k \times n} vectors.
First, looking normal to the plane defined by \bf{n} and \bf{k}, we see that there can be a transmitted 0th-order mode with direction defined by Snell’s law as described above. There is also a 0th-order reflected component. There may also be some absorption in the structure, but that is not pictured here. The figure below shows only the 0th-order transmitted mode. The spacing, d, is the periodicity in the plane defined by the \bf{n} and \bf{k} vectors.
For short enough wavelengths, there can also be higher-order diffracted modes. These are shown in the figure below, for the m=\pm1 cases.
The condition for the existence of these modes is that:
\sin(\beta_m) = n_A\sin(\alpha_1)/n_B + m\lambda_0/(n_B d)
for m=0,\pm 1, \pm 2,…, where \lambda_0 is the vacuum wavelength.
For m=0, this reduces to Snell’s law, as above. For m\ne0, if the difference in path lengths equals an integer number of wavelengths in vacuum, then there is constructive interference and a beam of order m is diffracted at angle \beta_{m}. Note that there need not be equal numbers of positive and negative m-orders.
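Assuming the standard grating equation in the form consistent with the description above (it reduces to Snell's law at m = 0, and each order adds one vacuum wavelength of path difference), a small helper can enumerate which transmitted orders propagate; all names here are my own:

```python
import math

def propagating_orders(wavelength, d, alpha, n_a=1.0, n_b=1.0):
    """Transmitted diffraction orders m with a real angle beta_m, assuming
    n_b*sin(beta_m) = n_a*sin(alpha) + m*wavelength/d  (|sin(beta_m)| <= 1)."""
    m_max = int(d * (n_a + n_b) / wavelength) + 1   # generous search range
    orders = []
    for m in range(-m_max, m_max + 1):
        s = (n_a * math.sin(alpha) + m * wavelength / d) / n_b
        if abs(s) <= 1.0:
            orders.append((m, math.asin(s)))
    return orders

# Normal incidence with d = 2 wavelengths: only m = -2..2 can propagate
orders = propagating_orders(wavelength=1.0, d=2.0, alpha=0.0)
```

At oblique incidence the same function returns asymmetric sets of positive and negative orders, matching the remark above.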
Next, we look along the plane defined by the \bf{n} and \bf{k} vectors. That is, we rotate our viewpoint around the z-axis such that the incident wavevector appears to come in normal to the surface. Diffraction into this plane is indexed by the n-order beams. Note that the periodic spacing, w, will be different in this plane and that there will always be equal numbers of positive and negative n-orders.
COMSOL will automatically compute these m,n \ne 0 order modes during the set-up of a Periodic Port and define listener ports so that it is possible to evaluate how much energy gets diffracted into each mode.
Last, we must consider that the wave may experience a rotation of its polarization as it gets diffracted. Thus, each diffracted order consists of two orthogonal polarizations, the In-plane vector and Out-of-plane vector components. Looking at the plane defined by \bf{n} and the diffracted wavevector \bf{k_D}, the diffracted field can have two components. The Out-of-plane vector component is the diffracted beam that is polarized out of the plane of diffraction (the plane defined by \bf{n} and \bf{k}), while the In-plane vector component has the orthogonal polarization. Thus, if the In-plane vector component is non-zero for a particular diffraction order, this means that the incoming wave experiences a rotation of polarization as it is diffracted. Similar definitions hold for the n \ne 0 order modes.
Consider a periodic structure on a dielectric substrate. As the incident beam comes in at \alpha_1, \alpha_2 \ne 0 and there are higher diffracted orders, the visualization of all of the diffracted orders can become quite involved. In the figure below, the incoming plane wave direction is shown as a yellow vector. The n=0 diffracted orders are shown as blue arrows for diffraction in the positive z-direction and cyan arrows for diffraction into the negative z-direction. Diffraction into the n \ne 0 order modes is shown as red and magenta for the positive and negative directions. There can be diffraction into each of these directions, and the diffracted wave can be polarized either in or out of the plane of diffraction. The plane of diffraction itself is visualized as a circular arc. Note that the plane of diffraction for the n \ne 0 modes is different in the positive and negative z-directions.
All of the ports are automatically set up when defining a periodic structure in 3D. They capture these various diffracted orders and can compute the fields and relative phase in each order. Understanding the meaning and interpretation of these ports is helpful when modeling periodic structures.
|
Answer
$c\approx8.8$ $d\approx28.7$
Work Step by Step
$\sin17^{\circ}=\frac{c}{30}$, so $c=30\sin17^{\circ}\approx8.8$. Using the Pythagorean Theorem to find $d$: $d=\sqrt {30^{2}-c^{2}}\approx\sqrt{30^{2}-8.8^{2}}\approx28.7$
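A quick numerical check of the steps above:

```python
import math

# c = 30 * sin(17 degrees), then d from the Pythagorean theorem
c = 30 * math.sin(math.radians(17))
d = math.sqrt(30**2 - c**2)
```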
|
Under the auspices of the Computational Complexity Foundation (CCF)
We study the complexity of arithmetic in finite fields of characteristic two, $\mathbb{F}_{2^n}$.
We concentrate on the following two problems:
Iterated Multiplication: Given $\alpha_1, \alpha_2, \ldots, \alpha_t \in \mathbb{F}_{2^n}$, compute $\alpha_1 \cdot \alpha_2 \cdots \alpha_t \in \mathbb{F}_{2^n}$.
Exponentiation: Given $\alpha \in \mathbb{F}_{2^n}$ and a $t$-bit integer $k$, compute $\alpha^k \in \mathbb{F}_{2^n}$. …
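As an illustration of the arithmetic being studied (my own sketch, not the constructions from the paper), here is exponentiation in $\mathbb{F}_{2^8}$ with the AES reduction polynomial $x^8+x^4+x^3+x+1$; square-and-multiply uses $O(t)$ field multiplications for a $t$-bit exponent:

```python
def gf_mul(a, b, poly=0x11B):
    """Carry-less multiply in F_{2^8} = F_2[x]/(x^8 + x^4 + x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:     # degree hit 8: reduce by the field polynomial
            a ^= poly
    return r

def gf_pow(a, k):
    """Square-and-multiply: O(t) multiplications for a t-bit exponent k."""
    result = 1
    while k:
        if k & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        k >>= 1
    return result
```

For example, `gf_mul(0x57, 0x83)` gives `0xC1`, the worked example in the AES standard, and every nonzero element satisfies $\alpha^{255} = 1$.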
We construct pseudorandom generators of seed length $\tilde{O}(\log(n)\cdot \log(1/\epsilon))$ that $\epsilon$-fool ordered read-once branching programs (ROBPs) of width $3$ and length $n$. For unordered ROBPs, we construct pseudorandom generators with seed length $\tilde{O}(\log(n) \cdot \mathrm{poly}(1/\epsilon))$. This is the first improvement for pseudorandom generators fooling width-$3$ ROBPs since the work … |
Permutation/Examples/Addition of Constant on Integers Examples of Permutations
Let $a \in \Z$.
Let $f: \Z \to \Z$ denote the mapping defined as:
$\forall x \in \Z: \map f x = x + a$
Then $f$ is a permutation on $\Z$.
Proof
\(\forall x, y \in \Z: \map f x = \map f y \leadsto x + a = y + a \leadsto x = y\) (by Definition of $f$)
demonstrating that $f$ is injective.
Then:
\(\forall y \in \Z: y = \paren {y - a} + a = \map f {y - a}\) (by Definition of $f$)
As $y - a \in \Z$, it follows that $f$ is a surjection.
By definition, then, $f$ is a bijection.
$\blacksquare$ |
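As an aside, the two halves of the proof can be checked mechanically on a finite sample (an illustration, not a proof):

```python
a = 7
f = lambda x: x + a          # the mapping of the theorem
f_inv = lambda y: y - a      # the inverse used in the surjectivity step

sample = list(range(-50, 51))
assert len({f(x) for x in sample}) == len(sample)                  # injective on sample
assert all(f_inv(f(x)) == x and f(f_inv(x)) == x for x in sample)  # two-sided inverse
```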
In many engineering problems, knowledge of the center of mass is required for the calculations. The concept derives from the fact that a body has a center of mass/gravity through which its interaction with other bodies can be represented by an equivalent force. This concept turns out to be very useful in calculating rotations, the moment of inertia, etc. The center of mass doesn't depend on the coordinate system or on the way it is calculated. The physical meaning of the center of mass is that if a force acts on the body along a line through the center of gravity, the body will not rotate. In other words, holding the body at a single point is enough, provided the support acts along the direction through the center of mass. Note that if the body isn't held through the center of mass, then a moment in addition to the force is required (to prevent the body from rotating). It is convenient to use the Cartesian system to explain this concept. Suppose that the body has a mass distribution (density, \(\rho\)) as a function of the location. Density is normally defined as mass per volume. Here, the line density refers to mass per unit length in the \(x\) direction.
Fig. 3.2. Description of how the center of mass is calculated.
In the \(x\) coordinate, the center will be defined as \[\bar{x} = \frac{1}{m}\int_V x\rho \left(x\right) dV \tag{9}\] Here, the \(dV\) element has finite dimensions in the y–z plane and infinitesimal dimension in the \(x\) direction (see Figure 3.2). Also, the mass \(m\) is the total mass of the object. It can be noticed that the center of mass in the \(x\)–direction isn't affected by the distribution in the \(y\) or \(z\) directions. In the same fashion the center of mass can be defined in the other directions as follows:
\(x_i\) of Center Mass
\[\bar{x_{i}} = \frac{1}{m}\int_V x_{i}\rho \left(x_{i}\right) dV \tag{10}\]
where \(x_{i}\) is the direction of either, \(x\), \(y\), or \(z\). The density, \(\rho\left(x_{i}\right)\) is the line density as a function of \(x_{i}\). Thus, even for solid and uniform density the line density is a function of the geometry.
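As a numerical illustration of Eq. (10) in one dimension, take a rod of length \(L\) with the (arbitrarily chosen) line density \(\rho(x) = \rho_0(1 + x/L)\); integrating gives \(\bar{x} = 5L/9\) analytically, which a midpoint-rule quadrature reproduces:

```python
def center_of_mass(rho, length, n=100_000):
    """Midpoint-rule evaluation of xbar = (1/m) * integral of x * rho(x) dx."""
    dx = length / n
    xs = [(i + 0.5) * dx for i in range(n)]
    m = sum(rho(x) for x in xs) * dx            # total mass
    return sum(x * rho(x) for x in xs) * dx / m

L_rod, rho0 = 2.0, 3.0
xbar = center_of_mass(lambda x: rho0 * (1 + x / L_rod), L_rod)
# analytic value for this density profile: 5*L_rod/9
```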
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
Define the function $f(z) = z \cdot \operatorname{Re}(z)/|z|$ for $z \neq 0$, and $f(0) = 0$. Prove that $f(z)$ is continuous in the entire complex plane. Please, somebody help me with this one... thanks in advance.
Originally Posted by deepakpc007 Define the function $f(z) = z \cdot \operatorname{Re}(z)/|z|$ for $z \neq 0$,
$f(0) = 0$.
Prove that $f(z)$ is continuous in the entire complex plane. If $\displaystyle z=re^{i\theta}$ then $\displaystyle \frac{z\cdot\text{Re}(z)}{|z|}=\frac{(re^{i\theta})(r\cos(\theta))}{r}$.
Now what happens when $\displaystyle r\to 0~?$
Originally Posted by Plato If $\displaystyle z=re^{i\theta}$ then $\displaystyle \frac{z\cdot\text{Re}(z)}{|z|}=\frac{(re^{i\theta})(r\cos(\theta))}{r}$.
Now what happens when $\displaystyle r\to 0~?$ When $\displaystyle r\to 0$ and $\displaystyle z=re^{i\theta}$, then $\displaystyle \frac{z\cdot\text{Re}(z)}{|z|}=\frac{(re^{i\theta})(r\cos(\theta))}{r}=$ infinity??? I think so.
Originally Posted by deepakpc007 when $\displaystyle r\to 0$, $\displaystyle z=re^{i\theta}$ then $\displaystyle \frac{z\cdot\text{Re}(z)}{|z|}=\frac{(re^{i\theta})(r\cos(\theta))}{r}=$ infinity??? I think so. Oh my no.
Is this true: $\displaystyle r\not= 0$ so $\displaystyle \frac{(re^{i\theta})(r\cos(\theta))}{r}=(re^{i\theta})(\cos(\theta))~?$
Originally Posted by Plato Oh my no. Is this true: $\displaystyle r\not= 0$ so $\displaystyle \frac{(re^{i\theta})(r\cos(\theta))}{r}=(re^{i\theta})(\cos(\theta))~?$ Yes, that's true. How can I prove it with this?
Originally Posted by deepakpc007 Yes, that's true. How can I prove it with this? What is there to prove?
As $\displaystyle r\to 0$ the other factors are bounded, so what is the limit?
Originally Posted by Plato What is there to prove? As $\displaystyle r\to 0$ the other factors are bounded, so what is the limit? Sorry sir, I don't know. I am not that good at maths. Please help me. Edit: As $\displaystyle r\to 0$ the other factors are bounded, meaning a finite limit, is it so? The limit is 0 and hence the function is continuous in the entire complex plane. Am I right?
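The bound used in the thread can be checked numerically: writing $z = re^{i\theta}$ gives $|f(z)| = |\operatorname{Re}(z)| \le r$, so $f(z) \to 0 = f(0)$ as $r \to 0$. A small sketch:

```python
import cmath, math

def f(z):
    return z * z.real / abs(z) if z != 0 else 0

# |f(z)| = |z| * |Re z| / |z| = |Re z| <= r, so f(z) -> 0 = f(0) as r -> 0
for r in [1.0, 0.1, 0.01, 0.001]:
    for k in range(8):
        z = cmath.rect(r, k * math.pi / 4)   # z = r * e^{i*theta}
        assert abs(f(z)) <= r + 1e-12
```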
Last edited by deepakpc007; Apr 29th 2011 at 05:53 AM.
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor, (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
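For what it's worth, the brute-force approach can be automated. This sketch represents elements of $S_4$ as tuples (permutations of $\{0,1,2,3\}$), closes a generating set under composition, and confirms a subgroup of every order dividing $24$, including the order-$6$ subgroup generated by a transposition and a 3-cycle:

```python
from itertools import product

def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens):
    """Subgroup generated by gens: multiply until no new elements appear."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(p, q) for p, q in product(elems, repeat=2)} - elems
        if not new:
            return elems
        elems |= new

e = (0, 1, 2, 3)
witnesses = {
    1: [e],
    2: [(1, 0, 2, 3)],                  # a transposition
    3: [(1, 2, 0, 3)],                  # a 3-cycle
    4: [(1, 2, 3, 0)],                  # a 4-cycle
    6: [(1, 0, 2, 3), (1, 2, 0, 3)],    # transposition + 3-cycle: a copy of S_3
    8: [(1, 2, 3, 0), (0, 3, 2, 1)],    # 4-cycle + reflection: dihedral 2-Sylow
    12: [(1, 2, 0, 3), (0, 2, 3, 1)],   # two 3-cycles generate A_4
    24: [(1, 0, 2, 3), (1, 2, 3, 0)],   # transposition + 4-cycle: all of S_4
}
orders = {d: len(closure(g)) for d, g in witnesses.items()}
```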
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Question: Let $N=\{1,2,3\}$ be a set of three people and $X=\{a,b,c,d\}$ a set of 4 political parties. The individuals have the following preferences:
Individual 1: $a \succ_1 b \sim_1 c \succ_1 d$
Individual 2: $d \sim_2 a \succ_2 c \succ_2 b$
Individual 3: $a \succ_3 b \succ_3 c \succ_3 d$
List all pairs of $(x,y) \in X \times X$ such that $x$ Pareto dominates $y$ and deduce the optimal Pareto set.
I think I am being thrown off by the $X \times X$ bit. But it seems like if I multiply out $X$ I will get the following:
$X \times X = \{(a,a),(a,b),(a,c),(a,d),(b,b),(b,c),(b,d),(c,c),(c,d),(d,d)\}$
So if I define the Pareto outcomes as $x \succeq^P y \iff x \succeq_i y$ for all $i=1,\dots,n$. Then my answer to the question would be the pairs are:
$(a,b),(a,c)$ since those are the only two combinations which dominate in all three individuals. I don't think it makes sense to examine the $(a,a),(b,b),...$ case, but I could be completely wrong. For some reason I can't seem to wrap my head around it, and I could have gone wrong right from the beginning. |
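It can help to enumerate the ordered pairs mechanically. The sketch below uses the common textbook definition: $x$ Pareto dominates $y$ when every individual weakly prefers $x$ and at least one strictly prefers it. Note that this is slightly different from the all-weak relation $\succeq^P$ defined above, and under it $(a,d)$ also qualifies, since individual 2 is indifferent between $a$ and $d$ while individuals 1 and 3 strictly prefer $a$:

```python
from itertools import permutations

# Preference rankings (smaller number = more preferred); ties share a rank.
prefs = {
    1: {"a": 0, "b": 1, "c": 1, "d": 2},    # a > b ~ c > d
    2: {"d": 0, "a": 0, "c": 1, "b": 2},    # d ~ a > c > b
    3: {"a": 0, "b": 1, "c": 2, "d": 3},    # a > b > c > d
}

def dominates(x, y):
    """x Pareto dominates y: weakly better for all, strictly for at least one."""
    return (all(p[x] <= p[y] for p in prefs.values())
            and any(p[x] < p[y] for p in prefs.values()))

dominating_pairs = sorted((x, y) for x, y in permutations("abcd", 2)
                          if dominates(x, y))
pareto_set = {x for x in "abcd" if not any(dominates(y, x) for y in "abcd")}
```

Under this definition the dominating pairs are $(a,b)$, $(a,c)$, $(a,d)$ and the Pareto set is $\{a\}$; diagonal pairs like $(a,a)$ indeed never dominate, since nothing is strictly preferred to itself.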
Answer
The proof is below.
Work Step by Step
We must prove that these units are the same as the units of speed, which are $m/s$. Thus, we find: $[\sqrt{\frac{P}{\rho}}]=\sqrt{\frac{kg/ms^2}{kg/m^3}}= \sqrt{\frac{m^2}{s^2}}=m/s$
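The same bookkeeping can be mechanized by tracking SI base-unit exponents as $(\mathrm{kg}, \mathrm{m}, \mathrm{s})$ tuples; a small sketch:

```python
from fractions import Fraction

# units as exponent tuples (kg, m, s)
PRESSURE = (1, -1, -2)   # Pa = kg m^-1 s^-2
DENSITY  = (1, -3,  0)   # kg m^-3
SPEED    = (0,  1, -1)   # m s^-1

def divide(u, v):
    return tuple(a - b for a, b in zip(u, v))

def sqrt(u):
    return tuple(Fraction(a, 2) for a in u)

result = sqrt(divide(PRESSURE, DENSITY))   # units of sqrt(P / rho)
```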
|
Let $k$ be a non-zero real number. Consider the problem
$$ \nabla^2 \phi = 0 \ \ \ \mbox{for} \ \ \ 1 \leq r \leq 2, \ \ \ \ \alpha\phi + \frac{\partial\phi}{\partial r} = k\cos\theta \ \ \ \mbox{on} \ \ r = 1, \ \ \alpha\phi + \frac{\partial\phi}{\partial r} = 0 \ \ \ \mbox{on} \ \ r = 2 $$
What can you say about existence and uniqueness of the solution to the problem in terms of $\alpha$?
For uniqueness, I am aware e.g. that for $\alpha > 0$ it holds (well-known for Robin conditions) and that for $\alpha = 0$ it holds up to additive constants.
However, now with the Fourier series approach I got a unique solution (respecting the general solution to Laplace's equation) for $\alpha \neq 0, \frac{5\pm \sqrt{97}}{12}, \frac{1}{2\log 2}$, and existence for $\alpha \neq \frac{5\pm \sqrt{97}}{12}$.
My solutions for $\alpha = \frac{1}{2\log 2}$ are $c(2\log 2 - \log r) + (A_1r + B_1/r)\cos\theta$ where $c$ is arbitrary and $A_1, B_1$ satisfy $k=( \alpha + 1)A_1 + (\alpha - 1)B_1$ and $(2\alpha + 1)A_1 + (\frac{\alpha}{2} - \frac{1}{4})B_1 = 0$. How does this happen, as $\frac{1}{2\log 2} > 0$?
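The special values $\frac{5\pm\sqrt{97}}{12}$ can be double-checked numerically from the $\cos\theta$-mode system quoted above: uniqueness of $(A_1, B_1)$ fails exactly where the determinant $(\alpha+1)(\alpha/2 - 1/4) - (\alpha-1)(2\alpha+1)$ vanishes. A quick check (my own, following the equations in the question):

```python
import math

def det(alpha):
    """Determinant of the 2x2 system for (A1, B1) in the cos(theta) mode."""
    return (alpha + 1) * (alpha / 2 - 0.25) - (alpha - 1) * (2 * alpha + 1)

# det simplifies to -(3/2)a^2 + (5/4)a + 3/4, with roots (5 +/- sqrt(97))/12
roots = [(5 + math.sqrt(97)) / 12, (5 - math.sqrt(97)) / 12]
```

Note that $\alpha = 1/(2\log 2)$ is not a root of this determinant, consistent with the $\cos\theta$ part of the solution being determined there; the degeneracy at that value comes from the axisymmetric ($\log r$) mode instead.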
Any help appreciated! |
Twice a year the UGC, jointly with CSIR, conducts the NET exam for PhD and Lectureship aspirants. For science subjects such as Life Science, Chemical Science, etc., the question paper is divided into three parts: Part A, B and C. Part A is on Maths-related topics and contains 20 questions, any 15 of which have to be answered. The other two parts are on the specific subject chosen by the student.
The questions in Maths are tuned towards judging the problem solving ability of the student using basic concepts in maths rather than procedural competence in maths.
The fourth set of 15 NET-level questions follows. To gain maximum benefit from this resource, the student should first answer this set before referring to the corresponding 15-question solution set.
This is a set of 15 questions for practicing UGC/CSIR Net exam: Question Set 4
Answer all 15 questions. Each correct answer adds 2 marks to your score and each wrong answer deducts 0.5 marks. Maximum score: 30 marks. Time: 25 mins.
Q1. The unit's digit of the product $(693\times{694}\times{695}\times{698})$ is: (a) 2 (b) 8 (c) 0 (d) 4
Q2. In a city the average ages of men and women are 72.4 years and 67.4 years respectively, while the average over all citizens is 69.4 years. The percentage of men in the city is: (a) 40 (b) 50 (c) 60 (d) 66.7
Q3. Two angles of a triangle are 40% and 60% of the largest angle. The largest angle is: (a) $120^0$ (b) $90^0$ (c) $80^0$ (d) $100^0$
Q4. A bird sitting at the top of a pole spots a centipede 36m away from the base of the pole. If the bird catches the centipede at 16m from the base of the pole, both moving at the same speed, the height of the pole is: (a) 18m (b) 15m (c) 12m (d) $12\sqrt{2}$m
Q5. In the circle with centre at $O$, $AB$ is a chord. $\angle{ACB} + \angle{OAB} =$ (a) $120^0$ (b) $60^0$ (c) $90^0$ (d) $180^0$
Q6. A car travels uphill to a point at a speed of 20km/hr and returns to its original position at a speed of 60km/hr. The average speed over the whole distance is: (a) 30km/hr (b) 40km/hr (c) 45km/hr (d) 35km/hr
Q7. If $5^{\sqrt{x}} + 12^{\sqrt{x}} - 13^{\sqrt{x}} = 0$, then $x$ is: (a) $\displaystyle\frac{25}{4}$ (b) 4 (c) 9 (d) 6
Q8. The angle between the hour and minute hands of a clock at 10 past 10 is: (a) $120^0$ (b) $115^0$ (c) $65^0$ (d) $55^0$
Q9. In a river, a boat travelling at a still-water speed of 6m/sec overtook a second boat of the same length travelling in the same direction at a still-water speed of 4m/sec in 10secs. The length of each boat is: (a) 100m (b) 5m (c) 10m (d) Cannot be determined
Q10. 16 men can do a piece of work in 20 days. What extra percentage of men must be brought in to complete the work in 40% of the original time? (a) 50% (b) 250% (c) 80% (d) 150%
Q11. A cube of ice of volume $100cm^3$ floats in water with $\frac{9}{10}$ths of its volume under water. If its portion above water melts at a rate of $10{\%}$ per minute, what will be its approximate volume above water after 3 minutes? (a) $7cm^3$ (b) $7.29cm^3$ (c) $9.7cm^3$ (d) $7.3cm^3$
Q12. The minimum value of $2p^2 + 3q^2$, where $p^2 + q^2=1$ and $-1 {\leq} p,q {\leq} 1$, is: (a) 0 (b) 2 (c) 1 (d) 2.5
Q13. From a bag containing 1 green, 4 blue and 5 red balls, if a ball is picked blindfolded, what is the probability of picking a blue ball? (a) $\frac{1}{4}$ (b) $\frac{2}{5}$ (c) $\frac{1}{10}$ (d) $\frac{2}{3}$
Q14. A rubber mat of thickness 1cm is rolled tightly with no gaps between layers into a solid cylindrical shape that stands on the ground with a base area of $1m^2$ and height 1m. The total surface area of the sheet is: (a) $2.02m^2$ (b) $202.02m^2$ (c) $101.01m^2$ (d) $200.02m^2$
Q15. In a queue of girls Lakshmi stood at the 5th position from the front and 15th from the end. In another queue of girls Veeny stood at the 18th position from the front and 7th from the end. The total number of girls in the two queues together is: (a) 20 (b) 38 (c) 40 (d) 43 |
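A few of the questions can be verified mechanically; these are my own quick checks, not an official answer key:

```python
# Q1: the units digit of 693*694*695*698: a factor ending in 5 times an even
# factor makes the product a multiple of 10.
q1 = (693 * 694 * 695 * 698) % 10

# Q6: average speed over equal distances is the harmonic mean of the speeds.
q6 = 2 * 20 * 60 / (20 + 60)

# Q7: 5^s + 12^s = 13^s holds at s = sqrt(x) = 2 (the 5-12-13 triple), so x = 4.
q7 = 5**2 + 12**2 == 13**2
```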
If I understand correctly, compressed sensing as an application of sparse representation is defined as: to find linear compression schemes for huge input signals that are known to have a sparse representation, so that the input signal can be recovered efficiently from the compression (the "sketch"). (ref: this question)
So based on my understanding, compressed sensing is like a precoding before analog-to-digital data conversion. And my take on sparse representation is: a signal can be represented as a linear combination of basis functions, where the set of basis functions is called a dictionary, and there are many more data samples than features. Mathematically, in the system of linear equations $Y=DX$ where $Y \in \mathbb{R}^{n \times N}$ $(n \ll N)$, we seek a dictionary that results in a sparse representation of $Y$.
So as far as I've understood, the output of sparse representation is used in compressed sensing. In my thesis on dictionary learning, after an introduction about compressed sensing, I'd like to talk a bit about the connection between compressed sensing and sparse representation before getting into the details of sparse representation. What I fail to do is to explain the connection between the two even though I am aware of the concepts. I'd like to have a good understanding of the connection between compressed sensing and sparse representation. Any explanation would be greatly appreciated!
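A toy numerical illustration of the $Y = DX$ connection (sizes and seed are arbitrary): a signal that is a combination of $k$ dictionary atoms is fully determined by far fewer measurements than its ambient dimension. Here the support is assumed known, so recovery reduces to least squares; the point of compressed sensing proper is recovering the support as well, e.g. by $\ell_1$ minimization or greedy pursuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 20                      # measurements vs. ambient dimension, n << N
D = rng.standard_normal((n, N))   # dictionary / sensing matrix
x = np.zeros(N)
x[[3, 11]] = [1.5, -2.0]          # a 2-sparse representation
y = D @ x                         # the compressed "sketch": only n numbers

# With the support known, recovery is ordinary least squares on two columns.
coef, *_ = np.linalg.lstsq(D[:, [3, 11]], y, rcond=None)
x_hat = np.zeros(N)
x_hat[[3, 11]] = coef
```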
Definition talk:Inner Product
Maybe there is need for a proof that $\langle Ax, y\rangle = \langle x, A^{\mathsf T} y\rangle$? --Espen180 22:25, 22 October 2009 (UTC)
I would say so, definitely. Feel free to write such a page. --Prime.mover 05:28, 23 October 2009 (UTC)
It's done. I didn't know what to call it though. Espen180 05:51, 23 October 2009 (UTC)
Complex and reals?
How important is it to specify "or $\R$" in that first sentence? A subfield of $\R$ is by definition also a subfield of $\C$ is it not? And if you have the language of abstract algebra under your belt enough to understand what a subfield is, you'll know that. Furthermore, the concept of an "inner product" is fairly well advanced down the route of abstract algebra (coming as it does after vector spaces) that such an interpolation would seem clumsy. --prime mover 04:43, 29 April 2012 (EDT)
Well, in my class we're doing inner product spaces, and we haven't been introduced to fields yet. But maybe my class's curriculum is pathological. Fraleigh has inner product spaces as chapter $3.5$ and fields as $9.2$. We're introduced to it as a specific type of a regular vector space, and answering questions like "is $\left \langle {\cdot, \cdot} \right \rangle : \mathbf M_2 \left({\R}\right) \times \mathbf M_2 \left({\R}\right) \to \mathbb \R, \left \langle {\begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}, \begin{bmatrix} b_1 & b_2 \\ b_3 & b_4 \end{bmatrix}} \right \rangle = a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4$ an inner product?" --GFauxPas 07:46, 29 April 2012 (EDT) IMO Fraleigh *is* pathological. OK so what's anyone else think? --prime mover 09:51, 29 April 2012 (EDT)
I would deem the addition 'or $\R$' unnecessary and indeed a bit clumsy; introducing inner product spaces before fields is sort of possible, as its usually practically restricted to $\Bbb F$ being $\R$ or $\C$ anyway. It's not my preferred route, though. --Lord_Farin 17:36, 29 April 2012 (EDT) |
We often have two independent random samples instead of paired samples, and want to make inferences about the two populations from which the samples are drawn.
\[\begin{aligned} &\text{Sample 1}: x_1, x_2, \cdots, x_m \\ &\text{Sample 2}: y_1, y_2, \cdots, y_n \end{aligned}\]
For convenience and without loss of generality, we assume $n \geq m$. What can we say about the populations from which these samples were drawn? We may want to ask about the centrality or location of the population distributions. In particular, our usual question is whether these are the same for the two populations. More generally, do the population distributions coincide, or do they differ by a shift in location? We're often interested in whether one population tends to yield larger values than the other, or whether they tend to yield similar values.
The most common approach is a modification of the Wilcoxon test for two independent samples. It has an equivalent formulation called the Mann-Whitney test; the two are usually referred to together as the WMW test.
The Mann-Whitney Test
There are three different (but essentially equivalent) ways of
formulating the hypotheses of interest: In terms of the underlying population distributions:
\[\begin{aligned} &H_0: F(z) = G(z) &&\text{for all } z \\ &H_1: F(z) \neq G(z) && \text{for some } z \end{aligned}\]
where $F(\cdot)$ is the distribution corresponding to $X$ (first sample) and $G(\cdot)$ is the distribution corresponding to $Y$ (both are CDFs).
In terms of means / expectations:
\[\begin{aligned} &H_0: E(X) = E(Y) \\ &H_1: E(X) \neq E(Y) \end{aligned}\]
Here we can use
WMW as a test of mean comparison.
In terms of probabilities:
\[\begin{aligned} &H_0: P(X > Y) = P(X < Y) \\ &H_1: P(X > Y) \neq P(X < Y) \end{aligned}\]
Usually we see the hypotheses in the $2^{nd}$ or $3^{rd}$ form. One-sided questions could also be of interest:
\[\begin{aligned} &H_0: F(z) = G(z) &&\text{for all } z \\ \text{vs. } &H_1: F(z) > G(z) && \text{for some } z \\ \text{or vs. } &H_1: E(X) < E(Y) \\ \text{or vs. } &H_1: P(X > Y ) < P(X < Y) \end{aligned}\]
Here all the alternatives correspond to $X$ tending to be smaller than $Y$.
Test Statistic Formulations
After forming the hypothesis, we may define our test statistic as:
\[\begin{aligned} S_m &\text{ - sum of the ranks from sample 1} \\ &= \sum\limits_{i=1}^m {R(x_i)} \end{aligned}\]
where $R(x_i)$ is the rank from the combined sample. Equivalently, we can use $S_n$ - sum of ranks from sample $2$. From this we can derive the Mann-Whitney formulation:
\[\begin{aligned} U_m &= S_m - \frac{1}{2}m(m+1) && U_n = S_n - \frac{1}{2}n(n+1) \\ U_m &= mn - U_n && \text{equivalent information in the two stats} \end{aligned}\]
This test statistic can be computed without combining the samples, and without using ranks. Consider the following
example: we have times (in minutes) to complete a set of calculations using two different types of calculator. Is there a difference between the two types?
\[\begin{aligned} &\text{Group A}: 23 && 18 && 17 && 25 && 22 && 19 && 31 && 26 && 29 && 33 \\ &\text{Group B}: 21 && 28 && 32 && 30 && 41 && 24 && 35 && 34 && 27 && 39 && 36 \end{aligned}\]
In a Wilcoxon test, we combine the two samples and rank them together:
\[\begin{aligned} &17 && 18 && 19 && 21^* && 22 && 23 && 24^* && 25 && 26 \\ &27^* && 28^* && 29 && 30^* && 31 && 32^* && 33 && 34^* && 35^* \\ &36^* && 39^* && 41^* \end{aligned}\]
The ones marked with $\ast$ are from group B. In our case, $m + n = 21$:
\[\begin{aligned} S_m &= 1 + 2 + 3 + 5 + 6 + 8 + 9 + 12 + 14 + 16 = 76 \\ S_m + S_n &= \frac{1}{2} \times 21 \times 22 = 231 \\ &\Rightarrow S_n = 231 - 76 = 155 \end{aligned}\]
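The rank-sum arithmetic above is easy to check by hand or by machine. A minimal Python sketch (the sample values are the calculator times from the example):

```python
# Rank-sum (Wilcoxon) computation for the calculator example.
A = [23, 18, 17, 25, 22, 19, 31, 26, 29, 33]      # Group A (m = 10)
B = [21, 28, 32, 30, 41, 24, 35, 34, 27, 39, 36]  # Group B (n = 11)

combined = sorted(A + B)
# Rank of each value in the combined sample (no ties here, so ranks are unique)
rank = {v: i + 1 for i, v in enumerate(combined)}

S_m = sum(rank[a] for a in A)   # sum of Group A ranks
S_n = sum(rank[b] for b in B)   # sum of Group B ranks
print(S_m, S_n)                 # 76 155

# Sanity check: all ranks together sum to (m+n)(m+n+1)/2 = 21*22/2 = 231
assert S_m + S_n == 231
```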
In the Mann-Whitney test, we order the two groups separately:
\[\begin{aligned} &\text{Group A}: 17 && 18 && 19 && 22 && 23 && 25 && 26 && 29 && 31 && 33 \\ &\text{Group B}: 21 && 24 && 27 && 28 && 30 && 32 && 34 && 35 && 36 && 39 && 41 \end{aligned}\]
We go through each observation in sample A, and count how many observations in sample B are greater than the observation from A. In other words, we count up the
exceedances:
\[11 + 11 + 11 + 10 + 10 + 9 + 9 + 7 + 6 + 5 = 89 = U_n\]
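The exceedance count, and its agreement with the rank-sum formulation, can be verified with a short Python sketch:

```python
# Mann-Whitney exceedance count: for each observation in A, count how many
# observations in B exceed it.
A = [17, 18, 19, 22, 23, 25, 26, 29, 31, 33]
B = [21, 24, 27, 28, 30, 32, 34, 35, 36, 39, 41]

U_n = sum(1 for a in A for b in B if b > a)
print(U_n)  # 89

# Consistency with the rank-sum formulation: U_n = S_n - n(n+1)/2,
# using the Group B rank sum S_n = 155 computed earlier in the notes.
m, n = len(A), len(B)
S_n = 155
assert U_n == S_n - n * (n + 1) // 2
assert m * n - U_n == 21        # U_m = mn - U_n
```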
The Wilcoxon is an extension of the one-sample test, and the Mann-Whitney counts exceedances of one sample relative to the other. The two are
equivalent in terms of information about the samples, as shown by the formulations. We can easily convert the values of one test to another. Other Extensions
Other aspects of the one-sample situation extend in a relatively straightforward way to the two-sample case. For example, getting confidence intervals for the "shift" uses a similar idea to the Walsh average approach from the one-sample test, but based on the pairwise differences $x_i - y_j$:
\[\begin{array}{c|ccccccc}
 & 17 & 18 & 19 & \cdots & 29 & 31 & 33 \\
\hline
21 & -4 & -3 & -2 & \cdots & 8 & 10 & 12 \\
24 & -7 & -6 & -5 & \cdots & 5 & 7 & 9 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
39 & -22 & -21 & -20 & \cdots & -10 & -8 & -6 \\
41 & -24 & -23 & -22 & \cdots & -12 & -10 & -8
\end{array}\]
To get critical values from the table, count from most negative up and count from most positive down. The point estimator for the difference, the
Hodges-Lehmann estimator, is the median of the differences.
Also as before, we can use a
normal approximation when $m, n$ are reasonably large (or in the case of ties).
\[\begin{aligned} \text{Wilcoxon:} && E(S_m) &= \frac{1}{2}m(m+n+1) \\ && Var(S_m) &= \frac{1}{12}mn(m+n+1) \\ \text{Mann-Whitney:} && E(U_m) &= \frac{1}{2}mn \\ && Var(U_m) &= \frac{1}{12}mn(m+n+1) \end{aligned}\]
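Applying these formulas to the calculator example ($m = 10$, $n = 11$, $S_m = 76$) gives a normal-approximation z-statistic. A stdlib-only Python sketch (the two-sided p-value uses the complementary error function, so no external packages are needed):

```python
import math

m, n, S_m = 10, 11, 76  # calculator example from above

E_S = 0.5 * m * (m + n + 1)           # 110.0
Var_S = m * n * (m + n + 1) / 12.0    # about 201.67
z = (S_m - E_S) / math.sqrt(Var_S)    # about -2.394

# Two-sided normal p-value: P(|Z| > |z|) = erfc(|z| / sqrt(2))
p = math.erfc(abs(z) / math.sqrt(2))
print(round(z, 3), round(p, 4))       # -2.394 0.0167
```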
With
ties, we make our standard modifications. For the Wilcoxon, we use mid-ranks as before. For the Mann-Whitney, a tie between the samples (i.e. $x_i = y_j$ for some $i, j$) counts as $\frac{1}{2}$ in our calculation of $U_m$.
Another thing we can do is to use a normal approximation with the
score representation. Let $T$ be the sum of the scores test statistic ($S_m$ or $U_m$) based on ranks or tied ranks. The score for observation $i$ is $S_i$. Then
\[\begin{aligned} E(T) &= \frac{m}{m+n}\sum\limits_{j=1}^{m+n}{S_j} \\ Var(T) &= \frac{mn}{(m+n)(m+n-1)}\left[\sum\limits_{j=1}^{m+n}{S_j^2} - \frac{1}{m+n}\left(\sum\limits_{j=1}^{m+n}{S_j}\right)^2\right] \end{aligned}\]
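As a sanity check on these general score formulas, taking the scores to be the untied ranks $1, \ldots, m+n$ should reduce them exactly to the Wilcoxon mean and variance above. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

m, n = 10, 11
N = m + n
scores = [Fraction(j) for j in range(1, N + 1)]  # untied ranks as scores

sum_S = sum(scores)
sum_S2 = sum(s * s for s in scores)

# General score-representation formulas for E(T) and Var(T)
E_T = Fraction(m, N) * sum_S
Var_T = Fraction(m * n, N * (N - 1)) * (sum_S2 - sum_S * sum_S / N)

# These match the Wilcoxon formulas exactly
assert E_T == Fraction(m * (m + n + 1), 2)         # m(m+n+1)/2
assert Var_T == Fraction(m * n * (m + n + 1), 12)  # mn(m+n+1)/12
print(E_T, Var_T)  # 110 605/3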
Implementation in R
In R, wilcox.test handles all 3 cases (one sample, paired samples, two independent samples):
One sample: wilcox.test(x)
Paired samples: wilcox.test(x, y, paired = T), where x and y should be two data vectors of the same length
Independent samples: wilcox.test(x, y), where x and y are two data vectors of possibly different lengths (paired = F by default)
We can get confidence intervals for all 3 cases by setting the parameters conf.int = T and conf.level to a desired value.
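If you work in Python instead of R, SciPy exposes an analogous test. A sketch, assuming SciPy is installed (note that mannwhitneyu reports a U statistic rather than a rank sum):

```python
from scipy.stats import mannwhitneyu

# Calculator example: two independent samples
A = [23, 18, 17, 25, 22, 19, 31, 26, 29, 33]
B = [21, 28, 32, 30, 41, 24, 35, 34, 27, 39, 36]

res = mannwhitneyu(A, B, alternative="two-sided", method="exact")
m, n = len(A), len(B)

# SciPy reports a U statistic; together with mn - U it carries the same
# information as the rank sums (here the pair is U = 21 and mn - U = 89).
assert {res.statistic, m * n - res.statistic} == {21, 89}
print(res.pvalue)
```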
The Median Test
The median test is a simple "quick and dirty" method which is neither very efficient nor robust. The test asks a very specific question about a
location (median) shift.
Basic setup: two samples of independent observations of sizes $m$ and $n$. They come from distributions that differ, if at all,
only by a shift of location.
\[\begin{aligned} F(x) &\rightarrow \text{population CDF for the first sample} \\ G(y) &\rightarrow \text{population CDF for the second sample} \end{aligned}\]
The general idea of a shift would be
\[G(y) = F(y - \theta)\]
Under $H_0$, $\theta = 0 \Rightarrow$ no shift, which means there's a common median $M$. Under $H_0$, the number of sample observations above the common median $M$ is distributed $B(m+n, 0.5)$.
Example: Suppose two different methods of growing corn were randomly assigned to number of different plots of land, and the yield per acre was computed for each plot.
\[\begin{aligned} &\text{Method 1: } 83, 91 ,94, 89, 89, 96, 91, 92, 90 && m = 9 \\ &\text{Method 2: } 91, 90, 81, 83, 84, 83, 88, 91, 89, 84 && n = 10 \end{aligned}\]
The common median of the combined sample is $M = 89$, and this is a
point estimate for the common population median under $H_0$. We arrange the data in a $2 \times 2$
contingency table:
\[\begin{array}{l|cc|c}
 & \text{Method 1} & \text{Method 2} & \\
\hline
> 89 & 6 & 3 & 9 \\
\leq 89^\ast & 3 & 7 & 10 \\
\hline
 & 9 & 10 & 19
\end{array}\]
The column margins ($9$ and $10$) are fixed by design, and the row margins are fixed by the definition of the median. With the margins fixed, if we know one value in the table, we can fill in the other three values.
Note that different approaches are possible, for example the book would drop the values exactly equal to $M$ to give:
\[\begin{array}{l|cc|c}
 & \text{Method 1} & \text{Method 2} & \\
\hline
> 89 & 6 & 3 & 9 \\
\leq 89^\ast & 1 & 5 & 6 \\
\hline
 & 7 & 8 & 15
\end{array}\]
The most common analysis approach is
Fisher's exact test. The idea is to consider all tables that are "as or more extreme than" the observed,
consistent with the fixed margins. For example, if the $(1, 1)$ cell happened to be 4 instead of 6:
\[\begin{array}{l|cc|c}
 & \text{Method 1} & \text{Method 2} & \\
\hline
> 89 & 4 & 5 & 9 \\
\leq 89^\ast & 5 & 5 & 10 \\
\hline
 & 9 & 10 & 19
\end{array}\]
we get a more balanced table than the observed one so it's less extreme, making it more favorable to $H_0$. We get a more extreme table (evidence against $H_0$) if the $(1, 1)$ cell is really big or small. $\{7, 8, 9\}$ in the $(1, 1)$ cell are all more extreme than the observed in one direction. $\{3, 2, 1, 0\}$ are as extreme or more than the observed in the other direction.
The other benefit of the particular table structure is that we can easily work out the
exact probability of each configuration. In our example, we have two independent samples, one from each method. Each behaves as a
Binomial with the appropriate number of trials ($m = 9, n = 10$) and probability $\frac{1}{2}$ of being above or below the common median under $H_0$.
But we also
condition on the fixed row totals; the total count above the median is itself Binomial, with $m + n = 19$ trials. So, for our particular configuration, the conditional probability is:
\[\frac{\binom{9}{6}0.5^9 \cdot \binom{10}{3}0.5^{10}}{\binom{19}{9}0.5^{19}} = \frac{\binom{9}{6} \binom{10}{3}}{\binom{19}{9}} \Rightarrow \text{prob. for observed configuration}\]
We need to compute this type of probability for all the more extreme tables mentioned above, and add them all up.
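These hypergeometric probabilities are easy to compute directly. A stdlib-only Python sketch (the two-sided p-value here sums all tables whose probability is no larger than the observed one, which is the convention R's fisher.test uses):

```python
from math import comb

m, n = 9, 10  # Method 1 and Method 2 sample sizes
k = 9         # number of observations above the combined median

def table_prob(r):
    """P(r values from sample 1 fall above the median | fixed margins)."""
    return comb(m, r) * comb(n, k - r) / comb(m + n, k)

p_obs = table_prob(6)
print(round(p_obs, 4))  # 0.1091 -- matches C(9,6)C(10,3)/C(19,9)

# Two-sided p-value: add up all configurations as or more extreme
# (i.e. no more probable) than the observed one.
r_min, r_max = max(0, k - n), min(m, k)
p_two_sided = sum(table_prob(r)
                  for r in range(r_min, r_max + 1)
                  if table_prob(r) <= p_obs + 1e-12)
print(round(p_two_sided, 4))  # 0.1789
```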
In general, suppose we have two independent samples of sizes $m$ and $n$, with $r$ values from the first sample above the combined sample median, and assume (just for ease of notation, as the computation clearly extends) that no sample values are equal to the combined sample median. The probability of that table configuration is
\[\begin{aligned} \frac{\binom{m}{r} \binom{n}{k-r}}{\binom{m+n}{k}}, && k = \frac{m+n}{2} \end{aligned}\]
Two-sample Distribution Tests
We can also extend the tests for a single sample's distribution. We'd like to ask if it's reasonable to assume that two samples of data come from the same distribution. Note that we haven't specified which distribution.
The Kolmogorov-Smirnov Test
The test to use is called the
Smirnov or two-sample
K-S test. Our hypotheses are
\[\begin{aligned} &H_0: \text{the two samples come from the same distribution} \\ \text{vs. } &H_1: \text{the distributions are different} \end{aligned} \]
This is inherently a two-sided alternative. One-sided alternatives exist, but are very rarely used. Recall that in the one sample case, we compared our empirical CDF from the data with the CDF of the distribution hypothesized under $H_0$. In this more general case, we will now compare the two CDFs. The test statistic is the
difference of greatest magnitude between the two. As before, this is taken as the maximum vertical distance between the empirical CDFs.
\[\begin{array}{c|ccc}
\text{Ranked observation} & S_1(x) & S_2(y) & K \\
\hline
x_1 & \frac{1}{3} & 0 & \frac{1}{3} \\
x_2 & \frac{2}{3} & 0 & \frac{2}{3}^\ast \\
y_1 & \frac{2}{3} & \frac{1}{2} & \frac{1}{6} \\
x_3 & 1 & \frac{1}{2} & \frac{1}{2} \\
y_2 & 1 & 1 & 0
\end{array}\]
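The maximum-gap computation in the table can be scripted directly. A sketch in Python (the numeric values below are hypothetical, chosen only to reproduce the ordering $x, x, y, x, y$ from the table):

```python
# Two-sample K-S statistic: maximum vertical gap between the empirical CDFs,
# evaluated at every point of the combined sample.
x = [1.0, 2.0, 4.0]  # hypothetical sample 1 (x1, x2, x3)
y = [3.0, 5.0]       # hypothetical sample 2 (y1, y2)

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at t."""
    return sum(v <= t for v in sample) / len(sample)

points = sorted(x + y)
gaps = [abs(ecdf(x, t) - ecdf(y, t)) for t in points]
D = max(gaps)
print(D)  # 0.666... i.e. the starred 2/3 in the table
```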
The
K-S test is a test for general or overall difference in distributions. You can get a more sensitive analysis with a specific test for particular characteristics, if that is your goal (test for difference in means, differences in variance, etc.)
In R, we use the
ks.test() as in the one-sample case. By default, the option
exact = NULL is used. An exact p-value is computed if $m \times n < 10000$; an asymptotic approximation is used for larger samples.
Interestingly, the K-S statistic depends only on the order of the $x$ and $y$s in the ordered combined sample - the actual numerical values are not needed, because empirical CDFs jump at data points. Under standard two-sided tests, all orderings are equally likely, so the exact distribution of the test statistic can be known for the given sample sizes using just the ranks.
Cramér-von Mises Test
The
Cramér-von Mises test is another test which examines whether two samples of data come from the same distribution. The difference is that it looks at
all the differences $S_1(t) - S_2(t)$, where $S_1(\cdot)$ is the empirical CDF of sample 1, and $S_2(\cdot)$ is the empirical CDF of sample 2. These differences are evaluated at every point $t$ of the combined sample.
The test statistic is:
\[T = \frac{mn}{(m+n)^2}\sum_{t} \left[S_1(t) - S_2(t) \right]^2\]
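A direct implementation of this statistic, summing the squared ECDF differences over each point of the combined sample (the five hypothetical values $x = \{1, 2, 4\}$, $y = \{3, 5\}$ reproduce the $x, x, y, x, y$ ordering from the K-S table above):

```python
# Cramér-von Mises two-sample statistic, summing the squared difference of
# the two empirical CDFs over every point of the combined sample.
x = [1.0, 2.0, 4.0]  # hypothetical sample 1
y = [3.0, 5.0]       # hypothetical sample 2
m, n = len(x), len(y)

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at t."""
    return sum(v <= t for v in sample) / len(sample)

combined = sorted(x + y)
T = (m * n / (m + n) ** 2) * sum((ecdf(x, t) - ecdf(y, t)) ** 2
                                 for t in combined)
print(T)  # approximately 0.2 for these five values
```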
Similarly, this test is only used for a two-sided alternative. Values of $T$ have been tabulated and there is an approximation that is not strongly dependent on sample size. The exact null distribution can be found, as for other tests, by considering all ordered arrangements of the combined sample to be equally likely, and by computing $T$ for each ordered arrangement.
Note: Ties won't affect the results unless the sample sizes are really small. The tests by nature assume there is a true underlying ordering.
So far we've covered one-sample tests, paired sample tests and tests for two independent samples. What if we have three or more samples?
This question stems from a point of confusion that I still have about the causality, linearity, and time-invariance in LCCDEs. I wanted to use the capacitor as an example.
Consider a capacitor with capacitance $C$. Taking the current $i(t)$ to be the input to the system and the voltage $v(t)$ to be the output we have $$i(t) = C \frac{\mathrm{d} v(t)}{\mathrm{d}t}$$
This differential equation can be solved to obtain $$v(t) = v(t_0) + \frac{1}{C} \int_{t_0}^{t} i(\tau) \mathrm{d} \tau$$
My first question is: isn't this mathematically valid for all $t$? In other words, does this give us the response for all $t$ or is it only valid for $t > t_0$? If it is valid for all $t$, including the $t < t_0$ case, doesn't this make the system non-causal since it anticipates future input and output values? Are we allowed to integrate backwards in time?
My second question relates to the assertion that for the LCCDE to describe a linear system, the initial conditions must be zero. Suppose $t_0 = 0$ such that $$v(t) = v(0) + \frac{1}{C} \int_{0}^{t} i(\tau) \mathrm{d} \tau$$
With $v(0) = 0$ the system is linear. But the choice of $t_0 = 0$ is arbitrary, since for example $$v(t) = v(0) + \frac{1}{C} \int_{0}^{t} i(\tau) \mathrm{d} \tau = v(2) + \frac{1}{C} \int_{2}^{t} i(\tau) \mathrm{d} \tau$$
Why shouldn't we require that $v(2) = 0$ as well for that matter? What am I missing here? Thank you in advance.
How to Perform a Nonlinear Distortion Analysis of a Loudspeaker Driver
A thorough analysis of a loudspeaker driver is not limited to a frequency-domain study. Some desirable and undesirable (but nonetheless exciting) effects can only be caught by a nonlinear time-domain study. Here, we will discuss how system nonlinearities affect the generated sound and how to use the COMSOL Multiphysics® software to perform a nonlinear distortion analysis of a loudspeaker driver.
Understanding Linear and Nonlinear Distortions
A transducer converts a signal of one energy form (input signal) to a signal of another energy form (output signal). In regard to a loudspeaker, which is an electroacoustic transducer, the input signal is the electric voltage that, in the case of a moving coil loudspeaker, drives its voice coil. The output signal is the acoustic pressure that the human ear perceives as a sound. A distortion occurs when the output signal quantitatively and/or qualitatively differs from the input signal.
Schematic representation of a moving coil loudspeaker.
The distortion can be divided into two principal parts:
Linear distortion Nonlinear distortion
The term
linear distortion, which might sound rather confusing, implies that the output signal has the same frequency content as the input signal. In this distortion, it is the amplitude and/or phase of the output signal that is distorted. In contrast, the term nonlinear distortion suggests that the output signal contains frequency components that are absent in the input signal. This means that the energy is transferred from one frequency at the input to several frequencies at the output. Input and output signals in linear and nonlinear transducers.
Let the input sinusoidal signal, $A_\text{in} \sin \left( 2\pi f t \right)$, be applied to a transducer with a nonlinear transfer function. The frequency content of the output signal will then have more than one frequency. Apart from the fundamental portion, which corresponds to the frequency $f$, there will be a distorted portion. Its spectrum usually (but not always) consists of the frequencies $f^{(2)}, f^{(3)}, f^{(4)}, \ldots$, which are multiples of the fundamental frequency, $f^{(n)} = n f$, in which $n \geq 2$. These frequencies, called
overtones, are present in the sound, and it is the overtones that make musical instruments sound different: A note played on a violin sounds different from the same note played on a guitar. The same happens with the sound emitted from a loudspeaker.
The distortion is a relative quantity that can be described by the value of the
total harmonic distortion (THD). This value is calculated as the ratio of the amplitude of the distorted portion of the signal to that of the fundamental part:
\[\text{THD} = \frac{\sqrt{\sum_{n \geq 2} A_n^2}}{A_1},\]
where $A_n$ is the amplitude of the $n$th harmonic.
The profile of a signal with a higher THD visibly differs from the pure sinusoidal.
Unfortunately, the value of the THD of the output signal itself might not be enough to judge the quality of the loudspeaker. A signal with a lower THD may sound worse than a signal with a higher THD. The reason is that the human ear perceives various overtones differently.
The distortion can be represented as a set of individual even-order, 2nf, and odd-order, (2n-1)f, components. The former are due to asymmetric nonlinearities of the transducer, while the latter are due to symmetric nonlinearities. The thing is that the sound containing even-order harmonics is perceived as “sweet” and “warm”. This can be explained by the fact that there are octave multiples of the fundamental frequency among them. The odd-order harmonics sound “harsh” and “gritty”. That is quite alright for a guitar distortion pedal, but not for a loudspeaker. What matters is, of course, not just the presence of those harmonics, but rather their level in the output signal.
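As a numeric illustration of the THD idea, the harmonic amplitudes can be read off a spectrum and combined into a single figure. A sketch assuming THD is taken as the RMS of the harmonic amplitudes relative to the fundamental amplitude; the test signal and its 10%/5% harmonic levels are made up:

```python
import numpy as np

n = 1000
t = np.arange(n) / n  # one-second window, fs = 1000 Hz
f = 50                # fundamental; integer cycles -> exact FFT bins
signal = (np.sin(2 * np.pi * f * t)
          + 0.10 * np.sin(2 * np.pi * 2 * f * t)   # 2nd harmonic at 10%
          + 0.05 * np.sin(2 * np.pi * 3 * f * t))  # 3rd harmonic at 5%

spectrum = np.abs(np.fft.rfft(signal)) * 2 / n     # single-sided amplitudes
fundamental = spectrum[f]
harmonics = spectrum[2 * f], spectrum[3 * f]

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD = {thd * 100:.2f}%")  # about 11.18% = sqrt(0.10^2 + 0.05^2)
```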
Another interesting effect, called
intermodulation, occurs when the input signal contains more than one frequency component. The corresponding output signals start to interact with each other, producing frequency components absent in the input signal. In practice, if a two-tone sine wave such as $A_\text{in} \sin \left( 2\pi f_1 t \right) + B_\text{in} \sin \left( 2\pi f_2 t \right)$ (in which $f_2 > f_1$) is applied to the input, the system nonlinearities will result in the modulation of the higher-frequency component by the lower one. That is, the frequencies $f_2 \pm f_1$, $f_2 \pm 2f_1$, and so on will appear in the frequency spectrum of the output signal. The quantitative measure of the intermodulation that corresponds to the frequency $f_2 \pm (n-1) f_1$, in which $n \geq 2$, is the $n$th-order intermodulation distortion (IMD) coefficient, defined as the ratio of the amplitude of that component to the amplitude of the output component at $f_2$.
In practice, using an input signal containing three or more frequencies for the IMD analysis is not advisable, as the results become harder to interpret.
Transient Nonlinear Analysis of a Loudspeaker Driver
To summarize, the linear analysis of the loudspeaker, though a powerful tool for a designer, might not be sufficient. The loudspeaker can only be completely described if an additional nonlinear analysis is carried out. The nonlinear analysis is supposed to answer the following questions:
How does the nonlinear behavior of the loudspeaker affect the output signal? What are the limits of the input signal that ensure the loudspeaker functions acceptably? How should I compensate for the undesired distortion of the loudspeaker?
From the simulation point of view, there is both bad and good news. The bad news is that the full nonlinear analysis cannot be performed in the frequency domain. It requires the transient simulation of the loudspeaker, which is more demanding and time consuming than the frequency-domain analysis. The good news is that the effect of certain nonlinearities is only significant at low frequencies.
For example, the voice coil displacement is greater at lower frequencies, and therefore the finite strain theory must be used to model the mechanical parts of the motor. Using the finite strain theory is redundant at higher frequencies, where the infinitesimal strain theory is applicable. The figures below show the results for the transient loudspeaker tutorial, driven by input voltage of the same amplitude ($V_0 = 10$ V):
Voice coil motion in the air gap of the loudspeaker driver for a single-tone input voltage signal: 70 Hz on the left and 140 Hz on the right. Acoustic pressure at the listening point for a single-tone input voltage. The blue curves correspond to the nonlinear time-domain analysis, while the red curves correspond to the frequency-domain analysis: 70 Hz on the left and 140 Hz on the right.
The animations above depict the magnetic field in the voice coil gap and the motion of the former and the spider (both in pink) as well as the voice coil (in orange). As expected, the displacements, as well as the spider deformation, are higher at the lower frequency. The spider deformation obeys the geometrically nonlinear analysis and therefore the linear approximation is inaccurate in this case. This is confirmed by the output signal plots. These plots depict the acoustic pressure at the listening point located about 14.5 cm in front of the speaker dust cap tip.
The acoustic pressure profile obtained from the nonlinear time-domain modeling for the 70-Hz input signal deviates from the sinusoidal shape to a certain extent, which means that higher-order harmonics start playing a definite role. This is not visible for the input signal at 140 Hz: There’s only a slight difference in the amplitude between the linear frequency-domain and nonlinear time-domain simulation results. The THD value of the output signal drops from 4.3% in the first case to 0.9% in the second case. The plots below show how the harmonics contribute to the sound pressure level (SPL) at the listening point.
Frequency spectra of the SPL at the listening point: single-tone input voltage (70 Hz on the left and 140 Hz on the right).
The IMD analysis of the loudspeaker is carried out in a similar way. What's different is the input signal applied to the voice coil, which contains two harmonic parts:
\[V(t) = V_1 \sin \left( 2\pi f_1 t \right) + V_2 \sin \left( 2\pi f_2 t \right),\]
whose amplitudes, $V_1$ and $V_2$, usually correlate as $4 : 1$, which corresponds to 12 dB.
The example below studies the IMD of the same test loudspeaker driver. The dual-frequency input voltage, in which $f_1 = 70$ Hz and $f_2 = 700$ Hz, serves as the input signal. The SPL plot on the left shows how the second- and third-order harmonics arising in the low-frequency part of the output signal generate a considerable level of the corresponding order IMDs in the high-frequency part. The IMD level becomes substantially lower if the signal frequency $f_1$ is increased to 140 Hz. This is seen in the right plot below.
Frequency spectra of the SPL at the listening point for a two-tone input voltage.
Modeling Tips for Analyzing a Loudspeaker Driver
Since transient nonlinear simulations tend to be demanding, the loudspeaker driver model should not be overcomplicated. The 2D axisymmetric formulation is a good starting approach and was used for the tutorial examples in the previous section. After that, it’s important to estimate which effects are more important than others. This will help you set up an adequate multiphysics model of a loudspeaker.
The system nonlinearities include, but are not limited to, the following:
Nonlinear behavior of the magnetic field in the loudspeaker pole piece made of high-permeability metal Geometric nonlinearities in the moving parts of the motor Topology change as the voice coil moves up and down in the air gap
In the language of lumped parameters, this means that they are no longer constants like the Thiele-Small parameters, but functions of the voice coil position, $x$, and the input voltage, $V$. The above-mentioned nonlinearities will be reflected in the nonlinear inductance, $L \left( x, V \right)$; compliance, $C \left( x, V \right)$; and dynamic force factor, $Bl \left( x, V \right)$. For instance, the tutorial example shows that the nonlinear behavior of the force factor is more distinct at 70 Hz, whereas it is almost flat (that is, closer to linear) at 140 Hz.
Nonlinear (left) and almost linear (right) behavior of the dynamic force factor: 70 Hz on the left and 140 Hz on the right.
With the following steps, the discussed nonlinearities can be incorporated into the model. First, the nonlinear magnetic effects are taken into account through the constitutive relation for the corresponding material. In the test example, the BH curve option is chosen for the iron pole piece. Next, the
Include geometric nonlinearity option available under the Study Settings section forces the structural parts of the model to obey the finite strain theory. Lastly, the topology change is captured by the Moving Mesh feature. Whenever applied, the feature ensures that the mesh element nodes move together with the moving parts of the system. Since the displacements can be quite high, it is likely that the mesh element distortion reaches extreme levels and the numerical model becomes unstable. The Automatic Remeshing option is used as a remedy against highly distorted mesh elements.
All in all, the nonlinear time-domain analysis of the loudspeaker requires much more effort and patience than the linear frequency-domain study. This is especially relevant when the model includes the
Moving Mesh feature with the Automatic Remeshing option activated. Investing some time in the geometry and mesh preprocessing will pay off, as the moving mesh is very sensitive to the mesh quality. That is, highly distorted mesh elements and near-zero angles between the geometric entities have to be avoided. A proper choice of the Condition for Remeshing option may also require some trial and error.
The loudspeaker design discussed here might not be considered “good” by most standards. The odd-order harmonics prevail in the frequency content of the output signal.
Next Steps
To perform your own nonlinear distortion analysis of a loudspeaker, click on the button below. This will take you to the Application Gallery, where you can find the MPH-files for this model together with detailed modeling instructions. (Note: You must have a COMSOL Access account and valid software license.)
Additional Resources
Check out other examples of modeling loudspeakers in these tutorials.
Further reading:
L.L. Beranek and T.J. Mellow, Acoustics: Sound Fields and Transducers, Academic Press, 2012.
Brüel & Kjær, "Audio Distortion Measurements," Application Note BO0385, 1993.
W. Marshall Leach, Jr., Introduction to Electroacoustics and Audio Amplifier Design, Kendall Hunt, 2010.
№ 9
Volume 70, № 3, 2018
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 299-317
We study nonregular elliptic problems with boundary conditions of higher orders and prove that these problems are Fredholm on appropriate pairs of the inner-product Hörmander spaces that form a two-sided refined Sobolev scale. We prove a theorem on the regularity of generalized solutions to the problems in these spaces.
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 318-336
We investigate the order of growth of the moduli of arbitrary algebraic polynomials in the weighted Bergman space $A_p(G, h),\; p > 0$, in regions with interior zero angles at finitely many boundary points. We obtain estimates for algebraic polynomials in bounded regions with piecewise smooth boundary.
A problem for one class of pseudodifferential evolutionary equations multipoint in the time variable
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 337-355
We establish the correct solvability of the multipoint (in the time variable) problem for the evolution equation with operator of differentiation of infinite order in generalized $S$-type spaces. The properties of the fundamental solution of this problem and the behavior of the solution $u(t, x)$ as $t \rightarrow +\infty$ are investigated.
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 356-365
A nonlinear two-point boundary-value problem for an ordinary differential equation is studied by the method of parametrization. We construct systems of nonlinear algebraic equations that enable us to find the initial approximation to the solution to the posed problem. In terms of the properties of constructed systems,we establish necessary and sufficient conditions for the existence of an isolated solution to the boundary-value problem under consideration.
Bifurcation conditions for the solutions of weakly perturbed boundary-value problems for operator equations in Banach spaces
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 366-378
We obtain bifurcation conditions for the solutions of weakly perturbed boundary-value problems for operator equations in Banach spaces from the point $\varepsilon = 0$. A convergent iterative procedure is proposed for the construction of solutions as parts of series in powers of $\varepsilon$ with pole at the point $\varepsilon = 0$.
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 379-403
We introduce the moduli of smoothness with Jacobi weights $(1-x)^\alpha (1+x)^\beta$ for functions in the Jacobi weighted spaces $L_p[-1, 1],\; 0 < p \leq \infty$. These moduli are used to characterize the smoothness of (the derivatives of) functions in the weighted spaces $L_p$. If $1 \leq p \leq \infty$, then these moduli are equivalent to certain weighted $K$-functionals (and so they are equivalent to certain weighted Ditzian–Totik moduli of smoothness for these $p$), while for $0 < p < 1$ they are equivalent to certain "realization functionals".
Continuity in the parameter for the solutions of one-dimensional boundary-value problems for differential equations of higher orders in Slobodetsky spaces
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 404-411
We introduce the most general class of linear boundary-value problems for systems of ordinary differential equations of order $r \geq 2$ whose solutions belong to the Slobodetsky space $W^{s+r}_p\bigl( (a, b), C_m\bigr)$, where $m \in \mathbb{N},\; s > 0$ and $p \in (1,\infty )$. We also establish sufficient conditions under which the solutions of these problems are continuous functions of the parameter in the Slobodetsky space $W^{s+r}_p\bigl( (a, b), C_m\bigr)$.
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 412-428
We develop the ideas of the method of averaging for some classes of fuzzy systems (fuzzy differential equations with delay, fuzzy differential equations with pulsed action, fuzzy integral equations, fuzzy differential inclusions and differential inclusions with fuzzy right-hand sides without and with pulsed action).
Approximate and information aspects of the numerical solution of unstable integral and pseudodifferential equations
Ukr. Mat. Zh. - 2018. - 70, № 3. - pp. 429-444
We present a review of the latest results obtained in the field of numerical solution of unstable integral and pseudodifferential equations. New versions of fully discrete projection and collocation methods are constructed and justified. It is shown that these versions are characterized by the optimal accuracy and cost efficiency, as far as the use of computational resources is concerned.
Moklyachuk M. P. Robust interpolation of random fields homogeneous in time and isotropic on a sphere, which are observed with noise
Ukr. Mat. Zh. - 1995. - 47, № 7. - pp. 962–970
We study the problem of optimal linear estimation of the functional $$A_N \xi = \sum\limits_{k = 0}^{N} {\int\limits_{S_n } {a(k,x)\xi (k,x)\,m_n (dx)} }$$ which depends on unknown values of a random field $\xi(k, x)$, $k \in \mathbb{Z}$, $x \in S_n$, homogeneous in time and isotropic on the sphere $S_n$, from observations of the field $\xi(k,x)+\eta(k,x)$ for $k \in \mathbb{Z} \setminus \{0, 1, \ldots, N\}$, $x \in S_n$ (here, $\eta(k, x)$ is a random field uncorrelated with $\xi(k, x)$, homogeneous in time, and isotropic on the sphere $S_n$). We obtain formulas for calculating the mean-square error and the spectral characteristic of the optimal estimate of the functional $A_N\xi$. The least favorable spectral densities and minimax (robust) spectral characteristics are found for optimal estimates of the functional $A_N\xi$.
Ukr. Mat. Zh. - 1993. - 45, № 3. - pp. 389–397
We study the problem of optimal linear estimation of the transformation $A\xi = \int_0^\infty \langle a(t), \xi(-t)\rangle \, dt$ of a stationary random process $\xi(t)$ with values in a Hilbert space from observations of the process $\xi(t) + \eta(t)$ for $t \leq 0$. We obtain relations for computing the error and the spectral characteristic of the optimal linear estimate of the transformation $A\xi$ for given spectral densities of the processes $\xi(t)$ and $\eta(t)$. The minimax spectral characteristics and the least favorable spectral densities are obtained for various classes of densities.
Ukr. Mat. Zh. - 1991. - 43, № 2. - pp. 216–223
Ukr. Mat. Zh. - 1991. - 43, № 1. - pp. 92-99
Ukr. Mat. Zh. - 1985. - 37, № 6. - pp. 730–734
Ukr. Mat. Zh. - 1977. - 29, № 3. - pp. 324–332
There have been a number of cases where efficient hidden subgroup algorithms have been found for specific non-Abelian groups with very specific structures. Why haven't we found any efficient quantum algorithm to find a hidden subgroup in a symmetric group even when the structure of the hidden subgroup is heavily restricted (e.g. the hidden subgroup could be a very special direct or semidirect product)?
From what I understand, it is partially because we don't have any techniques currently that take advantage of structure of the hidden subgroup itself. Weak Fourier sampling solves the problem whenever the hidden subgroup is normal (but this is not a property of the subgroup itself - analogous to being a direct product etc - but rather how the subgroup sits within the larger group), but that's no help for the symmetric group because its only normal subgroup is the alternating group (for $n \geq 5$).
From the examples of [1] - which shows that in the symmetric group strong Fourier sampling produces distributions for the hidden trivial group vs a hidden random involution that are exponentially close - and [2], it would seem that there may be some hope if the hidden subgroup is promised to be relatively large. But you would need to either be able to show that Fourier sampling succeeds in this case, or to come up with a different method that (likely) really took advantage of the largeness of the hidden subgroup.
Moore, Russell, and Sniady [3] also showed that the other relatively successful method - a Kuperberg-style sieve - doesn't work efficiently in a group very close to the symmetric group (namely, $S_n \wr S_2 = (S_n \times S_n) \rtimes S_2$), to distinguish a hidden trivial group from a hidden involution.
[1] Moore, C., A. Russell, and L. J. Schulman. The Symmetric Group Defies Strong Fourier Sampling. SIAM J. Comput. 37, p. 1842, 2008. Preliminary version in FOCS 2005. (arXiv version)
[2] Grigni, M., L. J. Schulman, M. Vazirani, and U. Vazirani. Quantum Mechanical Algorithms for the Nonabelian Hidden Subgroup Problem. Combinatorica 24, p. 137, 2004. Preliminary version in STOC 2001.
[3] Moore, Russell, and Sniady. On the impossibility of a quantum sieve algorithm for graph isomorphism. SIAM J. Comput. 39 (2010), no. 6, 2377–2396. Preliminary version in STOC 2007.
Exact classical bounds are known (https://oeis.org/A186202): you only have to sample certain prime cycles, as they form a minimum dominating set on $S_n$ under a detection relation. This is smaller than $n!$, but still about the order of $p!$, where $p$ is the largest prime less than or equal to $n$.
Four Color Theorem (4CT) states that every planar graph is four colorable. There are two proofs given by [Appel,Haken 1976] and [Robertson,Sanders,Seymour,Thomas 1997]. Both these proofs are computer-assisted and quite intimidating.
There are several conjectures in graph theory that imply 4CT. Resolution of these conjectures probably requires a better understanding of the proofs of 4CT. Here is one such conjecture :
Conjecture: Let $G$ be a planar graph, let $C$ be a set of colors, and let $f : C \rightarrow C$ be a fixed-point-free involution. Let $L = (L_v : v \in V(G))$ be a list assignment such that $|L_v| \geq 4$ for all $v \in V(G)$, and such that for every $v \in V(G)$ and every $\alpha \in C$, if $\alpha \in L_v$ then $f(\alpha) \in L_v$.
Then there exists an $L$-coloring of the graph $G$.
If you know of such conjectures implying 4CT, please list them, one per answer. I could not find a comprehensive list of such conjectures.
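The hypothesis of the conjecture above is easy to test by brute force on small planar graphs. Below is a minimal sketch (the function name and the instance are mine, chosen only for illustration): it checks whether an $L$-coloring exists for $K_4$ with the color set $\{1,2,3,4\}$ and the fixed-point-free involution $f(1)=2$, $f(3)=4$, under which every list of all four colors is $f$-closed.

```python
from itertools import product

def has_list_coloring(edges, lists):
    """Brute force: does a proper coloring exist in which each vertex v
    receives a color from lists[v] and adjacent vertices differ?"""
    vertices = sorted(lists)
    for choice in product(*(lists[v] for v in vertices)):
        coloring = dict(zip(vertices, choice))
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

# Planar graph K4; every list is the full f-closed set {1,2,3,4}.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
lists = {v: [1, 2, 3, 4] for v in range(4)}
print(has_list_coloring(edges, lists))  # True
```

Exhaustive search is of course exponential, so this is only useful for hunting small counterexamples, not for deciding the conjecture.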
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) for 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurrent dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll, and introductory papers. About quantum mechanics I have poor knowledge yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it, then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric would become Schwarzschild-like.
if**
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
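Spelling out the triangle argument above (my arithmetic, assuming the stated displacements per unit time):

$$\tan\theta = \frac{1/2}{\sqrt{3}/2} = \frac{1}{\sqrt{3}} \quad\Rightarrow\quad \theta = 30^\circ,$$

i.e. the wave propagates in the $xy$-plane at $30^\circ$ from the $y$-direction.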
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the server of the university, which means running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Problem
Consider Hamiltonian $H = H_0 + \lambda H'$ with
$$H_0 = \begin{pmatrix} E_+ & 0 \\ 0 & E_- \end{pmatrix}, \qquad H' = \vec{n}\cdot\vec{\sigma}$$ for a 3D Cartesian vector $\vec{n}$ and $\sigma_i$ the Pauli matrices.
Solve exactly for $E_+ = E_-$ and $E_+ \ne E_-$. (NOTE: As an earlier part of this problem, both cases were solved to second-order energy and state corrections, and those results are supposed to be compared to these results.)
My Question
As part of my perturbation solutions, I found the eigenenergies and eigenstates of both $H_0$ and $H'$. Before I take a brute-force approach and actually diagonalize $H$, I want to make sure there isn't a more elegant approach. It seems like I should be able to use the information about the components---namely the eigenstates and eigenvalues of $H_0$ and $H'$---to find information about the sum $H$.
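Whichever route you take, a quick numerical cross-check is cheap. The sketch below (all numbers are made up for illustration; the correction formula is the standard nondegenerate second-order result, not taken from the problem statement) diagonalizes $H = H_0 + \lambda\,\vec n\cdot\vec\sigma$ exactly and compares with perturbation theory for $E_+ \ne E_-$:

```python
import numpy as np

# Hypothetical numbers for illustration (not from the problem statement)
E_plus, E_minus, lam = 2.0, 1.0, 0.05
n = np.array([0.3, 0.4, 0.5])

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

H0 = np.diag([E_plus, E_minus]).astype(complex)
Hp = sum(ni * si for ni, si in zip(n, sigma))       # n . sigma
exact = np.linalg.eigvalsh(H0 + lam * Hp)           # ascending order

# Standard nondegenerate second-order perturbation theory:
# E_+/- ~ E_+/- +/- lam*n_z + lam^2 (n_x^2 + n_y^2) / (+/-(E_+ - E_-))
off2 = n[0]**2 + n[1]**2
pt_plus  = E_plus  + lam * n[2] + lam**2 * off2 / (E_plus - E_minus)
pt_minus = E_minus - lam * n[2] - lam**2 * off2 / (E_plus - E_minus)

print(np.allclose(sorted([pt_minus, pt_plus]), exact, atol=1e-4))
```

For small $\lambda$ the two answers agree to $O(\lambda^3)$, which is exactly the comparison the problem asks for.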
I would model this as a one-state system ($\theta$), with the gyro as the control input. The gyro noise becomes state input noise, and the compass noise becomes measurement noise. So your system model becomes $$\hat{\dot \theta} = \omega_{gyro} + w$$ $$\hat y = \hat \theta$$ where $\hat y$ is the filter's estimate of direction, which you compare to the compass direction to get your Kalman update.

The magnetic distortion is going to be difficult, because if you sit in any one place it will appear as a constant offset term -- the Kalman filter won't deal with this well. I'm pretty sure you'll either need to map the distortion, get some second absolute direction reference, or just accept the distortion.

You are confusing spectral content with probability distribution. If the noise is white, then each sample is perfectly independent of any other sample. If the noise is Laplacian, each sample obeys the Laplace distribution. Kalman filters don't like colored noise (but you can deal with that by adding states). A Kalman filter is only the overall optimal filter when the noise is Gaussian and the cost function is sum-of-squares. For any other noise and cost function, the optimal filter is probably nonlinear. But for any zero-mean white noise and a sum-of-squares cost function, the Kalman filter is the best linear filter to be found.
(Note that the system model I gave ends up with a pretty trivial Kalman filter -- you may be better off, if you can't find some other means of estimating the compass offset, using a complementary filter to combine these two sensor inputs. Doing all the Kalman computations will just end up coughing up a complementary filter anyway, and chances are that you'll have enough guesses for your constants that you may as well just guess at the crossover point in a complementary filter and be done with it.)
(Note, too, that if you have some absolute position reference, and some means of estimating speed, and a vehicle that always goes in the direction you point it, then you can use an extended Kalman filter very profitably to correct the compass distortion by using the direction it actually moves to correct for the compass direction.)
Optimal State Estimation by Dan Simon, Wiley 2006, is -- in my opinion -- a very rich and clear treatment of the subject of Kalman filtering and its more sophisticated brethren (H-infinity, extended Kalman, unscented Kalman, and even a bit on Bayesian and particle filtering). It won't tell you how to apply that to navigation problems like this, but where would be the fun in life if all the problems were solved? If you can't follow the math in Simon's book, then you should probably be asking yourself whether you're going to be able to apply a Kalman filter in any sort of intelligent way.
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seem to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to particular search query? For example in Google I am able to use link such as: google.com/search?q=approach0+xyz Feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what would I expect. Which means I have to type every query. Possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that as nowadays several pages uses LaTex syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0. Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its releasing process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google is using is an HTTP GET request, but for mathematics a GET request may not be appropriate, since a math query has structure; a developer would usually use an HTTP POST request with a JSON-encoded body instead. This makes development much easier because JSON is rich-structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use a computer command line 'curl' to get search results from particular query link (you can actually see that in browser, but it is in developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for user to refer to a query, I will write this point in project TODO and improve this later. (just need some extra efforts though)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top, with different symbols such as "a", "b" ranked after the exact match.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabets.
@MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math Stack Exchange. This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago
"What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls ("favorite editor/distro/fonts" etc.) while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen it for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
In a random-effects meta-analysis model with one categorical independent variable, $\theta_{ij}=\theta_i+s_j+\epsilon_j$, where $\theta_{ij}$ is the observed effect size in category $i$ in study $j$, $\theta_i$ is the mean effect size in category $i$, $s_j$ is the random study effect, and $\epsilon_j$ is the within-study error (or sampling error). We often further assume that $s_j$ follows a normal distribution with mean 0 and variance $\tau^2$. $\epsilon_j$ is also normally distributed, and its variance ($\sigma^2$) is calculated from the primary literature and is assumed to be known when fitting the model.
In a model like this, meta-analysis software, such as metafor in R, can perform two heterogeneity tests. The first is testing whether the among-study heterogeneity is 0, that is, $\tau=0$. This is done by computing Cochran's $Q=\sum w_i(\theta_{ij}-\hat{\theta_i})^2$. Here, $w_i=1/\sigma^2$ and $\hat{\theta_i}$ is the weighted average ($w_i$ as the weight) of all $\theta_{ij}$ within the $i$th group. Meta-analysis textbooks often say that $Q$ is asymptotically chi-square distributed under the null hypothesis $\tau=0$.
The second homogeneity test is whether all $\theta_i$ are equal. This is done by computing $Q=\sum w^*_i(\hat{\theta_i}-\hat{\theta})^2$, where $\hat{\theta_i}$ is the weighted ($w^*_i=1/(\hat{\tau}^2+\sigma^2)$) average of all $\theta_{ij}$ within the $i$th category and $\hat{\theta}$ is the weighted overall mean. Again, $Q$ follows a chi-square distribution.
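To make the first test concrete, here is a minimal numerical sketch of Cochran's $Q$ with inverse-variance weights, referred to a $\chi^2_{k-1}$ distribution (the effect sizes and sampling variances are made up; this mirrors the formula above, not any particular software's internals):

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q: weighted squared deviations of the observed effects
    from their inverse-variance-weighted mean."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # known sampling variances
    theta_hat = np.sum(w * effects) / np.sum(w)    # weighted grand mean
    Q = np.sum(w * (effects - theta_hat) ** 2)
    df = len(effects) - 1
    p = stats.chi2.sf(Q, df)                       # asymptotic reference dist.
    return Q, df, p

# Hypothetical effect sizes and known sampling variances from 4 studies
effects = [0.30, 0.10, 0.45, 0.20]
variances = [0.04, 0.09, 0.05, 0.06]
Q, df, p = cochran_q(effects, variances)
```

A large $Q$ relative to $\chi^2_{k-1}$ is then taken as evidence against $\tau=0$; the questions below concern exactly when this reference distribution is justified.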
The questions I have are
In the first test, why is $Q$ asymptotically chi-square, not exactly chi-square? We know that for a normally distributed random variable, $(n-1)S^2/\sigma^2$ follows a chi-square distribution, and $Q$ is essentially the same. Is it the weighted average that makes it asymptotically, but not exactly, chi-square?
Is the heterogeneity test for $\tau$ still valid in general for a linear mixed-effects model? In a linear mixed model, we usually don't know $\sigma$ and have to estimate it using ML or REML. Does this break the chi-square distribution of $Q$? I have never seen the $Q$ statistic for testing random effects in a linear mixed model textbook. We know that likelihood ratio and Wald tests for random effects tend to be conservative. If this $Q$ statistic works in general, why isn't it adopted as a general test for linear mixed models?
For the second heterogeneity test, about the categorical predictor, is it just a Wald test? We know that Wald tests and likelihood ratio tests are anti-conservative in linear mixed models. Many textbooks recommend conditional F tests (with df corrections) or simulation-based inference. Is such anti-conservative behavior still a concern for meta-analysis models? I don't know whether the fact that $\sigma$ is given to the model as a known quantity alleviates this problem.
I have read a few meta-analysis textbooks and searched online but cannot find detailed explanations of these technical details. Any insights or references would be very helpful.
Prove: $\frac{\csc(\theta)+\cot(\theta)}{\tan(\theta)+\sin(\theta)}=\cot(\theta)\csc(\theta)$
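One direct verification (standard algebra, not from the original page): rewrite each side in terms of sine and cosine and simplify.

$$\frac{\csc\theta+\cot\theta}{\tan\theta+\sin\theta}
=\frac{\dfrac{1+\cos\theta}{\sin\theta}}{\dfrac{\sin\theta\,(1+\cos\theta)}{\cos\theta}}
=\frac{1+\cos\theta}{\sin\theta}\cdot\frac{\cos\theta}{\sin\theta\,(1+\cos\theta)}
=\frac{\cos\theta}{\sin^2\theta}
=\cot\theta\,\csc\theta.$$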
There have been a number of cases where efficient hidden subgroup algorithms have been found for specific non-Abelian groups with very specific structures. Why haven't we found any efficient quantum algorithm to find a hidden subgroup in a symmetric group even when the structure of the hidden subgroup is heavily restricted (e.g. the hidden subgroup could be a very special direct or semidirect product)?
From what I understand, it is partially because we don't have any techniques currently that take advantage of structure of the hidden subgroup itself. Weak Fourier sampling solves the problem whenever the hidden subgroup is normal (but this is not a property of the subgroup itself - analogous to being a direct product etc - but rather how the subgroup sits within the larger group), but that's no help for the symmetric group because its only normal subgroup is the alternating group (for $n \geq 5$).
From the examples of [1] - which shows that in the symmetric group strong Fourier sampling produces distributions for the hidden trivial group vs a hidden random involution that are exponentially close - and [2], it would seem that there may be some hope if the hidden subgroup is promised to be relatively large. But you would need to either be able to show that Fourier sampling succeeds in this case, or to come up with a different method that (likely) really took advantage of the largeness of the hidden subgroup.
Moore, Russell, and Sniady [3] also showed that the other relatively successful method - a Kuperberg-style sieve - doesn't work efficiently in a group very close to the symmetric group (namely, $S_n \wr S_2 = (S_n \times S_n) \rtimes S_2$), to distinguish a hidden trivial group from a hidden involution.
[1] Moore, C., A. Russell, and L. J. Schulman. The Symmetric Group Defies Strong Fourier Sampling. SIAM J. Comput. 37, p. 1842, 2008. Preliminary version in FOCS 2005. (arXiv version)
[2] Grigni, M., L. J. Schulman, M. Vazirani, and U. Vazirani. Quantum Mechanical Algorithms for the Nonabelian Hidden Subgroup Problem. Combinatorica 24, p. 137, 2004. Preliminary version in STOC 2001.
[3] Moore, Russell, and Sniady. On the impossibility of a quantum sieve algorithm for graph isomorphism. SIAM J. Comput. 39 (2010), no. 6, 2377–2396. Preliminary version in STOC 2007.
Exact classical bounds are known (https://oeis.org/A186202): you only have to sample certain prime cycles, as they form a minimum dominating set on $S_n$ under a detection relation. This is smaller than $n!$, but still about the order of $p!$, where $p$ is the largest prime less than or equal to $n$.
Contents Homework 6, ECE438, Fall 2011, Prof. Boutin
Due Wednesday October 26, 2011 (in class)
Question 1
Obtain the frequency response and the transfer function for each of the following systems. Sketch the magnitude of the frequency response, and indicate the location of the poles and zeros of the transfer function.
a. $ y[n]= \frac{x[n]+x[n-2]}{2}; $
b. $ y[n]= \frac{x[n]-x[n-1]}{2}; $
Question 2
Consider a DT LTI system described by the following equation
$ y[n]=x[n]+2x[n-1]+x[n-2]. $
Find the response of this system to the input
$ x[n]=\left\{ \begin{array}{rl} -2, & \text{ if }n=-2,\\ 1, & \text{ if }n=0,\\ -2 & \text{ if }n=2,\\ 0, & \text{ else. } \end{array} \right. $
by the following approaches:
a. Directly substitute x[n] into the difference equation describing the system;
b. Find the impulse response h[n] and convolve it with x[n];
c. Find the frequency response by the following two approaches: (i) apply the input $ e^{ j \omega_0 n} $ to the difference equation describing the system; (ii) find the DTFT of the impulse response (verify that both methods lead to the same result). Then find the DTFT of the input, multiply it by the frequency response of the system to yield the DTFT of the output, and finally calculate the inverse DTFT to obtain y[n].
d. Verify that all three approaches for finding y[n] lead to the same result.
Question 3
Consider a causal LTI system with transfer function
$ H(z)= \frac{1-\frac{1}{2}z^{-2}} {1-\frac{1}{\sqrt{2}} z^{-1} +\frac{1}{4} z^{-2}} $
a. Sketch the locations of the poles and zeros.
b. Determine the magnitude and phase of the frequency response $ H(\omega) $, for $ \omega =0,\frac{\pi}{4}, \frac{\pi}{2}, \frac{3\pi}{4}, \text{ and }\pi $.
c. Is the system stable? Explain why or why not.
d. Find the difference equation for y[n] in terms of x[n], corresponding to this transfer function H(z).
Question 4
Consider a DT LTI system described by the following non-recursive difference equation (moving average filter)
$ y[n]=\frac{1}{8} \left( x[n]+x[n-1]+x[n-2]+x[n-3]+x[n-4]+x[n-5]+x[n-6]+x[n-7]\right) $
i.e.
$ y[n]=\frac{1}{8} \sum_{k=0}^{7}x[n-k] $
a. Find the impulse response h[n] for this filter. Is it of finite or infinite duration?
b. Find the transfer function H(z) for this filter.
c. Sketch the locations of poles and zeros in the complex z-plane.
Hint: To factor H(z), use the geometric series and the fact that the roots of the polynomial $ z^N- p_0 =0 $ are given by
$ z_k =|p_0|^{\frac{1}{N}} e^{j \frac{(\text{arg }p_0+2\pi k)}{N}} ,\quad k=0,\ldots ,N-1 $
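The hint can be checked numerically; a small NumPy sketch (not part of the assignment) for the case $N=8$, $p_0=1$ relevant to this question:

```python
import numpy as np

# Roots of z^8 - 1 = 0 (the case N = 8, p_0 = 1 from the hint):
# the eight 8th roots of unity, z_k = e^{j 2 pi k / 8}.
roots = np.roots([1, 0, 0, 0, 0, 0, 0, 0, -1])

# For the moving-average filter, H(z) = (1/8) * (1 - z^-8) / (1 - z^-1),
# so these are the candidate zeros; the one at z = 1 is cancelled by the pole there.
```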
Question 5
Consider a DT LTI system described by the following recursive difference equation
$ y[n]= \frac{1}{8} \left( x[n]-x[n-8]+y[n-1] \right) $
a. Find the transfer function H(z) for this filter.
b. Sketch the locations of poles and zeros in the complex z-plane. Hint: See Part c of the previous problem.
c. Find the impulse response h[n] for this filter by computing the inverse ZT of H(z). Is it of finite or infinite duration?
Discussion
Write your questions/comments here
Yimin:
1. What does Q2-c-i mean ("i. apply the input $ e^{ j \omega_0 n} $ to the difference equation describing the system")? My understanding is that the first approach is to take the DTFT of the difference equation to get an equation containing Y(ω) and X(ω) and from it develop H(ω); the second approach is to take the DTFT of the h[n] obtained in part b, then compare?
Yes, but there are two ways to get the frequency response. One method is to compute the DTFT of h[n]. Another method, which I used in class, is to compute the system's response to a complex exponential. You need to use both methods. -pm
Oops, I see what you mean. There was a typo in the question (a missing $ \omega_0 $). -pm
2. I changed the equation in Q4. It was too long to view on portable devices.
3. In the last question, the initial values of y are not given, so the LTI system is not uniquely defined, and hence the inverse ZT of H(z) is not uniquely defined. Do you want us to discuss both situations?
Excellent! You should say something like: "we must assume the system is XXX, otherwise it is not uniquely defined. Now assuming XXX, we have ... (answer the questions)". -pm
Professor, could you give some hints for Q3d? I used factorization, but it seemed too complicated. Should I just use the property of a causal system, like plugging in n=0 so that the parts with negative index are all 0? Does that make sense? Thanks.
Sure. Basically, you have to retrace the steps for deriving H(z) backward. Start from the fact that Y(z)=H(z)X(z), then replace H(z) by its expression in terms of z. After that, multiply both sides of the equation by the denominator of H(z). Can you figure out how to continue? -pm
So should our solution for 3d be recursive, or do we need to factor everything into terms of x[n]? It should be recursive. -pm
Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption
1. Institut für Mathematik, Universität Paderborn, Warburger Str. 100, 33098 Paderborn, Germany
2. School of Science, Xihua University, Chengdu 610039, China
$\left\{ \begin{align} & u_t=\Delta u-\chi \nabla \cdot \left( u\nabla v \right)+\kappa u-\mu u^{2}, \ \ \ \ x\in \Omega,\ t>0, \\ & v_t=\Delta v-uv, \ \ \ \ x\in \Omega,\ t>0, \end{align} \right.$
considered in a bounded domain $\Omega\subset\mathbb{R}^N$ with parameters $\mu>0$ and $\kappa>0$; solutions stabilize toward the equilibrium $(\frac{\kappa}{\mu},0)$.
Keywords: Chemotaxis, logistic source, global existence, boundedness, asymptotic stability, weak solution.
Mathematics Subject Classification: 35Q92, 35K55, 35A01, 35B40, 35D30, 92C17.
Citation: Johannes Lankeit, Yulan Wang. Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12): 6099-6121. doi: 10.3934/dcds.2017262
Under the auspices of the Computational Complexity Foundation (CCF)
This work studies the question of quantified derandomization, which was introduced by Goldreich and Wigderson (STOC 2014). The generic quantified derandomization problem is the following: For a circuit class $\mathcal{C}$ and a parameter $B=B(n)$, given a circuit $C\in\mathcal{C}$ with $n$ input bits, decide whether $C$ rejects all of its inputs, or accepts all but $B(n)$ of its inputs. In the current work we consider three settings for this question. In each setting, we bring closer the parameter setting for which we can unconditionally construct relatively fast quantified derandomization algorithms, and the "threshold" values (for the parameters) for which any quantified derandomization algorithm implies a similar algorithm for standard derandomization.
For constant-depth circuits, we construct an algorithm for quantified derandomization that works for a parameter $B(n)$ that is only slightly smaller than a "threshold" parameter, and is significantly faster than the best currently-known algorithms for standard derandomization. On the way to this result we establish a new derandomization of the switching lemma, which significantly improves on previous results when the width of the formula is small. For constant-depth circuits with parity gates, we lower a "threshold" of Goldreich and Wigderson from depth five to depth four, and construct algorithms for quantified derandomization of a remaining type of layered depth-$3$ circuit that they left as an open problem. We also consider the question of constructing hitting-set generators for multivariate polynomials over large fields that vanish rarely, and prove two lower bounds on the seed length of such generators.
Several of our proofs rely on an interesting technique, which we call the randomized tests technique. Intuitively, a standard technique to deterministically find a "good" object is to construct a simple deterministic test that decides the set of good objects, and then "fool" that test using a pseudorandom generator. We show that a similar approach works also if the simple deterministic test is replaced with a distribution over simple tests, and demonstrate the benefits in using a distribution instead of a single test.
A final version.
* A gap in the proof of Proposition 39 is filled. * There are minor changes in notation and exposition.
An improvement in the parameters of the main theorem for constant-depth circuits, which is obtained by a slightly nicer construction/proof; various minor corrections.
A significant revision, which includes a new and more general main theorem for constant-depth circuits, a new derandomization of the switching lemma, and a clearer exposition of the randomized tests technique.
Goldreich and Wigderson (STOC 2014) initiated a study of quantified derandomization, which is a relaxed derandomization problem: For a circuit class $\mathcal{C}$ and a parameter $B=B(n)$, the problem is to decide whether a circuit $C\in\mathcal{C}$ rejects all of its inputs, or accepts all but $B(n)$ of its inputs.
In this work we make progress on several frontiers that they left open. Specifically, for constant-depth circuits, we construct an algorithm for quantified derandomization that is significantly faster than the best currently-known algorithms for standard derandomization, and works for a parameter $B(n)$ that is only slightly smaller than a "barrier" parameter that was shown by Goldreich and Wigderson. For constant-depth circuits with parity gates, we tighten a "barrier" of Goldreich and Wigderson (from depth five to depth four), and construct algorithms for quantified derandomization of a remaining type of layered depth-$3$ circuit that they did not handle and left as an open problem (i.e., circuits with a top $\oplus$ gate, a middle layer of $\land$ gates, and a bottom layer of $\oplus$ gates).
In addition, we extend Goldreich and Wigderson's study of multivariate polynomials that vanish rarely to the setting of large finite fields. We prove two lower bounds on the seed length of hitting-set generators for polynomials over large fields that vanish rarely. As part of the proofs, we show a form of "error reduction" for polynomials (i.e., a reduction of the task of hitting arbitrary polynomials to the task of hitting polynomials that vanish rarely) that causes only a mild increase in the degree.
\(\mathbb{N}=\{0,1,2,3, \ldots\}\)
However, this is an informal notation, which is not really well-defined, and it should only be used in cases where it is clear what it means. It’s not very useful to say that “the set of prime numbers is {2, 3, 5, 7, 11, 13, . . . }”, and it is completely meaningless to talk about “the set {17, 42, 105, . . . }”. Clearly, we need another way to specify sets besides listing their elements. The need is fulfilled by predicates.
If P(x) is a predicate, then we can form the set that contains all entities a such that a is in the domain of discourse for P and P(a) is true. The notation {x | P(x)} is used to denote this set. This is the intensional definition of the set. The name of the variable, x, is arbitrary, so the same set could equally well be denoted as {z | P(z)} or {r | P(r)}. The notation {x | P(x)} can be read "the set of x such that P(x)". We call this the set-builder notation, as you can think of the predicate as building material for the elements of the set. For example, if E(x) is the predicate 'x is an even number', and if the domain of discourse for E is the set N, then the notation {x | E(x)} specifies the set of even natural numbers. That is,
\(\{x | E(x)\}=\{0,2,4,6,8, \dots\}\)
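As an aside, set-builder notation maps almost literally onto comprehensions in many programming languages; a Python sketch (purely illustrative, with the domain restricted to a finite range so the set is computable):

```python
# { x | E(x) } with E(x) = "x is an even number", domain restricted to 0..9
def E(x):
    return x % 2 == 0

evens = {x for x in range(10) if E(x)}
```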
It turns out, for deep and surprising reasons that we will discuss later, that we have to be a little careful about what counts as a predicate. In order for the notation {x | P(x)} to be valid, we have to assume that the domain of discourse of P is in fact a set. (You might wonder how it could be anything else. That's the surprise!)
Often, it is useful to specify the domain of discourse explicitly in the notation that defines a set. In the above example, to make it clear that x must be a natural number, we could write the set as {x ∈ N | E(x)}. This notation can be read as "the set of all x in N such that E(x)". More generally, if X is a set and P is a predicate whose domain of discourse includes all the elements of X, then the notation
\(\{x \in X | P(x)\}\)
is the set that consists of all entities a that are members of the set X and for which P(a) is true. In this notation, we don't have to assume that the domain of discourse for P is a set, since we are effectively limiting the domain of discourse to the set X. The set denoted by {x ∈ X | P(x)} could also be written as {x | x ∈ X ∧ P(x)}.
We can use this notation to define the set of prime numbers in a rigorous way. A prime number is a natural number n which is greater than 1 and which satisfies the property that for any factorization n = xy, where x and y are natural numbers, either x or y must be n. We can express this definition as a predicate and define the set of prime numbers as
\(\{n \in \mathbb{N} |(n>1) \wedge\)
\(\forall x \forall y((x \in \mathbb{N} \wedge y \in \mathbb{N} \wedge n=x y) \rightarrow(x=n \vee y=n)) \}\)
Admittedly, this definition is hard to take in in one gulp. But this example shows that it is possible to define complex sets using predicates. |
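The prime-number definition above can likewise be transcribed directly; a Python sketch (illustrative only, with the universal quantifiers bounded to a finite range so the predicate is computable):

```python
# { n in N | n > 1  and  for all x, y in N: n = x*y -> (x = n or y = n) },
# with the quantifiers bounded by n so the predicate is decidable by search.
def is_prime(n):
    return n > 1 and all(
        x == n or y == n
        for x in range(1, n + 1)
        for y in range(1, n + 1)
        if x * y == n
    )

primes = {n for n in range(20) if is_prime(n)}
```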
What implementation details need to change if I use a cell average approach rather than a cell total approach for the finite-volume method?
For example, consider the conservation law,
$$ u_t + \mathcal{F}_x = s(x,t) $$
A cell average approach yields,
$$ \frac{\partial}{\partial t}\int_{x_{j-1/2}}^{x_{j+1/2}} u(x,t)~dx = -\int_{x_{j-1/2}}^{x_{j+1/2}} \mathcal{F}_x~dx + \int_{x_{j-1/2}}^{x_{j+1/2}}s(x,t)~dx \\ \frac{\partial}{\partial t} \tilde{u}_j(t) = \frac{1}{x_{j+1/2} - x_{j-1/2}}\left(\mathcal{F}_{j-1/2} - \mathcal{F}_{j+1/2} \right) + \tilde{s}_j(t) $$
The cell total approach is the same but without dividing the flux term by the cell length $(x_{j+1/2} - x_{j-1/2})$ (I am considering 1D only).
When solving the equation numerically, do I need to define $\tilde{u}$ and $\tilde{s}$ differently in these two cases?
I'm a little confused about this subtlety. My first impression was that nothing needs to change other than dividing the flux term by $(x_{j+1/2} - x_{j-1/2})$ when using a cell-averaged approach. Is that correct?
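For what it's worth, here is how one step of the cell-average form might be sketched (Python/NumPy; the linear flux $\mathcal{F}=au$, the upwind interface values, and the periodic boundary are my own example choices, not from the question):

```python
import numpy as np

# One forward-Euler step of u_t + (a u)_x = 0 in CELL-AVERAGE form.
def step_cell_average(u_bar, a, dx, dt):
    F = a * u_bar              # upwind flux at each cell's right face (assumes a > 0)
    F_left = np.roll(F, 1)     # flux at the left face (periodic boundary)
    # cell-average update: the flux difference is divided by the cell length dx
    return u_bar + (dt / dx) * (F_left - F)

u0 = np.array([1.0, 2.0, 0.5, 0.0])
u1 = step_cell_average(u0, a=1.0, dx=0.1, dt=0.05)
```

In the cell-total form the state would be $\tilde{u}_j\,\Delta x$ and the update would use $\Delta t$ alone instead of $\Delta t/\Delta x$; likewise a source term would enter as its integral over the cell rather than its average. So only the bookkeeping of the $\Delta x$ factor changes, not the flux evaluation itself.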
You can use the standalone class to produce tight PDF files for one or multiple TikZ pictures. I originally wrote it to simplify the creation of the many pictures of my thesis. Since v1.0 it includes a convert option which can convert the produced PDF into a graphics file automatically (using external software, which requires the -shell-escape compiler ...
Using PDF as an intermediate format when converting from LaTeX to HTML is not a very good idea. LaTeX and HTML are both mostly structural markup languages, which means you use them to describe the document structure (sections, emphasis, formulas, etc.), whereas PDF is mostly about the representation of your document on screen or paper. When converting LaTeX ...
Often, I have the use-case that I want to convert a given data table into a "suitable" LaTeX table. Typically, my data is of numeric type and requires number formatting, perhaps alignment at a decimal point, and in most cases it requires elementary post-processing (like quotients, differences, gradients). Since I needed such stuff very often, I wrote ...
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately, at present you have to make a small edit, but any text editor will do for that. Given

x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}

make a small html file that looks like

<!DOCTYPE html>
<html>
<head>
<script type="...
I implemented this for a large R&D lab. We produced several hundred (if not thousand) documents per year, and the LaTeX users' community there wanted to be able to produce documents using 'tex as well as WYSIWYG software. The OP was right in that a well-defined workflow is essential. Part of this is the process, but you may also need to think about ...
I am in the typesetting, or composition, business. So publishers are our clients. In the main, publishers themselves do not do anything with the files you send them. They pass them on to their suppliers. These days, most are in India (as is our production office) or other places with lower labour costs. What you need to remember is that to a good ...
I use the following very simple script:

latexml --dest=$1.xml $1.tex
latexmlpost --dest=$1.html $1.xml
ebook-convert $1.html $1.epub --language en --no-default-epub-cover

It makes use of latexml and ebook-convert (the command-line tool that comes with calibre). This works very well with lots of formulas. Once MathML is encoded in newer epub versions ...
Here's a Pandoc-based solution. You will have to enable --shell-escape for this to work, since it uses \write18. Depending on what you want, you may need to customize the Pandoc options.

\documentclass{article}
\usepackage{fancyvrb}
\newenvironment{markdown}%
{\VerbatimEnvironment\begin{VerbatimOut}{tmp.markdown}}%
{\end{VerbatimOut}%
\...
It is true that the main page of tex4ht and its documentation seem obsolete, but the project itself is alive, as can be seen from the history of commits and the mailing list. We added MathJax output support recently, for example. It is also true that the full distribution which is on CTAN hasn't been updated since the original author passed away, but the ...
I was reluctant to publish this humble tool, but here it goes.excel2datatoolexcel2datatool is a Java application I wrote to help me with a personal project. Since it was useful for me and I do believe it's generic enough for other purposes, I decided to make it publicly available. The repository is hosted on GitHub.The whole idea is to read Microsoft ...
If you're going completely free/open source, then I guess dropping MS Word for something like OpenOffice Writer might also be considered. For this, there's OOoLaTeX. From the OOoLaTeX SourceForge project webpage:OOoLatex is a set of macros designed to bring the power of LaTeXinto OpenOffice. It contains two main modules: the first one,Equation, ...
The new version of writer2latex is pretty good. It works with OpenOffice, but I think their command-line utility should work without OO. You can set the quality of the converted document - from LaTeX as clean as possible, to a version which tries to emulate the appearance of the source Word document. Structure and basic formatting should be converted well, but I am ...
Adding\def\patterns#1{}\catcode`\{=12\let\newtoks\relax\let\dump\relax\let\+\relax\let\newinsert\relax\input latex.ltxTo a LaTeX file makes it a plain TeX file.This is a LaTeX answer I gave to a question earlier today which runs without error in pdftex as modified:\def\patterns#1{}\catcode`\{=12\let\newtoks\relax\let\dump\relax\let\+\relax\...
I'm not sure why a Pandoc-based solution/answer on this page harvested so many upvotes while being overly complicated. If you use Pandoc, the method is far easier: just write Markdown. Where you want to apply LaTeX features in your final document, just sprinkle the Markdown with your LaTeX snippets... Here is a working (not so minimal) ...
Excel2LaTeXAn Excel add-in that converts parts of a spreadsheet into equivalent LaTeX tables.Free.CTAN: http://www.ctan.org/tex-archive/support/excel2latex/Source: https://github.com/krlmlr/Excel2LaTeXProsMost Excel formatting is supported.Bold and italic (if applied to the whole cell)Left, right, center, and general alignment (per-cell or per-...
A proper Markdown parser is too complex a task for LaTeX. Not because TeX is not a Turing-complete language (it is), but because it would be very difficult to implement and would probably have very poor performance. One idea which immediately comes to mind is to use LuaTeX and code the Markdown parser in the Lua language. This certainly sounds feasible....
You can use the fmtcount package to achieve that:\documentclass{minimal}\usepackage{fmtcount}\begin{document}4: \numberstringnum{4}31: \numberstringnum{31}\end{document}Use \Numberstringnum and \NUMBERstringnum respectively for capitalized and full-caps versions.
OrgmodeOrgmode is a notes/planning mode for Emacs that has a nice table feature.http://orgmode.org/ProsColumn widths auto-adjust based on contentsBehaves as a spreadsheet, including calculations, row/column insert and delete, row/column moves, rectangular selectionImport and export of TAB or whitespace separated dataLots of export options (LaTeX, ...
Assume that you want to create an animation as follows. Step 1: Create a PDF file consisting of at least 2 pages. Compiling the following input file with either latex-dvips-ps2pdf or xelatex produces a PDF file with 30 pages. Each page represents one frame.% input.tex\documentclass[pstricks,border=0pt]{standalone}\usepackage{pstricks-add,fp}\FPeval{Size}...
Check out InftyReader.Quoting the start page:InftyReader is an Optical Character Recognition (OCR) application that recognizes and translates scientific documents (including math symbols) into LaTeX, MathML and XHTML!
Update 2012-01: The standalone class now has a varwidth option, so with the current release one would simply use the [varwidth] class option:\ifdefined\formula\else\def\formula{E = m c^2}\fi\documentclass[border=2pt,varwidth]{standalone}\usepackage{standalone}\usepackage{amsmath}\begin{document}\[ \formula \]\end{document}Save the ...
The Mathpix app (for iOS only, Android coming soon) actually does this all on your phone via the camera. Just take pictures, and you can export as LaTeX, PDF, or you can get an Overleaf link (they have a really nice browser-based editor). The iOS link is:https://itunes.apple.com/us/app/mathpix/id1075870730?ls=1&mt=8and the main website is just http:...
After an add-in has been installed, it is available to use, but it has to be activated first. For this, follow these instructions. An abbreviated list of steps is:Click the Microsoft Office Button.Click Excel Options.Click Add-Ins.Note the Add-in Type in the list displayed.Select the Add-in Type in the manage box and click Go.Select [clear] the check ...
You can define a macro as follows:\newcommand\twodigits[1]{%\ifnum#1<10 0#1\else #1\fi}\twodigits{12} % 12\twodigits{4} % 04\twodigits{123} % 123This macro is fully expandable.If you also want to cut leading zeros you can use:\newcommand\twodigits[1]{%\ifnum#1<10 0\number#1 \else #1\fi}\twodigits{004} % 04If you want to ...
ConTeXt does not directly output XHTML, it outputs XML. However the current browsers (at least Opera, Firefox and Chromium) are able to display XML correctly. The XML can be styled using CSS.When you want real XHTML, you have to transform the XML to XHTML using external tools. ConTeXt standalone ships with an example file: texmf-context/tex/context/base/...
Proof of concept
The following code implements a "proof of concept" showing that the approach (1) I proposed in another answer is feasible. This example defines a Markdown environment which dumps its contents verbatim to an auxiliary file (called \jobname-aux.md), and immediately uses Lua to parse that file as markdown. For this parsing it uses the library ...
I'm the developer of MyScript (formerly VisionObjects), but I'm not in the research team. TeX characters are not all supported. Gregory posted on HN a list of chars we support. Suggestions are welcome. Do not hesitate to send us missing symbols or UI improvement ideas.
Let ##(x_1,x_2,x_3)=\vec{r}(\theta,\phi)## be the parametrization of a usual sphere.
If we consider a projection in two dimensions ##(a,b)=\vec{f}(x_1,x_2,x_3)##, then I don't understand how to use the metric, since ##g_{ij}=\langle \frac{\partial\vec{f}}{\partial x_i}|\frac{\partial\vec{f}}{\partial x_j}\rangle## is a 3x3 matrix but we have only two coordinates ##a,b## in the projection.
This content will become publicly available on October 1, 2020.
Dark Energy Survey Year 1 Results: Methods for Cluster Cosmology and Application to the SDSS
Abstract
We perform the first blind analysis of cluster abundance data. Specifically, we derive cosmological constraints from the abundance and weak-lensing signal of \redmapper\ clusters of richness $\lambda\geq 20$ in the redshift range $z\in[0.1,0.3]$ as measured in the Sloan Digital Sky Survey (SDSS). We simultaneously fit for cosmological parameters and the richness--mass relation of the clusters. For a flat $\Lambda$CDM cosmological model with massive neutrinos, we find $S_8 \equiv \sigma_{8}(\Omega_m/0.3)^{0.5}=0.79^{+0.05}_{-0.04}$. This value is both consistent and competitive with that derived from cluster catalogues selected in different wavelengths. Our result is also consistent with the combined probes analyses by the Dark Energy Survey (DES) and the Kilo-Degree Survey (KiDS), and with the Cosmic Microwave Background (CMB) anisotropies as measured by \planck. We demonstrate that the cosmological posteriors are robust against variation of the richness--mass relation model and to systematics associated with the calibration of the selection function. In combination with Baryon Acoustic Oscillation (BAO) data and Big-Bang Nucleosynthesis (BBN) data, we constrain the Hubble rate to be $h=0.66\pm 0.02$, independent of the CMB. Future work aimed at improving our understanding of the scatter of the richness--mass relation has the potential to significantly improve the precision of our cosmological posteriors. The methods described in this work were developed for use in the forthcoming analysis of cluster abundances in the DES. Our SDSS analysis constitutes the first part of a staged-unblinding analysis of the full DES data set.
Authors:
Publication Date:
Research Org.: Brookhaven National Lab. (BNL), Upton, NY (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
Contributing Org.: DES; DES Collaboration
OSTI Identifier: 1487402
Alternate Identifier(s): OSTI ID: 1557009; OSTI ID: 1560421
Report Number(s): arXiv:1810.09456; FERMILAB-PUB-18-554-AE; oai:inspirehep.net:1699971
Grant/Contract Number: AC02-07CH11359; AC02-76SF00515; SC0015975; AC02-05CH11231; 716762; FG-2016-6443; AC05-00OR22725
Resource Type: Accepted Manuscript
Journal Name: Mon.Not.Roy.Astron.Soc.
Additional Journal Information: Journal Volume: 488; Journal Issue: 4
Country of Publication: United States
Language: English
Subject: 79 ASTRONOMY AND ASTROPHYSICS; galaxies: clusters: general; cosmological parameters; large-scale structure of Universe
Citation Formats
Costanzi, M., and et al. Dark Energy Survey Year 1 Results: Methods for Cluster Cosmology and Application to the SDSS. United States: N. p., 2019. Web. doi:10.1093/mnras/stz1949.
Oh great shaman!
Somehow the village idiot got his hands on this fancy control machine controlling things. Obviously, we also want to control things (who wouldn’t?), so we reverse-engineered the code. Unfortunately, the machine is cryptographically protected against misuse.
Could you please maybe spend a few seconds of your inestimably valuable time to break that utterly simple cryptosystem and enlighten us foolish mortals with your infinite wisdom?
nc 104.155.168.28 31031
NOTE: Since I am really bad at math, the share received from the server won't be accepted when sent back. Don't get confused by this — the challenge is solvable nevertheless.
Summary: hash length extension, manipulation of secret shares.
The challenge server consists of a mix of secret sharing, authentication and command execution. Briefly, it works as follows:
The server generates a command of the form
cmd = checksum + "echo Hello!#" + [32 random bytes],
and interprets it as an integer modulo a 256-bit prime $p$. The server splits the command into 3 shares with a threshold of 2, meaning that knowing any 2 of the 3 shares is enough to recover the secret. Shamir's secret sharing is used (with polynomials over $\mathbb{F}_p$). One of the shares is signed by prepending a MAC:
signed_share = SHA256(key + share) + share.
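This prefix-MAC construction can be sketched as follows (my own toy code, with placeholder key and share values, not the challenge's); hashing the key as a plain prefix is exactly what makes SHA-256's Merkle-Damgard structure extendable later on:

```python
import hashlib

def sign_share(key: bytes, share: bytes) -> bytes:
    # signed_share = SHA256(key + share) + share
    return hashlib.sha256(key + share).digest() + share

def verify(key: bytes, signed: bytes) -> bool:
    tag, share = signed[:32], signed[32:]
    return hashlib.sha256(key + share).digest() == tag

signed = sign_share(b"secret-key", b"some-share-bytes")
assert verify(b"secret-key", signed)
assert not verify(b"wrong-key", signed)
```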
The signed share is sent to the client. Now the server listens for queries from the client; the number of queries is limited to 0x123. In each query, the client can send a signed share. The server combines it with another share, checks the checksum and, if it is good, executes the command.
Modifying the share
First observation: if the MAC is good enough, we can't generate any other signed share and therefore we can't execute any command other than "echo Hello". Therefore we have to attack the MAC. Luckily, it is vulnerable to the well-known hash length extension attack. There is a very convenient tool called hash_extender for performing the attack.
The attack allows us, given a hash of the form
SHA256(key + share), to append some uncontrolled padding plus an arbitrary amount of data to the hash input. The appending is done in such a way that we can compute the new hash value without knowing the key. That is, we can generate more signed shares of the form
share + [padding] + [any data].
What does this give us? The share has the format (32-byte $x$, 32-byte $y$), with the values packed in little-endian order (least significant bytes first). When the values are unpacked, everything after $y$ is considered part of $y$. Even if it is more than 32 bytes, it is taken modulo $p$. That is, we can modify the share's $y$ by adding $2^{256} \times padding + 2^{256+e} \times value$ to it, where $e$ is determined by the padding. Since the result is reduced modulo $p$, we can actually obtain an arbitrary $y_{wanted}$. Indeed, setting $$value \equiv (y_{wanted} - y_{orig} - 2^{256} \times padding) / 2^{256+e} \pmod {p}$$ will do the job.
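A quick numeric check of this formula with toy stand-in values (the prime, padding, and bit offset below are placeholders of my choosing, not the challenge's real parameters; `pow(x, -1, p)` needs Python 3.8+):

```python
p = 2**255 - 19            # stand-in prime
y_orig = 123456789
padding = 0x80             # stands for the fixed glue bytes
e = 64                     # bit offset contributed by the padding
y_wanted = 0xDEADBEEF

# value = (y_wanted - y_orig - 2^256 * padding) / 2^(256+e)  (mod p)
value = (y_wanted - y_orig - 2**256 * padding) * pow(2**(256 + e), -1, p) % p
# appending the crafted bytes adds 2^256*padding + 2^(256+e)*value to y
y_new = (y_orig + 2**256 * padding + 2**(256 + e) * value) % p
assert y_new == y_wanted
```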
Here’s python code for modifying the signed share’s $y$ component to arbitrary value:
```python
# hash_extender is a wrapper around the command-line hash_extender
# trick: first we append nothing to obtain the base value
testdata, _sig = hash_extender(data=share, hash=sig, append="",
                               hashname="sha256", length=keylen)

# y = 2^e * c + sometrash (mod p)
# c = (y - sometrash) / 2^e (mod p)
e = len(testdata) * 8
inv2e = invmod(2**e, p)
trash = from_bytes(testdata) % p

def gen_share_with_y(y):
    c = (y - trash) * inv2e % p
    newdata, newsig = hash_extender(data=share, hash=sig, append=to_bytes(c),
                                    hashname="sha256", length=keylen)
    return newsig + newdata.encode("hex")
```

Digging into the secret sharing
Our final goal is to execute some evil command on the server. The command is obtained by combining two shares: our share and one of the server's shares. We can partially modify our share, but the server's share stays untouched! Is it still possible to change the command in a meaningful way? Hoping for lucky random commands (like "sh\n") is a bad idea, since the command is prefixed with a SHA256 checksum…
Let’s look closer at the sharing scheme. When splitting the shares, the server creates a random polynomial of degree 1 (it has (threshold) coefficients) with the constant coefficient equal to the secret being shared. Then it is evaluated on three random points, and the three $(x_i, y_i)$ pairs are the shares. For a polynomial of degree 1, any two different points are enough to recover the full polynomial, that’s how it works! The interpolation algorithm is implemented on the server and we actually don’t care about the implementation, we care only about semantics of it.
We can think of the combining step as follows: the server has two shares $(x_1, y_1)$ and $(x_2, y_2)$ and wants to find two coefficients $a, b$ in the finite field $\mathbb{F}_p$ for which the equation $ax + b = y$ holds for both shares. That is, it solves the following linear system with two unknowns $a, b$:
$$\begin{cases} a x_1 + b = y_1,\\ a x_2 + b = y_2. \end{cases}$$
Then $b$ is the constant coefficient of the polynomial and hence the initial secret.
Let’s say our share is number 2, so we control $y_2$. Let’s solve the system for $b$:
$$\begin{split} a & = \frac{y_1 - y_2}{x_1 - x_2}, \\ b & = y_2 - a x_2 = y_2 - x_2\frac{y_1 - y_2}{x_1 - x_2} = y_2 - u + y_2 / v = w y_2 - u. \end{split}$$
Note that we have made some variable replacements, since we don't really care about all the values $(x_1, y_1, x_2)$, but only about the minimum number of expressions involving them. We obtained that
the algorithm the server uses to combine the shares is basically a linear function of $y_2$, which we control! So we have only two unknowns, the coefficients $w$ and $u$. How can we learn them? Obviously we need to get some information from the server, since we know nothing about share #1.
Recovering the linear coefficients
For simplicity, let’s rename variables and assume that the server computes $secret \equiv a y_2 + b \pmod{p}$ ($a$ and $b$ are the new unknowns, unrelated to previous ones).
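As a quick sanity check that the combine step really is affine in $y_2$, here is a toy-field version (a small prime and arbitrary fixed points of my choosing, not the challenge's values):

```python
# Recover the constant term of the line through the two shares,
# then verify that it fits secret = (A*y2 + B) mod p.
p = 10007                      # small stand-in prime
x1, y1, x2 = 3, 42, 7          # server's fixed share and our share's x

def combine(y2):
    # line through (x1, y1) and (x2, y2); return its value at x = 0
    slope = (y1 - y2) * pow(x1 - x2, -1, p) % p
    return (y1 - slope * x1) % p

B = combine(0)
A = (combine(1) - B) % p
for y2 in (0, 1, 5, 1234, p - 1):
    assert combine(y2) == (A * y2 + B) % p
```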
Since we have no information about $a, b$, we can't set $secret$ in a meaningful way at first. Therefore the checksum check will fail… and the server will leak part of the $secret$ that it combined!
```python
def unpack(msg, key=b''):
    tag = msg[:SHA.digest_size]
    if tag == SHA.new(key + msg[len(tag):]).digest():
        return msg[len(tag):]
    print('bad {}: {}'.format(('checksum', 'tag')[bool(key)], tohex(tag)))
```
Unluckily, the server leaks only the 256 least significant bits of the secret (out of 512). To sum up, we have the following oracle:
(Share with $y_2$) $\mapsto$ (256 LSBs of the resulting secret).
First, we can query $y_2 = 0$ and obtain the LSBs of $secret = (0 \cdot y_2 + b) \mod{p} = b$. Then we can query $y_2 = 1$ and obtain the LSBs of $(a + b) \mod{p}$, which quite often will be equal to just the LSBs of $a + b$; therefore we learn the LSBs of $a$ as well. So we can learn the 256 LSBs of $a$ and $b$ in 2 queries.
Now, let’s try to shift $a$ down so that we learn the MSBs of it. To do this we set $y_2$ to $2^{-256} \pmod{p}$. However this is not exactly the shift due to modulo $p$. What we obtain is:
$$(y_2 a + b) \mod{p} = (2^{-256} a + b) \mod {p} = ((a + kp) / 2^{256} + b) \mod {p},$$
where $k$ is such that $a + kp \equiv 0 \pmod{2^{256}}$ and the last division is exact over the integers. Luckily, since we know the LSBs of $a$, that is, $a \bmod 2^{256}$, we can predict the LSBs of $k$:
$$k \equiv -a/p \pmod{2^{256}}.$$
Now comes an interesting point. The smallest such $k$, call it $k_0$, is less than $2^{256}$. All other $k$ satisfying the condition can be written as $k = k_0 + t2^{256}$ for some integer $t$. Note that:
$$\begin{split} ((a + kp) / 2^{256} + b) \mod{p} & = ((a + k_0p + 2^{256}tp) / 2^{256} + b) \mod{p}\\ & = ((a + k_0p) / 2^{256} + b + tp) \mod{p}, \end{split}$$
…and $tp$ vanishes thanks to the reduction modulo $p$! This means that we only need to know $k \equiv -a/p \pmod{2^{256}}$, which we can compute as shown above.
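The whole shift trick can be checked at toy scale, with 16-bit words standing in for the 256-bit halves (all the numbers below are made up by me):

```python
W = 2**16
p = 65521                        # prime just below 2^16, stand-in for the real p
a, b = 31337, 4242               # the unknowns of the real attack
y2 = pow(W, -1, p)               # plays the role of 2^-256 mod p
r = (a * y2 + b) % p             # what the oracle combines (before truncation)
k = (-a * pow(p, -1, W)) % W     # k = -a/p mod 2^W, from the known LSBs of a
assert (a + k * p) % W == 0      # so the division below is exact
assert r == ((a + k * p) // W + b) % p
```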
Moreover, we can guess whether the final addition of $b$ overflowed the modulus or not. Let's assume that it did not; this is quite probable. Then for the query $y_2 = (2^{-256} \mod{p})$ we get (note the modulus change):
$$r = (2^{-256}a + b) \mod{p} \equiv (a + kp) / 2^{256} + b \pmod{2^{256}}.$$
Then using known LSBs of $b$ we get:
$$2^{256} (r - b) \equiv a + kp \pmod {2^{512}},$$ and therefore $$a \equiv 2^{256} (r - b) - kp \pmod {2^{512}}.$$
We learned the full $a$! Recall that we assumed that the addition of $b$ does not overflow the modulus. We can guess this and try to add $p$, or first query $y_2 = 2^{-128}$ instead of $y_2 = 2^{-256}$ and learn the full $a$ in two steps. This trick lets us notice the extra subtraction of $p$, since in the first query half of the obtained bits should match the known bits of $a$.
OK, how do we learn the full $b$ now? Sadly, we can't shift it, and the effect of its high bits is quite limited. We can, however, exploit the effect of $b$ on overflowing the modulus: for specially crafted queries we will check how many times we overflow the modulus and deduce some information about $b$. Recall that we can query
$$(a y_2 + b) \mod {p} = a y_2 + b - kp$$
for some $k$. Note that since $b < p$, its value may change $k$ only by $1$. Let's perform a binary search for the real value of $b$. Assume that we know $b_l \le b \le b_r$ for some known $b_l, b_r$. Let $mid = (b_l + b_r) / 2$. Then we can craft $y_2$ such that for all $b_l \le b < mid$ we get a known $k = k_0$, and for $mid \le b \le b_r$ we get $k = k_0 - 1$. Such a $y_2$ can be obtained as $y_2 \equiv -mid / a \pmod{p}$, since then $$(y_2 a + b) \mod{p} = (b - mid) \mod{p}$$ and we get either $b - mid$ or $b - mid + p$. Since we know the LSBs of $b$ and $mid$, we can easily distinguish the two cases and halve the search space for $b$. Note that we need to learn only the 256 MSBs of $b$, so around 256 queries suffice for this binary search.
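The distinguishing step can be illustrated with small numbers (toy prime and values of my choosing): whether the reduction wraps tells us on which side of $mid$ the value $b$ lies.

```python
p = 10007
a = 123
ia = pow(a, -1, p)

def query(mid):
    # crafted y2 so that a*y2 + b == b - mid (mod p)
    return (-mid) * ia % p

# b = 6000 >= mid = 5000: no wrap, residue is b - mid
assert (a * query(5000) + 6000) % p == 6000 - 5000
# b = 4000 < mid = 5000: wraps, residue is b - mid + p
assert (a * query(5000) + 4000) % p == 4000 - 5000 + p
```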
To sum up, we need 2 queries to learn the LSBs of $a$ and $b$, 2 more queries to learn the full $a$, and at most 256 queries to learn the full $b$.
Here’s python code for recovering $a, b$:
```python
def oracle(y):
    assert 'what have you got' in f.read_line()
    f.send_line(gen_share_with_y(y))
    res = f.read_line()
    assert "bad checksum" in res, "oracle failed: %r" % res
    return from_bytes(res.split()[-1].decode("hex"))

MOD = 2**256
blow = oracle(0) % MOD
alow = (oracle(1) - blow) % MOD

i2 = invmod(2, p)
a = alow
for e in (128, 256):
    mod2 = 2**e
    k = invmod(p, mod2) * (-a) % mod2
    assert (a + k * p) % mod2 == 0
    res = (oracle(i2**e) - blow) % MOD
    shifted = ((res << e) - k * p) % (MOD << e)
    # did we get additional -p because of b?
    if shifted % MOD != a % MOD:
        res = (oracle(i2**e) - blow + p) % MOD
        shifted = ((res << e) - k * p) % (MOD << e)
        assert shifted % MOD == a % MOD
    a |= shifted
print "Learned full a:", tohex(a)
if a >= p:
    assert 0, "Failed, seems in the second query there was an overflow."

ia = invmod(a, p)
bl = 0
br = p - a - 1
while bl < br:
    mid = (bl + br) // 2
    x = ia * (-mid) % p
    assert (x * a + mid) % p == 0
    res = oracle(x) % MOD
    if res == (blow - mid) % MOD:
        bl = mid
    elif res == (blow - mid + p) % MOD:
        br = mid - 1
    else:
        assert 0, "Failed, seems in the second query there was an overflow."
    if bl >> 256 == br >> 256:
        break
b = br - br % MOD + blow
print "Learned full b:", tohex(b)
if b >= p:
    assert 0, "Failed, seems in the second query there was an overflow."
```

Forging the Evil Command
Finally, when we have learned $a$ and $b$, we can forge and execute arbitrary commands:
```python
def pack(msg, key=''):
    return SHA.new(key + msg).digest() + msg

packed = from_bytes(pack(r"echo PWNED; bash"))
assert packed < p
y = (packed - b) * ia % p
assert (a * y + b) % p == packed

print "Command executing:"
assert 'what have you got' in f.read_line()
f.send_line(gen_share_with_y(y))
f.interact()
```
In half of the runs we will get a shell! |
Let $C$ be a smooth projective connected curve of genus $g$ over $\bar{\mathbf{Q}}$. Fix a finite
non-empty set of closed points $S$ in $C$ and let $U$ be the complement of $S$ in $C$.
Q1. (Algebraic formulation) Does there exist a finite (surjective) morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ such that $\pi|_{U}$ is etale?
Equivalently, let $X$ be a compact connected Riemann surface of genus $g$ which can be defined over $\bar{\mathbf{Q}}$ and let $B$ be a finite set of closed points in $X$ with complement $Y$.
Q1. (Analytic formulation) Does there exist a finite topological cover $Y\longrightarrow \mathbf{P}^1(\mathbf{C})-\{0,1,\infty\}$?
The equivalence of these two questions follows from the proof of Belyi's theorem and Riemann's existence theorem.
If the answer to Question 1 is positive, I would be very interested in knowing if the degree of $\pi$ can be bounded effectively.
Q2. Does there exist a finite (surjective) morphism $\pi:C\longrightarrow \mathbf{P}^1$ such that $\pi|_{U}$ is etale and $\deg \pi \leq c$, where $c$ is a constant depending only on $S$ and $g$?
Example. Suppose that $g=0$. Then, following Belyi's proof of his theorem, the answer to Question 1 is yes. The answer to Question 2 is also positive, and an explicit upper bound for such a rational function is given by Khadjavi in An effective version of Belyi's Theorem.
I don't expect the answer to Question 1 to be easy. In fact, what I'm asking is to prove the existence of a Belyi morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ with
prescribed ramification. Now, that's probably very hard, but definitely very interesting to find out.
Trivial Remark. Suppose that $g>1$. Then the automorphism group of $C$ is finite. Choose a Belyi morphism $\pi:C\longrightarrow \mathbf{P}^1_{\bar{\mathbf{Q}}}$ and let $U_0\subset C$ be the complement of the ramification points of $\pi$. Then we see that Question 1 has a positive answer if we take $U$ to be $\sigma(U_0)$ with $\sigma$ an automorphism of $C$. But those are only finitely many examples.
HP 17bII+ Silver solver
09-14-2018, 09:20 PM
Post: #1
HP 17bII+ Silver solver
I'm very satisfied with my HP 17bII+ Silver, which I find very powerful, but also nice-looking and not too complicated (no [f] and [g] shifted functions on the keys like the HP 12c or HP 35s that I used to use at work every day, but a really complete calculator except for trig and complex number calculations).
So the 17BII+ is my new everyday calculator. I won't revisit the arguments about a good (HP) calculator being a perfect complement or substitute for Excel.
I chose the 17BII+ after having carefully studied the programs I use in my day to day work:
- price and cost calculations: can be modeled in the solver, with 4 or 5 equations and shared variables
- margin calculations: built-in functions
- time value of money: built-in functions
- time functions: built-in functions
For the few moments I need trig or complex calculations, I always have Free42 or a real 35s / 15c not far from me.
After having rebuilt my work environment in the (so powerful) solver, I started to study the behavior of the solver, looking at loops (Σ function), conditional branchings (IF function), menu selection (S function), and the Get and Let functions (G(), L()).
I googled a lot and found an interesting PDF file about the solvers of the 19B, 17B, 17BII and, according to the author, the 17BII+ Silver.
The file is here : http://www.mh-aerotools.de/hp/documents/...ET-LET.pdf
I tried a few equations in the "Using New and Old Values" chapter, pages 4 and following.
There I found lots of differences with my actual 17BII+ Silver, which I would like to share here with you.
For instance, the following equation found page 4:
Code:
In the next example found page 5:
Code:
Then the equation :
Code:
I don't understand the first 2 cases. In the first one, A should be set to -B, not B, if the "old" value of A is not used. In the second one, the solver does not use the "old" value, but neither does it solve the equation, as there is no defined solution.
In the last case, I understand that the equation is evaluated twice before finding an answer. So the old value is used there, but not in the way I expected.
I finally found one - and only one - way to use iterations in the solver, with the equation:
Code:
Note that neither A=G(A)+1 nor A=1+G(A) nor G(A)+2=A works.
I'm not disappointed, as the solver is a really interesting and useful feature of the calculator, but I'm just surprised not to have found more working cases of iterations, or a clear understanding of how the solver works.
Comments are welcome.
Regards,
Thibault
09-14-2018, 10:14 PM (This post was last modified: 09-14-2018 10:25 PM by rprosperi.)
Post: #2
RE: HP 17bII+ Silver solver
The 17BII+ (Silver Edition) is an excellent machine, in fact it has the best keyboard of any machine made today by HP, but the solver does have a bug, and there are also a few other smaller issues making the solver slightly inferior to the 17B/17BII/19B/19BII/27S version.
See these 3 articles for details:
http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=242551
http://www.hpmuseum.org/forum/thread-657...l#pid58685
http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=134189
Overall these are not dramatic issues, and once you understand the solver bug, you can likely create equations that avoid the issue.
I have most HP machines but the 17BII and 17BII+ are the ones I use most often for real work (vs. playing, exploring or following along interesting threads here).
Edit: added 3rd link
--Bob Prosperi
09-15-2018, 12:05 AM
Post: #3
RE: HP 17bII+ Silver solver
(09-14-2018 09:20 PM)pinkman Wrote: I'm not disappointed, as the solver is a really interesting and useful feature of the calculator, but I'm just surprised not having found more working cases of iterations, or a clear understanding of how the solver works.
Thibault, when Kinpo built the 17bii+ calculator years ago (both the gold one and the silver one), they basically goofed the solver implementation. This has been discussed at length in the HPMuseum forum. The solvers on the 17b and 17bii work fine, just as you would expect them to. If you plan on making significant use of the solver and its incredible capabilities, forget the + and get an original 17b or 17bii. You won't be sorry.
Also, get the manual (it's on the Museum DVD) Technical Applications for the HP-27s and HP-19b. It applies to the 17b as well.
The Sigma function also works fine on the 17b and 17bii.
09-15-2018, 04:36 AM
Post: #4
RE: HP 17bII+ Silver solver
Thanks to both of you for the details, links and advice.
I've read the threads carefully; I had not found them by myself at first. It's the end of the night now, so I'll do some testing later.
09-15-2018, 11:55 PM (This post was last modified: 09-15-2018 11:58 PM by rprosperi.)
Post: #5
RE: HP 17bII+ Silver solver
If you want to really explore the capabilities of the awesome Pioneer Solver, and confidently try to push it without worrying about using L() this way or that, I agree with Don, buy a 17BII and use that for the Solver stuff, but continue to use the 17BII+ for every day stuff.
Here's a very nice 17BII for only $25 (shipping included, in US) and you can even find them cheaper if you're willing to wait:
https://www.ebay.com/itm/HP-17Bll-Financ...3252533550
The 17BII is bug-free for solver use, while the 17BII+ has a much better LCD, readable in a wider range of lighting & use conditions.
In case you haven't seen this yet, here's an example of what can be done with the solver:
http://www.hpmuseum.org/forum/thread-2630.html
--Bob Prosperi
09-16-2018, 09:59 PM
Post: #6
RE: HP 17bII+ Silver solver
Well I'll try to find one, even if I'm not in the US.
I also want to keep using my current 17bII+ at work, as the solver is powerful enough for me (for the moment), and it looks really good.
Don and you Bob have done a lot to help understand what the solver can do, that's pretty good stuff.
Regards,
Thibault
09-16-2018, 10:23 PM
Post: #7
RE: HP 17bII+ Silver solver
Nah, Don and Gerson are the real Pioneer Solver Masters, I just have a good collection of links.
Of all the various tools built-in to the various calculator models I've explored, the Pioneer Solver is easily the one that most exceeds its initial apparent capability. This awesome tool must have been incredibly well-tested by the QA team. The fact that the sheer size (and audacity!) of Gerson's Trig formulas still return amazingly accurate results says more about the underlying design and code than any comments I could add.
Enjoy exploring it, and when you've mastered it (or at least tamed it a bit), come back here and share some interesting Solver formulas. There are numerous folks here that enjoy entering the formulas and running some test cases (well, I suppose "enjoy" is not really the right word for entering the formulas; "feel good about accomplishing it successfully" is more accurate).
When I see some of these long Solver equations, it reminds me of TECO commands back in the PDP-11 days.
--Bob Prosperi
09-16-2018, 11:05 PM
Post: #8
RE: HP 17bII+ Silver solver
If you feel like entering long formulas you can solve the 8-queens problem.
):0)
Cheers
Thomas
09-17-2018, 06:30 AM (This post was last modified: 09-18-2018 09:47 AM by Don Shepherd.)
Post: #9
RE: HP 17bII+ Silver solver
How about a nifty, elegant, simple number base conversion Solver equation for the 17b/17bii, courtesy Thomas Klemm:
BC:ANS=N+(FROM-TO)×Σ(I:0:LOG(N)÷LOG(TO):1:L(N:IDIV(N:TO))×FROM^I)
Note: either FROM or TO must be 10 unless you are doing HEX conversions
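For readers without a 17b at hand, here is a rough Python transcription of that Solver equation; the function name and my reading of the digit encoding are my own, and the Solver's L() store is modeled by an ordinary assignment:

```python
from math import floor, log

def base_convert(n, frm, to):
    """Python sketch of the Solver equation
    BC: ANS = N + (FROM-TO) * Sigma(I:0:LOG(N)/LOG(TO):1:L(N:IDIV(N:TO))*FROM^I).
    With frm=10, to=b it rewrites n's value in base b, shown as a decimal numeral;
    with frm=b, to=10 it converts such a numeral back."""
    total = 0
    m = n
    for i in range(floor(log(n) / log(to)) + 1):
        m = m // to                      # plays the role of L(N:IDIV(N:TO))
        total += m * frm ** i
    return n + (frm - to) * total
```

For example, `base_convert(5, 10, 2)` gives 101 (the binary digits of five, displayed as a decimal numeral), and `base_convert(101, 2, 10)` gives back 5.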
09-17-2018, 07:49 AM
Post: #10
RE: HP 17bII+ Silver solver
09-17-2018, 09:17 AM (This post was last modified: 09-17-2018 10:28 AM by Don Shepherd.)
Post: #11
RE: HP 17bII+ Silver solver
(09-17-2018 07:49 AM)Thomas Klemm Wrote:
Wow, I didn't realize that Thomas. Stunning.
Don
09-17-2018, 10:24 AM
Post: #12
RE: HP 17bII+ Silver solver
As has already been written, unfortunately the solver in this calculator is flawed.
Part of the problem comes from the fact that it evaluates the equation twice, which leads to incrementing by two instead of one, etc. But there seem to be more quirks.
This is too bad, as the solver is a very capable tool (with G() and L()), while still easy and versatile to use (with the menu buttons).
Accomplishing something similar (solve for one variable today, for another tomorrow) with modern tools like Excel is more complicated.
Martin
09-17-2018, 12:48 PM
Post: #13
RE: HP 17bII+ Silver solver
(09-16-2018 10:23 PM)rprosperi Wrote: Nah, Don and Gerson are the real Pioneer Solver Masters, I just have a good collection of links.
An obvious correction is in order here:
Don, Gerson and Thomas Klemm are the real Pioneer Solver Masters...
No disrespect intended by the omission. A review of past posts of significant solver formulas quickly reveals just how often these three guys were the authors.
--Bob Prosperi
09-28-2018, 01:33 PM
Post: #14
RE: HP 17bII+ Silver solver
Rest assured, I have great respect for everyone mentioned above.
Thanks for the advice; my new old 17bii is now in my hands ($29 on French TAS). I'm just waiting for the next rainy Sunday to push the solver to its limits (mmh, to MY limits I guess).
09-28-2018, 07:47 PM (This post was last modified: 09-28-2018 07:48 PM by Jlouis.)
Post: #15
RE: HP 17bII+ Silver solver
Just one doubt here: is the solver of the 18C and 19B/BII the same as that of the 17BII and 27S?
TIA
09-29-2018, 01:37 AM
Post: #16
RE: HP 17bII+ Silver solver
(09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: is the solver of the 18C and 19B/BII the same as that of the 17BII and 27S?
Check out the document by Martin Hepperle in this thread which lists the functions available for each calculator. The document is an excellent introduction to the Solver application.
Solver GET-LET
~Mark
Who decides?
09-29-2018, 02:25 AM
Post: #17
RE: HP 17bII+ Silver solver
(09-29-2018 01:37 AM)mfleming Wrote: (09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: is the solver of the 18C and 19B/BII the same as that of the 17BII and 27S?
Thanks mfleming, I should have read the beginning of the manual. It is the same solver across the calculators in my question. I prefer to use the 19bII due to the dedicated alpha keyboard, but the 27s is a pleasure to use too.
Cheers
09-29-2018, 11:13 AM
Post: #18
RE: HP 17bII+ Silver solver
09-29-2018, 11:53 AM (This post was last modified: 09-29-2018 05:12 PM by Don Shepherd.)
Post: #19
RE: HP 17bII+ Silver solver
(09-28-2018 07:47 PM)Jlouis Wrote: Just one doubt here: is the solver of the 18C and 19B/BII the same as that of the 17BII and 27S?
Here is a 3-page cross-reference I created a few years ago of Solver functions present in the 19bii, 27s, and 17bii.
solver 1.PDF (Size: 1.08 MB / Downloads: 33)
solver 2.PDF (Size: 968.77 KB / Downloads: 21)
solver 3.PDF (Size: 494.75 KB / Downloads: 19)
09-29-2018, 01:36 PM (This post was last modified: 09-29-2018 01:49 PM by Jlouis.)
Post: #20
RE: HP 17bII+ Silver solver
(09-29-2018 11:53 AM)Don Shepherd Wrote:
Many thanks, Don, for taking the time to make these documents.
This is what makes MoHC a fantastic community.
Cheers
JL
Edited: Looks like the 19BII is a little more complete than the 27s, and the 17BII is the weakest one.
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
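Partial sums of $\Psi(2)$ are easy to explore numerically; here is a small sketch (function names are my own) using a basic sieve for $\pi(n)$:

```python
def prime_counts(N):
    """Return the list [pi(0), pi(1), ..., pi(N)] via a simple sieve."""
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, N + 1, p):
                is_prime[q] = False
    pi, count = [0] * (N + 1), 0
    for n in range(N + 1):
        count += is_prime[n]
        pi[n] = count
    return pi

def psi_partial(s, N):
    """Partial sum of 1/pi(n)^s over n = 2..N."""
    pi = prime_counts(N)
    return sum(1 / pi[n]**s for n in range(2, N + 1))
```

Since $\pi(n) \sim n/\ln n$, the terms behave like $(\ln n)^s/n^s$, so the series converges for $s > 1$, though the partial sums stabilize slowly.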
Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ in which the $x$-th bit is set to 1, $y$ bits in total are set to 1 (including the $x$-th bit), and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals?
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb{R}$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb{R}$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\,ds$ for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right| < 1$, then it converges to zero.
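As a numerical illustration (helper functions are mine), take the polynomial case $g(t) = 1$, for which $g_n(t) = t^{n-1}/(n-1)!$ exactly and $n!\,g_n(t) = n\,t^{n-1} \to 0$ for $t \le 1/2$:

```python
from math import factorial

def iterated_integral_poly(coeffs, n):
    """Integrate a polynomial (ascending coefficient list) from 0 to t, n-1 times,
    returning the coefficients of g_n when coeffs represents g_1 = g."""
    c = list(coeffs)
    for _ in range(n - 1):
        # integral of sum c_k t^k is sum c_k t^(k+1)/(k+1), constant term 0
        c = [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]
    return c

def eval_poly(c, t):
    return sum(ck * t**k for k, ck in enumerate(c))
```

With $g = 1$ and $t = 1/2$, $5!\,g_5(1/2) = 5 \cdot (1/2)^4 = 0.3125$, and by $n = 20$ the value has dropped below $10^{-4}$.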
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients.
I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, a more general principle so to speak.
If $x$ is a prime number and there exists a number $y$ which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome, with digitsum$(z)$ = digitsum$(x)$.
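The claim is straightforward to probe by brute force (helper names are mine). A quick search suggests it holds for two- and three-digit primes, but appears to fail for some four-digit ones: for example, $x = 1031$ and $y = 1301$ are both prime, yet $z = 1166$ is not a palindrome.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def check_claim(x):
    """True/False if the hypothesis applies and the conclusion holds/fails;
    None if the hypothesis (x and its digit reverse both prime) fails."""
    y = int(str(x)[::-1])
    if not (is_prime(x) and is_prime(y)):
        return None
    z = (x + y) // 2           # both prime, so x + y is even (2 reverses to 2)
    digitsum = lambda n: sum(map(int, str(n)))
    return str(z) == str(z)[::-1] and digitsum(z) == digitsum(x)
```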
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with fourier series being divergent on a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
How you parameterize your sparsity will depend on your application. The authors of that paper, in a paragraph on page 231 say:
which is why they clump the coefficients together in $P$ blocks of start time $t_{B_k}$ of duration $n_{B_k}$.
For one impulse response this is shown in their figure 1(b).
The overall sparsity will depend on the reflections and delays that they have when this impulse response is used by the direct signal and all its indirect reflections.
How are the impulse response/coefficients made sparse?
You need to decide which coefficients in the overall impulse response are zero. To do that, you need to decide why they are zero (what is the cause?). Perhaps there is a long dead-time between the onset and the first secondary response.
How does one do parameterization of impulse response to decide for sparse/non-sparse?
Once you've decided on the form, you decide on the structure of the non-sparse parts of the response and how they are separated from each other.
Suppose your non-sparse parts are all defined by $\alpha_k h[n]$, where $\alpha_k$ is the gain of the $k^\text{th}$ non-sparse segment and $h[n]$ is the FIR response of all segments (modulo a gain term). $h[n]$ is of duration $H$. Then the sparsity will come about by how you space out the $\alpha_k h[n]$:$$g[n] = \sum_{k=1}^K \alpha_k h[n-t_{B_k}]$$where $t_{B_k} \gg H$ for sparsity to be true (so there is lots of space between the $\alpha_k h[n]$).
In general, how is parameterization applied to make the impulse response to zero and in general how does one know which coefficients are sparse?
This comes down to how you choose $t_{B_k}$ and $H$. That will depend on what system you're trying to model.
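A minimal sketch of that construction (function and variable names are mine, not the paper's): a few dense segments $\alpha_k h[n]$ dropped into an otherwise zero response.

```python
import numpy as np

def sparse_response(alphas, offsets, h, length):
    """Build g[n] = sum_k alpha_k * h[n - t_Bk] with zeros in between."""
    g = np.zeros(length)
    for a, t in zip(alphas, offsets):
        g[t:t + len(h)] += a * h   # place a scaled copy of h at offset t_Bk
    return g
```

With `len(h)` = $H$ much smaller than the gaps between the `offsets` $t_{B_k}$, most of `g` is exactly zero, which is the sparsity being parameterized.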
The Annals of Statistics, Volume 10, Number 2 (1982), 502-510. Construction Methods for $D$-Optimum Weighing Designs when $n \equiv 3 (\operatorname{mod} 4)$. Abstract:
In the setting where the weights of $k$ objects are to be determined in $n$ weighings on a chemical balance (or equivalent problems), for $n \equiv 3 (\operatorname{mod} 4)$, Ehlich and others have characterized certain "block matrices" $C$ such that, if $X'X = C$ where $X(n \times k)$ has entries $\pm1$, then $X$ is an optimum design for the weighing problem. We give methods here for constructing $X$'s for which $X'X$ is a block matrix, and show that it is the optimum $C$ for infinitely many $(n, k)$. A table of known constructibility results for $n < 100$ is given.
Galil, Z.; Kiefer, J. Construction Methods for $D$-Optimum Weighing Designs when $n \equiv 3 (\operatorname{mod} 4)$. Ann. Statist. 10 (1982), no. 2, 502--510. doi:10.1214/aos/1176345791. https://projecteuclid.org/euclid.aos/1176345791
Limits aren't radio dials.$$\lim \left(1 +\color{blue}{\frac 1x}\right)^{\color{orange}x}$$You can't tweak the $\color{blue}{\text{blue}}$ limit first to get it down to $1 + \color{blue}0$ and then tweak the $\color{orange}{\text{orange}}$ limit second to get $(1+ \color{blue}0)^{\color{orange}\infty} = 1$Food for thought: Why don't you tweak the ...
$\tan x-\sin x=\sin x(1/\cos x-1)=\sin x(1-\cos x)/\cos x$. Now notice that, continuing from where you got stuck, you can group terms like $$\frac{\sin(x)}x \cdot \frac{1-\cos x}{x^2} \cdot \frac1{ \cos x (\sqrt{1+\tan x}+\sqrt{1+\sin x})} $$
Your limit can be found using the series expansions and hence asymptotics of each function near $x=0$.$$\begin{align}\lim_{x\to0^+}\left(\frac{\cos^2{(x)}}{x}-\frac{e^x}{\sin{(x)}}\right)&=\lim_{x\to0^+}\left(\frac{\sin{(x)}\cos^2{(x)}-xe^x}{x\sin{(x)}}\right)\\&=\lim_{x\to0^+}\left(\frac{\sin{(x)}(1+\cos{(2x)})-2xe^x}{2x\sin{(x)}}\right)\\&=...
One way: apply the definition of the derivative of $2^x$.$$\Longrightarrow \lim_{x\to 0} \frac{2^x-2^0}{x-0}$$$$=\frac{d}{dx}(2^x)|_0$$$$=2^0\log 2$$$$\Longrightarrow \lim_{x\to 0}\frac{x}{2^x-1}=\frac{1}{\log 2}$$Second way: apply a change of variable, let $2^x=y$.$$\Longrightarrow x=\frac{\log y}{\log 2}$$Also, as $x\rightarrow 0, y\rightarrow 1$. So, required ...
You can apply the binomial theorem to see that it's bigger than $1$. If you change $x$ to $n$ and restrict $n$ to the integers, you find that when $n \ge 1$, $$(1+1/n)^n = \sum_{i=0}^n {n \choose i}{1 \over n^i} \ge 1 + n \cdot {1 \over n} = 2 $$ (because all terms are positive).So the limit (if it exists) must be at least 2.To go further, fix an integer ...
You can use Stirling's approximation, here is another solution. Let $u_n=\frac{n^n}{(n!)^2}$, we have$$ \frac{u_{n+1}}{u_n}=\frac{(n!)^2}{((n+1)!)^2}\frac{(n+1)^{n+1}}{n^n}=\frac{1}{n+1}\left(1+\frac{1}{n}\right)^n $$Since $\left(1+\frac{1}{n}\right)^n=e^{n\ln\left(1+\frac{1}{n}\right)}\underset{n\rightarrow +\infty}{\longrightarrow}e$, we have that $\lim\...
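The ratio argument is easy to sanity-check numerically (a small sketch with my own function names): since $u_{n+1}/u_n = \frac{1}{n+1}\left(1+\frac1n\right)^n \approx \frac{e}{n+1} \to 0$, the terms $u_n$ collapse rapidly.

```python
from math import factorial

def u(n):
    # u_n = n^n / (n!)^2, exact integer arithmetic before the final division
    return n**n / factorial(n)**2

def ratio(n):
    # closed form of u_{n+1} / u_n from the answer above
    return (1 + 1 / n)**n / (n + 1)
```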
$$=\left(\lim_{x\to0}\dfrac{1-\sin^2x}x-\dfrac1{\sin x}\right)-\lim\dfrac{e^x-1}{\sin x}$$The second limit converges to $1$For first, either use $\sin x\approx x$ for $x\to0$Or use Are all limits solvable without L'Hôpital Rule or Series Expansion to find $$\lim_{x\to0}\left(\dfrac1x-\dfrac1{\sin x}\right)$$
The limit doesn't exist because the left-hand limit does not equal the right-hand limit$$-\infty=\lim_{x\to1^-}\frac{1}{\log(x)} \neq \lim_{x\to1^+}\frac{1}{\log(x)}=\infty$$if we extended the real number line and added positive and negative infinity then the limit still wouldn't exist because $\infty \neq -\infty$.
There is a proof based entirely on the methods of differential calculus; see this: Differentiability of Exponential Functions, by Philip M. Anselone and John W. Lee. In that paper you will find the following. Theorem 1. Let $f(x) = a^x$ with any $a > 1$. Then $f$ is differentiable at $x = 0$ and $f'(0) > 0$. Theorem 2. Let $f(x) = a^x$ with any $a &...
Using only limits you have:$$f'(0) = \lim_{h \to 0} \frac{f(0+h) - f(0)}{h}$$$$= \lim_{h \to 0} \frac{a^h-1}{h}$$$$\therefore f'(x) = a^x \times f'(0)$$However, you cannot prove that $f'(0) = \ln a$ without using the property that $e^x$ is its own derivative.If you accept the fact as described in this answer, use the fact that $a^x = e^{x \ln a} = f(...
Here's a fun, unconventional way to think about it. If we suppose the limit exists, we have\begin{align}\lim\limits_{x\to 0} \frac{e^x-1}{x^2}&=\lim\limits_{x\to 0} \frac{e^x-1}{x}\cdot\frac1x\\&=\lim\limits_{x\to 0} \frac{e^x-1}{x}\cdot\lim\limits_{x\to 0}\frac1x\\&=1\cdot\lim\limits_{x\to 0}\frac1x\end{align}But we know that for $\lim\...
Instead of moving to Stirling's approximation, you can go with an easy method: ratio test. Specifically, you can compute the ratio of $x_{n+1}$ and $x_n$ as$$\frac{x_{n+1}}{x_{n}}=\frac{(n+1)^{n+1}}{((n+1)!)^2}\frac{(n!)^2}{n^n}=\frac1{n+1}\left(1+\frac1n\right)^n\to\frac{e}{n+1}\to 0.$$Then the series $x_n=\frac{n^n}{(n!)^2}$ converges. Denote its ...
$$\lim_{x\to\pi/4}\cot x^{\cot4x}=\left(\lim_{x\to\pi/4}(1+\cot x-1)^{1/(\cot x-1)}\right)^{\lim_{x\to\pi/4}{\cot4x(\cot x-1)}}$$The inner limit converges to $e$For the exponent,$$\lim_{x\to\pi/4}\cot4x(\cot x-1)=\lim_{x\to\pi/4}\dfrac{\cos4x}{\sin x}\cdot\lim_{x\to\pi/4}\dfrac{\cos x-\sin x}{\sin4x}$$Now$$\lim_{x\to\pi/4}\dfrac{\cos4x}{\sin x}=\...
You may consider that$$ \zeta(s)=\sum_{n\geq 1}\frac{1}{n^s} = \sum_{n\geq 1}\frac{1}{\Gamma(s)}\int_{0}^{+\infty}x^{s-1}e^{-nx}\,dx =\frac{1}{\Gamma(s)}\int_{0}^{+\infty}\frac{x^{s-1}}{e^x-1}\,dx$$holds for any $s$ such that $\Re(s)>1$. Similarly$$ \eta(s) = \sum_{n\geq 1}\frac{(-1)^{n+1}}{n^s} = \frac{1}{\Gamma(s)}\int_{0}^{+\infty}\frac{x^{s-1}}{e^x+...
It's not directly using $\ln$ in the limit itself. Consider $$f'(x) = a^x f'(0).$$ Then $$\frac{f'(x)}{f(x)} = f'(0).$$ Taking the definite integral, $$\int_0^1 \frac{f'(x)}{f(x)}\, dx = f'(0),$$ and the left-hand side equals $\ln f(1) - \ln f(0) = \ln a$, so $f'(0) = \ln a$ and we have the desired result.
Let $(1+1/x)^x \to a$. Taking logarithm of both sides,$$\log a = x\log\left(1+\frac{1}{x}\right) = x \left(\frac{1}{x}- \frac{1}{2x^2} + \cdots\right) =\left( 1 - \frac{1}{2x} + \cdots\cdots\right) \to 1$$If the limit was $1$ then we should have got $\log a \to 0$ which is clearly not the case.
Hint$$A={\left({\frac{\cos (x)}{\cos (2x)}}\right)^{\frac{1}{x^2}}}\implies \log(A)={\frac{1}{x^2}}\log\left({\frac{\cos (x)}{\cos (2x)}}\right)$$Use the series of $\cos(x)$ and $\cos(2x)$ and then long division. Continue with Taylor expansion of the result. Divide by $x^2$. At this point, you have the Taylor series for $\log(A)$. Continue with Taylor ...
Since from your previous questions you're obviously interested in learning how this is done I'll go into the detail of the calculation. Note that a lot of what follows can be found in existing answers, but I'll tailor this answer specifically at you.
Time dilation is calculated by calculating the proper time change, $d\tau$, using the expression:
$$ c^2d\tau^2 = -g_{ab}dx^adx^b \tag{1} $$
where $g_{ab}$ is the metric tensor. The reason we can use this to calculate the time dilation is that the proper time is an invariant i.e. all observers will agree on its value. To illustrate how we do this let's consider the simple example of an astronaut moving at velocity $v$ in flat spacetime. In this case the metric is just the Minkowski metric, and equation (1) simplifies to:
$$ c^2d\tau^2 = c^2dt^2 - dx^2 - dy^2 - dz^2 \tag{2}$$
First we do the calculation in the astronaut's rest frame. In that frame the astronaut isn't moving so $dx = dy = dz = 0$, and the proper time as observed by the astronaut is just:
$$ d\tau_{astronaut} = dt $$
Now let us here on earth calculate the proper time. We'll arrange our coordinates so the astronaut is moving along the $x$ axis, so $dy = dz = 0$. In that case equation (2) becomes:
$$ c^2d\tau^2 = c^2dt^2 - dx^2 $$
To proceed, note that if the astronaut is moving at velocity $v$ then $dx/dt = v$, because that's what we mean by velocity. So $dx = vdt$. Put this into our equation and we get:
$$ c^2d\tau^2 = c^2dt^2 - (vdt)^2 $$
which rearranges to:
$$ d\tau_{Earth} = \sqrt{1 - \frac{v^2}{c^2}}dt $$
Because the proper time is an invariant both we and the astronaut must have calculated the same value i.e. $d\tau_{Earth} = d\tau_{astronaut}$, and if we substitute for $d\tau_{Earth}$ in the equation above we get :
$$ \frac{d\tau_{astronaut}}{dt} = \sqrt{1 - \frac{v^2}{c^2}} = \frac{1}{\gamma}$$
where $\gamma$ is the Lorentz factor. But the left hand side is just the variation of the astronaut's time with our time - in other words it's the time dilation. And the equation is just the standard expression for time dilation in special relativity that we teach to all students of SR.
The point of all this is that we can use exactly the same procedure to work out the time dilation in gravitational fields. Let's take the gravitational field of a spherically symmetric body, which is given by the Schwarzschild metric:
$$ c^2d\tau^2 = c^2\left(1-\frac{2GM}{r c^2}\right)dt^2 - \left(1-\frac{2GM}{r c^2}\right)^{-1}dr^2 - r^2 (d\theta^2 + \sin^2\theta\, d\phi^2) \tag{3} $$
This is very similar to the equation (2) that we used in flat spacetime, except that the coefficients for $dt$ etc are now functions of distance, and we do the calculation in exactly the same way. Let's start with calculating the time dilation for a stationary astronaut at a distance $r$. Because the astronaut is stationary we have $dr = d\theta = d\phi = 0$, and equation (3) simplifies to:
$$ c^2d\tau^2 = c^2\left(1-\frac{2GM}{r c^2}\right)dt^2 $$
and this time we get:
$$ \frac{d\tau}{dt} = \sqrt{1-\frac{2GM}{r c^2}} = \sqrt{1-\frac{r_s}{r}} \tag{4} $$
where $r_s$ is the Schwarzschild radius. And that's it - calculating the time dilation for a stationary observer in a gravitational field is as simple as that. You'll find this expression in any introductory text on GR.
But the real point of your question (finally we get to it!) is what happens if our observer in the gravitational field is moving? Well, let's assume they are moving in a radial direction at velocity $v$, so just as in the flat space case we have $dr = vdt$ and $d\theta = d\phi = 0$. We substitute this into equation (3) to get:
$$ c^2d\tau^2 = c^2\left(1-\frac{2GM}{r c^2}\right)dt^2 - \left(1-\frac{2GM}{r c^2}\right)^{-1}v^2dt^2 $$
which rearranges to:
$$ \frac{d\tau}{dt} = \sqrt{1-\frac{r_s}{r} - \frac{v^2/c^2}{1-\frac{r_s}{r}}} \tag{5} $$
And once again, it's as simple as that. If you compare this result with equation (4) you'll see the time dilation for a moving object is different because we have an extra term $\frac{v^2/c^2}{1-\frac{r_s}{r}}$ in the square root.
One last sanity check: what happens if we go an infinite distance away from the gravitating object so $r \rightarrow \infty$? Well if $r \rightarrow \infty$ then $r_s/r \rightarrow 0$ and equation (5) becomes:
$$ \frac{d\tau}{dt} = \sqrt{1 - \frac{v^2}{c^2}} = \frac{1}{\gamma} $$
which is exactly what we calculated for flat spacetime.
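Equation (5) drops straight into code. Here is a small sketch (the function name is mine), in units where $c = 1$ unless given, with the two sanity checks from the text:

```python
from math import sqrt

def proper_time_rate(r, v, r_s, c=1.0):
    """d(tau)/dt for an observer at radius r > r_s around a body of
    Schwarzschild radius r_s, moving radially at speed v (equation 5)."""
    f = 1.0 - r_s / r
    return sqrt(f - (v / c)**2 / f)

# v = 0 reduces to the stationary result sqrt(1 - r_s/r) of equation (4);
# r >> r_s reduces to the special-relativistic 1/gamma = sqrt(1 - v^2/c^2)
```

Note also that motion always slows the clock further: for fixed $r$, the rate decreases as $v$ grows.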
The lower attic (from Cantor's Attic; latest revision as of 13:37, 27 May 2018)
[Image: Sagrada Spiral, photo by David Nikonvscanon]
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
* $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
* stable ordinals
* the ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$
** $\zeta$ = the supremum of the eventually writable ordinals
** $\lambda$ = the supremum of the writable ordinals
* Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
* the omega one of chess:
** $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ = the supremum of the game values for white of all positions in infinite chess
** $\omega_1^{\mathfrak{Ch},c}$ = the supremum of the game values for white of the computable positions in infinite chess
** $\omega_1^{\mathfrak{Ch}}$ = the supremum of the game values for white of the finite positions in infinite chess
* the Takeuti-Feferman-Buchholz ordinal
* the Bachmann-Howard ordinal
* the large Veblen ordinal
* the small Veblen ordinal
* the Extended Veblen function
* the Feferman-Schütte ordinal $\Gamma_0$
* $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
* indecomposable ordinal
* the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
* Hilbert's hotel and other toys in the playroom
* $\omega$, the smallest infinity
* down to the parlour, where large finite numbers dream
I've seen Julius' MATLAB code and I know what it does.
Essentially, given an LTI filter with impulse response, $h[n]$, and frequency response:
$$\begin{align} H(e^{j \omega}) &\triangleq \Big| H(e^{j \omega}) \Big| \, e^{j \phi(\omega) } \\&= \sum\limits_{n=-\infty}^{\infty} h[n] \, e^{-j \omega n}\end{align}$$
Then $\Big| H(e^{j \omega}) \Big| > 0$ is the magnitude response and $\phi(\omega)$ is the phase response. For a minimum-phase filter, the phase response (expressed in radians) is the negative of the Hilbert transform of the natural log of the magnitude response.
The natural log of the magnitude response, $\log \big| H(e^{j \omega}) \big|$, is just like decibels (dB) but is expressed in a different dimensionless unit, the neper; 8.685889638 dB is equal to 1 neper. Essentially, the value in nepers is the real part of the complex natural log of the complex frequency response, and the phase in radians is the imaginary part.
Just like radians is the mathematically natural unit for angle, so also are nepers the mathematically natural unit for relative change of magnitude. A change of magnitude of 1% is nearly the same as 0.01 neper. But, like dB, going up 0.1 neper followed by going down 0.1 neper will land you at exactly the original magnitude. But going up 10% followed by going down 10% will land you at slightly less than the original magnitude.
So, the (unwrapped) minimum phase of a filter, given its magnitude response, is:
$$\begin{align} \phi(\omega) &= - \mathscr{H} \Big\{ \log \big| H(e^{j \omega}) \big| \Big\} \\\\ &= - \int\limits_{-\pi}^{\pi} \log \Big| H(e^{j \theta}) \Big| \, \Big( 2 \pi \tan \big(\tfrac{\omega-\theta}{2} \big) \Big)^{-1} \, \mathrm{d}\theta \\\end{align}$$
Where $\mathscr{H} \big\{ \cdot \big\}$ is the Hilbert Transform.
If we were to evaluate that integral, we would need to take the Cauchy principal value to deal with the division by zero when $\theta = \omega$. But we won't do the Hilbert transform that way.
Okay, so Step 2 is understanding that the Discrete-Time Fourier Transform (DTFT) of this sequence:
$$ g[n] \triangleq \begin{cases}0 \qquad & n<0 \\1 \qquad & n=0 \\2 \qquad & n>0 \\\end{cases} $$
is
$$\begin{align} G(e^{j\omega}) &= \sum\limits_{n=-\infty}^{\infty} g[n] \, e^{-j \omega n} \\\\ &= \frac{e^{j\omega}+1}{e^{j\omega}-1} \\\\ &= \frac{e^{j\omega/2}+e^{-j\omega/2}}{e^{j\omega/2}-e^{-j\omega/2}} \\\\ &= \frac{2\cos(\omega/2)}{2j\sin(\omega/2)} \\\\ &= -j \frac{1}{\tan(\omega/2)} \\\end{align}$$
i'm running outa time. i gotta return to this. |
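To make this concrete in the meantime, here is a NumPy sketch of the standard cepstral route to the minimum-phase response (in the spirit of Julius' mps.m, but my own code, not his): take the real cepstrum of the log magnitude, fold it (zero the negative quefrencies, double the positive ones), and exponentiate. The folding implements the discrete Hilbert-transform relationship described above.

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Given |H| sampled on n_fft uniform frequencies, return the
    minimum-phase frequency response with that magnitude."""
    n_fft = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))    # magnitude in nepers
    cep = np.fft.ifft(log_mag).real             # real cepstrum of log magnitude
    fold = np.zeros_like(cep)
    fold[0] = cep[0]
    fold[1:n_fft // 2] = 2 * cep[1:n_fft // 2]  # double positive quefrencies
    if n_fft % 2 == 0:
        fold[n_fft // 2] = cep[n_fft // 2]
    return np.exp(np.fft.fft(fold))             # exp(log|H| + j * min phase)
```

Applied to a response that is already minimum phase, this reconstructs it (up to cepstral aliasing); applied to any other magnitude, it returns the unique minimum-phase response with that magnitude.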
It is known that a metric $g$ gives a Hodge decomposition: $$ \Omega^*(M)=\mathcal H^*(M)\oplus d\Omega^*(M) \oplus \delta_g \Omega^*(M) $$ Note that the usual differential restricts to an isomorphism on $\delta_g\Omega^*(M)$, denoted by $$ d_g: \delta_g\Omega^*(M) \to d\Omega^*(M) $$ (In fact, it is clearly injective, and it is surjective since $d\Omega^*(M)=d\delta_g\Omega^*(M)$.) Now suppose there is a family of Riemannian metrics $g_t, t\in [0,1]$; then many new operators can be created by taking derivatives, like $\frac{d }{dt}d_{g_t}$, $\frac{d}{dt} d_{g_t}^{-1}$ or $\frac{d}{dt}\mathrm{proj}_{\mathcal H_{g_t}^*(M)}$. In particular, I am interested in the operator $$ \frac{d}{dt} {d_{g_t}^{-1}} $$ because $\mathrm{image} (d_{g_t}^{-1})=\delta_{g_t}\Omega^*(M)$ gives a smooth family of subspaces in $\Omega^*(M)$, and this derivative is like the infinitesimal deformation of this subspace family.
Question 1: How do we describe the derivative operator $\frac{d}{dt} {d_{g_t}^{-1}}$? What condition on $(g_t)$ ensures that $\frac{d}{dt}d_{g_t}^{-1}$ is 'orthogonally transversal' to $d^{-1}_{g_t}$?
Another, probably related, question:
Question 2: Can we regard $\frac{d}{dt} \mathrm{proj}_{\mathcal H^*_{g_t}}$ as an operator from $\mathcal H^{*\perp}_{g_t}$ to $\mathcal H^{*\perp}_{g_t}$?
Put simply, Bell's inequality is an inequality that gives the upper limit of the probability that the Rainy Woman and the Sunny Man meet.
Bell's inequality is often found in the introduction to quantum mechanics. However, I think the meaning of the inequality is very difficult to understand. So, in this article, I will try to explain the meaning of Bell's inequality easily.
Bell's inequality is as follows.$$ P(A \cap B) \leqq P(A \cap W)+P(\overline{W} \cap B) $$
(If it is different from Bell's inequalities you know, please read "Various Bell's Inequalities" below first)
The meaning of Bell's inequality is very unclear. One of the reasons is that the symbols $W$, $A$, $B$ are abstract expressions. So, I would like to explain Bell's inequality by replacing the symbols $W$, $A$, $B$ with the following everyday words.
$P(A \cap B)$ means the probability that Alice and Bob go out at the same time. $P(A \cap W)$ means the probability that the weather is sunny and Alice goes out. $P(\overline{W} \cap B)$ means the probability that weather is rainy and Bob goes out.
Bell's inequality can be expressed in everyday words as follows: the probability that Alice and Bob go out at the same time never exceeds the sum of the probability that the weather is sunny and Alice goes out and the probability that the weather is rainy and Bob goes out.
The expression "probability that Alice and Bob go out at the same time" is not very interesting. So, I would like to consider another expression, "probability that Alice and Bob meet". The probability that Alice and Bob meet can never exceed the probability that they both go out at the same time. Therefore, the above statement can be restated as follows.
This means that "Even if the probability that Alice and Bob meet is high, it does not exceed the sum of the probability that the weather is sunny and Alice goes out and the probability that Bob goes out and the weather is rainy."
Now, let's consider the following case.
The Venn diagram looks like this:
Then, Bell's inequality can be expressed by the following sentence.
We can see the meaning of Bell's inequality in the following Venn diagram visually.
Here, I would like to ask a question related to Bell's inequality.
What is the upper limit of the probability that Alice and Bob meet when we have the following conditions of the question?
The answer is 25%. I would like to explain how to solve the question.
As a precaution, I would like to reconfirm what answer the question expects here. The answer to this question is not a "probability". It is an "upper limit of the probability". Because it is an upper limit, you may consider the case that "Alice and Bob tend to go out at the same time for some reason" or the case that "Alice and Bob tend to meet for some reason". This question asks, "How high can the probability that Alice and Bob meet be, at most?"
We divide this question into the case of a sunny day and the case of a rainy day in order to calculate the conditional probabilities. First we consider the case of a sunny day.
The probability that Alice goes out is 50%. It is expressed in the following formula.$$ P(A) = 50\% $$
The probability that the weather is rainy is 75% when Alice goes out. The conditional probability can be expressed by the following equation.$$ P(\overline{W}|A)=\frac{P(A \cap \overline{W})}{P(A)} = 75\% $$
On the other hand, the probability that the weather is sunny is 50%, so the probability that the weather is rainy is 50%.$$ P(\overline{W}) = 50\% $$
Therefore, the probability that Alice goes out on a rainy day is 75%.$$ P(A|\overline{W})=\frac{ P(\overline{W} \cap A)}{P(\overline{W})} = 75\% $$
From this, we can see that the probability that Alice goes out on a sunny day is 25%.$$ P(A|W)=\frac{ P(W \cap A)}{P(W)} = 25\% $$
Similarly, the probability that Bob goes out on a sunny day is 75%.$$ P(B|W)=\frac{ P(W \cap B)}{P(W)} = 75\% $$
For Alice and Bob to meet, they both need to be out. Therefore, on a sunny day, the upper limit of the probability that they both will meet is 25%.
Next, we consider the case of a rainy day in the same way.
The probability that Alice goes out on a rainy day is 75%.$$ P(A|\overline{W})=\frac{ P(\overline{W} \cap A)}{P(\overline{W})} = 75\% $$
On the other hand, the probability that Bob goes out on a rainy day is 25%.$$ P(B|\overline{W})=\frac{ P(\overline{W} \cap B)}{P(\overline{W})} = 25\% $$
For Alice and Bob to meet, they both need to be out. Therefore, on a rainy day, the upper limit of the probability that they will meet is 25%.
If you combine the case of sunny day and the case of rainy day, you can see that the upper limit of the probability that they both meet is 25%.$$ P(W)\times P(A|W)+P(\overline{W})\times P(B|\overline{W})= 25\% $$
The above expression can be rewritten as follows.$$ P(A \cap W)+P(\overline{W} \cap B) = 25\% $$
So, the right hand side of Bell's inequality on this question is 25%.
Therefore, the answer to "How much can you increase the probability that Alice and Bob meet?" is 25%.
As long as the conditions of this question are kept, we can not get any further probability by any means.
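The arithmetic above can be reproduced in a few lines. This sketch assumes the stated conditions of the question (sunny weather 50% of the time, Alice out on 50% of days, rain 75% of the time when Alice is out, Bob out on 25% of rainy days); the variable names are mine.

```python
# Reproducing the classical calculation above.
P_A = 0.50              # Alice goes out half the days
P_W = 0.50              # sunny half the time
P_notW = 1 - P_W
P_notW_given_A = 0.75   # when Alice goes out, it rains 75% of the time
P_B_given_notW = 0.25   # Bob goes out on 25% of rainy days

P_A_and_notW = P_notW_given_A * P_A       # 0.375
P_A_and_W = P_A - P_A_and_notW            # 0.125
P_A_given_W = P_A_and_W / P_W             # 25%
P_notW_and_B = P_B_given_notW * P_notW    # 0.125

rhs = P_A_and_W + P_notW_and_B            # right hand side of Bell's bound
print(P_A_given_W, rhs)                   # 0.25 0.25
```

The right hand side of Bell's inequality, and hence the upper limit on the meeting probability, comes out as 25%.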
This conclusion is always correct as long as the weather can be divided into sunny and rainy days. But surprisingly, in quantum mechanics the left hand side of this inequality is 37.5%! I will explain it next.
The source $S$ emits one electron to the left observation device $L$ and the other electron to the right observation device $R$. The total of the angular momentum of two electrons is 0.
The observation devices can measure the spin of an electron in one direction. Strangely, the measured spin of an electron is always either $+1/2$ or $-1/2$. In the case that the total angular momentum of the two electrons is 0, the probability that the right observation device $R$ gets the spin $+1/2$ of one electron in the direction $W$ is 100% when the left observation device $L$ gets the spin $+1/2$ of the other electron in the direction $W$.
From here, the explanation becomes a little more difficult. The left observation device $L$ measures the spin of an electron in the direction $A$. On the other hand, the right observation device $R$ measures the spin of the other electron in the direction $B$. In the case that the angle difference between the direction $A$ and the direction $B$ is $\theta$, the probability that the right observation device $R$ gets the spin $+1/2$ of one electron in the direction $B$, when the left observation device $L$ gets the spin $+1/2$ of the other electron in the direction $A$, is given in quantum mechanics by the following formula.$$ P(B|A) = \frac{ P(A \cap B)}{P(A)}= \cos^2 \left(\frac{\theta}{2} \right) $$
Here, $P(B|A)$ is the probability that the event $B$ occurs when the event $A$ occurs. And $P(A \cap B)$ is the probability that the event $A$ and the event $B$ occur at the same time. I would like to show some examples below.
The angle difference between the direction $A$ and direction $B$ is $60^\circ$, so the probability that the right observation device $R$ gets the spin $+1/2$ of one electron in the direction $B$ is 75%, when the left observation device $L$ gets the spin $+1/2$ of the other electron in the direction $A$ as follows.$$ P(B|A) = \frac{ P(A \cap B)}{P(A)}=\cos^2 \left(\frac{60^\circ}{2} \right) = 75\% $$
The probability $P(A)$ is 1/2, so the probability $P(A \cap B)$ that both observation devices get the spin $+1/2$ of electrons at the same time is 37.5%.$$ P(A \cap B) = 37.5\% $$
Now I would like to consider another case. The left observation device $L$ measures the spin of an electron in the direction $A$. On the other hand, the right observation device $R$ measures the spin of the other electron in the direction $W$. The angle difference between the direction $A$ and the direction $W$ is $120^\circ$, so the probability that the right observation device $R$ gets the spin $+1/2$ of one electron in the direction $W$ is 25%, when the left observation device $L$ gets the spin $+1/2$ of the other electron in the direction $A$, as follows.$$ P(W|A) = \frac{ P(A \cap W)}{P(A)}=\cos^2 \left(\frac{120^\circ}{2} \right) = 25\% $$
The probability $P(A)$ is 1/2, so the probability $P(A \cap W)$ that both observation devices get the spin $+1/2$ of electrons at the same time is 12.5%.$$ P(A \cap W) = 12.5\% $$
I would like to consider another case. The left observation device $L$ measures the spin of an electron in the opposite direction of $W$. (From now on, we use a new symbol $\overline{W}$ which means the opposite direction of $W$.) On the other hand, the right observation device $R$ measures the spin of the other electron in the direction $B$. The angle difference between the direction $\overline{W}$ and the direction $B$ is $120^\circ$, so the probability that the right observation device $R$ gets the spin $+1/2$ of one electron in the direction $B$ is 25%, when the left observation device $L$ gets the spin $+1/2$ of the other electron in the direction $\overline{W}$, as follows.$$ P(B|\overline{W}) = \frac{ P(\overline{W} \cap B)}{P(\overline{W})}=\cos^2 \left(\frac{120^\circ}{2} \right) = 25\% $$
$P(\overline{W})$ is 1/2, so the probability $P(\overline{W} \cap B)$ that both results are $+1/2$ is 12.5%.$$ P(\overline{W} \cap B) = 12.5\% $$
Here, Bell's inequality is:$$ P(A \cap B) \leqq P(A \cap W)+P(\overline{W} \cap B) $$
The left hand side of the above inequality is 37.5%, but the sum of the right hand side is 25%, which breaks Bell's inequality. Therefore, a strange correlation that is impossible in nature exists between the observation device $L$ and the observation device $R$. This mysterious correlation is called EPR correlation and has been confirmed in actual experiments. (For example, it is confirmed in Aspect's experiment in 1982)
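The violation can be checked by plugging the stated angles into the $\cos^2(\theta/2)$ rule. A small sketch, assuming $P(A)=P(\overline{W})=1/2$ as above (the helper function is mine):

```python
import math

# Quantum-mechanical joint probabilities from P(Y|X) = cos^2(theta/2),
# with the first-measurement probability equal to 1/2.

def joint(theta_deg, p_first=0.5):
    return p_first * math.cos(math.radians(theta_deg) / 2) ** 2

P_A_and_B = joint(60)      # A and B are 60 degrees apart  -> 0.375
P_A_and_W = joint(120)     # A and W are 120 degrees apart -> 0.125
P_notW_and_B = joint(120)  # notW and B are 120 degrees apart -> 0.125

lhs = P_A_and_B
rhs = P_A_and_W + P_notW_and_B
print(lhs > rhs)           # True: Bell's inequality is violated
```

The left hand side is 37.5% while the right hand side is 25%, exactly as in the text.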
Where does the mystery of the EPR correlation come from? I think it comes from the fact that the measurement results $A$ and $B$ cannot be divided by the result $W$, which is not measured. This is similar to the fact that the results cannot be divided into the case that an electron goes through the left slit and the case that an electron goes through the right slit in a double-slit experiment.
Here, I would like to consider the reason why Bell's inequality breaks.
In the previous section, it is said that the mystery of EPR comes from that the measurement results $A$ and $B$ can not be divided by the results $W$ which is not measured. However, surprisingly we can separate the results $W$ which is not measured spatially. Before the explanation, I would like to explain how to measure the spin of electrons.
To measure the spin of electrons, we use the following Stern-Gerlach device.
An uneven magnetic field is generated between the pointed south pole and the flat north pole. The strength of the magnetic field becomes stronger as it approaches the south pole. The electron beam passing through the inhomogeneous magnetic field is split into two trajectories. The direction of the electron's spin can be found by the trajectory. Whatever direction the original spin is facing, it always splits into two trajectories relative to the direction of the magnet.
This Stern-Gerlach device is combined to make the following device.
In the above-mentioned device, the once branched electron beams merge again. For the sake of later, I will try to abbreviate the above device in the next figure.
We put the wall $C$ in the device.
The lower trajectory is blocked, so we can find which trajectory it passed. For the sake of simplicity, the above device is abbreviated in the following figure.
The following thought experiment is performed using these devices.
Though the electron beam is separated into the trajectory $W$ and the trajectory $\overline{W}$ and combined again by the device $F$ and the device $G$, the EPR correlation still occurs between the device $L$ and the device $R$. This is a very strange result: once the electron beam is separated into the trajectory $W$ and the trajectory $\overline{W}$, the cases should be divisible into trajectory $W$ and trajectory $\overline{W}$. If the cases can be divided, Bell's inequality should hold. Why does Bell's inequality break?
Certainly, Bell's inequality holds when the electron beam is separated into the trajectory $W$ and the trajectory $\overline{W}$. The separated electron beams are rejoined to cause interference and break Bell's inequality. This correlation means that two electrons construct one quantum state. Therefore, we can summarize the reasons why Bell's inequality breaks as follows.
In fact, there are several types of Bell inequalities. I would like to explain each Bell's inequality.
Northern Irish physicist John Stewart Bell proposed the following inequality in 1964:
The function $C(A,B)$ means the strength of the correlation between an event $A$ and an event $B$. However, I think the meaning of the inequality is very difficult to understand. American physicist John Clauser and others improved Bell's inequality and proposed the following inequality in 1969.
I think the meaning of this inequality is also difficult to understand. Hungarian physicist Eugene Paul Wigner improved Bell's inequality in 1970 and proposed the following inequality.
Physicist Jun John Sakurai introduced the above inequality as Bell's inequality in the textbook "Modern Quantum Mechanics" (revised edition published in 1994). Here, the function $P(A+;B+)$ is the probability that the left observation device $L$ measures the spin $+1/2$ of an electron in the direction $A$ and the right observation device $R$ measures the spin $+1/2$ in the direction $B$ in the following EPR experiment.
Bell's inequality (1964) and the CHSH inequality (1969) were inequalities of correlation strength. On the other hand, the Wigner–d'Espagnat inequality (1970) is an inequality of probability. This makes the meaning of the inequality easier to understand than before. However, I think the meaning is still unclear, because the directions $A$, $B$, $W$ of the left observation device $L$ and the directions $A$, $B$, $W$ of the right observation device $R$ are the same directions. I think it becomes easier to understand if we rotate the right observation device $R$ by 180 degrees, because the left observation device $L$ and the right observation device $R$ have an inverse correlation.
Based on the above EPR experiment, Bell's inequality is more easily expressed as sets as follows.$$ P(A \cap B) \leqq P(A \cap W)+P(\overline{W} \cap B) $$
This article described the above inequality as Bell's inequality. Bell's inequality can be expressed as a Venn diagram as follows.
If you see the above Venn diagram, you can clearly understand the meaning of Bell's inequality. In fact, French physicist Bernard d'Espagnat also described Bell's inequality with a Venn diagram in 1979.
Bell's inequality is correct if the weather can be divided into sunny and rainy. However, in the world of quantum mechanics, when the weather is not observed, it cannot be divided into sunny and rainy.
When Alice meets Bob, what answers do you get if you ask them for the weather of the day? In fact, strangely in the quantum world, if it is not observed, it remains uncertain whether the weather is sunny or rainy.
©2002, 2019 xseek-qm.net
Composition effects on the pH of a hydraulic calcium phosphate cement. In: Journal of Materials Science, ISSN 1573-4838, Vol. 8, No. 11 (1997), p. 675-681.
Abstract: The pH of a hydraulic calcium phosphate cement (HCPC) made of monocalcium phosphate monohydrate (Ca(H2PO4)2·H2O; MCPM), β-tricalcium phosphate (β-Ca3(PO4)2; β-TCP) and water was measured as a function of reaction time and composition at room temperature. During setting, the cement…

In vitro aging of a calcium phosphate cement. In: Journal of Materials Science, ISSN 1573-4838, Vol. 11, No. 3 (2000), p. 155-162.
Abstract: Cement samples made of β-tricalcium phosphate (β-TCP), phosphoric acid (PA) and water mixtures were incubated in several aqueous solutions to determine their stability over time. The effects of the cement composition and the incubating temperature were investigated in more detail. The…

Effect of several additives and their admixtures on the physico-chemical properties of a calcium phosphate cement. In: Journal of Materials Science, ISSN 1573-4838, Vol. 11, No. 2 (2000), p. 111-116.
Abstract: Combinations of citrate (C6H5O7^3−), pyrophosphate (P2O7^4−) and sulfate (SO4^2−) ions were used to modify the physico-chemical properties of a calcium phosphate cement (CPC) composed of β-tricalcium phosphate (β-TCP) and phosphoric acid (PA) solution. The results obtained…

Relative position of trypsin banded homologous chromosomes in human (♀) metaphase figures. In: Human Genetics, ISSN 1432-1203, Vol. 28, No. 4 (1975), p. 303-311.
Summary: "Generalized distances" between centromeres were statistically analyzed (χ² test) on 50 normal female trypsin-banded metaphase figures. This study revealed that the homologous chromosomes of the pairs 13, 17, 14, and 21 lie closer together than would be expected by a reference…

Determination of the relative density changes in the presence of high strain gradient. In: Journal of Materials Science, ISSN 1573-4803, Vol. 15, No. 12 (1980), p. 3162-3165.

(−)Tertatolol is a potent antagonist at pre- and postsynaptic serotonin 5-HT1A receptors in the rat brain. In: Naunyn-Schmiedeberg's Archives of Pharmacology, ISSN 1432-1912, Vol. 347, No. 5 (1993), p. 453-463.
Summary: The potential 5-HT1A antagonist properties of the β-antagonist tertatolol were assessed using biochemical and electrophysiological assays in the rat. (±)Tertatolol bound with high affinity (Ki = 38 nM) to 5-HT1A sites labelled by [3H]8-OH-DPAT in hippocampal membranes. The…

The polarized ion source for COSY: Proceedings of the 6th International Conference on Ion Sources. In: Review of Scientific Instruments, American Institute of Physics, ISSN 1089-7623, Vol. 67, No. 3 (1996), p. 1357-1358.
The polarized ion source for COSY-Jülich has been set in operation. The source produces H− or D− ion beams by means of a charge exchange reaction. For the first time beam acceleration in the injector cyclotron and a first measurement of the beam polarization downstream of the cyclotron took…

Scintillating fiber trackers with optoelectronic readout for the CHORUS neutrino experiment. In: Nuclear Instruments and Methods in Physics Research Section A, Elsevier, ISSN 0168-9002, Vol. 344, No. 1 (1994), p. 143-148.

Calibration and performance of the CHARM-II detector. In: Nuclear Instruments and Methods in Physics Research Section A, Elsevier, ISSN 0168-9002, Vol. 325, No. 1-2 (1993), p. 92-108.

Search for muon to electron neutrino oscillations. In: The European Physical Journal, ISSN 1434-6052, Vol. 64, No. 4 (1994), p. 539-544.
Abstract: A search for $\nu_\mu \to \nu_e$ and $\bar \nu _\mu \to \bar \nu _e$ oscillations has been carried out with the CHARM II detector exposed to the CERN wide band neutrino beam. The data were collected over five years, alternating beams mainly composed of muon-neutrinos and muon-antineutrinos.…

Measurement of the inclusive di-jet cross section in photoproduction and determination of an effective parton distribution in the photon. In: The European Physical Journal, ISSN 1434-6052, Vol. 1, No. 1/2 (1998), p. 97-107.
Abstract: The double-differential inclusive di-jet cross section in photoproduction processes is measured with the H1 detector at HERA. The cross section is determined as a function of the average transverse jet energy $E_T^{\mathrm{jets}}$ for ranges of the fractional energy $\xi_\gamma^{\mathrm{jets}}$ of the parton from the…
This section deals with situations where the conditions at the tube exit have not reached the critical condition. It is very useful to obtain the relationships between the entrance and the exit conditions for this case. Denote 1 and 2 as the conditions at the inlet and the exit, respectively. From equation (24)
\[
\dfrac{4\,f\,L}{D} =
{ 1 - k\,{M_{1}}^{2} \over k\,{M_{1}}^{2}} -
{ 1 - k\,{M_{2}}^{2} \over k\,{M_{2}}^{2}} +
\ln \left( {M_{1} \over M_{2}} \right)^{2}
\label{isothermal:eq:workingFLD} \tag{37}
\]
For the case that \(M_1 \gg M_2\) and \(M_2 \rightarrow 1\), equation (37) is reduced to the following approximation
\[
\dfrac{4\,f\,L}{D} = 2 \ln \left( M_{1}\right) -1 -
\overbrace{\dfrac{ 1 - k\,{M_{2}}^{2} }{ k {M_{2}}^{2}}}^{\sim 0}
\label{isothermal:eq:workingFLDApprox} \tag{38}
\]
Solving for \(M_1\) results in
\[
M_1 \sim \text{e}^{\dfrac{1 }{ 2}\,\left(\dfrac{4\,f\,L}{D} +1\right)}
\label{isothermal:eq:workingFLDAppSol} \tag{39}
\]
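Equation (37) can also be solved numerically, for example for the exit Mach number given the entrance condition. The bisection sketch below is my own (it is not from the book), assuming \(k = 1.4\) and the supersonic branch, on which \(M_2\) lies between \(1/\sqrt{k}\) and \(M_1\) and the friction length decreases monotonically in \(M_2\):

```python
import math

# Sketch: solve the working equation (37) for the exit Mach number M2
# by bisection, given the entrance Mach number M1 and 4fL/D.

def fld(M1, M2, k=1.4):
    """Right hand side of equation (37)."""
    return ((1 - k * M1**2) / (k * M1**2)
            - (1 - k * M2**2) / (k * M2**2)
            + math.log((M1 / M2) ** 2))

def exit_mach(M1, target, k=1.4, iters=100):
    lo, hi = 1 / math.sqrt(k), M1     # supersonic bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fld(M1, mid, k) > target:  # too much friction length: raise M2
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M2 = exit_mach(M1=5.0, target=2.0)
print(M2)  # fld(5.0, M2) reproduces 4fL/D = 2
```

The same routine can be pointed at other \(M_1\) and \(\dfrac{4fL}{D}\) values, e.g. for Example 11.14 below.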
This relationship shows the maximum limit that the Mach number can approach when the heat transfer is extraordinarily fast. In reality, even a small \(\dfrac{4\,f\,L}{D} > 2\) results in an entrance Mach number which is larger than 4.5. This velocity requires a large entrance length to achieve good heat transfer. With this conflicting mechanism, obviously the flow is closer to the Fanno flow model. Yet this model provides the directions of the heat transfer effects on the flow.
Example 11.14
Calculate the exit Mach number for pipe with \(\dfrac{4\,f\,L}{D} = 3\) under the assumption of the isothermal flow and supersonic flow. Estimate the heat transfer needed to achieve this flow.
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
There's no formal guideline about using $\tau$ instead of $2\pi$. But anyone who uses $\tau$ should remember that it is not standard notation. In particular, the notation $\tau$ for $2\pi$ is not used in textbooks, and those with little mathematical knowledge are unlikely to know what $\tau$ is supposed to mean.At the same time, if someone does use $\tau$ ...
I am sorry that you've had a bad initial experience on MSE. I have never heard of anyone encountering all the problems you describe. I suggest that you try the math chatroom. You can describe what's been happening to you and see if anyone there has had the same experiences, and perhaps someone there might have a suggestion for how to solve whatever has gone ...
A similar question was answered by the SE employee Shog9 a while ago:I hate to sound callous about this, but... This isn't a support group; y'all probably aren't trained to deal with the outpouring of grief and despair of someone you've never met and may have absolutely nothing in common with. I'm certainly not. Indeed, there's a decent chance that...
Well already there is a problem; to me it looks like you just said "2.3=6" ("two point three is equal to six"), which is .. crazy :)I think it's best to just use MathJax in most places, even for trivial math, since it is the most specific and clear way of rendering math that we have.. the initial performance penalty of the MathJax dependency is already ...
In general, I prefer more tags to fewer tags. Assuming I would like to read all calculus questions (heaven forbid), it's a lot easier to mark calculus as a favorite tag than to have to keep track of dozens of sub-tags.Related: I was just getting a little annoyed with another user who removed the in my opinion most relevant tag from a bunch of his (own) ...
As the author of the post who likely initiated this one, I would like to chip in. Jonas is known for very well-written and thought-out posts. In my reply I will simply list thoughts that come to my mind, which aren't necessarily related to one another:(1) Privacy policy: It stands to assume that a privacy policy exists to avoid ugly he says-she says ...
(I'm a new user so I don't have anything to contribute as far as norms here go, but I'm really interested in how to effectively convey information in an accessible way!)Like others have said, the most obvious downside is colorblind people and people using screen readers. Colorblind people will miss information and so you should make sure the color doesn't ...
Don't worry about it. $\ddot\smile$There are many reasons why older posts may not have comments, not the least of which is that comments are ephemeral. From the Help Center:Comments are temporary "Post-It" notes left on a question or answer. They can be up-voted (but not down-voted) and flagged, but do not generate reputation. There's no revision ...
There is nothing wrong in being witty.But. Question titles should be descriptive. As the help message for the title field puts it: "What's your math question? Be specific."That's especially important nowadays when there are lots and lots of questions on Math.SE.Good question titles are like (random examples from the current main page) 'If $f:X\to Y$ is ...
Don't post multiple questions as one question.Don't spam the site with too many questions.So, what should you do? Take it easy and ask a few questions at a time, respond to the comments, read the answers and take your time to absorb them. Come back the next day and ask a few more questions. And so on.(Thanks to Martin Sleziak for gathering the relevant ...
I would prefer that plagiarism were handled by the moderators, since I think it's best to minimize the amount of drama and mudslinging here on meta.However, this requires that the moderators do handle plagiarism allegations with diligence and seriousness. That the moderators were aware of amWhy's plagiarism almost a month ago and joked about it/did nothing,...
tl;dr: Your answer should have been a comment.StackExchange is a network of Q&A sites dedicated to matching high-quality answers to high-quality questions. The problem with your answer is that it was not high quality, not that it was really short. We are perfectly accepting of informative short answers.The problem with your answer is two-fold: ...
A problem is that it is not a semantically correct use.At the moment the formatting used by SE on this site for block-quotes is such that optically its usage for emphasis makes sense. Yet, this could change at any point in time. Moreover, already now I feel it is not really the case on the mobile site.In addition, imagine some machine reads the site to ...
Unless you are a moderator, there isn't really a method to communicate privately with another user. If a comment and reply were expected before every edit, the comments would become littered with messages about editing. However, if an edit might affect the meaning or tone of a post, a comment asking about the change would be a courtesy.
To answer to Jonas' question.How is plagiarism best handled by non-moderators?$(1)$ Flag the offending post for moderator attention. Try to provide as many details as possible in your flag.$(2)$ Leave a comment which indicates the source, and explain that it should be cited.$(3)$ Depending on the scenario, the user should contact the moderators ...
Moderator Erik Naslund remarked here:I would like to thank each user who pointed out and flagged these different threads, as discovering such plagiarism would be nearly impossible without your help.That could only happen because the original plagiarism was made public. In particular, I was aware of one instance but was willing to write it off as a ...
In $\LaTeX$ you would normally write $x^2=y^2$, with the punctuation outside the math. In $\LaTeX$ that does not cause the comma or period to get pushed to the next line. In MathJax sometimes it does, so I have begun writing $x^2=y^2,$ instead.If you use double dollar signs, thus $$ x^2=y^2, $$ then of course (either in $\LaTeX$ or in MathJax) you need the comma inside the display, since ...
I would not want such things to occur too much. If all posts had such a bounty, it would turn into chaos. But I don't think anything is wrong if you do this on occasion. I don't want to say that all must act grim and unhumorous to do math.Did this particular bounty bring about a solution with no flaws? Probably not. But did it hurt anything? Not in my ...
"Epidemic" suggests that the number of such questions has been growing dramatically, but that's not the case as far as I can see. They've been with us for all of the year I've been using MSE.There's also a lingering debate on how to deal with them. Many commenters are quick to chide new users for their use of the imperative mood specifically, which I ...
It appears to me that this is definitely behaviour in the spirit of the review system. Therefore you are definitely not doing something wrong.If this were the Low Quality queue or the Close Votes queue, I would have nothing to add.However, I would like to add that the First Posts review queue is a bit special, in that it provides us with an opportunity ...
(I've never used Meta before -- I noted there's no answer yet, but a lot of comments, so I hope I'm not doing something wrong)To first give an example of what I would consider good coloring; I like the explanation of Fourier Transform, on betterexplained and technically copied from altdevblogaday):To be fair, he could have made it a little easier for ...
Remember: accepting an answer on any of the StackExchange site is not so much an endorsement of that answer being objectively the best, but that answer being the most helpful to you, the person who posted the original query. So with that in mind, you should feel free to accept an answer, even though the question is Community Wiki.On the other hand, while ...
If you haven't yet received an answer, answer it yourself. That way, no one will add an answer which merely expresses what you'd already discovered yourself (which would be a waste of time) and your question can be taken off the unanswered list.If you have received a (useful) answer, then maybe leave a comment on your question. If an answer helped you ...
Ananth, I do wonder what you mean by "reputed" users. If you mean the experienced ones on Math.SE, of course they should be role models for new users. Yes, if no one answers a question asked by a new user, one should step in and seek clarification. It was really nice for you to be so welcoming to the new user when she posted that other question and told her ...
Your answer is definitely better as a comment; this is less a policy per-se than perhaps a thing of mathematical understanding. The question of:Solve in the positive integers...would generally mean "find every solution". I've certainly encountered questions of the form "Are there solutions to this equations?" or "Aside from these solutions I found, are ...
A couple of times I've had a problem, not with math.SE but with StackOverflow, in which pages open up with incorrect formatting of headings, menus, etc., apparently due to the CSS for the site getting corrupted in my browser's cache. So it may be worth emptying your cache if the site format looks weird/unreadable.Apart from that, I'm sorry you had a bad ...
I'm going to get a little personal here, since I have experienced suicidal ideation. About 13 years ago, I found myself living on the streets because of severe depression. After a couple of weeks, I couldn't handle it anymore and reached out to a number of agencies who were supposed to deal with these situations. When the suicide hotline asked for medical ...
Here's a puzzle for when the site activity is low! In the diagram below there are two quarter circles of radius 1 and 2 intersecting as shown. What is the area of their intersection (i.e. the shaded region)?
\(circle_1: x^2 + y^2 = 4 \qquad r_1 = 2\\ circle_2: (x-2)^2 + (y-2)^2 = 1 \qquad r_2 = 1\\\)
intersections \(S_1 \text{ and } S_2\):
\(2 x_{s}^2 - 2a \cdot x_s + a^2 - r_1^2 = 0 \qquad a = \frac{3r_1^2-r_2^2}{2r_1} = \frac{11}{4}\\ \vec{S_1} =\dbinom{ \frac18(11-\sqrt{7}) }{ \frac18(11+\sqrt{7}) }\\ \vec{S_2} =\dbinom{ \frac18(11+\sqrt{7}) }{ \frac18(11-\sqrt{7}) }\)
sector angles (cosine rule) \(\alpha_1 \text{ and } \alpha_2\): \( \cos{(\alpha_1)} = \frac{57}{64} \qquad \alpha_1 = 27.0481105464^{\circ}\\ \cos{(\alpha_2)} = \frac{9}{16} \qquad \alpha_2 = 55.7711336722^{\circ}\\\) Areas of sectors \(A_{s_1} \text{ and } A_{s_2}\):
\( A_{s_1} = 4\pi \frac{27.0481105464^{\circ}}{360^{\circ}}\\ A_{s_2} = \pi \frac{55.7711336722^{\circ}}{360^{\circ}}\\\)
Areas of triangles \(A_{t_1} \text{ and } A_{t_2}\):
\( A_{t_1} = \frac{1}{2} \cdot \left| \vec{S_2}\times \vec{S_1} \right| = \frac{11}{32}\sqrt{7}\\ A_{t_2} = \frac{1}{2} \cdot \left| \left[\vec{S_1}-\binom{2}{2}\right] \times \left[\vec{S_2}-\binom{2}{2}\right] \right|= \frac{5}{32}\sqrt{7} \)
\( \text{The area of their intersection }\quad A = A_{s_1}-A_{t_1}+A_{s_2}-A_{t_2}\)
\(A=\frac{\pi}{360}(4\cdot 27.0481105464^{\circ}+55.7711336722^{\circ})-\frac{ \sqrt{7} }{2}\\ A= 1.43085212603-1.32287565553\\ \mathbf{A=0.10797647050}\).
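For anyone who wants a quick numerical cross-check, the same number falls out of the standard circle–circle "lens" area formula (a Python sketch; the two acos terms are the half-angles \(\alpha_1/2\) and \(\alpha_2/2\) from the cosine-rule step above):

```python
from math import acos, sqrt

def lens_area(r1, r2, d):
    # area of the intersection ("lens") of two circles with radii r1, r2
    # whose centers lie a distance d apart (assumes the circles overlap)
    a1 = acos((d*d + r1*r1 - r2*r2) / (2*d*r1))  # half the central angle in circle 1
    a2 = acos((d*d + r2*r2 - r1*r1) / (2*d*r2))  # half the central angle in circle 2
    tri = 0.5 * sqrt((-d+r1+r2) * (d+r1-r2) * (d-r1+r2) * (d+r1+r2))
    return r1*r1*a1 + r2*r2*a2 - tri

# centers (0,0) and (2,2), so d = sqrt(8)
print(lens_area(2, 1, sqrt(8)))  # ~0.10797647
```

The quarter circles only matter for drawing the picture; the lens itself lies inside both quarter discs, so the full-circle formula gives the shaded area directly.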
I could have solved this with some messy Algebra, but....I decided to cheat....LOL!!!
Placing the center of the larger circle at the origin, we have these two equations :
x^2 + y^2 = 4 and (x - 2)^2 + (y - 2)^2 = 1
Here's a diagram :
The area of triangle ABC = 2*sin(27.048) = about .909 units^2
And the area of the sector ABC = about .944 units^2
Similarly, the area of triangle DAB = (1/2)sin(55.771) = about .413 units^2
And the area of sector DAB = about .4867 units^2
So...the approximate total area of the area of intersection = [ .944 - .909] + [ .4867 - .413] = about .1087 units^2
OOPS....you are correct Alan....I have provided an edit......duh!!!....I think I mis-placed my decimal point.....Is my answer now correct???
Yes I know that Alan - That is why I wrote that it was not correct.
I suppose i should jut have deleted it. I will delete it now.
I am working on a solution that uses co-ordinate geometry. I am sure I can do it that way but it is quite painful especially if I want to keep the answer exact. ://
There you go Chris, Yours is wrong too. LOL
But your diagram is fantastic.
I am only joking Chris, your answer is great too :))
Thanks for this distraction Alan. I have enjoyed it.
I find it fascinating that the question is much more complicated than it appears to be at first sight.
I just had my decimal point in the wrong place....Alan said mine is [fairly] correct....
Soooooo.....
A BIG GOLD STAR FOR HEUREKA AND ME....!!!!
And for Melody????....A BAG OF SWITCHES....!!!!
Area of the square : 2 x 2 = 4
Area of Big arc = 1/4 pi 2^2 = pi
Area of small arc = 1/4 pi 1^2 = 1/4 pi
Area OUTSIDE of Small arc = 4 - 1/4 pi
Area OUTSIDE of Big arc = 4 - pi
Add the two outside areas and subtract the square area
4 - 1/4 pi + 4 - pi - 4 = 4 - 5/4 pi ≈ .073 square units
This is the same as Melody's first answer Guest. Good try, but unfortunately it isn't correct - see my first reply above to see why. |
Table of Contents
Coupled coincidence point results for contraction of $C$-class mappings in ordered uniform spaces Article References A. H. Ansari, D. Binbasioglu, D. Turkoglu 3-13
Some weaker sufficient conditions of $L$-index boundedness in direction for functions analytic in the unit ball Article References A. I. Bandura 14-25
Asymptotics of the entire functions with $\upsilon$-density of zeros along the logarithmic spirals Article References M. V. Zabolotskyj, Yu. V. Basiuk 26-32
Representation of a quotient of solutions of a four-term linear recurrence relation in the form of a branched continued fraction Article References I. Bilanyk, D. I. Bodnar, L. Buyak 33-41
Note on bases in algebras of analytic functions on Banach spaces Article References I. V. Chernega, A. V. Zagorodnyuk 42-47
Spectral approximations of strongly degenerate elliptic differential operators Article References M. I. Dmytryshyn, O. V. Lopushansky 48-53
On some of convergence domains of multidimensional S-fractions with independent variables Article References R. I. Dmytryshyn 54-58
Ricci soliton and Ricci almost soliton within the framework of Kenmotsu manifold Article References A. Ghosh 59-69
Interconnection between Wick multiplication and integration on spaces of nonregular generalized functions in the Levy white noise analysis Article References N. A. Kachanovsky, T. Kachanovska 70-88
Algebraic basis of the algebra of block-symmetric polynomials on $\ell_1 \oplus \ell_{\infty}$ Article References V. V. Kravtsiv 89-95
The relationship between algebraic equations and $(n,m)$-forms, their degrees and recurrent fractions Article References I. I. Lishchynsky 96-106
Inverse problem for $2b$-order differential equation with a time-fractional derivative Article References A. O. Lopushansky, H. P. Lopushanska 107-118
Some inequalities for strongly $(p,h)$-harmonic convex functions Article References M. A. Noor, K. I. Noor, S. Iftikhar 119-135
Characterizations of regular and intra-regular ordered $\Gamma$-semihypergroups in terms of bi-$\Gamma$-hyperideals Article References S. Omidi, B. Davvaz, K. Hila 136-151
On a new application of quasi power increasing sequences Article References H. S. Özarslan 152-157
On approximation of homomorphisms of algebras of entire functions on Banach spaces Article References H. M. Pryimak 158-162
On the solutions of a class of nonlinear integral equations in cone $b$-metric spaces over Banach algebras Article References L. T. Quan, T. Van An 163-178
Classification of generalized ternary quadratic quasigroup functional equations of the length three Article References F. M. Sokhatsky, A. V. Tarasevych 179-192
On integral representation of the solutions of a model $\vec{2b}$-parabolic boundary value problem Article References N. I. Turchyna, S. D. Ivasyshen 193-203
The journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported. |
Active Galaxies, Quasars, and Supermassive Black Holes
Exercises: Active Galaxies, Quasars, and Supermassive Black Holes
Collaborative Group Activities
When quasars were first discovered and the source of their great energy was unknown, some astronomers searched for evidence that quasars are much nearer to us than their redshifts imply. (That way, they would not have to produce so much energy to look as bright as they do.) One way was to find a “mismatched pair”—a quasar and a galaxy with different redshifts that lie in very nearly the same direction in the sky. Suppose you do find one and only one galaxy with a quasar very close by, and the redshift of the quasar is six times larger than that of the galaxy. Have your group discuss whether you could then conclude that the two objects are at the same distance and that redshift is not a reliable indicator of distance. Why? Suppose you found three such pairs, each with different mismatched redshifts? Suppose every galaxy has a nearby quasar with a different redshift. How would your answer change and why?
Large ground-based telescopes typically can grant time to only one out of every four astronomers who apply for observing time. One prominent astronomer tried for several years to establish that the redshifts of quasars do not indicate their distances. At first, he was given time on the world’s largest telescope, but eventually it became clearer that quasars were just the centers of active galaxies and that their redshifts really did indicate distance. At that point, he was denied observing time by the committee of astronomers who reviewed such proposals. Suppose your group had been the committee. What decision would you have made? Why? (In general, what criteria should astronomers have for allowing astronomers whose views completely disagree with the prevailing opinion to be able to pursue their research?)
Based on the information in this chapter and in Black Holes and Curved Spacetime, have your group discuss what it would be like near the event horizon of a supermassive black hole in a quasar or active galaxy. Make a list of all the reasons a trip to that region would not be good for your health. Be specific.
Before we understood that the energy of quasars comes from supermassive black holes, astronomers were baffled by how such small regions could give off so much energy. A variety of models were suggested, some involving new physics or pretty “far out” ideas from current physics. Can your group come up with some areas of astronomy that you have studied in this course where we don’t yet have an explanation for something happening in the cosmos?
Review Questions
Describe some differences between quasars and normal galaxies.
Describe the arguments supporting the idea that quasars are at the distances indicated by their redshifts.
In what ways are active galaxies like quasars but different from normal galaxies?
Why could the concentration of matter at the center of an active galaxy like M87 not be made of stars?
Describe the process by which the action of a black hole can explain the energy radiated by quasars.
Describe the observations that convinced astronomers that M87 is an active galaxy.
Why do astronomers believe that quasars represent an early stage in the evolution of galaxies?
Why were quasars and active galaxies not initially recognized as being “special” in some way?
What do we now understand to be the primary difference between normal galaxies and active galaxies?
What is the typical structure we observe in a quasar at radio frequencies?
What evidence do we have that the luminous central region of a quasar is small and compact?
Thought Questions
Suppose you observe a star-like object in the sky. How can you determine whether it is actually a star or a quasar?
Why don’t any of the methods for establishing distances to galaxies, described in Galaxies (other than Hubble’s law itself), work for quasars?
One of the early hypotheses to explain the high redshifts of quasars was that these objects had been ejected at very high speeds from other galaxies. This idea was rejected, because no quasars with large blueshifts have been found. Explain why we would expect to see quasars with both blueshifted and redshifted lines if they were ejected from nearby galaxies.
A friend of yours who has watched many Star Trek episodes and movies says, “I thought that black holes pulled everything into them. Why then do astronomers think that black holes can explain the great outpouring of energy from quasars?” How would you respond?
Could the Milky Way ever become an active galaxy? Is it likely to ever be as luminous as a quasar?
Why are quasars generally so much more luminous (why do they put out so much more energy) than active galaxies?
Suppose we detect a powerful radio source with a radio telescope. How could we determine whether or not this was a newly discovered quasar and not some nearby radio transmission?
A friend tries to convince you that she can easily see a quasar in her backyard telescope. Would you believe her claim?
Figuring for Yourself
Show that no matter how big a redshift (z) we measure, v/c will never be greater than 1. (In other words, no galaxy we observe can be moving away faster than the speed of light.)
If a quasar has a redshift of 3.3, at what fraction of the speed of light is it moving away from us?
If a quasar is moving away from us at v/c = 0.8, what is the measured redshift?
In the chapter, we discussed that the largest redshifts found so far are greater than 6. Suppose we find a quasar with a redshift of 6.1. With what fraction of the speed of light is it moving away from us?
Rapid variability in quasars indicates that the region in which the energy is generated must be small. You can show why this is true. Suppose, for example, that the region in which the energy is generated is a transparent sphere 1 light-year in diameter. Suppose that in 1 s this region brightens by a factor of 10 and remains bright for two years, after which it returns to its original luminosity. Draw its light curve (a graph of its brightness over time) as viewed from Earth.
Large redshifts move the positions of spectral lines to longer wavelengths and change what can be observed from the ground. For example, suppose a quasar has a redshift of [latex]\frac{\Delta {\lambda}}{{\lambda}}=4.1[/latex]. At what wavelength would you make observations in order to detect its Lyman α line of hydrogen, which has a laboratory or rest wavelength of 121.6 nm? Would this line be observable with a ground-based telescope in a quasar with zero redshift? Would it be observable from the ground in a quasar with a redshift of [latex]\frac{\Delta {\lambda}}{{\lambda}}=4.1?[/latex]
Once again in this chapter, we see the use of Kepler’s third law to estimate the mass of supermassive black holes. In the case of NGC 4261, this chapter supplied the result of the calculation of the mass of the black hole in NGC 4261. In order to get this answer, astronomers had to measure the velocity of particles in the ring of dust and gas that surrounds the black hole. How high were these velocities? Turn Kepler’s third law around and use the information given in this chapter about the galaxy NGC 4261—the mass of the black hole at its center and the diameter of the surrounding ring of dust and gas—to calculate how long it would take a dust particle in the ring to complete a single orbit around the black hole. Assume that the only force acting on the dust particle is the gravitational force exerted by the black hole. Calculate the velocity of the dust particle in km/s.
In the Check Your Learning section of two sections prior, you were told that several lines of hydrogen absorption in the visible spectrum have rest wavelengths of 410 nm, 434 nm, 486 nm, and 656 nm. In a spectrum of a distant galaxy, these same lines are observed to have wavelengths of 492 nm, 521 nm, 583 nm, and 787 nm, respectively. The example demonstrated that z = 0.20 for the 410 nm line. Show that you will obtain the same redshift regardless of which absorption line you measure.
In the Check Your Learning section of two sections prior, the author commented that even at z = 0.2, there is already an 11% deviation between the relativistic and the classical solution. What is the percentage difference between the classical and relativistic results at z = 0.1? What is it for z = 0.5? What is it for z = 1?
The quasar that appears the brightest in our sky, 3C 273, is located at a distance of 2.4 billion light-years. The Sun would have to be viewed from a distance of 1300 light-years to have the same apparent magnitude as 3C 273. Using the inverse square law for light, estimate the luminosity of 3C 273 in solar units. |
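Several of the “Figuring for Yourself” items above hinge on the relativistic Doppler relation 1 + z = sqrt((1 + v/c)/(1 − v/c)). As a sketch (not part of the textbook), the conversion from redshift to v/c, and the deviation from the classical rule v/c = z quoted for z = 0.2, can be checked like this:

```python
def beta_relativistic(z):
    # v/c from redshift z via 1 + z = sqrt((1 + v/c) / (1 - v/c))
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

# v/c stays below 1 for every z >= 0, unlike the classical v/c = z
print(beta_relativistic(3.3))  # ~0.897
print(beta_relativistic(6.1))  # ~0.961

# deviation between the classical (v/c = z) and relativistic results at z = 0.2
z = 0.2
deviation = (z - beta_relativistic(z)) / beta_relativistic(z)
print(deviation)               # ~0.11, i.e. about the 11% quoted in the exercises
```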
Figure 11.17 describes the flow of gas from left to right. The heat transfer upstream (or downstream) is assumed to be negligible. Hence, the energy equation can be written as the following:
\[
\dfrac{d\, Q }{ \dot{m} } = c_p dT + d \dfrac{U^2 }{ 2} = c_p dT_{0} \label{isothermal:eq:CV} \tag{1} \] The momentum equation is written as the following \[ -A\, dP - \tau_{w}\, dA_{\text{wetted area}} = \dot{m}\, dU \label{isothermal:eq:momentum} \tag{2} \]
where \(A\) is the cross section area (it doesn't have to be a perfect circle; a close enough shape is sufficient.). The shear stress is the force per area that acts on the fluid by the tube wall. The \(A_{wetted\;\;area}\) is the area that shear stress acts on. The second law of thermodynamics reads
\[
{s_2 - s_1 \over C_p} = \ln {T_2 \over T_1 } - {k -1 \over k} \ln {P_2 \over P_1} \label{isothermal:eq:2law} \tag{3} \] The mass conservation is reduced to \[ \dot {m} = \text{constant} = \rho\, U\, A \label{isothermal:eq:mass} \tag{4} \] Again it is assumed that the gas is a perfect gas and therefore, the equation of state is expressed as the following: \[ P = \rho\, R\, T \label{isothermal:eq:state} \tag{5} \]

Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
$$ \mathcal{F} \left \{ \mathbf{x\cdot y} \right \}_k \ \stackrel{\mathrm{def}}{=} \sum_{n=0}^{N-1} x_n \cdot y_n \cdot e^{-\frac{2\pi i}{N} k n} =\frac{1}{N} (\mathbf{X * Y_N})_k, \,$$ which is the circular convolution of $\mathbf{X}$ and $\mathbf{Y}$.
I feel it's incorrect because writing $y_n$ in the Fourier basis, and multiplying it by $e^{-\frac{2\pi i}{N} k n}$, will result in a shift in index by $k$, and that means the result should be the cross-correlation of the two signals.
And secondly, if either the formula or my argument is correct, why does this not hold for continuous signals?
P.S. The question has been asked here as well since I was unaware of the most proper domain for my question. |
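Edit: the quoted identity itself (DFT of a pointwise product equals 1/N times the circular convolution of the spectra) is easy to verify numerically; here is a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)
y = rng.standard_normal(N)

lhs = np.fft.fft(x * y)                    # DFT of the pointwise product
X, Y = np.fft.fft(x), np.fft.fft(y)
# circular convolution: (X * Y)_k = sum_m X_m * Y_{(k-m) mod N}
circ = np.array([sum(X[m] * Y[(k - m) % N] for m in range(N)) for k in range(N)])

print(np.allclose(lhs, circ / N))          # True
```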
Hint Applet (Trigonometric Substitution) Sample Problem
Revision as of 14:46, 21 August 2011 Flash Applets embedded in WeBWorK questions u-subsitution Example Sample Problem with uSub.swf embedded
A standard WeBWorK PG file with an embedded applet has six sections:
A tagging and description section, that describes the problem for future users and authors; an initialization section, that loads required macros for the problem; a problem set-up section that sets variables specific to the problem; an Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet); a text section, that gives the text that is shown to the student; and an answer, hint and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, gives hints after a given number of tries, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above.
Other applet sample problems: GraphLimit Flash Applet Sample Problem GraphLimit Flash Applet Sample Problem 2 Derivative Graph Matching Flash Applet Sample Problem Hint Applet (Trigonometric Substitution) Sample Problem
PG problem file Explanation ##DESCRIPTION ##KEYWORDS('integrals', 'trigonometric','substitution') ## DBsubject('Calculus') ## DBchapter('Techniques of Integration') ## DBsection('Trigonometric Substitution') ## Date('8/20/11') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2010') ## AuthorText1('') ## Section1('') ## Problem1('20') ##ENDDESCRIPTION ######################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ########################################
This is the tagging and description section of the problem file.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", "parserFormulaUpToConstant.pl", );
This is the initialization section of the problem file.
The
# Set up problem TEXT(beginproblem()); $showPartialCorrectAnswers = 1; $a = random(2,9,1); $a2 = $a*$a; $a3 = $a2*$a; $a4 = $a2*$a2; $a4_3 = 3*$a4; $a2_5 = 5*$a2; $funct = FormulaUpToConstant("-sqrt{$a2-x^2}/{x}-asin({x}/{$a})");
This is the problem set-up section of the problem file.
The
################################### # Create link to applet ################################### $appletName = "trigSubWW"; $applet = FlashApplet( codebase => findAppletCodebase("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, # number of attempts to initialize applet height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, ); ################################### # Configure applet ################################### $applet->configuration(qq {<xml><trigString>sin</trigString></xml>}); $applet->initialState(qq {<xml><trigString>sin</trigString></xml>}); TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer.<br/> It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
This is the Applet link section of the problem file.
Those portions of the code that begin the line with
You must include the section that follows
The lines
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
The text between the
BEGIN_TEXT Evaluate the indefinite integral. $BR \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx \] $BR \{ans_rule( 60) \} END_TEXT ################################## Context()->texStrings;
This is the text section of the problem file.
################################### # # Answers # ## answer evaluators ANS( $funct->cmp() ); TEXT($PAR, $BBOLD, $BITALIC, "Hi $studentLogin, If you don't get this in 5 tries I'll give you a hint with an applet to help you out.", $EITALIC, $EBOLD, $PAR); $showHint=5; Context()->normalStrings; TEXT(hint( $PAR, MODES(TeX=>'object code', HTML=>$applet->insertAll( debug =>0, reinitialize_button => 0, includeAnswerBox=>0, )) )); ################################## Context()->texStrings; SOLUTION(EV3(<<'END_SOLUTION')); $BBOLD Solution: $EBOLD $PAR To evaluate this integral use a trigonometric substitution. For this problem use the sine substitution. \[x = {$a}\sin(\theta)\] $BR$BR Before proceeding note that \(\sin\theta=\frac{x}{$a}\), and \(\cos\theta=\frac{\sqrt{$a2-x^2}}{$a}\). To see this, label a right triangle so that the sine is \(x/$a\). We will have the opposite side with length \(x\), and the hypotenuse with length \($a\), so the adjacent side has length \(\sqrt{$a2-x^2}\). $BR$BR With the substitution \[x = {$a}\sin\theta\] \[dx = {$a}\cos\theta \; d\theta\] $BR$BR Therefore: \[\int\frac{\sqrt{$a2 - x^2}}{x^2}dx= \int \frac{{$a}\cos\theta\sqrt{$a2 - {$a2}\sin^2\theta}} {{$a2}\sin^2\theta} \; d\theta\] \[=\int \frac{\cos^2\theta}{\sin^2\theta} \; d\theta\] \[=\int \cot^2\theta \; d\theta\] \[=\int \csc^2\theta-1 \; d\theta\] \[=-\cot\theta-\theta+C\] $BR$BR Substituting back in terms of \(x\) yields: \[-\cot\theta-\theta+C =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C \] so \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C\] END_SOLUTION Context()->normalStrings; ################################## ENDDOCUMENT();
This is the answer, hint and solution section of the problem file.
The |
Coupling Heat Transfer with Subsurface Porous Media Flow
In the second part of our Geothermal Energy series, we focus on the coupled heat transport and subsurface flow processes that determine the thermal development of the subsurface due to geothermal heat production. The described processes are demonstrated in an example model of a hydrothermal doublet system.
Deep Geothermal Energy: The Big Uncertain Potential
One of the greatest challenges in geothermal energy production is minimizing the prospecting risk. How can you be sure that the desired production site is appropriate for, let’s say, 30 years of heat extraction? Usually, only very little information is available about the local subsurface properties and it is typically afflicted with large uncertainties.
Over the last decades, numerical models have become an important tool for estimating risks by performing parametric studies within reasonable ranges of uncertainty. Today, I will give a brief introduction to the mathematical description of the coupled subsurface flow and heat transport problem that needs to be solved in many geothermal applications. I will also show you how to use COMSOL software as an appropriate tool for studying and forecasting the performance of (hydro-)geothermal systems.
Governing Equations in Hydrothermal Systems
The heat transport in the subsurface is described by the heat transport equation:
(1)

\[(\rho C_p)_{eq} \frac{\partial T}{\partial t} + \rho C_p\, \mathbf{u} \cdot \nabla T = \nabla \cdot \left( k_{eq} \nabla T \right) + Q\]
Heat is balanced by conduction and convection processes and can be generated or lost via the source term, Q. A special feature of the
Heat Transfer in Porous Media interface is the implemented Geothermal Heating feature, represented as a domain condition: Q_{geo}.
There is also another feature that makes the life of a geothermal energy modeler a little easier. It’s possible to implement an averaged representation of the thermal parameters, composed from the rock matrix and the groundwater using the matrix volume fraction, \theta, as a weighting factor. You may choose between volume and power law averaging for several immobile solids and fluids.
In the case of volume averaging, the volumetric heat capacity in the heat transport equation becomes:
(2)

\[(\rho C_p)_{eq} = \theta\, \rho_s C_{p,s} + (1-\theta)\, \rho_f C_{p,f}\]
and the thermal conductivity becomes:
(3)

\[k_{eq} = \theta\, k_s + (1-\theta)\, k_f\]
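As a rough numerical sketch of these volume-averaging rules (the material values below are illustrative assumptions for a water-saturated rock, not values from the model):

```python
theta = 0.90                             # solid (rock matrix) volume fraction, assumed
rho_s, cp_s, k_s = 2700.0, 850.0, 3.0    # rock: density [kg/m^3], specific heat [J/(kg*K)], conductivity [W/(m*K)]
rho_f, cp_f, k_f = 1000.0, 4186.0, 0.6   # groundwater, same units

# volume-averaged volumetric heat capacity and thermal conductivity
rho_cp_eq = theta * rho_s * cp_s + (1.0 - theta) * rho_f * cp_f
k_eq = theta * k_s + (1.0 - theta) * k_f

print(rho_cp_eq)   # ~2.48e6 J/(m^3*K)
print(k_eq)        # ~2.76 W/(m*K)
```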
Solving the heat transport properly requires incorporating the flow field. Generally, there can be various situations in the subsurface requiring different approaches to describe the flow mathematically. If the focus is on the micro scale and you want to resolve the flow in the pore space, you need to solve the creeping flow or Stokes flow equations. In partially saturated zones, you would solve Richards’ equation, as it is often done in studies concerning environmental pollution (see our past Simulating Pesticide Runoff, the Effects of Aldicarb blog post, for instance).
However, the fully-saturated and mainly pressure-driven flows in deep geothermal strata are sufficiently described by Darcy’s law:
(4)

\[\mathbf{u} = -\frac{\kappa}{\mu} \nabla p\]
where the velocity field, \mathbf{u}, depends on the permeability, \kappa, the fluid’s dynamic viscosity, \mu, and is driven by a pressure gradient, p. Darcy’s law is then combined with the continuity equation:
(5)

\[\frac{\partial}{\partial t}\left(\rho\, \epsilon_p\right) + \nabla \cdot \left(\rho\, \mathbf{u}\right) = Q_m\]
If your scenario concerns long geothermal time scales, the time dependence due to storage effects in the flow is negligible. Therefore, the first term on the left-hand side of the equation above vanishes because the density, \rho, and the porosity, \epsilon_p, can be assumed to be constant. Usually, the temperature dependencies of the hydraulic properties are negligible. Thus, the (stationary) flow equations are independent of the (time-dependent) heat transfer equations. In some cases, especially if the number of degrees of freedom is large, it can make sense to utilize the independence by splitting the problem into one stationary and one time-dependent study step.
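A minimal sketch of the Darcy velocity from Equation (4); the permeability, viscosity, and pressure gradient below are assumed, order-of-magnitude aquifer values only:

```python
kappa = 1.0e-13     # permeability [m^2], assumed (sandstone-like)
mu = 3.0e-4         # dynamic viscosity of warm groundwater [Pa*s], assumed
dp_dx = -1.0e3      # pressure gradient along x [Pa/m], assumed

u = -(kappa / mu) * dp_dx   # Darcy velocity [m/s]
print(u)                    # ~3.3e-7 m/s, i.e. on the order of 10 m/yr
```

Velocities of this magnitude are one reason the long-time-scale heat transport is so sensitive to the flow field: over a 30-year production period even a slow Darcy flux moves heat tens of meters.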
Fracture Flow and Poroelasticity
Fracture flow may locally dominate the flow regime in geothermal systems, such as in karst aquifer systems. The Subsurface Flow Module offers the
Fracture Flow interface for a 2D representation of the Darcy flow field in fractures and cracks.
Hydrothermal heat extraction systems usually consist of one or more injection and production wells. Those are in many cases realized as separate boreholes, but the modern approach is to create one (or more) multilateral wells. There are even tactics that consist of single boreholes with separate injection and production zones.
Note that artificial pressure changes due to water injection and extraction can influence the structure of the porous medium and produce hydraulic fracturing. To take these effects into account, you can perform poroelastic analyses, but we will not consider these here.
COMSOL Model of a Hydrothermal Application: A Geothermal Doublet
It is easy to set up a COMSOL Multiphysics model that features long time predictions for a hydro-geothermal application.
The model region contains three geologic layers with different thermal and hydraulic properties in a box with a volume V≈500 [m³]. The box represents a section of a geothermal production site that is ranged by a large fault zone. The layer elevations are interpolation functions from an external data set. The concerned aquifer is fully saturated and confined on top and bottom by aquitards (impermeable beds). The temperature distribution is generally a factor of uncertainty, but a good guess is to assume a geothermal gradient of 0.03 [°C/m], leading to an initial temperature distribution T_0(z) = 10 [°C] − z·0.03 [°C/m].

Hydrothermal doublet system in a layered subsurface domain, ranged by a fault zone. The edge is about 500 meters long. The left drilling is the injection well, the production well is on the right. The lateral distance between the wells is about 120 meters.
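The initial temperature profile can be written down directly (assuming z is the elevation in meters with the surface at z = 0 and z negative below it; that sign convention is an assumption here):

```python
def T0(z):
    # initial temperature [deg C] at elevation z [m], geothermal gradient 0.03 deg C/m
    return 10.0 - 0.03 * z

print(T0(0.0))       # 10.0 deg C at the surface
print(T0(-2000.0))   # 70.0 deg C two kilometers below the surface
```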
COMSOL Multiphysics creates a mesh that is perfectly fine for this approach, except for one detail — the mesh on the wells is refined to resolve the expected high gradients in that area.
Now, let’s crank the heat up! Geothermal groundwater is pumped (produced) through the production well on the right at a rate of 50 [l/s]. The well is implemented as a cylinder that was cut out of the geometry to allow inlet and outlet boundary conditions for the flow. The extracted water is, after using it for heat or power generation, re-injected by the left well at the same rate, but with a lower temperature (in this case 5 [°C]).
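The pumping rate and re-injection temperature already fix the order of magnitude of the recoverable heat. A back-of-the-envelope sketch: the 50 l/s rate and 5 °C injection temperature are from the text, while the production temperature and the water properties (density, heat capacity) are assumed illustrative values.

```python
def thermal_power_w(flow_rate_l_s, t_production_c, t_injection_c,
                    rho=1000.0, cp=4186.0):
    """Extracted thermal power P = rho * cp * Q * (T_prod - T_inj),
    with the volumetric rate Q converted from l/s to m^3/s."""
    q_m3_s = flow_rate_l_s / 1000.0
    return rho * cp * q_m3_s * (t_production_c - t_injection_c)

# Assuming a (hypothetical) production temperature of 55 degC:
print(thermal_power_w(50.0, 55.0, 5.0) / 1e6, "MW")
```

With these assumptions the doublet delivers on the order of 10 MW of thermal power, which is the quantity the long-term simulation below tracks as the reservoir cools.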
The resulting flow field and temperature distribution after 30 years of heat production are displayed below:
Result after 30 years of heat production: Hydraulic connection between the production and injection zones and temperature distribution along the flow paths. Note that only the injection and production zones of the boreholes are considered. The rest of the boreholes are not implemented, in order to reduce the meshing effort.
The model is a suitable tool for estimating the development of a geothermal site under varied conditions. For example, how is the production temperature affected by the lateral distance of the wells? Is it worthwhile to reach a large spread or is a moderate distance sufficient?
This can be studied by performing a parametric study by varying the well distance:
Flow paths and temperature distribution between the wells for different lateral distances. The graph shows the production temperature after reaching stationary conditions as a function of the lateral distance.
With this model, different borehole systems can easily be realized just by changing the positions of the injection/production cylinders. For example, here are the results of a single-borehole system:
Results of a single-borehole approach after 30 years of heat production. The vertical distance between the injection (up) and production (down) zones is 130 meters.
So far, we have only looked at aquifers without ambient groundwater movement. What happens if there is a hydraulic gradient that leads to groundwater flow?
The following figure shows the same situation as the figure above, except that now there is a hydraulic head gradient of ∇H = 0.01 [m/m], leading to a superposed flow field:
Single borehole after 30 years of heat production and overlapping groundwater flow due to a horizontal pressure gradient.

Other Posts in This Series

Modeling Geothermal Processes with COMSOL Software
Geothermal Energy: Using the Earth to Heat and Cool Buildings

Further Reading

Download the Geothermal Doublet tutorial. Explore the Subsurface Flow Module.

Related papers and posters presented at the COMSOL Conference:

Hydrodynamic and Thermal Modeling in a Deep Geothermal Aquifer, Faulted Sedimentary Basin, France
Simulation of Deep Geothermal Heat Production
Full Coupling of Flow, Thermal and Mechanical Effects in COMSOL Multiphysics® for Simulation of Enhanced Geothermal Reservoirs
Multiphysics Between Deep Geothermal Water Cycle, Surface Heat Exchanger Cycle and Geothermal Power Plant Cycle
Modelling Reservoir Stimulation in Enhanced Geothermal Systems
Revision as of 08:23, 25 April 2019

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1]

Definitions
Their formulation is similar to that of superstrong cardinals. More precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2]
Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.
Elementary embedding definitions

In the following, $j:V\to M$ denotes a nontrivial elementary embedding into a transitive class $M$ with critical point $\kappa$, and $j^n$ its $n$-fold iterate.

$\kappa$ is almost n-huge with target $\lambda$ iff there is such a $j$ with $\lambda=j^n(\kappa)$ and $M$ closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).

$\kappa$ is n-huge with target $\lambda$ iff there is such a $j$ with $\lambda=j^n(\kappa)$ and $M$ closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).

$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).

$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.

$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals
A cardinal $\kappa$ is
$\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Hyperhuge cardinals
A cardinal $\kappa$ is
$\lambda$-hyperhuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some inner model $M$ such that $\mathrm{crit}(j) = \kappa$, $j(\kappa)>\lambda$ and $^{j(\lambda)}M\subseteq M$. A cardinal is hyperhuge if it is $\lambda$-hyperhuge for all $\lambda>\kappa$.[3, 4]

Huge* cardinals
A cardinal $κ$ is
$n$-huge* if for some $α > κ$, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n (κ) < α$.[5]
The hugeness* variant is formulated in a way that allows for a virtual variant consistent with $V=L$: A cardinal κ is
virtually $n$-huge* if for some $α > κ$, in a set-forcing extension, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n (κ) < α$.[5]

Ultrafilter definition
The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that:
$$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$
Where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are.
As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that the set of all $x\subseteq\lambda$ of order-type $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."
As for hyperhugeness, the following are equivalent:[4]
$κ$ is $λ$-hyperhuge; there exists $μ > λ$ and a normal, fine, $κ$-complete ultrafilter on $[μ]^λ_{∗κ} := \{s ⊂ μ : |s| = λ, |s ∩ κ| ∈ κ, \mathrm{otp}(s ∩ λ) < κ\}$; $\mathbb{L}_{κ,κ}$ is $[μ]^λ_{∗κ}$-$κ$-compact for type omission.

Coherent sequence characterization of almost hugeness

Consistency strength and size
Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong
All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1]
Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge".
Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals.
In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have
both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge.
While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2...\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$ and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$.
Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\lambda_n$-supercompact.
An $n$-huge* cardinal is an $n$-huge limit of $n$-huge cardinals. Every $n + 1$-huge cardinal is $n$-huge*.[5]
As for virtually $n$-huge*:[5]
If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals. A virtually $n+1$-huge* cardinal is a limit of virtually $n$-huge* cardinals. A virtually $n$-huge* cardinal is an $n+1$-iterable limit of $n+1$-iterable cardinals. If $κ$ is $n+2$-iterable, then $V_κ$ is a model of proper class many virtually $n$-huge* cardinals. Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$.

The $\omega$-huge cardinals
A cardinal $\kappa$ is
almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$.
Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings.
Relative consistency results

Hugeness of $\omega_1$
In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness).
Generalizations of Chang's conjecture

Cardinal arithmetic in $\text{ZF}$
If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is a Ramsey cardinal. It follows that (1) for all inner models $W$ of $\text{ZFC}$ and every singular cardinal $\kappa$, one has $\kappa^{+W} < \kappa^+$ and that (2) for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3]
In set theoretic geology
If $\kappa$ is hyperhuge, then $V$ has $<\kappa$ many grounds (so the mantle is a ground itself).[3] This result has been strengthened to extendible cardinals[6]. On the other hand, it is consistent that there is a supercompact cardinal and class many grounds of $V$ (because of the indestructibility properties of supercompactness).[3]
References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
3. Usuba, Toshimichi. The downward directed grounds hypothesis and very large cardinals. Journal of Mathematical Logic 17(02):1750009, 2017.
4. Boney, Will. Model Theoretic Characterizations of Large Cardinals.
5. Gitman, Victoria and Shindler, Ralf. Virtual large cardinals.
6. Usuba, Toshimichi. Extendible cardinals and the mantle. Archive for Mathematical Logic 58(1-2):71-75, 2019.
Mathematics > Algebraic Geometry

Title: Local Cohomology and Base Change
(Submitted on 30 Jun 2016)
Abstract: Let $X \overset{f}\longrightarrow S$ be a morphism of Noetherian schemes, with $S$ reduced. For any closed subscheme $Z$ of $X$ finite over $S$, let $j$ denote the open immersion $X\setminus Z \hookrightarrow X$. Koll\'ar asked whether for any coherent sheaf $\mathcal F$ on $X\setminus Z$ and any index $r\geq 1$, the sheaf $f_*(R^rj_*\mathcal F)$ is generically free on $S$ and commutes with base change. We answer this affirmatively, by proving a related statement about local cohomology: Let $R$ be a Noetherian algebra over a Noetherian domain $A$, and let $I \subset R$ be an ideal such that $R/I$ is finitely generated as an $A$-module. Let $M$ be a finitely generated $R$-module. Then there exists a non-zero $g \in A$ such that the local cohomology modules $H^r_I(M) \otimes_A A_g$ are free over $A_g$ and for any ring map $A\rightarrow L$ factoring through $A_g$, we have $H^r_I(M) \otimes_A L \cong H^r_{I{\otimes_A}L}(M\otimes_A L)$ for all $r$.

Submission history: From Karen E. Smith. [v1] Thu, 30 Jun 2016 22:04:50 GMT (14kb)
In the previous sections, the density was assumed to be constant. For non-constant density the derivations are not as ``clean,'' but they are similar. Consider a straight (flat) body submerged in a liquid with a varying density. If the density can be represented by an average density, the force acting on the body is \[F_{total} = \int_{A} g\rho h dA \sim \bar{\rho} \int_{A} g h dA \tag{164}\] In cases where an average density cannot reasonably be used, the integral has to be carried out. In cases where the density is discontinuous but constant in segments, the following can be said \[F_{total} = \int_{A} g \rho h dA = \int_{A_{1}} g \rho_{1} h dA + \int_{A_{2}} g \rho_{2} h dA + \cdots + \int_{A_{n}} g \rho_{n} h dA \tag{165}\] As before for a single density, the following can be written \[F_{total} = g \sin \beta \left[\rho_{1} \int_{A_{1}} \xi dA + \rho_{2} \int_{A_{2}} \xi dA + \cdots + \rho_{n} \int_{A_{n}} \xi dA \right] \tag{166}\] In compact form, and additionally accounting for the ``atmospheric'' pressure, this can be written as
Total Static Force
\[F_{total} = P_{atmos} A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i}x_{ci}A_{i}\tag{167}\]
where \(\rho_{i}\) is the density of layer \(i\), and \(A_{i}\) and \(x_{ci}\) are geometrical properties of the area in contact with that layer. The atmospheric pressure can be entered into the calculation in the same way as before. Moreover, the atmospheric pressure can include all the layer(s) that are not in ``contact'' with the area. The moment around the \(y\) axis, \(M_{y}\), under the same considerations as before is \[M_{y} = \int_{A} g \rho \xi ^{2} \sin \beta dA \tag{168}\] After similar separation of the total integral, one can find that
Total Static Moment
\[M_{y} = P_{atmos}x_{c}A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i} I_{x'x'i}\tag{170}\]
In the same fashion one can obtain the moment for \(x\) axis as
Total Static Moment
\[M_{x} = P_{atmos}y_{c}A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i} I_{x'y'i}\tag{171}\]
To illustrate how to work with these equations the following example is provided.
Example 4.16
Consider the hypothetical Figure 4.25. The last layer is made of water with a density of \(1000 [kg/m^3]\). The densities are \(\rho_1 = 500[kg/m^3]\), \(\rho_2 = 800[kg/m^3]\), \(\rho_3 = 850[kg/m^3]\), and \(\rho_4 = 1000[kg/m^3]\). Calculate the forces at points \(a_1\) and \(b_1\). Assume that the layers are stable, without any movement between the liquids. Also neglect all mass transfer phenomena that may occur. The heights are: \(h_1 = 1[m]\), \(h_2 = 2[m]\), \(h_3 = 3[m]\), and \(h_4 = 4[m]\). The force distances are \(a_1=1.5[m]\), \(a_2=1.75[m]\), and \(b_1=4.5[m]\). The angle of inclination is \(\beta= 45^\circ\).
Fig. 4.25 The effects of multi layers density on static forces.
Solution 4.16
Since there are only two unknowns, only two equations are needed, which are (170) and (167). The solution method of this example also applies to cases with fewer layers (for example, by setting the relevant height difference to zero). Equation (170) can be used with a modification: instead of the regular atmospheric pressure, a new ``atmospheric'' pressure can be defined as
\[ {P_{atmos}}^{'} = P_{atmos} + \rho_1\,g\,h_1 \tag{172} \] The distance to the center of each area is at the middle of each of the ``small'' rectangles. The geometry of each area is \begin{array}{lcr} {x_c}_1 = \dfrac{a_2 + \dfrac{h_2}{\sin\beta}}{2} & A_1 = ll \left( \dfrac{h_2}{\sin\beta} -a_2 \right) & {I_{x^{'}x^{'}}}_1 = \dfrac{ll\left(\dfrac{h_2}{\sin\beta}-a_2\right)^{3}}{36} + \left({x_c}_1\right)^2\, A_1 \\ {x_c}_2 = \dfrac{h_2 + h_3}{2\,\sin\beta} & A_2 = \dfrac{ll}{\sin\beta} \left(h_3 - h_2\right) & {I_{x^{'}x^{'}}}_2 = \dfrac{ll\left({h_3}-h_2\right)^{3}}{36\,\sin\beta} + \left({x_c}_2\right)^2\, A_2 \\ {x_c}_3 = \dfrac{h_3 + h_4}{2\,\sin\beta} & A_3 = \dfrac{ll}{\sin\beta} \left(h_4 - h_3\right) & {I_{x^{'}x^{'}}}_3 = \dfrac{ll\left({h_4}-h_3\right)^{3}}{36\,\sin\beta} + \left({x_c}_3\right)^2\, A_3 \tag{173} \end{array} After inserting the values, the following equations are obtained. The first equation is \[ F_1 + F_2 = {P_{atmos}}^{'} \overbrace{ll (b_2-a_2)}^{A_{total}} + g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\, A_i \tag{174} \] The second equation is (170), written for the moment around the point ``O'' as \[ F_1\,a_1 + F_2\,b_1 = {P_{atmos}}^{'}\, \overbrace{\dfrac{(b_2+a_2)}{2}\,ll\,(b_2-a_2)}^{x_c\,A_{total}} + g\,\sin\beta\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i \tag{175}
\]
The solution for the above equations is
\[
F1=
\begin{array}{c}
\dfrac{
2\,b_1\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\, A_i
-2\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i }
{2\,b_1-2\,a_1} - \\
\qquad\dfrac{\left({b_2}^{2}-2\,b_1\,b_2+2\,a_2\,b_1-{a_2}^{2}\right)
ll\,P_{atmos}}
{2\,b_1-2\,a_1}
\end{array}
\]
\[
F2=
\begin{array}{c}
\dfrac{
2\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i
-2\,a_1\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\,
A_i}
{2\,b_1-2\,a_1}
+
\\
\dfrac{
\left( {b_2}^{2}+2\,a_1\,b_2+{a_2}^{2}-2\,a_1\,a_2\right)
ll\,P_{atmos}}
{2\,b_1-2\,a_1}
\end{array}
\]
The solution provided is not in complete long form since that would make things messy. It is simpler to compute the terms separately. A mini source code for the calculations is provided in the text source. The intermediate results in SI units ([m], [\(m^2\)], [\(m^4\)]) are:
\[
\begin{array}{lcr}
x_{c1}=2.2892& x_{c2}=3.5355& x_{c3}=4.9497\\
A_1=2.696& A_2=3.535& A_3=3.535\\
{I_{x'x'}}_1=14.215& {I_{x'x'}}_2=44.292& {I_{x'x'}}_3=86.718
\end{array}
\]
The final answer is
\[
F_1=304809.79[N] \tag{176}
\]
and
\[
F_2=958923.92[N] \tag{177}
\]
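The geometric intermediates quoted above can be reproduced with a short script. This is a sketch: the plate width \(ll = 2.5\,[m]\) is an assumed value chosen to match the quoted areas, as it is not stated explicitly in the text; small last-digit differences from the quoted values are rounding artifacts.

```python
import math

# Data from Example 4.16; ll = 2.5 m is an assumed width.
beta = math.radians(45.0)
s = math.sin(beta)
h2, h3, h4 = 2.0, 3.0, 4.0
a2 = 1.75
ll = 2.5  # assumed plate width [m]

# Layer 1: from a2 to h2/sin(beta), measured along the inclined plate.
xc1 = (a2 + h2 / s) / 2.0
A1 = ll * (h2 / s - a2)
I1 = ll * (h2 / s - a2) ** 3 / 36.0 + xc1 ** 2 * A1

# Layer 2: between depths h2 and h3.
xc2 = (h2 + h3) / (2.0 * s)
A2 = ll / s * (h3 - h2)
I2 = ll * (h3 - h2) ** 3 / (36.0 * s) + xc2 ** 2 * A2

# Layer 3: between depths h3 and h4.
xc3 = (h3 + h4) / (2.0 * s)
A3 = ll / s * (h4 - h3)
I3 = ll * (h4 - h3) ** 3 / (36.0 * s) + xc3 ** 2 * A3

print(round(xc1, 4), round(xc2, 4), round(xc3, 4))  # 2.2892 3.5355 4.9497
print(A1, A2, A3)  # approx. 2.696, 3.535, 3.535
print(I1, I2, I3)  # approx. 14.215, 44.292, 86.718
```

With \({P_{atmos}}^{'}\) and these quantities, equations (174) and (175) reduce to a 2×2 linear system in \(F_1\) and \(F_2\).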
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
With $\rho_{0}>0$, take the power law density function $P(q)=\rho_{0}q^{-\gamma}$. If I have $$ {\cal N}_L=\frac{L!}{2 L} \sum_{\{n_q\}} \prod_q \frac{N_q!}{n_q! (N_q-n_q)!}\left(\frac{q^2}{Nc}\right)^{n_q} $$ which is an expression for a number of loops of length $L$, I can use a delta function representation for the condition $\sum_{q}n_{q}=L$ to obtain $$ {\cal N}_L=\frac{L!}{2 L} \int_{-\infty}^\infty \frac{dx}{2\pi} \exp\left(iLx+N\left\langle \log\left[1+q^2e^{-ix}/(Nc)\right]\right\rangle\right) $$ given $N \to \infty$, and where the angled brackets are the expected value over $q$. Also, there is a condition which reads $\Delta q \to 0$.
This is a common rearrangement in statistical mechanics, but I cannot see how it's done; I can only find sources with similar manipulations. Can someone give me the trick?
I think it comes down to how the condition in the sum in the first equation is written as a delta function. The second term in the exponent of the second equation then cancels the first via the complex exponential, after taking some approximation for large $N$?
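For what it's worth, here is the manipulation as I currently understand it (please correct me if this is off). The constraint $\sum_q n_q = L$ is enforced with an integral representation of the Kronecker delta,

```latex
$$\delta_{\sum_q n_q,\,L} = \int \frac{dx}{2\pi}\, e^{ix\left(L-\sum_q n_q\right)},$$
```

and inserting this into the first equation decouples the $n_q$, so each factor sums to a binomial,

```latex
$$\sum_{n_q=0}^{N_q} \binom{N_q}{n_q}\left(\frac{q^2 e^{-ix}}{Nc}\right)^{n_q}
  = \left(1+\frac{q^2 e^{-ix}}{Nc}\right)^{N_q}.$$
```

Writing the product over $q$ as $\exp\left(\sum_q N_q \log\left[1+q^2 e^{-ix}/(Nc)\right]\right)$ and assuming $N_q \approx N P(q)\,\Delta q$ as $\Delta q \to 0$, the sum over $q$ becomes $N\left\langle \log\left[1+q^2 e^{-ix}/(Nc)\right]\right\rangle$, which gives the second equation. Is this the intended route?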
See this paper, Eqs. 6 and 7. |
I will give a partial answer, I hope others will fill in the blanks.
In typed $\lambda$-calculi, one may give a type to usual representations of data ($\mathsf{Nat}$ for Church (unary) integers, $\mathsf{Str}$ for binary strings, $\mathsf{Bool}$ for Booleans) and wonder what the complexity of the functions/problems representable/decidable by typed terms is. I know a precise answer only in some cases, and in the simply typed case it depends on the convention used when defining "representable/decidable". Anyhow, I don't know of any case in which there is a doubly exponential upper bound.
First, a brief recap on the Lambda Cube. Its 8 calculi are obtained by enabling or disabling the following 3 kinds of dependencies on top of the simply typed $\lambda$-calculus (STLC):
polymorphism: terms may depend on types; dependent types: types may depend on terms; higher order: types may depend on types.
(The dependency of terms on terms is always there).
Adding polymorphism yields System F. Here, you can type the Church integers with $\mathsf{Nat}:=\forall X.(X\rightarrow X)\rightarrow X\rightarrow X$, and similarly for binary strings and Booleans. Girard proved that System F terms of type $\mathsf{Nat}\rightarrow\mathsf{Nat}$ represent exactly the numerical functions whose totality is provable in second order Peano arithmetic. That's pretty much everyday mathematics (albeit without any form of choice), so the class is huge, the Ackermann function is a sort of tiny microbe in it, let alone the function $2^{2^n}$. I don't know of any "natural" numerical function which cannot be represented in System F. Examples usually are built by diagonalization, or encoding the consistency of second order PA, or other self-referential tricks (like deciding $\beta$-equality within System F itself). Of course in System F you can convert between unary integers $\mathsf{Nat}$ and their binary representation $\mathsf{Str}$, and then test for instance whether the first bit is 1, so the class of decidable problems (by terms of type $\mathsf{Str}\rightarrow\mathsf{Bool}$) is equally huge.
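For concreteness, here is the Church encoding mentioned above, sketched with untyped Python lambdas. This of course ignores the typing discipline entirely; it only shows the data representation that, in System F, would receive the polymorphic type $\mathsf{Nat}$.

```python
# Church numerals: n is encoded as the function taking f and x
# to the n-fold application f(f(...f(x))).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by applying it to integer successor."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
print(to_int(mult(two)(three)))  # 6
```

In System F each combinator above is typable at $\mathsf{Nat}$ (or $\mathsf{Nat}\rightarrow\mathsf{Nat}\rightarrow\mathsf{Nat}$) with $\mathsf{Nat}:=\forall X.(X\rightarrow X)\rightarrow X\rightarrow X$; in the simply typed setting discussed further below, the lack of polymorphism is exactly what restricts which such functions can be typed.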
The other 3 calculi of the Lambda Cube which include polymorphism are therefore at least as expressive as System F. These include System F$_\omega$ (polymorphism + higher order), which can express exactly the provably total functions in higher order PA, and the Calculus of Constructions (CoC), which is the most expressive calculus of the Cube (all dependencies are enabled). I don't know a characterization of the expressiveness of the CoC in terms of arithmetical theories or set theories, but it must be pretty frightening :-)
I am much more ignorant regarding the calculi obtained by just enabling dependent types (essentially Martin-Löf type theory without equality and natural numbers), higher order types or both. In these calculi, types are powerful but terms can't access this power, so I don't know what you get. Computationally, I don't think you get much more expressiveness than with simple types, but I may be mistaken.
So we are left with the STLC. As far as I know, this is the only calculus of the Cube with interesting (i.e., not monstrously big) complexity upper bounds. There is an unanswered question about this on TCS.SE, and in fact the situation is a bit subtle.
First, if you fix an atom $X$ and define $\mathsf{Nat}:=(X\rightarrow X)\rightarrow X\rightarrow X$, there is Schwichtenberg's result (I know there's an English translation of that paper somewhere on the web but I can't find it now) which tells you that the functions of type $\mathsf{Nat}\rightarrow\mathsf{Nat}$ are exactly the extended polynomials (with if-then-else). If you allow some "slack", i.e. you allow the parameter $X$ to be instantiated at will and consider terms of type $\mathsf{Nat}[A]\rightarrow\mathsf{Nat}$ with $A$ arbitrary, much more can be represented. For example, any tower of exponentials (so you may go well beyond doubly exponential) as well as the predecessor function, but still no subtraction (if you consider binary functions and try to type them with $\mathsf{Nat}[A]\rightarrow\mathsf{Nat}[A']\rightarrow\mathsf{Nat}$). So the class of numerical functions representable in the STLC is a bit weird: it is a strict subset of the elementary functions but does not correspond to anything well known.
In apparent contradiction with the above, there's this paper by Mairson which shows how to encode the transition function of an
arbitrary Turing machine $M$, from which you obtain a term of type $\mathsf{Nat}[A]\rightarrow\mathsf{Bool}$ (for some type $A$ depending on $M$) which, given a Church integer $n$ as input, simulates the execution of $M$ starting from a fixed initial configuration for a number of steps of the form$$2^{2^{\vdots^{2^n}}},$$with the height of the tower fixed. This does not show that every elementary problem is decidable by the STLC, because in the STLC there is no way of converting a binary string (of type $\mathsf{Str}$) representing the input of $M$ to the type used for representing the configurations of $M$ in Mairson's encoding. So the encoding is somehow "non-uniform": you can simulate elementarily-long executions from a fixed input, using a distinct term for each input, but there is no term that handles arbitrary inputs.
In fact, the STLC is extremely weak in what it can decide "uniformly". Let us call $\mathcal C_{ST}$ the class of languages decidable by simply typed terms of type $\mathsf{Str}[A]\rightarrow\mathsf{Bool}$ for some $A$ (like above, you allow arbitrary "slack" in the typing). As far as I know, a precise characterization of $\mathcal C_{ST}$ is missing. However, we do know that $\mathcal C_{ST}\subsetneq\mathrm{LINTIME}$ (deterministic linear time). Both the containment and the fact that it is strict may be shown by very neat semantic arguments (using the standard denotational semantics of the STLC in the category of finite sets). The former was shown recently by Terui. The latter is essentially a reformulation of old results of Statman. An example of problem in $\mathrm{LINTIME}\setminus\mathcal C_{ST}$ is MAJORITY (given a binary string, tell whether it contains strictly more 1s than 0s).
(Much) Later add-on: I just found out that the class I call $\mathcal C_{ST}$ above actually does have a precise characterization, which is moreover extremely simple. In this beautiful 1996 paper, Hillebrand and Kanellakis prove, among other things, that
Theorem. $\mathcal C_{ST}=\mathsf{REG}$ (the regular languages on $\{0,1\}$).
(This is Theorem 3.4 in their paper).
I find this doubly surprising: I am surprised by the result itself (it never occurred to me that $\mathcal C_{ST}$ could correspond to something so "neat") and by how little known it is. It is also amusing that Terui's proof of the $\mathrm{LINTIME}$ upper bound uses the same methods employed by Hillebrand and Kanellakis (interpreting the simply-typed $\lambda$-calculus in the category of finite sets). In other words, Terui (and myself) could have easily re-discovered this result were it not for the fact that we were somehow happy with $\mathcal C_{ST}$ being a "weird" class :-)
(Incidentally, I shared my surprise in this answer to a MO question about "unknown theorems"). |
Parentheses and brackets are very common in mathematical formulas. You can easily control the size and style of brackets in LaTeX; this article explains how.
Here's how to type some common math braces and parentheses in LaTeX:
Parentheses; round brackets: (x+y)
Brackets; square brackets: [x+y]
Braces; curly brackets: \{ x+y \}
Angle brackets: \langle x+y \rangle
Pipes; vertical bars: |x+y|
Double pipes: \|x+y\|
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example:
\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \]

Notice that to insert the parentheses or brackets, the \left and \right commands are used. Even if you are using only one bracket, both commands are mandatory. \left and \right can dynamically adjust the size, as shown by the next example:
\[ \left[ \frac{ N } { \left( \frac{L}{p} \right) - (m+n) } \right] \]

When writing multi-line equations with the align, align* or aligned environments, the \left and \right commands must be balanced on each line and on the same side of &. Therefore the following code snippet will fail with errors:

\[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \\ & \quad + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \]

The solution is to use "invisible" brackets to balance things out, i.e. adding a \right. at the end of the first line, and a \left. at the start of the second line after &:
\[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \right. \\ & \quad \left. + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \]
The size of the brackets can be controlled explicitly
The commands \big, \Big, \bigg and \Bigg establish increasingly larger sizes for the delimiter that follows them. For a complete list of parentheses and sizes see the reference guide.
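For example, the four manual sizes can be nested from smallest to largest (a minimal illustration, not from the reference guide):

```latex
\[ \Bigg( \bigg( \Big( \big( (x+y) \big) \Big) \bigg) \Bigg) \]
```

Unlike \left and \right, these commands do not need to come in matched pairs, which makes them handy inside multi-line environments.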
LaTeX markup:

\big( \Big( \bigg( \Bigg(
\big] \Big] \bigg] \Bigg]
\big\{ \Big\{ \bigg\{ \Bigg\{
\big\langle \Big\langle \bigg\langle \Bigg\langle
\big\rangle \Big\rangle \bigg\rangle \Bigg\rangle
\big| \Big| \bigg| \Bigg|
\big\| \Big\| \bigg\| \Bigg\| |
My textbook defines power delivered to an inductor as:
$$P= V_{L\rm\ peak}I_{\rm peak} \cos ( \omega t) \sin( \omega t)$$ where $\omega$ is angular frequency.
but makes no mention of $P_{RMS}$. It simply says that $P_{av}$ is zero (which makes sense since it's defined as the product of two circular functions).
However, when we covered power, current, and voltage delivered to a resistor in an AC circuit, we used RMS values for current and voltage, and an average value for power. This made sense since power delivered to a resistor is a function of a squared sinusoidal function, so average was adequate.
In this section (inductors in AC circuits), only instantaneous power was discussed. This seemed odd to me. In previous sections the book discussed how taking the average of a sinusoidal function just returns zero, which is why we use RMS values instead. That makes perfect sense, so why not apply that approach here? Do we not care about RMS power? If so, why not?
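The averaging claim is easy to check numerically. Here is a quick sketch (the peak values and the 60 Hz frequency are illustrative choices of mine, not from the textbook): the average of $\cos(\omega t)\sin(\omega t)$ over a period vanishes, while the RMS of the instantaneous power does not — it comes out to $V_{\rm peak}I_{\rm peak}/(2\sqrt 2)$:

```python
import numpy as np

# Illustrative values only: 10 V, 2 A peaks on a 60 Hz supply.
V_peak, I_peak = 10.0, 2.0
omega = 2 * np.pi * 60.0

T = 2 * np.pi / omega                          # one full period
t = np.linspace(0.0, T, 100_000, endpoint=False)
P = V_peak * I_peak * np.cos(omega * t) * np.sin(omega * t)

P_avg = P.mean()                               # average power over a period
P_rms = np.sqrt((P**2).mean())                 # RMS power over a period

print(P_avg)                                   # ~0
print(P_rms, V_peak * I_peak / (2 * np.sqrt(2)))   # both ~7.07 W
```

Since $\cos(\omega t)\sin(\omega t)=\tfrac12\sin(2\omega t)$, the RMS is just $\tfrac12 V_{\rm peak}I_{\rm peak}\cdot\tfrac1{\sqrt2}$, so an RMS power is perfectly well defined here even though the average is zero.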
They did say that the average power is given by $I_{\rm rms}^2 r$, where $r$ is internal resistance, assuming internal resistance is substantial. I'm curious about cases where internal resistance is negligible. |
The continuous-time (linear) state space model can be written
\begin{align*} \text{d}\mathbf{x}_t &= \mathbf{F} \,\mathbf{x}_t \, \text{d}t + \mathbf{G} \,\text{d} \boldsymbol{\beta}_t \\ \text{d} \mathbf{z}_t &= \mathbf{H} \,\mathbf{x}_t \, \text{d}t + \text{d}\boldsymbol{\eta}_t \end{align*}
where $\mathbf{x}_t$ is the unobserved state and $\mathbf{z}_t$ is the observed process, while $\boldsymbol{\beta}_t$ and $\boldsymbol{\eta}_t$ are independent Brownian motions. A special case is when the term $\text{d} \boldsymbol{\eta}_t$ is discarded from the second equation (the observation equation). This concerns, for instance, Continuous-Time Auto-Regressive models. However, most books seem to consider only the case with measurement noise: see for instance Jazwinski, Øksendal, or Särkkä and Solin.
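As a heuristic for what happens in the noise-free limit, here is a discrete-time scalar sketch of my own (not from the cited books): as the measurement-noise variance $R \to 0$, the Kalman gain tends to $H^{-1}$ and the posterior variance collapses to zero, so the update degenerates into inverting the observation:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """Standard scalar Kalman measurement update."""
    S = H * P_pred * H + R            # innovation variance
    K = P_pred * H / S                # Kalman gain
    x_post = x_pred + K * (z - H * x_pred)
    P_post = (1.0 - K * H) * P_pred
    return x_post, P_post

x_pred, P_pred, H, z = 0.0, 4.0, 2.0, 3.0

for R in [1.0, 1e-3, 0.0]:            # R = 0: noise-free observation
    x, P = kalman_update(x_pred, P_pred, z, H, R)
    print(R, x, P)
# As R -> 0 the posterior collapses onto x = z / H = 1.5 with P = 0.
```

In the vector case with fewer observations than states, $R=0$ instead constrains the state to an affine subspace, which is where the singular-filter literature becomes relevant.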
Where can we find a description/discussion of the KF for the case with no measurement noise? |
I understand that the basic definition of endogeneity is that $$ X'\epsilon=0 $$ is not satisfied, but what does this mean in a real world sense? I read the Wikipedia article, with the supply and demand example, trying to make sense of it, but it didn't really help. I've heard the other description of endogenous and exogenous as being within the system and being outside the system and that still doesn't make sense to me.
JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong. That the model you wrote down and estimated does not properly capture the way causation works in the real world. When you write:
\begin{equation} Y_i=\beta_0+\beta_1X_i+\epsilon_i \end{equation}
you can think of this equation in a number of ways. You could think of it as a convenient way of predicting $Y$ based on $X$'s values. You could think of it as a convenient way of modeling $E\{Y|X\}$. In either of these cases, there is no such thing as endogeneity, and you don't need to worry about it.
However, you can also think of the equation as embodying causation. You can think of $\beta_1$ as the answer to the question: "What would happen to $Y$ if I reached in to this system and experimentally increased $X$ by 1?" If you want to think about it that way, using OLS to estimate it amounts to assuming that:
1. $X$ causes $Y$
2. $\epsilon$ causes $Y$
3. $\epsilon$ does not cause $X$
4. $Y$ does not cause $X$
5. Nothing which causes $\epsilon$ also causes $X$
Failure of any one of 3-5 will generally result in $E\{\epsilon|X\}\ne0$, or, not quite equivalently, ${\rm Cov}(X,\epsilon)\ne0$. Instrumental variables is a way of correcting for the fact that you got the causation wrong (by making another, different, causal assumption). A perfectly conducted randomized controlled trial is a way of forcing 3-5 to be true. If you pick $X$ randomly, then it sure ain't caused by $Y$, $\epsilon$, or anything else. So-called "natural experiment" methods are attempts to find special circumstances out in the world where 3-5 are true even when we don't think 3-5 are usually true.
In JohnRos's example, to calculate the wage value of education, you need a causal interpretation of $\beta_1$, but there are good reasons to believe that 3 or 5 is false.
Your confusion is understandable, though. It is very typical in courses on the linear model for the instructor to use the causal interpretation of $\beta_1$ I gave above while pretending not to be introducing causation, pretending that "it's all just statistics." It's a cowardly lie, but it's also very common.
In fact, it is part of a larger phenomenon in biomedicine and the social sciences. It is almost always the case that we are trying to determine the causal effect of $X$ on $Y$---that's what science is about after all. On the other hand, it is also almost always the case that there is some story you can tell leading to a conclusion that one of 3-5 is false. So, there is a kind of practiced, fluid, equivocating dishonesty in which we swat away objections by saying that we're just doing associational work and then sneak the causal interpretation back elsewhere (normally in the introduction and conclusion sections of the paper).
If you are really interested, the guy to read is Judea Pearl. James Heckman is also good.
Let me use an example:
Say you want to quantify the (causal) effect of education on income. You take education years and income data and regress one against the other. Did you recover what you wanted? Probably not! This is because income is also caused by things other than education, but which are correlated with education. Let's call them "skill": we can safely assume that education years are affected by "skill", as the more skilled you are, the easier it is to gain education. So, if you regress income on education years, the estimator for the education effect absorbs the effect of "skill" and you get an overly optimistic estimate of the return to education. This is to say, education's effect on income is (upward) biased because education is not exogenous to income.
Endogeneity is only a problem if you want to recover causal effects (unlike mere correlations). Also, if you can design an experiment, you can guarantee that ${\rm Cov}(X,\epsilon)=0$ by random assignment. Sadly, this is typically impossible in social sciences.
User25901 is looking for a straight-forward simple, real-world explanation what the terms exogenous and endogenous mean. Responding with arcane examples or mathematical definitions does not really answer the question that was asked.
How do I get a gut understanding of these two terms?
Here's what I came up with:
Exo- : external, outside
Endo- : internal, inside
-genous : originating in
Exogenous: A variable is exogenous to a model if it is not determined by other parameters and variables in the model, but is set externally, and any changes to it come from external forces.
Endogenous: A variable is endogenous in a model if it is at least partly a function of other parameters and variables in the model.
The OLS regression, by construction, gives $X'\epsilon=0$. Actually that is not correct. It gives $X'\hat\epsilon=0$ by construction. Your estimated residuals are uncorrelated with your regressors, but your estimated residuals are "wrong" in a sense.
If the true data-generating-process operates by $Y=\alpha +\beta X + \gamma Z + {\rm noise}$, and $Z$ is correlated with $X$, then $X'{\rm noise} \neq 0$ if you fit a regression leaving out $Z$. Of course, the estimated residuals will be uncorrelated with $X$. They always are, the same way that $\log(e^x)=x$. It is just a mathematical fact. This is the omitted variable bias.
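The omitted variable bias described above is easy to see in a simulation (a sketch of mine with made-up coefficients): generate data where $Z$ drives both $X$ and $Y$, then regress $Y$ on $X$ alone and watch the slope absorb $Z$'s effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True data-generating process: Y = 1 + 2*X + 3*Z + noise,
# with the omitted variable Z also driving X.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)            # Cov(X, Z) = 1, Var(X) = 2
Y = 1.0 + 2.0 * X + 3.0 * Z + rng.normal(size=n)

# OLS of Y on X alone: slope = Cov(X, Y) / Var(X)
b_hat = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)
print(b_hat)   # ~3.5, not 2: biased upward by 3 * Cov(X,Z)/Var(X) = 1.5
```

The estimated slope converges to $2 + 3\cdot{\rm Cov}(X,Z)/{\rm Var}(X) = 3.5$, exactly the omitted variable bias formula.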
Say that $I$ is randomly assigned. Maybe it is the day of the week that people are born. Maybe it is an actual experiment. It is anything uncorrelated with the noise that predicts $X$. You can then use the randomness of $I$ to predict $X$, and then use that predicted $X$ to fit a model to $Y$.
That is two stage least squares, which is almost the same as IV.
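Continuing the simulation style from above (again with made-up coefficients), a sketch of how a randomly assigned instrument $I$ repairs the bias; with one instrument and one regressor, two stage least squares collapses to the ratio ${\rm Cov}(I,Y)/{\rm Cov}(I,X)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

I = rng.normal(size=n)                # randomly assigned instrument
Z = rng.normal(size=n)                # unobserved confounder
X = I + Z + rng.normal(size=n)
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)

# Naive OLS is biased: 2 + 3*Cov(X,Z)/Var(X) = 3
b_ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Stage 1: regress X on I; Stage 2: regress Y on the fitted X.
# With a single instrument this is just the IV ratio:
b_iv = np.cov(I, Y)[0, 1] / np.cov(I, X)[0, 1]
print(b_ols, b_iv)   # ~3.0 (biased) vs ~2.0 (consistent)
```

The instrument only moves $Y$ through $X$, so dividing out its effect on $X$ recovers the true causal coefficient.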
In regression we want to capture the quantitative impact of an independent variable (which we assume is exogenous, not itself dependent on something else) on an identified dependent variable. We want to know what net effect an exogenous variable has on a dependent variable, meaning the independent variable should be free of any influence from another variable. A quick way to see if the regression is suffering from the problem of endogeneity is to check the correlation between the independent variable and the residuals, but this is just a rough check; otherwise formal tests of endogeneity need to be undertaken.
|
I am using this paper as a reference.
The Miller-Rabin test, as classically formulated, is non-deterministic -- you pick a base $b$, check if your number $n$ is a $b$-strong probable prime ($b$-SPRP), and if it is, your number is probably prime (repeat until "confident.")
A deterministic variant, assuming your number $n$ is below some bound (say $n<2^{64}$), is to pick a small number of bases $b$, and check if $n$ is a $b$-SPRP relative to each of those bases. There seems to be a bit of a sport to finding very small sets of bases, so as to make this process as fast as possible.
In particular, the cited reference declares a theorem of Jaeschke and Sinclair, that
If $n < 2^{64}$ is a $b$-SPRP for $b\in\{2, 325, 9375, 28178, 450775, 9780504, 1795265022\}$, then $n$ is a prime.
It doesn't state any extra hypotheses on $n$, or on what it means to be a $b$-SPRP. However, the classical formulation of Miller-Rabin only talks about $n$ being a $b$-SPRP when $b\leq n-2$, whereas the theorem above seems to allow $n<b$.
In particular, I have found (purely by accident) that $n=13$ does not satisfy the above criterion, meaning that as stated it gives wrong answers, and I don't know why (so I can't predict more of them).

So the question: Is this a shortened form of a proper theorem, where I should only be checking the values of $b$ where $b\leq n-2$? Is this an error in the paper? Am I just crazy?
For the sake of completeness, the definition of $b$-SPRP I am using is the one given in the paper:

Factor $n-1$ as $2^s d$, where $s$ and $d$ are nonnegative integers and $d$ is odd. Then $n$ is a $b$-SPRP iff $b^d\equiv 1 \pmod n$ or, for some $r$ with $0\leq r < s$, $\left(b^d\right)^{2^r}\equiv -1 \pmod n$.
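Implementing this definition directly (a sketch of mine, not code from the paper) makes the $n=13$ failure transparent: $325 = 25\cdot 13 \equiv 0 \pmod{13}$, so every power of $325$ is $0 \bmod 13$ and the prime $13$ can never pass the base-$325$ test:

```python
def is_sprp(n, b):
    """b-SPRP test per the definition above: write n - 1 = 2^s * d, d odd."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(b, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print(is_sprp(13, 2))    # True: 13 is a 2-SPRP, as every prime must be
print(is_sprp(13, 325))  # False: 325 = 25 * 13 == 0 (mod 13)
```

So one natural reading of the theorem (an assumption on my part, not a statement from the paper) is that the base set is meant for $n$ larger than the bases, or equivalently with each $b$ reduced mod $n$ and bases $\equiv 0 \pmod n$ skipped.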
Not a duplicate of: Bases required for prime-testing with Miller-Rabin up to $2^{63}-1$ seems to lead to the same questions (they don't address the issue of when $n<b$; it's just irrelevant there) and uses bases so small it doesn't matter. |
Suppose $\Psi$ is an eigenstate of observable $\text H$ with eigenvalue $E_1$. Then uncertainty in the value of $\text H$,
$(\Delta E)^2=\langle E^2\rangle-\langle E\rangle^2$, which gives $(\Delta E)^2=E_1^2\bigg(\langle\Psi_1|\Psi_1\rangle-\big|\langle\Psi_1|\Psi_1\rangle\big|^2\bigg)$. If the eigenstate is not normalized then the right hand side is not zero. But the measurement of $\text H$ on $\Psi$ must yield $E_1$ with no uncertainty.
It is part of the postulates of quantum mechanics that the expectation value of the observable corresponding to the Hermitian operator $A$ in the normalized state $|\psi\rangle$ is given by $\langle A\rangle_\psi =\langle\psi|A|\psi\rangle$. Alternatively, you can postulate that the expectation value is given by $\langle A \rangle_\psi = \frac{\langle\psi|A|\psi\rangle}{\langle\psi|\psi\rangle}$. See, for example, the Dirac–von Neumann axioms.
Either way the normalization is necessary, because otherwise you have to give physical meaning to the normalization.
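A quick numerical illustration (toy matrices of my own choosing): with the normalization built into the expectation value, an unnormalized eigenvector still gives zero spread, while the "naive" unnormalized rule gives nonsense:

```python
import numpy as np

# Toy observable H and an *unnormalized* eigenstate psi with eigenvalue E1 = 2.
H = np.diag([1.0, 2.0])
psi = np.array([0.0, 3.0])            # eigenvector of H, norm 3 (not 1)

def expval(A, psi):
    """Expectation value with the normalization built in."""
    return (psi.conj() @ A @ psi) / (psi.conj() @ psi)

E = expval(H, psi)
E2 = expval(H @ H, psi)
print(E, E2 - E**2)   # 2.0 and 0.0: no uncertainty once normalized

# The unnormalized rule <psi|H|psi> gives 18, not the eigenvalue 2:
E_bad = psi.conj() @ H @ psi
print(E_bad)
```

This is exactly the answer's point: the formula $\langle A\rangle=\langle\psi|A|\psi\rangle$ presupposes $\langle\psi|\psi\rangle=1$; dropping that assumption forces the division by $\langle\psi|\psi\rangle$.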
Edit: So the reason your calculation gives a strange result is that you are computing expectation values incorrectly: the formula $\langle A\rangle_\psi=\langle\psi|A|\psi\rangle$ only applies to a normalized state. |
I'm looking for a formal definition of Cluster of Solutions. My current understanding is the following. Let $x$ be a boolean assignment on $n$ variables. Let $f: \{ 0,1 \} ^n \to \mathbb{N}$ be a function that, given a boolean assignment $x$ on $n$ variables, just returns the natural number $i \in [0, 2^n-1]$ corresponding to $x$. Let $g: \mathbb{N} \to \{ 0,1 \} ^n$ be the inverse of $f$: given a natural number $i$, $g$ returns the corresponding solution (i.e. the binary encoding of $i$ in $n$ bits). Now, a Cluster of Solutions is a set $S$ of solutions such that, for each solution $x \in S$ and for each solution $y \in S$, it's the case that $g(i) \in S$ for each $i \in (f(x), f(y))$. Less formally, a Cluster of Solutions is a set whose solutions are "packed", i.e. "there are no non-solutions among the solutions". Is this definition correct?
I think a cluster of solutions is a maximal set of solutions $T$ s.t. you can reach every $\tau' \in T$ from every other $\tau \in T$ by a sequence of solutions $\{\tau_i\}_{0\leq i\leq n}$ ($\tau = \tau_0$ and $\tau' = \tau_n$) where the Hamming distance between each consecutive pair of solutions is bounded; i.e., clusters are just connected components in the graph where two solutions are adjacent iff the Hamming distance between them is less than the bound.
See these notes by Dimitris Achlioptas (or papers on statistical physics and random k-SAT).
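The connected-component definition can be made concrete in a few lines (a sketch of mine; the Hamming-distance bound of 1 and the tiny XOR formula are toy choices, not from the notes):

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def clusters(solutions, bound=1):
    """Connected components of the graph on `solutions` with an edge
    between two assignments iff their Hamming distance is <= bound."""
    solutions = list(solutions)
    seen, comps = set(), []
    for s in solutions:
        if s in seen:
            continue
        comp, stack = set(), [s]          # flood-fill one component
        while stack:
            t = stack.pop()
            if t in comp:
                continue
            comp.add(t)
            stack.extend(u for u in solutions
                         if u not in comp and hamming(t, u) <= bound)
        seen |= comp
        comps.append(comp)
    return comps

# Toy example: solutions of (x1 or x2) and (not x1 or not x2), i.e. XOR.
sols = [p for p in product((0, 1), repeat=2)
        if (p[0] or p[1]) and not (p[0] and p[1])]
print(clusters(sols))   # two singleton clusters: (0,1) and (1,0)
```

Note the contrast with the question's definition: (0,1) and (1,0) are adjacent as integers, yet they form two clusters here because they differ in two coordinates.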
I think a possible alternative definition of a solution cluster could be the following: a solution cluster is a set of satisfying Boolean assignments inside a ball of some given radius, where the distance metric is the Hamming distance between two satisfying assignments. This would enable a compact representation of each cluster by giving the center and the radius of the cluster. |
Is there an elliptic curve in $\mathbb{CP}^2$ whose induced Riemannian metric (induced from the Fubini-Study metric on $\mathbb{CP}^2$) is Euclidean flat?
According to this paper by Linda Ness the Gaussian curvature of a curve $C\subset \mathbb P^2$ defined by the zeros of a degree $d>1$ homogeneous polynomial $F \in \mathbb C[x,y,z]$ at a smooth point $p$ is given by $$ K(p) = 2- \frac{\|p\|^6 \cdot | \rm{Hessian}(F)(p)|^2}{ (d-1)^4 \cdot \| \nabla F(p) \|^6} , $$ where $\| \cdot \|$ stands for the usual norm in $\mathbb C^3$, and $\nabla F$ is the gradient of $F$.
In particular, if $p$ is a smooth inflection point of $C$ then $K(p) = 2$. Thus, there are no smooth cubics in $\mathbb P^2$ which are Euclidean flat, since these have $9$ inflection points.
N.B.: Ness normalizes the Fubini-Study metric to have sectional curvature $2$.
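Ness's formula is easy to sanity-check numerically at an inflection point (my own sketch, using the Fermat cubic $F = x^3+y^3+z^3$ and the real inflection point $p=(1,-1,0)$, where the Hessian determinant $216xyz$ vanishes):

```python
import numpy as np

# Evaluate Ness's curvature formula for the Fermat cubic (degree d = 3)
# at the inflection point p = (1, -1, 0); for a real point the Hermitian
# norms on C^3 reduce to the usual Euclidean ones.
p = np.array([1.0, -1.0, 0.0])
d = 3

grad = 3 * p**2                      # gradient of F: (3x^2, 3y^2, 3z^2)
hess = np.diag(6 * p)                # Hessian of F: diag(6x, 6y, 6z)
hess_det = np.linalg.det(hess)       # = 216*x*y*z = 0 at p

K = 2 - (np.linalg.norm(p)**6 * abs(hess_det)**2) / \
        ((d - 1)**4 * np.linalg.norm(grad)**6)
print(K)   # 2.0: the curvature hits the ambient maximum at an inflection
```

Since the Hessian term is nonnegative, $K(p)=2$ is also the maximum possible value, which is what rules out flatness at the nine inflection points of a smooth cubic.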
After googling a bit I've found the paper The Riemannian geometry of holomorphic curves by Blaine Lawson which is strictly related to the subject. There he says that Eugenio Calabi proved, in Isometric imbedding of complex manifolds, that
($\ldots$) modulo holomorphic congruences, there is only one curve $C_n$ of constant Gauss curvature in $\mathbb{CP}^n$ which does not lie in any linear subspace. This curve has curvature $1/n$ and is given by the following embedding of $\mathbb{CP}^1\to \mathbb{CP}^n$: $$(z_0,z_1) \mapsto \left(z_0^n, \sqrt{n} z_0^{n-1} z_1, \ldots, \sqrt{\binom{n}{k}}z_0^{n-k}z_1^k, \ldots, z_1^n \right). $$
I could not find this statement in Calabi's paper, but this does not exclude the possibility that it is indeed there. The paper is the published version of Calabi's Phd thesis, so another possibility is that the statement is in the thesis but did not make its way into the paper.
N.B.: Lawson normalizes the Fubini-Study metric to have sectional curvature $1$.
A much more general result holds. If $M$ is a compact complex manifold and $f:M\to\mathbb{CP}^n$ is a holomorphic embedding such that $f^*g_{FS}$ is an Einstein metric, then the Einstein constant must be strictly positive. This is a theorem of D. Hulin.
One can also wonder what manifolds $M$ one can obtain in general (i.e. what compact complex submanifolds of $\mathbb{CP}^n$ are Einstein for the induced Fubini-Study metric). It is believed that these must all be complex homogeneous spaces, and a complete classification is known in the case of (complex) codimension at most $2$, and for complete intersections. See this other paper of Hulin and the references there. |
Lifetime of helium metastable spin-states in a helium discharge
(1965)
The lifetime of a helium 2$^3$S$_1$ metastable atom electronic spin-state is measured in helium gas using optical pumping techniques. The metastable atoms are created by an RF electrical discharge. The spin-state lifetime is ...
Electron beam excitation studies of helium
(1967)
A comparison is made of relative number densities of excited states produced in Helium gas excited by an RF discharge and by the high energy (200kev, maximum) electron beam, it being found that the 3$^3$D states are relatively ...
FT-ICR studies of giant carbon fullerenes
(1992)
FT-ICR studies of high mass $(\mathrm{C}_{>150})$ carbon clusters have brought insight to the controversial structures of carbon fullerenes. Laser vaporization followed by supersonic beam technique produced carbon clusters ...
Probing depths of low energy electrons in metals
(1992)
Spin-polarized electron energy-loss spectroscopy has been used to investigate the probing depth of low energy ($\sim$30 eV) electrons in metals. A beam of spin-polarized electrons is directed at the surface of the sample ...
Optical pumping dynamics and spin relaxation in gaseous He_
(1966)
The first part of this investigation is concerned with the dynamics of the optical polarization process in He3 gas subjected to an electrical discharge. The characteristic time for the build-up of polarization under the ...
A method of polarization analysis of electrons from optically pumped He_
(1967)
An experiment in progress to extract a polarized beam of electrons from an optically pumped helium source gas is described. Possible ionization mechanisms in helium gas are outlined and the methods of optical pumping are ...
Spectra of dense helium excited by electron impact
(1969)
This is an account of the visible and near infrared spectrum of dense helium excited by 160keV electrons. It has been suspected that metastable helium atoms are involved in production of strong O$_2$ and N$_2$ spectra when ...
Laser annealing of Ni (001)
(1983)
Experimental aspects of laser cleaning and annealing of a Ni(001) surface with a pulsed ruby laser are reported. Effects of applying laser energy densities of 0.4 J/cm$^2$ to 1.1 J/cm$^2$ to an Ar-ion sputter-cleaned surface are ...
Spin polarized metastable deexcitation spectroscopy as a probe of gases absorbed on metal surface
(1992)
Spin polarized metastable deexcitation spectroscopy provides an important surface probe, in which a beam of thermal energy metastable noble gas atoms is deexcited at the target surface under study, releasing its energy ...
Dynamics of conversion of atomic helium(triplet-2S) atoms to molecular helium(odd triplet-a-sigma-plus) molecules in ternary collisions (helium)
(1990)
The temperature dependence of conversion of He(2$^3$S$_1$) metastable atoms to He$_2$(a$^3\Sigma^+_{\rm u}$) metastable molecules in the three-body reaction $\rm He(2^3S_1)+2He(1^1S_0) \to He_2(a^3\Sigma^+_u)$ ... |
=== Details of the Zero NLP maneuver ===
In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: $ \ddot\theta = -(3/2)\, \omega^2 \sin( 2 \theta ) $.
Integrating numerically from the proper choice of initial rotation rate,
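A minimal numerical sketch of that integration (my own; the initial rate $\dot\theta=\omega$ and the fixed-step RK4 are illustrative assumptions, since the page leaves the proper choice of initial rotation rate open):

```python
import math

# Integrate the tidal equation  theta'' = -(3/2) * omega^2 * sin(2*theta)
# over one m288 orbit with fixed-step RK4.
omega = 4.3633e-4              # rad/s, m288 synodic rate (from below)

def deriv(theta, thetadot):
    return thetadot, -1.5 * omega**2 * math.sin(2.0 * theta)

theta, thetadot, dt = 0.0, omega, 1.0   # start aligned, rotating at omega
peak = 0.0
for _ in range(14400):                  # 240 minutes of seconds
    k1 = deriv(theta, thetadot)
    k2 = deriv(theta + 0.5*dt*k1[0], thetadot + 0.5*dt*k1[1])
    k3 = deriv(theta + 0.5*dt*k2[0], thetadot + 0.5*dt*k2[1])
    k4 = deriv(theta + dt*k3[0], thetadot + dt*k3[1])
    theta    += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    thetadot += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    peak = max(peak, abs(theta))

print(peak)   # ~0.615 rad (about 35 degrees): this initial rate librates
```

This is a pendulum equation: with kinetic energy $\tfrac12\omega^2$ against a potential barrier of $\tfrac32\omega^2$, the starting rate $\dot\theta=\omega$ is below the threshold for full rotation, so the thinsat librates about the tidally stable orientation rather than tumbling through it.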
MoreLater
-----
Night Side Maneuvers
We can minimize night light pollution, and advance perigee against light pressure orbit distortion, by turning the thinsat as we approach eclipse. The overall goal is to perform 1 complete rotation of the thinsat per orbit, with it perpendicular to the sun on the day-side of the earth, but turning it by varying amounts on the night side.
Another advantage of the turn is that if thinsat maneuverability is destroyed by radiation or a collision on the night side, it will come out of night side with a slow tumble that won't be corrected. The passive radar signature of the tumble will help identify the destroyed thinsat to other thinsats in the array, allowing another sacrificial thinsat to perform a "rendezvous and de-orbit". If the destroyed thinsat is in shards, the shards will tumble. The tumbling shards ( or a continuously tumbling thinsat ) will eventually fall out of the normal orbit, no longer get J_2 correction, and the thinsat orbit will "eccentrify", decay, and reenter. This is the fail-safe way the arrays will reenter, if all active control ceases.
Maneuvering thrust and satellite power
Neglecting tides, the synodic angular velocity of the m288 orbit is \Large\omega = 4.3633e-4 rad/sec = 0.025°/s. The angular acceleration of a thinsat is 13.056e-6 rad/sec^2^ = 7.481e-4°/s^2^ with a sun angle of 0°, and 3.740e-4°/s^2^ at a sun angle of 60°. Because of tidal forces, a thinsat entering eclipse will start to turn towards sideways alignment with the center of the earth; it will come out of eclipse at a different velocity and angle than it went in with.
If the thinsat is rotating at \omega and either tangential or perpendicular to the gravity vector, it will not turn while it passes into eclipse. Otherwise, the tidal acceleration is \ddot\theta = (3/2) \omega^2 \sin 2 \delta where \delta is the angle to the tangent of the orbit. If we enter eclipse with the thinsat not turning, and oriented directly to the sun, then \delta = 30° .
Three Strategies and a Worst Case Failure Mode
There are many ways to orient thinsats in the night sky, with tradeoffs between light power harvest, light pollution, and orbit eccentricity. If we reduce power harvest, we will need to launch more thinsats to compensate, which makes more problems if the system fails. I will present three strategies for light harvest and night light pollution. The actual strategies chosen will be a blend of those.
Tumbling
If things go very wrong, thinsats will be out of control and tumbling. In the long term, the uncontrolled thinsats will probably orient flat to the orbital plane, and reflect very little light into the night sky, but in the short term (less than decades), they will be oriented in all directions. This is equivalent to mapping the reflective area of front and back (2 π R^2^) onto a sphere (4 π R^2^). Light with intensity I shining onto a sphere of radius R is evenly reflected in all directions uniformly. So if the sphere intercepts π R^2^ I units of light, it scatters e I R^2^/4 units of light (e is albedo) per steradian in all directions. While we will try to design our thinsats with low albedo (high light absorption on the front, high emissivity on the back), we can assume they will get sanded down and more reflective because of space debris, and they will get broken into fragments of glass with shiny edges, adding to the albedo. Assume the average albedo is 0.5, and assume the light scattering is spherical for tumbling.
Source for the above animation: g400.c
Three design orientations
All three orientations shown are oriented perpendicularly in the daytime sky.
Max remains perpendicular in the night sky, Min is oriented vertically in the night sky, and Zero is edge on to the terminator in the night sky. All lose orientation control and are tilted by tidal forces in eclipse - the compensatory thrusts are not shown. Min and Zero are accelerated into a slow turn before eclipse, so they come out of the eclipse in the correct orientation. In all cases, there will probably be some disorientation and sun-seeking coming out of eclipse, until each thinsat calibrates to the optimum inertial turn rate during eclipse. So, there may be a small bit of sky glow at the 210° position, directly overhead at 2am and visible in the sky between 10pm and 6am.
Max NLP: Full power night sky coverage, maximum night light pollution
The most power is harvested if the thinsats are always oriented perpendicular to the sun. During the half of their orbit into the night sky, there will be some diffuse reflection to the side, and some of that will land in the earth's night sky. The illumination is maximum along the equator. For the M288 orbit, about 1/6th of the orbit is eclipsed, and 1/2 of the orbit is in daylight with the diffuse (Lambertian) reflection scattering towards the sun and onto the day side of the earth. Only the two "horns" of the orbit, the first between 90° and 150° (6pm to 10pm) and the second between 210° and 270° (2am to 6am), will reflect light into the night sky. The light harvest averages to 83% around the orbit.
This is the worst case for night sky illumination. Though it is tempting to run thinsats in this regime, extracting the maximum power per thinsat, it is also the worst case for eccentricity caused by light pressure, and the thinsats must be heavier to reduce that eccentricity.
Min NLP: Partial night sky coverage, some night light pollution
This maneuver will put some scattered light into the night sky, but not much compared to perpendicular solar illumination all the way into shadow. In the worst case, assume that the surface has an albedo of 0.5 (typical solar cells with an antireflective coating are less than 0.3) and that the reflected light is entirely Lambertian (isotropic) without specular reflections (which will all be away from the earth). At a 60° angle, just before shadow, the light emitted by the front surface will be 1366W/m^2^ × 0.5 (albedo) × 0.5 (cos 60°), and it will be scattered over 2π steradians, so the illumination will be 54W/m^2^-steradian just before entering eclipse.
Estimate that the light pollution varies from 0W to 54W between 90° and 150° and that the average light pollution is half of 54W, for 1/3 of the orbit. Assuming an even distribution of thinsat arrays in the constellation, that works out to an average of 9W/m^2^-steradian for all thinsats in M288 orbit.
The full moon illuminates the night side of the equatorial earth with 27mW/m^2^ near the equator. A square meter of thinsat at 6400km distance produces 9W/6400000^2^ or 0.22 picowatts per m^4^, times the area of all thinsats. If thinsat light pollution is restricted to 5% of full moon brightness (1.3mW/m^2^), then we can have 6000 km^2^ of thinsats up there, at an average of 130 W/m^2^, or about 780GW of thinsats at m288. That is about a million tons of thinsats.
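A quick check of that arithmetic (my own sketch, using the page's numbers):

```python
# Light-pollution budget arithmetic from the paragraph above.
per_thinsat = 9.0 / 6_400_000**2        # W per m^4 seen from 6400 km
print(per_thinsat)                      # ~2.2e-13 W: 0.22 picowatts

budget = 1.3e-3                         # 5% of the full moon's 27 mW/m^2
area = budget / per_thinsat             # allowed total thinsat area, m^2
print(area / 1e6)                       # ~6000 km^2
print(area * 130 / 1e9)                 # ~780 GW at 130 W/m^2 average
```

The numbers round-trip: a ~6000 km^2^ constellation at 130 W/m^2^ is on the order of 780 GW while staying under the 5%-of-moonlight ceiling.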
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:
||time min||orbit degrees||rotation rate||sun angle||Illumination||Night Light||
||0 to 60||0° to 90°||0 ~ \Large\omega||0°||100%||0W||
||60 to 100||90° to 150°||1 ~ \Large\omega||0° to 60°||100% to 50%||0W to 54W||
||100 to 140||150° to 210°||4 ~ \Large\omega||60° to 300°||Eclipse||0W||
||140 to 180||210° to 270°||1 ~ \Large\omega||300° to 0°||50% to 100%||54W to 0W||
||180 to 240||270° to 0°||0 ~ \Large\omega||0°||100%||0W||
The angular velocity change at 0° takes 250/7.481 = 33.4 seconds, and during that time the thinsat turns 0.42° with negligible effect on thrust or power. The angular velocity change at 60° takes 750/3.74 = 200.5 seconds, and during that time the thinsat turns 12.5°, perhaps from 53.7° to 66.3°, reducing power and thrust from 59% to 40%, a significant change. The actual thrust change versus time will be more complicated (especially with tidal forces), but however it is done, the acceleration must be accomplished before the thinsat enters eclipse.
The light harvest averages 78% around the orbit.
Zero NLP: Partial night sky coverage, no night light pollution
In this case, in the night half of the sky the edge of the thinsat is always turned towards the terminator. As long as the thinsats stay in control, they will never produce any nighttime light pollution, because the illuminated side of the thinsat is always pointed away from the night side of the earth. The average illumination fraction is around 68%.
The orientation of the thinsat over a 240 minute synodic m288 orbit at the equinox is as follows, relative to the sun:
time (min) | orbit degrees | rotation rate | sun angle | Illumination | Night Light
-----------|---------------|---------------|-----------|--------------|------------
0 to 60    | 0° to 90°     | 0 ω           | 0°        | 100%         | 0 W
60 to 100  | 90° to 150°   | 1.5 ω         | 0° to 90° | 100% to 0%   | 0 W
100 to 140 | 150° to 210°  | 3 ω           | 90° to 270° | Eclipse    | 0 W
140 to 180 | 210° to 270°  | 1.5 ω         | 270° to 0° | 0% to 100%  | 0 W
180 to 240 | 270° to 0°    | 0 ω           | 0°        | 100%         | 0 W
Pedants and thinsat programmers take note: The actual synodic orbit period is 240 minutes and 6.57 seconds long; that results in 2190.44 rather than 2191.44 sidereal orbits per year, accounting for the annual apparent motion of the sun around the sky.
The light harvest averages 67% around the orbit. Why would a profit maximizing operator settle for 67% when 83% was possible?
Infrared-filtering thinsats reduce launch weight and can use the Zero NLP flip to increase the minimum temperature of a thinsat during eclipses. An IR-filtering thinsat in maximum night light pollution mode will have the emissive backside pointed at 2.7 K space when it enters eclipse; the thinsat temperature will drop towards 20 K if it cannot absorb the 64 W/m² of 260 K black body radiation reaching it from the earth through the 3.5 μm front side infrared filter. The thinsat will become very brittle at those temperatures, and the thermal shock could destroy it. If the high thermal emissivity back side is pointed towards the 260 K earth instead, the temperature will drop only to 180 K - still challenging, but the much higher thermal mobility may heal atomic-scale damage.
Details of the Zero NLP maneuver
In the night sky, assuming balanced thrusters, only tidal forces act on the thinsat: $\ddot\theta = -(3/2)\,\omega^2 \sin(2\theta)$. Integrating numerically from the proper choice of initial rotation rate yields the curves below.
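A minimal numerical sketch of that integration (plain RK4; the initial angle and rate here are illustrative assumptions, not the controller's actual choice):

```python
from math import sin, cos, radians, degrees, pi

w = 2 * pi / (240 * 60)   # orbital angular rate for a 240-minute orbit, rad/s

def accel(theta):
    # tidal angular acceleration: theta'' = -(3/2) * w^2 * sin(2*theta)
    return -1.5 * w * w * sin(2 * theta)

def rk4_step(theta, dtheta, dt):
    # one classical Runge-Kutta step for the second-order ODE
    k1t, k1d = dtheta, accel(theta)
    k2t, k2d = dtheta + 0.5 * dt * k1d, accel(theta + 0.5 * dt * k1t)
    k3t, k3d = dtheta + 0.5 * dt * k2d, accel(theta + 0.5 * dt * k2t)
    k4t, k4d = dtheta + dt * k3d, accel(theta + dt * k3t)
    theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
    dtheta += dt * (k1d + 2 * k2d + 2 * k3d + k4d) / 6
    return theta, dtheta

# Illustrative initial conditions (assumed, not taken from the article):
theta, dtheta = radians(1.0), 1.5 * w
for _ in range(2400):                 # 40 minutes at dt = 1 s
    theta, dtheta = rk4_step(theta, dtheta, 1.0)
print(degrees(theta), dtheta / w)     # attitude (deg) and rate (in units of w)
```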
[Figure: Power versus angle]
[Figure: Night light pollution versus hour] The night light pollution for 1 terawatt of thinsats at M288. Mirror the graph for midnight to 6 am. Some light is also put into the daytime sky (early morning and late afternoon), but it will be difficult to see in the glare of sunlight. The Zero NLP option puts no light in the night sky, so that curve is far below the bottom of this graph.
Source for the above two graphs: nl02.c
Wave equation with memory
Dipartimento di Matematica, Università di Roma "La Sapienza", P.le Aldo Moro, 5 - 00185 Roma, Italy
The existence of a solution of the integrodifferential delay equation
$u'(t) = Au(t) + \int_{-r}^0 k(s)A_1 u(t+s)\, ds + f(t),\quad t\ge 0;\quad u(t) = z(t), \quad t\in [-r,0]$
(where $A : D(A)\subset X \to X$ is a closed operator and $A_1 : D(A)\to X$ is continuous) is proved and applied to get a classical solution of the wave equation with memory effects
$ w_{t t} (t,x) = w_{x x}(t, x) + \int_{-r}^0 k(s) w_{x x} (t + s, x)ds + f(t, x), \quad t\ge 0,\quad x\in [0,l]$
To include also the Dirichlet boundary conditions and to get $C^2$-solutions, $D(A)$ is not assumed to be dense, hence $A$ is only a Hille-Yosida operator. The methods used are based on a reduction of the inhomogeneous equation to a homogeneous first-order system, and then on an immersion of $X$ in its extrapolation space, where the regularity and perturbation results of classical semigroup theory can be applied.
Mathematics Subject Classification: 34G10, 34K30, 35L0.
Citation: Eugenio Sinestrari. Wave equation with memory. Discrete & Continuous Dynamical Systems - A, 1999, 5 (4): 881-896. doi: 10.3934/dcds.1999.5.881
Under the auspices of the Computational Complexity Foundation (CCF)
One of the most important open problems in the theory
of error-correcting codes is to determine the tradeoff between the rate $R$ and minimum distance $\delta$ of a binary code. The best known tradeoff is the Gilbert-Varshamov bound, which says that for every $\delta \in (0, 1/2)$ there are codes with minimum distance $\delta$ and rate $R = R_{GV}(\delta) > 0$ (for a certain simple function $R_{GV}(\cdot)$). In this paper we show that the Gilbert-Varshamov bound can be achieved by codes which support local error-detection and error-correction algorithms.
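The abstract leaves $R_{GV}$ unspecified; the standard Gilbert-Varshamov function is $R_{GV}(\delta) = 1 - H(\delta)$, where $H$ is the binary entropy function. A quick sketch (my own gloss, not part of the paper):

```python
from math import log2

def H(p):
    # binary entropy: H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0)=H(1)=0
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def R_GV(delta):
    # Gilbert-Varshamov rate for relative distance delta in [0, 1/2]
    return 1.0 - H(delta)

print(R_GV(0.25))   # a positive rate strictly below 1 for delta = 1/4
```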
Specifically, we show the following results.
1. Local Testing: For all $\delta \in (0,1/2)$ and all $R < R_{GV}(\delta)$,
there exist codes with length $n$, rate $R$ and minimum distance $\delta$ that are locally testable with $\mathrm{quasipolylog}(n)$ query complexity.
2. Local Correction: For all positive $\epsilon$, for all $\delta < 1/2$ sufficiently
large, and all $R < (1-\epsilon) R_{GV}(\delta)$, there exist codes with length $n$, rate $R$ and minimum distance $\delta$ that are locally correctable from $\frac{\delta}{2} - o(1)$ fraction errors with $O(n^{\epsilon})$ query complexity.
Furthermore, these codes have an efficient randomized construction,
and the local testing and local correction algorithms can be made to run in time polynomial in the query complexity. Our results on locally correctable codes also immediately give locally decodable codes with the same parameters.
Our local testing result is obtained by combining Thommesen's random concatenation technique
and the best known locally testable codes. Our local correction result, which is significantly more involved, also uses random concatenation, along with a number of further ideas: the Guruswami-Sudan-Indyk list decoding strategy for concatenated codes, Alon-Edmonds-Luby distance amplification, and the local list-decodability, local list-recoverability and local testability of Reed-Muller codes. Curiously, our final local correction algorithms go via local list-decoding and local testing algorithms; this seems to be the first time local testability is used in the construction of a locally correctable code.
I have studied direct products. I know a few applications of direct products, like group isomorphism,
etc. What are some applications of sub-direct product of groups?
Subdirect products arise naturally as intransitive subgroups of $S_n$, where they are subdirect products of the induced actions of the group on its orbits. Similarly for completely reducible linear groups.
A powerful tool in group theory is something called the fibre product. This is a special type of subdirect product of the group with itself.
Associated to a short exact sequence $1\rightarrow N\rightarrow H\rightarrow Q\rightarrow 1$, where $\pi : H\rightarrow Q$ is the natural map, is the fibre product $P\subset H\times H$, $$ P:=\{(h_1, h_2)\mid \pi(h_1)=\pi(h_2)\}. $$
Clearly the diagonal component $\Delta:=\{(h, h)\mid h\in H\}$ is contained in $P$, so $P$ is a subdirect product.
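A toy illustration (my own, with $H = \mathbb{Z}/4$, $Q = \mathbb{Z}/2$ and $\pi$ the reduction mod 2, rather than the $F_2\times F_2$ example): the fibre product is a subgroup of $H\times H$ containing the diagonal, projecting onto each factor:

```python
from itertools import product

# Toy fibre product: H = Z/4, Q = Z/2, pi = reduction mod 2.
H = list(range(4))
pi = lambda h: h % 2

P = [(h1, h2) for h1, h2 in product(H, H) if pi(h1) == pi(h2)]

assert all((h, h) in P for h in H)          # the diagonal lies in P
assert len(P) == 8                          # index 2 in H x H (16 pairs)
assert sorted({a for a, _ in P}) == H       # subdirect: surjects onto H

# P is closed under the componentwise operation, so it is a subgroup:
for (a1, a2), (b1, b2) in product(P, P):
    assert ((a1 + b1) % 4, (a2 + b2) % 4) in P
```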
The first application of fibre products which I know of is that there exist finitely generated subgroups of $F_2\times F_2$ which have insoluble membership problem: let $Q$ be finitely presented with insoluble word problem; then $P$ has insoluble membership problem. To see that $P$ is finitely generated: since $Q$ is finitely presented, $N \subset H$ is finitely generated as a normal subgroup, and to obtain a finite generating set for $P$ one chooses a finite normal generating set for $N\times\{1\}$ and then appends a generating set for the diagonal $\Delta\cong H=F_2$.
A surprisingly powerful result, called the $1$-$2$-$3$ theorem, gives conditions for the fibre product to be finitely presentable; see: G. Baumslag, M.R. Bridson, C.F. Miller III, H. Short, Fibre products, non-positive curvature, and decision problems, Comm. Math. Helv. 75 (2000), 457-477.
A super-powerful application of fibre products is the following (the application is the main result of a paper in the Annals of Mathematics, one of the top journals - undisputed top 4 journal, disputed no. 1). If $\Gamma$ is a group then $\widehat{\Gamma}$ denotes its profinite completion.
Question (Grothendieck, 1970). Let $\Gamma_1$ and $\Gamma_2$ be finitely presented, residually finite groups and let $u :\Gamma_1\rightarrow \Gamma_2$ be a homomorphism such that $\widehat{u} :\widehat{\Gamma}_1\rightarrow \widehat{\Gamma}_2$ is an isomorphism of profinite groups. Does it follow that $u$ is an isomorphism from $\Gamma_1$ onto $\Gamma_2$?
Theorem (Bridson, Grunewald, 2004, Ann. Math.). There exists a short exact sequence $1\rightarrow N\rightarrow\Gamma\rightarrow Q\rightarrow 1$, where $\Gamma$ has a whole host of nice properties, such that the associated fibre product $P$ and $\Gamma$ are a counter-example to Grothendieck's question: $P$ and $\Gamma$ are finitely presentable, with $\widehat{P}\cong \widehat{\Gamma}$ but $P\not\cong \Gamma$.
Typically, when you know that an object is a subdirect product of other objects, you know that it inherits all properties that are passed from products of those objects to their subsets. For example, a subdirect product of abelian groups must be abelian (and the same for any other group identity).
In this paper we study the degree of non-constant symmetric functions $f:\{0,1\}^n \to \{0,1,\ldots,c\}$, where $c\in \mathbb{N}$, when represented as polynomials over the real numbers. We show that as long as $c < n$ it holds that $\deg(f)=\Omega(n)$. As we can have $\deg(f)=1$ when $c=n$, our result shows a surprising threshold phenomenon. The question of lower bounding the degree of symmetric functions on the Boolean cube was previously studied by von zur Gathen and Roche, who showed the lower bound $\deg(f)\geq \frac{n+1}{c+1}$, so our result greatly improves this bound.
When $c=1$, namely when the function maps the Boolean cube to $\{0,1\}$, we show that if $n=p^2$, where $p$ is a prime, then $\deg(f)\geq n-\sqrt{n}$. This slightly improves the previous bound of von zur Gathen and Roche for this case.
This paper is superseded by the ECCC Technical Report TR11-002.
I've been looking for a way to prevent Vortex (whirlpool) formation in a pressurized vessel as it drains, but I am not having any luck finding the parameters at which vortex formation occurs. Is there any way to prevent the formation of a vortex entirely?
The vortex forms due to a process called vortex stretching. Essentially as soon as the plug is pulled and the tank begins draining, a vortex forms at the center of the drain where the length of the vortex line is increased, which in turn increases the rotation rate and the familiar tornado shape forms.
The only surefire way to ensure that the vortex does not form is to ensure there is no vorticity in the tank to begin with. If there is no initial vorticity, there are no vortex lines to stretch. In reality, this means you would have to have the tank sit, perfectly still, until all of the rotation disappears. This could take days, maybe longer, maybe not even possible if the environment isn't exactly consistent (ie. the temperature changes, or there's vibrations or an earthquake or something).
If the liquid is incompressible, a good assumption for oils or water, it may be possible for viscous effects to balance the production of vorticity due to stretching and prevent the vortex from forming. The vorticity transport equation for an incompressible fluid is:
$$ \frac{D \vec{\omega}}{D t} = (\vec{\omega} \cdot \nabla) \vec{u} + \nu \nabla^2 \vec{\omega}$$
The first term on the RHS is the vortex stretching term and the second is the dissipation of vorticity by viscous forces. If you don't want a vortex to form, you would need $|\nu \nabla^2 \vec{\omega}| \geq |(\vec{\omega}\cdot\nabla)\vec{u}|$. You likely don't have much control over the viscosity -- your fluid is what it is, and you may not be able to change from water to oil or something. But you can attempt to control the stretching term through careful design of the drain diameter, shape, etc.
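As a rough scale estimate (my own, not part of the original answer): with vorticity scale $\omega$, velocity scale $U$ and length scale $L$, the stretching term scales as $\omega U/L$ and the viscous term as $\nu\omega/L^2$, so their ratio is the Reynolds number $UL/\nu$, and viscosity alone wins only when $Re \lesssim 1$:

```python
# Order-of-magnitude comparison of the two RHS terms (illustrative,
# assumed values -- not a design calculation):
#   stretching ~ w*U/L,  viscous diffusion ~ nu*w/L**2,  ratio = U*L/nu
nu_water = 1.0e-6   # kinematic viscosity of water near 20 C, m^2/s
U = 0.5             # assumed velocity scale near the drain, m/s
L = 0.02            # assumed drain radius, m

Re = U * L / nu_water
print(Re)   # ~1e4: stretching dominates by four orders of magnitude
```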
How to actually complete such a design is beyond the scope of what we can do on this site. But given known conditions of your fluid, it should be possible to design a drain that does not produce large enough stretching terms such that viscosity will kill out any vortex that forms. Whether that design will be good for your purposes or not will depend on what you need to do.
There is a simple way to prevent a vortex from occurring. Assuming the drain is on the bottom of the tank, install a horizontal plate several inches above the drain, fully covering it, such that any water traveling to the drain must move horizontally to get under the plate and reach the drain. A simple way to think of it is as a table with short legs sitting above the drain. This is done in many systems and is called "A Vortex Breaker".
The next result is a generalization of [9; Proposition 1.2].
Proposition 6.3
Let \(P_t(x, \cdot)\) be symmetric and have density \(p_t(x, y)\) with respect to \(\mu.\) Suppose that the diagonal elements \(p_{s}(\cdot, \cdot)\in L_{{\rm loc}}^{1/2}(\mu)\) for some \(s>0\) and that a set \({\fancyscript{K}}\) of bounded functions with compact support is dense in \(L^2(\mu).\) Then \(\lambda_0 = \varepsilon_{\rm max}.\)
Proof
The proof is similar to the ergodic case (cf. [6; Sect. 8.3] and [9; proof of Theorem 7.4]), and is included here for completeness. (a) Certainly, the inner product and norm here are taken with respect to \(\mu.\)
First, we have
$$ \begin{aligned} P_t(x, K)&= P_s P_{t-s} {\bf 1}_K (x)\\ &=\int \mu(\hbox{d} y) {{\hbox{d} P_s(x, \cdot)}\over {{\hbox{d}} \mu}}(y) P_{t-s} {\bf 1}_K (y)\quad(\hbox{since} P_s\ll \mu)\\ &=\mu\bigg({{\hbox{d} P_s(x, \cdot)}\over {\hbox{d} \mu}} P_{t-s} {\bf 1}_K\bigg)\\ &=\mu\bigg({\bf 1}_K P_{t-s}{{\hbox{d} P_s(x, \cdot)}\over {{\hbox{d}} \mu}} \bigg)\quad(\hbox{by symmetry of} P_t) \\ &\le \sqrt{\mu(K)} \bigg\|P_{t-s}{{\hbox{d} P_s(x, \cdot)}\over {\hbox{d} \mu}}\bigg\|\quad\hbox {(by Cauchy--Schwarz inequality)}\\ &\le \sqrt{\mu(K)} \bigg\|{{\hbox{d} P_s(x, \cdot)}\over {\hbox{d} \mu}}\bigg\| e^{-\lambda_0 (t-s)} \;(\hbox{by}\, L^2-\hbox{exponential convergence})\\ &=\Big(\sqrt{\mu(K) p_{2s}(x, x)} e^{\lambda_0 s}\Big) e^{-\lambda_0 t}\quad(by [6 (8.3)]). \end{aligned} $$
By assumption, the coefficient on the right-hand side is locally \(\mu\)-integrable. This proves that \(\varepsilon_{\rm max}\ge \lambda_0.\)
(b) Next, for each \(f\in{{\fancyscript{K}}}\) with \(\|f\|=1,\) we have
$$ \begin{aligned} \|P_t f\|^2&= (f, P_{2t} f)\quad(\hbox{by symmetry of }P_t)\\ &\le \|f\|_{\infty}\int\nolimits_{{\rm supp} (f)} \mu(\hbox{d} x) P_{2t} |f|(x) \\ &\le \|f\|_{\infty}^2 \int\nolimits_{{\rm supp} (f)} \mu (\hbox{d} x) P_{2t} (x, \hbox{supp}(f)) \\ &\le \|f\|_{\infty}^2 \int\nolimits_{{\rm supp} (f)} \mu (\hbox{d} x) c(x, \hbox{supp} (f)) e^{-2\varepsilon_{\rm max} t} \\ &=: C_f e^{-2\varepsilon_{\rm max} t}. \end{aligned} $$
The technique used here goes back to [17].
(c) The constant \(C_f\) in the last line can be removed. Following Lemma 2.2 in [24], by the spectral representation theorem and the fact that \(\|f\|=1,\) we have
$$ \begin{aligned} \|P_t f\|^2&=\int\nolimits_0^\infty e^{-2 \lambda t}\hbox{d} (E_\lambda f, f) \\ &\ge \bigg[\int\nolimits_0^\infty e^{-2 \lambda s}\hbox{d} (E_\lambda f, f)\bigg]^{t/s}\quad \hbox {(by Jensen's inequality)}\\ &=\|P_s f\|^{2t/s},\qquad \;t\ge s. \end{aligned} $$
Note that here the semigroup is allowed to be sub-Markovian. Combining this with (b), we have \(\|P_s f\|^2\le C_f^{s/t} e^{-2 \varepsilon_{\rm max} s}.\) Letting \(t\to \infty,\) we obtain
$$ \|P_s f\|^2\le e^{-2\varepsilon_{\rm max} s}, $$
first for all \(f\in {{\fancyscript{K}}}\) and then for all \(f\in L^2(\mu)\) with \(\|f\|=1,\) because of the denseness of \({{\fancyscript{K}}}\) in \(L^2(\mu).\) Therefore, \(\lambda_0\ge \varepsilon_{\rm max}.\) Combining this with (a), we complete the proof. \(\square\)
The main result (Theorem 6.2) of this paper is presented in the last section (Sect. 10) of the paper [9], as an analog of birth-death processes. Paper [9], as well as [8] for the \(\varphi^4\)-model, is available on arXiv.org.
Revision as of 13:04, 6 May 2019. This page is a WIP. The $n$-fold variants of large cardinal axioms were created by Sato Kentaro in [1] in order to study and investigate the double helix phenomenon: the strange pattern in consistency strength between such cardinals, which can be seen below.
This diagram was created by Kentaro. The arrows denote consistency strength, and the double lines denote equivalency. The large cardinals in this diagram will be detailed on this page (unless found elsewhere on this website).
This page will only use facts from [1] unless otherwise stated.
$n$-fold Variants
The $n$-fold variants of large cardinals were given in a very large paper by Sato Kentaro. Most of the definitions involve giving large closure properties to the $M$ used in the original large cardinal in an elementary embedding $j:V\rightarrow M$. They are very large, but rank-into-rank cardinals are stronger than most $n$-fold variants of large cardinals.
Generally, the $n$-fold variant of a large cardinal axiom is similar to the generalization of superstrong cardinals to $n$-superstrong cardinals, huge cardinals to $n$-huge cardinals, etc. More specifically, if the definition of the original axiom is that $j:V\prec M$ has critical point $\kappa$ and $M$ has some closure property which uses $\kappa$, then the definition of the $n$-fold variant of the axiom is that $M$ has that closure property at $j^n(\kappa)$.
$n$-fold Variants Which Are Simply the Original Large Cardinal
There were many $n$-fold variants which were simply different names for the original large cardinal. This was due to the fact that some $n$-fold variants, if only named $n$-variants instead, would be confusing to the reader (for example the $n$-fold extendibles rather than the $n$-extendibles). Here is a list of such cardinals:
The $n$-fold superstrong cardinals are precisely the $n$-superstrong cardinals. The $n$-fold almost huge cardinals are precisely the almost $n$-huge cardinals. The $n$-fold huge cardinals are precisely the $n$-huge cardinals. The $n$-fold superhuge cardinals are precisely the $n$-superhuge cardinals. The $\omega$-fold superstrong and $\omega$-fold Shelah cardinals are precisely the I2 cardinals.

$n$-fold supercompact cardinals
A cardinal $\kappa$ is $n$-fold $\lambda$-supercompact iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\lambda<j(\kappa)$ and $M^{j^{n-1}(\lambda)}\subset M$ (i.e. $M$ is closed under all of its sequences of length $j^{n-1}(\lambda)$). This definition is very similar to that of the $n$-huge cardinals.
A cardinal $\kappa$ is $n$-fold supercompact iff it is $n$-fold $\lambda$-supercompact for every $\lambda$. Consistency-wise, the $n$-fold supercompact cardinals are stronger than the $n$-superstrong cardinals and weaker than the $(n+1)$-fold strong cardinals. In fact, if an $n$-fold supercompact cardinal exists, then it is consistent for there to be a proper class of $n$-superstrong cardinals.
It is clear that the $(n+1)$-fold $0$-supercompact cardinals are precisely the $n$-huge cardinals. The $1$-fold supercompact cardinals are precisely the supercompact cardinals. The $0$-fold supercompact cardinals are precisely the measurable cardinals.
$n$-fold strong cardinals
A cardinal $\kappa$ is $n$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^{n-1}(\kappa+\lambda)}\subset M$.
A cardinal $\kappa$ is $n$-fold strong iff it is $n$-fold $\lambda$-strong for every $\lambda$. Consistency-wise, the $(n+1)$-fold strong cardinals are stronger than the $n$-fold supercompact cardinals, equivalent to the $n$-fold extendible cardinals, and weaker than the $(n+1)$-fold Woodin cardinals. More specifically, in the rank of an $(n+1)$-fold Woodin cardinal there is an $(n+1)$-fold strong cardinal.
It is clear that the $(n+1)$-fold $0$-strong cardinals are precisely the $n$-superstrong cardinals. The $1$-fold strong cardinals are precisely the strong cardinals. The $0$-fold strong cardinals are precisely the measurable cardinals.
$n$-fold extendible cardinals
For ordinal $η$, class $F$, positive natural $n$ and $κ+η<κ_1<···<κ_n$:
Cardinal $κ$ is $n$-fold $η$-extendible for $F$ with targets $κ_1,...,κ_n$ iff there are $κ+η=ζ_0<ζ_1<···<ζ_n$ and an iteration sequence $\vec e$ through $〈(V_{ζ_i},F∩V_{ζ_i})|i≤n〉$ with $\mathrm{crit}(\vec e)=κ$ and $e_{0,i}(κ)=κ_i$. Cardinal $κ$ is $n$-fold extendible for $F$ iff, for every $η$, $κ$ is $n$-fold $η$-extendible for $F$. Cardinal $κ$ is $n$-fold extendible iff it is $n$-fold extendible for $\varnothing$.
The $n$-fold extendible cardinals are precisely the $(n+1)$-fold strong cardinals.
$n$-fold $1$-extendibility is implied by $(n+1)$-fold $1$-strongness and implies $n$-fold superstrongness.
(To be added)

$n$-fold Woodin cardinals
A cardinal $\kappa$ is $n$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$ and $V_{j^{n}(f)(j^{n-1}(\alpha))}\subset M$. Consistency-wise, the $(n+1)$-fold Woodin cardinals are stronger than the $(n+1)$-fold strong cardinals, and weaker than the $(n+1)$-fold Shelah cardinals. Specifically, in the rank of an $(n+1)$-fold Shelah cardinal there is an $(n+1)$-fold Woodin cardinal, and every $(n+1)$-fold Shelah cardinal is also an $(n+1)$-fold Woodin cardinal.
The $2$-fold Woodin cardinals are precisely the Vopěnka cardinals (therefore precisely the Woodin for supercompactness cardinals). In fact, the $(n+1)$-fold Woodin cardinals are precisely the $n$-fold Vopěnka cardinals. The $1$-fold Woodin cardinals are precisely the Woodin cardinals.
(More to be added)

$\omega$-fold variants
The $\omega$-fold variant is a very strong version of the $n$-fold variant, to the point where they even beat some of the rank-into-rank axioms in consistency strength. Interestingly, they follow a somewhat backwards pattern of consistency strength relative to the original double helix. For example, $n$-fold strong is much weaker than $n$-fold Vopěnka (the jump is similar to the jump between a strong cardinal and a Vopěnka cardinal), but $\omega$-fold strong is much, much stronger than $\omega$-fold Vopěnka.
$\omega$-fold extendible
For ordinal $η$ and class $F$:
Cardinal $κ$ is $ω$-fold $η$-extendible for $F$ iff there are $κ+η=ζ_0<ζ_1<ζ_2<...$ and an iteration sequence $\vec e$ through $〈(V_{ζ_i},F∩V_{ζ_i})|i∈ω〉$ with $\mathrm{crit}(\vec e)=κ$ and $e^{(1)}(κ)>κ+η$. Cardinal $κ$ is $ω$-fold extendible for $F$ iff, for every $η$, $κ$ is $ω$-fold $η$-extendible for $F$. Cardinal $κ$ is $ω$-fold extendible iff it is $ω$-fold extendible for $\varnothing$.

(To be added)

$\omega$-fold Vopěnka
Definition:
A set $X$ is $ω$-fold Vopěnka for a cardinal $κ$ iff, for every $κ$-natural sequence $〈\mathcal{M}_α|α<κ〉$, there are an increasing sequence $〈α_n|n∈ω〉$ with $α_n<κ$ and an iteration sequence $\vec e$ through $〈\mathcal{M}_{α_n}|n∈ω〉$ such that $\mathrm{crit}(\vec e)∈X$. A cardinal $κ$ is $ω$-fold Vopěnka iff $κ$ is regular and $κ$ is $ω$-fold Vopěnka for $κ$. $F^{(ω)}_{Vop,κ}=\{X∈\mathcal{P}(κ) \mid κ\setminus X \text{ is not $ω$-fold Vopěnka for $κ$}\}$.
Results:
An $ω$-fold superstrong cardinal $κ$ is the $κ$-th $ω$-fold Vopěnka cardinal. The critical point $κ$ of a witness of $IE_ω$ is the $κ$-th $ω$-fold Vopěnka cardinal. If $κ$ is a regular cardinal and $F⊂V_κ$, we have $\{α<κ|(V_κ,F)\models \text{“$α$ is $ω$-fold extendible for $F$”}\}∈F^{(ω)}_{Vop,κ}$. If there is an $ω$-fold Vopěnka cardinal, then the existence of a proper class of $ω$-fold extendible cardinals is consistent.

(To be added)

$\omega$-fold Woodin
A cardinal $\kappa$ is $\omega$-fold Woodin iff for every function $f:\kappa\rightarrow\kappa$ there is some ordinal $\alpha<\kappa$ such that $\{f(\beta):\beta<\alpha\}\subseteq\alpha$ and $V_{j^{\omega}(f)(\alpha)}\subset M$.
Consistency-wise, the existence of an $\omega$-fold Woodin cardinal is stronger than the I2 axiom, but weaker than the existence of an $\omega$-fold strong cardinal. In particular, if there is an $\omega$-fold strong cardinal $\kappa$ then $\kappa$ is $\omega$-fold Woodin and has $\kappa$-many $\omega$-fold Woodin cardinals below it, and $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold Woodin cardinals.
$\omega$-fold strong
A cardinal $\kappa$ is $\omega$-fold $\lambda$-strong iff it is the critical point of some nontrivial elementary embedding $j:V\rightarrow M$ such that $\kappa+\lambda<j(\kappa)$ and $V_{j^\omega(\kappa+\lambda)}\subset M$.
$\kappa$ is $\omega$-fold strong iff it is $\omega$-fold $\lambda$-strong for every $\lambda$.
Consistency-wise, the existence of an $\omega$-fold strong cardinal is stronger than the existence of an $\omega$-fold Woodin cardinal and weaker than the assertion that there is a $\Sigma_4^1$-elementary embedding $j:V_\lambda\prec V_\lambda$ with an uncountable critical point $\kappa<\lambda$ (this is a weakening of the I1 axiom known as $E_2$). In particular, if there is a cardinal $\kappa$ which is the critical point of some elementary embedding witnessing the $E_2$ axiom, then there is a nonprincipal $\kappa$-complete ultrafilter over $\kappa$ which contains the set of all cardinals which are $\omega$-fold strong in $V_\kappa$ and therefore $V_\kappa$ satisfies the existence of a proper class of $\omega$-fold strong cardinals.
References
Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings, 2007.
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it is possible the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (consicous desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream still yet to show any sign of revisiting the exact same idea, and there are no known instance of either sequel dreams nor recurrence dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I still have poor knowledge.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to paint a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, would we see the interference pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in such a region the metric would become Schwarzschild-like.
if**
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times (as I have observed this semester, at least) there is nothing exciting to do. This system of torturous panic followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h Bar having software-infrastructure conversations, which confuse me like hell and are the reason I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention
@JohnRennie I wish I could just hit tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, that text editor is Emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
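Spelling out the trig step (my reconstruction of the hint; the intended route may differ): with displacement components $\frac{1}{2}$ along $x$ and $\frac{\sqrt{3}}{2}$ along $y$ per unit time, $$\tan \alpha = \frac{1/2}{\sqrt{3}/2} = \frac{1}{\sqrt{3}} \quad \Rightarrow \quad \alpha = 30^\circ$$ measured from the $y$-axis (equivalently $60^\circ$ from the $x$-axis), and the speed is $\sqrt{\left(\frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2} = 1$.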
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue OK, thanks. I found a way by connecting to the university's servers (the program isn't installed on the PCs in the computer room, but by connecting to the university's server, i.e. remotely running another environment, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
Let $X$ be a Hausdorff space that is locally compact at $x \in X$. Show that for each open nbd $U$ of $x$ there exists an open nbd $V$ of $x$ such that $\overline{V}$ is compact and $\overline{V} \subset U$.
My work:
Since $X$ is Hausdorff and locally compact, it is regular. Let $U$ be an open nbd of $x$. By assumption $X$ is locally compact, so there exists some open nbd $W$ of $x$ such that $\overline{W}$ is compact. Now consider the open set $W \cap U$; it is non-empty since $x$ lies in the intersection. By regularity find an open set $V$ such that:
$x\in V \subset \overline{V} \subset W \cap U$
Then in particular $\overline{V} \subset U$. But also $\overline{V} \subset W \subset \overline{W}$. Since $\overline{W}$ is compact then $\overline{V}$ is a closed subset of a compact set, hence compact.
Is the above OK? Thank you. |
Boundaries "at infinity" of spaces and groups had already been in the air for a while, perhaps going back to the boundary of the hyperbolic plane. Then in the 1940s we had the Freudenthal-Hopf Theorem in group theory, in which the concept of the ends of a (locally compact, locally connected) topological space and the ends of a finitely generated group were ...
For Euclid, an "angle" is formed by two rays which are not part of the same line (see Book I Definition 8). So, to Euclid, a "straight angle" is not an angle at all, and so Proposition 31 is not a special case of Proposition 20 since Proposition 20 only applies when you have an angle at the center.
Equivalently, the problem is to compute $L=\sum_{s\in S}(-1)^s/(s-1)$ where $S$ is the set of all perfect powers. A similar problem of computing $\sum_{s\in S}1/(s-1)\color{gray}{[{}=1]}$ goes back to Goldbach as far as I know. The idea is pretty much the same. Let $N$ be the set of all integers (strictly) greater than $1$. Then each $s\in S$ has a ...
What follows is taken directly from the Borweins' Pi and the AGM. Let $N$ be a positive number and $q_N=e^{-\pi\sqrt{N}}$ and $$k_N=\frac{\vartheta_{2}^{2}(q_N)}{\vartheta_{3}^{2}(q_N)},\quad k'_N=\sqrt{1-k_N^2},\quad G_N=(2k_Nk'_N)^{-1/12},\quad g_N=\left(\frac{2k_N}{{k'}_N^{2}}\right)^{-1/12}\tag{1}$$ where $\vartheta _2,\vartheta_3$ are theta functions of Jacobi defined by $$...
In more familiar terms, by the formula for the square of a sum, $$\frac1n\sum_{i=1}^n x_i^2-\left(\frac1n\sum_{i=1}^n x_i\right)^2=\frac1n\sum_{i=1}^n x_i^2-\frac1{n^2}\sum_{i=1}^n x_i^2-\frac2{n^2}\sum_{i=1}^n\sum_{j=i+1}^n x_ix_j.$$ The notation $S(x_1x_2)$ is questionable, because it doesn't clearly show that the summation indexes do not cover the ...
First, here's the geometric example illustrated:Assume that on the circumference of a circle there is a fixed point $A$ about which a ray revolves. When this ray passes through the center of the circle, we call the other point at which it intersects the circle the point B associated with this position of the ray.Now imagine that the ray “revolves ...
Statements that are undecidable in Peano arithmetic, but can be stated in it and are provable in something larger such as ZF or second-order arithmetic, include $\varepsilon_0$ induction, Goodstein's theorem and the Paris–Harrington theorem. I'm not sure, though, if they can be formatted as Gödel sentences. See also @DanielWainfleet's comment.
There is a lot of analytic number theory even in Landau's other books on algebraic number theory. A standard reference before Landau is Paul Bachmann's book on analytic number theory from the 1890s. Then there are several lecture notes that Siegel made available (Lectures on analytic number theory, and several on "Funktionentheorie") the university library ...
It (or, rather, its determinant) appears in the second volume of Cauchy's "Exercices d'analyse et de physique mathematique", as part of "Memoir on alternating functions and alternating sums" (https://books.google.se/books?id=DRg-AQAAIAAJ, pages 151-159, see particularly formula (10).) Reading just that section, the calculation of the determinant is presented ... |
I am only really familiar with D&D 5e, and whatever of D&D 3.5e I was exposed to whilst playing NWN2 some number of years back, but as far as I am aware, only druids have the spell flame blade on their spell list.
At face value, I can’t see why this would be. Summoning a sword of fire seems like the sort of thing that sorcerers or warlocks should be able to do as well; in 5e at least, they can cast shadow blade, so why not flame blade?
Originally, I thought I wanted to know the designer-reasons behind this decision (and hence didn’t ask it, as that’s off topic), but then I realised that if such a designer’s reason didn’t include anything about the in-universe lore behind it, then I would be left rather dissatisfied (i.e. if it turns out that it’s because, I don’t know… Gary Gygax lost a bet back in the day, and that’s the official reason, then technically my question would be answered, but that’s not really what I’m looking for).
What I’m really after is why this makes sense from a lore perspective. Is there something special about forming a blade out of the element of fire that is specific to druids, or requires nature-y divine magic to perform? I accept that it is the case that, RAW, only druids can cast flame blade, but why does that make sense in-universe? If my sorcerer, the head of the Sorcerer’s Guild, saw a fellow party member, a druid, cast flame blade, then my sorcerer may well wonder “why can’t I, nor any sorcerer I’ve ever met, do that?”. That’s what I’m after.
So my questions are:
Firstly, is my premise actually true? Across the editions of D&D (for which flame blade exists), has it always been a druid-exclusive spell? (Note that I’m excluding the ability to cast it as a racial spell, such as the Mephistopheles tiefling from Mordenkainen’s Tome of Foes, and I’m also excluding bards learning it via Magical Secrets and other such hacks; this is about whether or not it’s on a class’s default spell list); Secondly, the title question: why, from a lore perspective, are druids the only class who can cast flame blade? I’m looking for answers with citations from any official material from any edition of D&D.
If the answer is dependent on a specific setting, let’s assume that the setting is the Forgotten Realms.
I've got a photo taken about 120 years ago of a street near to where I live.
However hard I try I can't seem to get the same perspective if I photograph the same scene from roughly the same place using my current camera.
My attempts at using Photoshop's image transformation tools to try to match them up aren't much more successful.
Does anyone have any hints or pointers on how to proceed?
My aim is to fade the old image into the new one as part of a short video.
Thanks in advance
I have multiple cameras at different points.
I have their position and rotation as $(x,y,z)$, $(\alpha,\beta,\gamma)$ or $(\text{roll}, \text{pitch}, \text{yaw})$.
And I have output like this:
From the feed of camera 1, I know the lengths of yellow and green.
How can I calculate the position of the object, in this case the head, in 3D space?
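For what it's worth, once each camera's detection is converted into a viewing ray in world coordinates (position plus direction), a standard approach is to triangulate by finding where the rays from two cameras pass closest to each other. A minimal sketch (the function names and the two-ray example are my own, not from the question):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    b = d1 @ d2                 # cosine of the angle between the rays
    d = d1 @ w0
    e = d2 @ w0
    denom = 1.0 - b * b         # ~0 when the rays are (near-)parallel
    t = (b * e - d) / denom     # closest-approach parameter along ray 1
    s = (e - b * d) / denom     # closest-approach parameter along ray 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Two cameras at (0,0,0) and (10,0,0), both seeing a head at (5,5,0):
head = triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]),
                   np.array([10.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0]))
```

The direction of each ray comes from the camera's (roll, pitch, yaw) combined with the pixel offset of the detection; with more than two cameras you can average the pairwise estimates or set up a least-squares intersection.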
I looked up a tutorial and created a function based upon its script. It’s essentially used so I can select dependent variables that are a subset of a data frame. It runs, but it is very, very slow.
How would I flatten a nested for-loop such as this?
I tried implementing an enumerate version, but it did not work. Ideally I’d like the complexity to be linear; currently, it’s at 2^n. I’m not sure how I can flatten the nested for loop such that I can append the results of a function to a list.
def BestSubsetSelection(X, Y, plot=True):
    RSS_list = []
    R_squared_list = []
    feature_list = []
    numb_features = []
    # Loop over all possible numbers of features k
    for k in range(1, len(X.columns) + 1):
        # Loop over all possible combinations: from len(X.columns) choose k
        for combo in itertools.combinations(X.columns, k):
            # Store temporary results
            temp_results = fit_linear_reg(X[list(combo)], Y)
            # Append RSS to RSS list
            RSS_list.append(temp_results[0])
            # Append R-squared to R-squared list
            R_squared_list.append(temp_results[1])
            # Append feature/s to feature list
            feature_list.append(combo)
            # Append the number of features to the number of features list
            numb_features.append(len(combo))
A copy of the full implementation can be found here: https://github.com/melmaniwan/Elections-Analysis/blob/master/Implementing%20Subset%20Selections.ipynb
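On the flattening itself: the nested "for k ... for combo ..." pair can be collapsed into one loop with itertools.chain.from_iterable. Note this does not reduce the 2^n - 1 subsets, which is inherent to best-subset selection; it only flattens the iteration (a sketch with placeholder column names):

```python
import itertools

def all_feature_subsets(columns):
    # One flat iterator over every non-empty combination of the columns,
    # replacing the nested "for k ... for combo ..." loops.
    return itertools.chain.from_iterable(
        itertools.combinations(columns, k)
        for k in range(1, len(columns) + 1)
    )

# Single loop: one result row per subset (placeholder columns "a", "b", "c").
results = [(combo, len(combo)) for combo in all_feature_subsets(["a", "b", "c"])]
```

Inside that single loop you can call your fitting function once per subset and append each result to a list, exactly as in the original code.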
Is there a standard reference for understanding sentential, first and higher order logics from a categorical perspective?
I’m close to knowing enough $1$/$2$/internal category theory to tackle the Joyal-Tierney Galois theorem for toposes and the Borceux-Janelidze generalization for internal precategories, to give some idea of my knowledge base. I’ve heard that category theory can model all of these logics as the internal logic of an appropriate (possibly higher) category, and I was looking for a reference that builds up logic from this perspective for someone who has never explicitly read a logic textbook.
For the record I have “A Course in Mathematical Logic” by Bell and Machover ordered and in the mail; I fully intend to take the classical route up through logic as well, but I was curious about any categorical cheat codes along the way.
It seems (naively) like type theory might be the answer here, and I would be open to suggestions in that direction, but I am currently unfamiliar with the inner workings (or general moral) of type theory.
Perspective through CNS (canvas, paper, screen) plane
I guess I'm asking for something similar to Code Review, but for trigonometry
Greetings Johan
Let’s say that I have a class representing the Neural Network. The neural network is composed of three bigger units: subpart_1, subpart_2 and subpart_3, called in such a way that the output of one part is an input to the next one. The parts are themselves Neural Networks with the same basic interface. In my current approach, each of these parts is created inside the constructor, the __init__ method of my class, and I am passing in the configuration parameters. It inherits from some framework NeuralNetwork class which gives me access to a common interface like forward/backward methods. It looks like this:
class NeuralNetwork(framework.NeuralNetworkBase):
    def __init__(self, subp_1_params, subp_2_params, subp_3_params):
        super(...)
        self.subpart_1 = Subpart1Class(subp_1_params)
        self.subpart_2 = Subpart2Class(subp_2_params)
        self.subpart_3 = Subpart3Class(subp_3_params)
    ...
    def forward(self, input):
        <use the subparts on the data in some way>
You can imagine the SubpartXClasses being implemented in a similar manner. Now, I have been recently reading more on Unit Testing and Unit Testing-friendly design and in particular I have stumbled upon the blog post:
http://misko.hevery.com/code-reviewers-guide/flaw-constructor-does-real-work/
in which the author claims that unless the fields of the class are plain data structures, they should not be created inside the constructor and should instead be passed as constructor arguments, to allow for a maximally decoupled and unit-testing-friendly design. The tone of that post is really strict, but it is also quite old. So now my question: in a scenario like this, would it be a straight-up better idea to create the objects externally, through some factory, and pass ready and configured objects into the constructor to register them with my NeuralNetwork class? From my point of view, the current approach seems more natural (actually the framework I am using also promotes this approach), but after reading that blog post I started having some doubts.
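For comparison, here is a minimal sketch of the injected version (NeuralNetworkBase is a stand-in stub for the framework base class, and AddN is a fake subpart invented purely for the example):

```python
class NeuralNetworkBase:
    """Stand-in stub for framework.NeuralNetworkBase (assumption)."""

class NeuralNetwork(NeuralNetworkBase):
    def __init__(self, subpart_1, subpart_2, subpart_3):
        # The constructor only stores ready, configured collaborators;
        # construction and configuration move out to a factory.
        self.subpart_1 = subpart_1
        self.subpart_2 = subpart_2
        self.subpart_3 = subpart_3

    def forward(self, x):
        return self.subpart_3.forward(
            self.subpart_2.forward(self.subpart_1.forward(x)))

class AddN:
    """Fake subpart showing how easily the class now accepts test doubles."""
    def __init__(self, n):
        self.n = n

    def forward(self, x):
        return x + self.n

net = NeuralNetwork(AddN(1), AddN(2), AddN(3))
```

A build_network(subp_1_params, subp_2_params, subp_3_params) factory would construct the real Subpart classes and pass them in, so production call sites stay a one-liner while unit tests inject fakes.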
I would like to have a website that loads fast from any point in the world. From my understanding, you need to take advantage of the data center “regions” at places like Google Cloud or AWS. That’s as far as my understanding goes.
What is missing is how exactly to implement it (at a high/diagram level here, not at the code level).
For example, on Google Cloud for a specific “project” you have the choice of IPv4 or IPv6, of “Premium” vs. “Standard” (“Traffic traverses Google’s high quality global backbone, entering and exiting at Google edge peering points closest to the user.” vs. “Traffic enters and exits the Google network at a peering point closest to the Cloud region it’s destined for or originated in.”), and of Global vs. Regional. If you select IPv6 you are limited to Premium. If you select Global on IPv4 you are limited to Premium. I am not sure how this works, not at the level of detail of Google’s specific system, but what generally is going on here. I don’t see how a “global” IP address can be better than a regional one, since the regional one is closer to the request source.
On AWS, they don’t have a “Global” option, all IP addresses are IPv4 and they are regional.
That was just some background for the main question.
My question is how to architect a system to take advantage of regional data centers, just generally, at the IP level. I am wondering how it goes from my domain to the regional IP address. Or, if it is like Google Cloud’s “Global” IP address, how you could then integrate regions into it. Or, if that’s backwards, how to conceptualize this. I saw the diagram below, but it only explains how a domain is mapped to a single region-independent IP address. I don’t see how regions are actually implemented / come into play.
Basically I would like to know at a high level how I should organize my IP addresses and servers to take advantage of regional data centers. So far my thinking is of having servers in different regions with their own copies of data. But then I get lost when thinking about IP addresses and domains. If I use the AWS model, it seems I reserve an IP per entrypoint server per region. But then I don’t see how the domain name figures out which IP/region to select. If I use the global Google Cloud model, I don’t see how I can add regional servers. |
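One piece that may help conceptually: with latency- or geo-based DNS (e.g. AWS Route 53 latency routing), the domain itself is the switch; the authoritative name server answers each lookup with the regional IP closest (in latency) to the client's resolver. A toy sketch of just the selection logic (region names, latencies, and the documentation-range addresses are all placeholders):

```python
# Toy model of latency-based DNS routing. Real systems use live latency
# measurements per resolver; anycast (one "global" IP announced from many
# regions, as in Google's global load balancing) is the alternative that
# makes the same decision at the routing layer instead.
REGIONAL_ENDPOINTS = {
    "us-east": "203.0.113.10",
    "eu-west": "203.0.113.20",
    "ap-south": "203.0.113.30",
}

def resolve(latency_ms_by_region):
    # Answer the DNS query with the lowest-latency region's endpoint.
    best_region = min(latency_ms_by_region, key=latency_ms_by_region.get)
    return REGIONAL_ENDPOINTS[best_region]

ip = resolve({"us-east": 180, "eu-west": 25, "ap-south": 120})
```

Each regional server keeps its own copy of the data, as you suggest; DNS (or anycast routing) is what maps one domain onto many regional entry points.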
Under the auspices of the Computational Complexity Foundation (CCF)
Properties definable in first-order logic are algorithmically interesting for both theoretical and pragmatic reasons. Many of the most studied algorithmic problems, such as Hitting Set and Orthogonal Vectors, are first-order, and the first-order properties naturally arise as relational database queries. A relatively straightforward algorithm for evaluating a property with k+1 quantifiers takes time $O(m^k)$ and, assuming the Strong Exponential Time Hypothesis (SETH), some such properties require $O(m^{k-\epsilon})$ time for any $\epsilon > 0$. (Here, m represents the size of the input structure, i.e. the number of tuples in all relations.)
We give algorithms for every first-order property that improve this upper bound to $m^k/2^{\Theta(\sqrt{\log n})}$, i.e., an improvement by a factor more than any poly-log, but less than the polynomial required to refute SETH. Moreover, we show that further improvement is equivalent to improving algorithms for sparse instances of the well-studied Orthogonal Vectors problem. Surprisingly, both results are obtained by showing completeness of the Sparse Orthogonal Vectors problem for the class of first-order properties under fine-grained reductions. To obtain improved algorithms, we apply the fast Orthogonal Vectors algorithm of [AWY15, CW16].
While fine-grained reductions (reductions that closely preserve the conjectured complexities of problems) have been used to relate the hardness of disparate specific problems both within P and beyond, this is the first such completeness result for a standard complexity class.
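As context for readers, the Orthogonal Vectors problem asks whether two sets of Boolean vectors contain a pair whose coordinate-wise product is all zero; the quadratic brute force below is the baseline that the algorithms discussed here improve on (an illustrative sketch, not the paper's algorithm):

```python
from itertools import product

def has_orthogonal_pair(A, B):
    # Baseline O(n^2 * d) check over all pairs of vectors.
    return any(
        all(a * b == 0 for a, b in zip(u, v))
        for u, v in product(A, B)
    )

# (1,0) and (0,1) are orthogonal, so this instance is a yes-instance:
found = has_orthogonal_pair([(1, 0), (1, 1)], [(1, 1), (0, 1)])
```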
Made updates according to the valuable comments from the Transactions on Algorithms reviewers.
(1) Algorithmic applications: Our reduction from first-order properties to OV gives better algorithms for first-order properties.
(2) Extending the result to hypergraphs (relations of arity greater than 2).
(3) The new reduction algorithm is deterministic.
(4) We are glad that Antonina Kolokolova and Ryan Williams became our coauthors.
Fine-grained reductions, introduced by Vassilevska-Williams and Williams, preserve any improvement in the known algorithms. These have been used very successfully in relating the exact complexities of a wide range of problems, from NP-complete problems like SAT to important quadratic time solvable problems within P such as Edit Distance. However, until now, there have been few equivalences between problems and in particular, no problems that were complete for natural classes under fine-grained reductions. We give the first such completeness results. We consider the class of first-order graph property problems, viewing the input in adjacency list format (aka "sparse graph representation"). For this class, we show that the sparse Orthogonal Vectors problem is complete under randomized fine-grained reductions. In proving completeness for this problem, we also show that this sparse problem is equivalent to the standard Orthogonal Vectors problem when the number of dimensions is polynomially related to the number of vectors. Finally, we also establish a completeness and hardness result for k-Orthogonal Vectors.
Our results imply that the conjecture "not every first-order graph problem has an improved algorithm" is a useful intermediary between SETH and the conjectured hardness of problems such as Edit Distance. It follows that, if Edit Distance has a substantially subquadratic algorithm, then every first order graph problem has an improved algorithm. On the other hand, if first order graph property problems have improved algorithms, this falsifies SETH (and even some weaker versions of SETH) and gives new circuit lower bounds. We hope that this is the beginning of extending fine-grained complexity to include classes of problems as well as individual problems.
Updated the k=2 case in Section 5, "Step 1-2".
Identify useful patterns and use methods to solve in a few steps
We will take up in this session two selected NCERT Trigonometry problems which we will solve in a few steps. In the process, we will highlight the problem solving approach, in which hidden useful patterns are identified and used to achieve quick simplification.
We may classify these as elegant within-a-minute problems, where conventional deductions may take a backseat, leaving the lead to pattern- and method-based high speed problem solving.
Generally these problems are solved in the mind with minimum writing. In these solutions the number of steps is reduced significantly, so that even with writing, the approach saves valuable time.
We will provide explanations along with solutions.
Let us start solving now.
Solutions to NCERT class 10 level Trigonometry problems—set 1

Problem 1.
Find the value of $(1+\tan \theta +\sec \theta)(1+\text{cot } \theta - \text{cosec } \theta)$ among the following choices. Justify your answer.
a. $0$ b. $1$ c. $2$ d. $-1$

Solution 1.
In these cases of a product of two trigonometric expressions, the time-taking approach is to actually proceed with expanding the multiplication. This would result in many terms. Hopefully we would then find cancellations and carry out simplifications to reach the simple solution.
Generally, looking deeper, we find possibilities that make the two factors much easier to handle, so that after multiplication of the two transformed expressions the number of terms is minimized. This is our first objective.
In this case then, the first question we ask is: how can we make the terms of the two expressions more similar, so that we can use the minus sign in the second expression and minimize the number of terms in the product?
We can do this in two ways: either factor $\tan \theta$ out of the first expression, or factor $\text{cot } \theta$ out of the second expression. We will take the first path.
Given expression,
$E=(1+\tan \theta +\sec \theta)(1+\text{cot } \theta - \text{cosec } \theta)$
$=\tan \theta \left(\text{cot } \theta +1+\displaystyle\frac{\sec \theta}{\tan \theta}\right)(1+\text{cot } \theta-\text{cosec } \theta)$
$=\tan \theta(1+\text{cot } \theta + \text{cosec } \theta)(1+\text{cot } \theta - \text{cosec } \theta)$
$=\tan \theta \left[(1+\text{cot } \theta)^2 - \text{cosec}^2 \theta \right]$
$=\tan \theta \left[1+2\text{cot } \theta - (\text{cosec}^2 \theta - \text{cot} ^2 \theta)\right]$
$=\tan \theta \cdot 2\text{cot } \theta$, since $\text{cosec}^2 \theta - \text{cot}^2 \theta=1$ and it cancels out with the other $1$,
$=2$.
Answer: Option c: 2.
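A quick numerical sanity check of the result (a sketch in Python, not part of the original solution):

```python
import math

def product_expr(theta):
    # (1 + tan + sec)(1 + cot - cosec), defined wherever sin and cos != 0
    s, c = math.sin(theta), math.cos(theta)
    tan, sec = s / c, 1 / c
    cot, cosec = c / s, 1 / s
    return (1 + tan + sec) * (1 + cot - cosec)

# The product evaluates to 2 at every sampled angle, matching option c.
values = [product_expr(t) for t in (0.3, 0.7, 1.2)]
```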
The pattern we detected was the nearly similar terms in the two expressions and the opposite signs, opening up the possibility to use $(a+b)(a-b)=a^2 - b^2$.
To make the pattern usable we factored $\tan \theta$ out of the first expression. This is the method we call term factoring out. It is a general algebraic minor method that we use extensively in Algebra, Surds and Trigonometry. Here we can call it specifically trigonometric term factoring out.
You may ask: how do I know that factoring out $\tan \theta$ will make the two expressions similar and let us use $(a+b)(a-b)=a^2 - b^2$?
How it was decided that $\tan \theta$ need to be factored out
Let us try to produce the action steps and reasoning,
First, we analyzed the expressions, noticed two factors, and decided from experience to try simplifying the factors first, before multiplication, so that the number of terms is reduced as much as possible. This is the formation of a clear objective, and it will work while solving any such problem.
Second, on a closer look we detected the similarity of the terms of the two factors, with the significant difference of a minus sign in the second factor. With this fact in mind, the next step was automatic.
Third, formation of a specific objective: how can we make the two expressions more similar and use the minus sign?
Fourth, as a first trial we factored out $\tan \theta$ from the first two terms of the first expression and found that the terms became exactly the same as the first two terms of the second expression. Going ahead with the third term was natural.
Mark the phrase first trial. Factoring out $\tan \theta$ was a trial, but a trial based on experience and promise, made after analyzing the problem. We make trials often in problem solving, but no trial is random, and trials come at a later stage of problem solving, after analysis. In most cases, we carry out this type of trial with near certainty borne out of experience and deductive reasoning. A truth which aids good decision making in math problem solving is,
Solving the problem involves simplification from a complex product of two expressions to a numeric result. So there must be patterns hidden in the expressions that, if used, will lead to simplification, not more complexity.
In any case, most such trials need a few seconds at most to carry out in mind, so we always explore possibilities like this.
The second truth is: unless you explore new possibilities and hidden useful patterns, how can you ever find them?
If there were no writing requirement, because of the inherent pattern-based simplicity of the solution steps, the problem could have been solved wholly in the mind without any writing, well within a minute.

Problem 2
Find the value of $(\sec A+\tan A)(1-\sin A)$ among the following choices. Justify your answer.
a. $\sec A$  b. $\sin A$  c. $\text{cosec } A$  d. $\cos A$

Solution 2
Here we will take a different approach. Observing $(1-\sin A)$, we explore the possibility of converting it to $(1-\sin A)(1+\sin A)=1-\sin^2 A=\cos^2 A$ by multiplying and dividing by $(1+\sin A)$. We could see the solution immediately and followed the path without hesitation.
Let us explain,
$(\sec A+\tan A)(1-\sin A)\times{\displaystyle\frac{1+\sin A}{1+\sin A}}$
$=\displaystyle\frac{(\sec A+\tan A)\cos^2 A}{1+\sin A}$
$=\displaystyle\frac{\cos A +\sin A\cos A}{1+\sin A}$
$=\displaystyle\frac{\cos A(1+\sin A)}{1+\sin A}$
$=\cos A$.
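The chain of equalities above can be spot-checked numerically; a minimal sketch in Python using only the standard math module (the function name lhs is ours):

```python
import math

def lhs(a):
    # (sec A + tan A)(1 - sin A), with sec A written as 1/cos A
    return (1 / math.cos(a) + math.tan(a)) * (1 - math.sin(a))

# The product should agree with cos A at any angle where sec and tan exist.
for a in (0.3, 0.7, 1.2):
    assert abs(lhs(a) - math.cos(a)) < 1e-12
```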
Answer. Option d: $\cos A$.

Action steps and reasoning of problem solving

In this problem we have used a different type of pattern and method.
Let us first list the pairs of trigonometric functions that we call friendly trigonometric function pairs:

1. $1 \pm \sin A$ and $1 \pm \cos A$. These use the most popular trigonometric identity, $\sin^2 A + \cos^2 A=1$.
2. $\sec A \pm \tan A$. These use the highly effective group of identities $\sec^2 A - \tan^2 A=1$, or equivalently, $\sec A - \tan A=\displaystyle\frac{1}{\sec A + \tan A}$.
3. $\text{cosec } A \pm \cot A$. These use the group of highly effective identities $\text{cosec}^2 A - \cot^2 A=1$, or equivalently, $\text{cosec } A - \cot A=\displaystyle\frac{1}{\text{cosec } A + \cot A}$.
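All the identities in this toolkit are easy to spot-check numerically; a small sketch (the angle 0.8 is an arbitrary choice at which every function is defined):

```python
import math

a = 0.8  # arbitrary angle where all six functions are defined
sec, tan = 1 / math.cos(a), math.tan(a)
csc, cot = 1 / math.sin(a), 1 / math.tan(a)

assert abs(math.sin(a)**2 + math.cos(a)**2 - 1) < 1e-12
assert abs(sec**2 - tan**2 - 1) < 1e-12
assert abs((sec - tan) - 1 / (sec + tan)) < 1e-12
assert abs(csc**2 - cot**2 - 1) < 1e-12
assert abs((csc - cot) - 1 / (csc + cot)) < 1e-12
```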
In trigonometric simplification these standard, well-known identities form our basic problem solving resource, like arrows in a hunter's quiver. The more such arrows you can collect and put in your problem solving quiver, the faster you can solve, and the wider the range of problems you can handle.
Whenever we encounter such an expression, we make quick mental trials, exploring simplifying possibilities. We knew the solution could also be reached quickly through the other factor, $\sec A+\tan A$, but we chose $1-\sin A$, since the $\sin$ function is simpler than $\sec$ or $\tan$: $\sin$ and $\cos$ are the first-level functions, and the other two arise from specific operations on them. Thus when we detect the same promise in two expressions, we choose the simpler one. The rest was algebraic and straightforward.
|
I am interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
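The partial sums of $\Psi(2)$ are easy to compute numerically; a sketch assuming SymPy's primepi for the prime-counting function (the helper name psi2_partial is ours, and convergence is slow since the terms decay like $(\ln n)^2/n^2$):

```python
from sympy import primepi

def psi2_partial(N):
    # Partial sum of Psi(2): sum of 1/pi(n)^2 for n = 2..N
    return sum(1 / int(primepi(n))**2 for n in range(2, N + 1))

for N in (10, 100, 1000):
    print(N, psi2_partial(N))
```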
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition? (I don't see how it could be the latter.)
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\, g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right|<1$, then $a_n(t)$ converges to zero.
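A standard route is the bound $|g_n(t)| \le M\, t^{n-1}/(n-1)!$ with $M = \max |g|$, proved by induction, which gives $n!\,|g_n(t)| \le M\, n\, t^{n-1} \to 0$ for $t \le 1/2$. For the special choice $g \equiv 1$ (our choice, for illustration only) the iterated integrals are exactly $g_n(t) = t^{n-1}/(n-1)!$, so the bound is tight and easy to check numerically:

```python
# For g ≡ 1, g_n(t) = t^(n-1)/(n-1)!, so n! * g_n(t) = n * t^(n-1),
# which tends to 0 whenever t <= 1/2.
t = 0.5
seq = [n * t**(n - 1) for n in range(1, 21)]

assert seq[-1] < 1e-4               # already tiny by n = 20
assert seq[5] > seq[10] > seq[19]   # eventually strictly decreasing
```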
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional with respect to the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximate values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional obtained directly from the condition that the determinant is zero. I wonder whether there is something deeper in the background, or so to say a more general principle.
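What is described here is the linear variation (Rayleigh-Ritz) method, and the general principle is that the stationary values of the functional over a linear ansatz are the generalized eigenvalues of the matrix pencil $(H, S)$, i.e. exactly the roots of the secular determinant $\det(H - \lambda S) = 0$. A minimal numerical sketch, assuming the functional is a Rayleigh quotient $x^{T}Hx / x^{T}Sx$ with symmetric $H$ and positive-definite overlap $S$ (all matrix names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
H = (M + M.T) / 2                # symmetric matrix of the bilinear form
S = np.eye(n) + 0.1 * (M @ M.T)  # positive-definite "overlap" matrix

# Reduce the pencil (H, S) to a standard symmetric eigenproblem
# via the Cholesky factor of S: with S = L L^T and u = L^T x,
# H x = lam S x becomes (L^-1 H L^-T) u = lam u.
L = np.linalg.cholesky(S)
Linv = np.linalg.inv(L)
lam, V = np.linalg.eigh(Linv @ H @ Linv.T)

# 1) Every stationary value is a root of the secular determinant:
for l in lam:
    assert abs(np.linalg.det(H - l * S)) < 1e-8

# 2) Each stationary value equals the functional at its own
#    coefficient vector, recovered only for verification here:
for l, v in zip(lam, V.T):
    x = Linv.T @ v
    assert np.isclose((x @ H @ x) / (x @ S @ x), l)
```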
If $x$ is a prime number and its digit reverse $y$ is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies $\text{digitsum}(z)=\text{digitsum}(x)$.
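The claim is easy to test by brute force, assuming "midway" means $z = (x+y)/2$ (an integer, since $x$ and $y$ are both odd). The check passes for every two- and three-digit reversible prime, but the search below turns up four-digit counterexamples, the smallest being $x = 1031$, $y = 1301$, $z = 1166$, which is not a palindrome; so the claim does not hold in general. A sketch using SymPy's isprime:

```python
from sympy import isprime

def digitsum(n):
    return sum(map(int, str(n)))

def is_palindrome(n):
    return str(n) == str(n)[::-1]

counterexamples = []
for x in range(10, 10000):
    y = int(str(x)[::-1])
    if isprime(x) and isprime(y):
        z = (x + y) // 2  # x, y both odd, so x + y is even
        if not (is_palindrome(z) and digitsum(z) == digitsum(x)):
            counterexamples.append((x, y, z))

print(counterexamples[0])  # smallest failure found by the search
```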
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
Under the auspices of the Computational Complexity Foundation (CCF)
We show a new duality between the polynomial margin complexity of $f$ and the discrepancy of the function $f \circ$ XOR, called an XOR function. Using this duality,
we develop polynomial based techniques for understanding the bounded error (BPP) and the weakly-unbounded error (PP) communication complexities of XOR functions. ... more >>>
The approximate degree of a Boolean function $f$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. The approximate degree of $f$ is known to be a lower bound on the quantum query complexity of $f$ (Beals et al., FOCS 1998 and ... more >>>
The $\epsilon$-approximate degree $\widetilde{\text{deg}}_\epsilon(f)$ of a Boolean function $f$ is the least degree of a real-valued polynomial that approximates $f$ pointwise to error $\epsilon$. The approximate degree of $f$ is at least $k$ iff there exists a pair of probability distributions, also known as a dual polynomial, that are perfectly ... more >>>
The $\epsilon$-approximate degree of a function $f\colon X \to \{0, 1\}$ is the least degree of a multivariate real polynomial $p$ such that $|p(x)-f(x)| \leq \epsilon$ for all $x \in X$. We determine the $\epsilon$-approximate degree of the element distinctness function, the surjectivity function, and the permutation testing problem, showing ... more >>> |
Given two matrices $A$ and $B$, I'd like to find vectors $x$ and $y$, such that, $$ \min \sum_{ij} (A_{ij} - x_i y_j B_{ij})^2. $$ In matrix form, I'm trying to minimize the Frobenius norm of $A - \mbox{diag}(x) \cdot B \cdot \mbox{diag}(y) = A - B \circ (x y^\top)$.
In general, I'd like to find multiple unit vectors $x^{(k)}$ and $y^{(k)}$ in the form $$ \min \sum_{ij} \Big(A_{ij} - \sum_{k=1}^n s_k x_i^{(k)} y_j^{(k)} B_{ij}\Big)^2, $$ where the $s_k$ are positive real coefficients.
This is equivalent to singular value decomposition (SVD) when $(B)_{ij} = 1$.
Does anybody know what this problem is called? Is there a well-known algorithm like SVD for the solution of such problem?
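This is a (rank-one) weighted low-rank approximation problem, and a standard approach is alternating least squares: with $y$ fixed, each $x_i$ is a one-dimensional least-squares problem with a closed-form update, and vice versa. A sketch (heuristic only: the joint problem is nonconvex, and $x$, $y$ are determined only up to a shared scale; the function name is ours):

```python
import numpy as np

def als_diag_scaling(A, B, iters=1000):
    """Alternating least squares for min ||A - diag(x) B diag(y)||_F."""
    m, n = A.shape
    x, y = np.ones(m), np.ones(n)
    for _ in range(iters):
        C = B * y                       # C_ij = B_ij * y_j, y held fixed
        x = (A * C).sum(axis=1) / (C * C).sum(axis=1)
        D = B * x[:, None]              # D_ij = x_i * B_ij, x held fixed
        y = (A * D).sum(axis=0) / (D * D).sum(axis=0)
    return x, y

# Sanity check on a consistent instance: when A is exactly
# diag(x) B diag(y), the residual should be driven to (near) zero.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 4))
x_true = rng.uniform(1, 2, size=5)
y_true = rng.uniform(1, 2, size=4)
A = x_true[:, None] * B * y_true

x, y = als_diag_scaling(A, B)
residual = np.linalg.norm(A - x[:, None] * B * y)
assert residual < 1e-6
```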
(migrated from math.SE) |
We give new quantum algorithms for evaluating composed functions whose inputs may be shared between bottom-level gates. Let $f$ be a Boolean function and consider a function $F$ obtained by applying $f$ to conjunctions of possibly overlapping subsets of $n$ variables. If $f$ has quantum query complexity $Q(f)$, we give ... more >>>
We prove two new results about the inability of low-degree polynomials to uniformly approximate constant-depth circuits, even to slightly-better-than-trivial error. First, we prove a tight $\tilde{\Omega}(n^{1/2})$ lower bound on the threshold degree of the Surjectivity function on $n$ variables. This matches the best known threshold degree bound for any AC$^0$ ... more >>>
The approximate degree of a Boolean function $f$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. The approximate degree of $f$ is known to be a lower bound on the quantum query complexity of $f$ (Beals et al., FOCS 1998 and ... more >>>
The approximate degree of a Boolean function $f \colon \{-1, 1\}^n \rightarrow \{-1, 1\}$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. We introduce a generic method for increasing the approximate degree of a given function, while preserving its computability by ... more >>>
Threshold weight, margin complexity, and Majority-of-Threshold circuit size are basic complexity measures of Boolean functions that arise in learning theory, communication complexity, and circuit complexity. Each of these measures might exhibit a chasm at depth three: namely, all polynomial size Boolean circuits of depth two have polynomial complexity under the ... more >>>
The sign-rank of a matrix $A$ with entries in $\{-1, +1\}$ is the least rank of a real matrix $B$ with $A_{ij} \cdot B_{ij} > 0$ for all $i, j$. Razborov and Sherstov (2008) gave the first exponential lower bounds on the sign-rank of a function in AC$^0$, answering an ... more >>>
The approximate degree of a Boolean function $f: \{-1, 1\}^n \to \{-1, 1\}$ is the minimum degree of a real polynomial that approximates $f$ to within error $1/3$ in the $\ell_\infty$ norm. In an influential result, Aaronson and Shi (J. ACM 2004) proved tight $\tilde{\Omega}(n^{1/3})$ and $\tilde{\Omega}(n^{2/3})$ lower bounds on ... more >>>
Polynomial approximations to boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underly algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for ... more >>>
We establish a generic form of hardness amplification for the approximability of constant-depth Boolean circuits by polynomials. Specifically, we show that if a Boolean circuit cannot be pointwise approximated by low-degree polynomials to within constant error in a certain one-sided sense, then an OR of disjoint copies of that circuit ... more >>>
The $\epsilon$-approximate degree of a Boolean function $f: \{-1, 1\}^n \to \{-1, 1\}$ is the minimum degree of a real polynomial that approximates $f$ to within $\epsilon$ in the $\ell_\infty$ norm. We prove several lower bounds on this important complexity measure by explicitly constructing solutions to the dual of an ... more >>> |