text
stringlengths 256
16.4k
|
|---|
This notebook uses Python to generate a list of relations and check their properties (reflexive, symmetric, antisymmetric, transitive, and equivalence). Defintions and examples are taken from lecture 15 and 16 of a discrete math course taught by Professor Maltais at the University of Ottawa during the 2017 winter semester.
by Ben Pyrik, 2017/04/17
A function is an assignment of elements from one set (the domain) to another set (the codomain) where every element of the domain is assigned only a single element from the codomain. In other words, every
pre-image has one (and only one) image. Relations are similar, but looser in the sense that pre-images can have multiple images. If $1$ is an element of A, it can be related to $5$ and $6$. This kind of assignment is not possible with a function.
A
binary relation between two sets (A and B) is defined as "a subset of A x B" (i.e. the "Cartesian product" of A and B). Similarly, a relation between a set (A) and itself is defined as a subset of A x A. What does this mean? Note: I'm going to use lists instead of sets here due to complications with powerset.
A = [1, 2]
# for simplicity, make B = AB = [1, 2]
# Applying definition of the Cartesian ProductAB = [(a, b) for a in A for b in B]AB
[(1, 1), (1, 2), (2, 1), (2, 2)]
Though it may be tempting, the above is
not the set of all possible relations between A and B. Rather, it is the superset of any relation between A and B--in other words, any relation between A and B will be a subset of this set.
However, the powerset of AB
is the set of all possible relations between A and B.
# http://sevko.io/articles/power-set-algorithms/#tocAnchor-1-9def is_bit_flipped(num, bit): return (num >> bit) & 1def powerset(set_): subsets = [] for subset in range(2 ** len(set_)): new_subset = [] for bit in range(len(set_)): if is_bit_flipped(subset, bit): new_subset.append(set_[bit]) subsets.append(new_subset) return subsets
pAB = powerset(AB)# order sets from smallest to largestpAB = sorted(pAB, key=len)pAB
[[], [(1, 1)], [(1, 2)], [(2, 1)], [(2, 2)], [(1, 1), (1, 2)], [(1, 1), (2, 1)], [(1, 2), (2, 1)], [(1, 1), (2, 2)], [(1, 2), (2, 2)], [(2, 1), (2, 2)], [(1, 1), (1, 2), (2, 1)], [(1, 1), (1, 2), (2, 2)], [(1, 1), (2, 1), (2, 2)], [(1, 2), (2, 1), (2, 2)], [(1, 1), (1, 2), (2, 1), (2, 2)]]
pAB contains all possible subsets of A x B (and equivalently) all possible relations between A and B. Some are functions (e.g. #6), but most are not.
How many relations are there? You could count them, but I'll save you the trouble.
len(pAB)
16
There are 16 relations between the set $A$ and itself.
Another way of determining this number is to use the formula$$ \begin{align} |P(A \times B)| &= 2^{|A \times B|} \\ &= 2^{|A| |B|} \\ &= 2^{|2| |2|} & \text{Substituting the cardinalities of A and B} \\ &= 2^{4} \\ &= 16 \end{align} $$
# function that determines if a relation (r) on a set (A) is reflexive (or not)def reflexive(r, A): '''(list, list) -> boolean''' for u in A: # look for (u, u), return False if it wasn't found try: r.index((u, u)) except ValueError: return False # loop completed without error, thus the relation must be reflexive return True
# testingreflexive([(1, 1)], A)
False
reflexive([(1, 1), (2, 2)], A)
True
For all possible relations on the set $A$, which are reflexive?
for relation in pAB: if (reflexive(relation, A)): print(str(relation) + " is reflexive!" +"\n") else: print(str(relation) + " is NOT reflexive." +"\n")
[] is NOT reflexive. [(1, 1)] is NOT reflexive. [(1, 2)] is NOT reflexive. [(2, 1)] is NOT reflexive. [(2, 2)] is NOT reflexive. [(1, 1), (1, 2)] is NOT reflexive. [(1, 1), (2, 1)] is NOT reflexive. [(1, 2), (2, 1)] is NOT reflexive. [(1, 1), (2, 2)] is reflexive! [(1, 2), (2, 2)] is NOT reflexive. [(2, 1), (2, 2)] is NOT reflexive. [(1, 1), (1, 2), (2, 1)] is NOT reflexive. [(1, 1), (1, 2), (2, 2)] is reflexive! [(1, 1), (2, 1), (2, 2)] is reflexive! [(1, 2), (2, 1), (2, 2)] is NOT reflexive. [(1, 1), (1, 2), (2, 1), (2, 2)] is reflexive!
A relation $R$ on a set $A$ is called
symmetric if
While the symmetric definition could be implemented exactly as described, there is actually no reason to test $(u, v)$ pairs that are not in a particular relation. If a pair is not in the relation, then the left side of the implication is false and the implication is
vacuously true (because false $\rightarrow$ anything is always true). Of course, if $(v, u)$ happens to be in the relation, $(u, v)$ will be sought (and not found) causing
symmetric(r) to return false.
def symmetric(r): '''(list) -> boolean''' # for every pair (u, v) in the relation for pair in r: # search for (v, u) <- order switched! try: r.index((pair[1], pair[0])) except ValueError: return False # if loop completes without error, every pair in the relation has a symmetric counterpart (r is symmetric!) return True
# testsymmetric([(1, 2), (2, 1), (2, 2)])
True
symmetric([(1, 2), (2, 2)])
False
For all possible relations on the set $A$, which are symmetric?
for relation in pAB: if (symmetric(relation)): print(str(relation) + " is symmetric!" +"\n") else: print(str(relation) + " is NOT symmetric." +"\n")
[] is symmetric! [(1, 1)] is symmetric! [(1, 2)] is NOT symmetric. [(2, 1)] is NOT symmetric. [(2, 2)] is symmetric! [(1, 1), (1, 2)] is NOT symmetric. [(1, 1), (2, 1)] is NOT symmetric. [(1, 2), (2, 1)] is symmetric! [(1, 1), (2, 2)] is symmetric! [(1, 2), (2, 2)] is NOT symmetric. [(2, 1), (2, 2)] is NOT symmetric. [(1, 1), (1, 2), (2, 1)] is symmetric! [(1, 1), (1, 2), (2, 2)] is NOT symmetric. [(1, 1), (2, 1), (2, 2)] is NOT symmetric. [(1, 2), (2, 1), (2, 2)] is symmetric! [(1, 1), (1, 2), (2, 1), (2, 2)] is symmetric!
A relation $R$ on a set $A$ is called
antisymmetric if
Equivalently (in its contrapositive form)$$ (u \ne v) \rightarrow \left( (u,v) \notin R \ \text{or} \ (v,u) \notin R \right)$$
For this property (and possibly subsequent properties), I will use a helper function that searches a relation for a particular pair returning
True if found and
False otherwise. This should be easier to read than copy and pasting try/except blocks everywhere.
# helper that searches a relation (r) for a pair (u ,v)def rsearch(r, pair): '''(list, tuple) -> boolean''' try: r.index(pair) except ValueError: # not found return False # found return True
As with symmetric, rather than implement the definition (which requires assigning variables to elements in $A$) I've elected to only test pairs that are in the relation. Also, I worked from the contrapositive definition because it felt easier to code.
def antisymmetric(r): '''(list) -> boolean''' # for every pair (u, v) in the relation for pair in r: u = pair[0] v = pair[1] # only test pairs with unique elements if (u != v): # relation can contain (u, v) or (v, u), but it CANNOT contain both if (rsearch(r, (u, v)) and rsearch(r, (v, u))): return False # if loop completes without returning, then the relation is antisymmetric return True
# testantisymmetric([(1, 1), (1, 2), (2, 2)]) # Though the relation contains (1, 2), it does not contain (2, 1)
True
antisymmetric([(1, 1), (1, 2), (2, 1)]) # It contains both (1, 2) AND (2, 1) so it must be...
False
For all possible relations on the set $A$, which are antisymmetric?
for relation in pAB: if (antisymmetric(relation)): print(str(relation) + " is antisymmetric!" +"\n") else: print(str(relation) + " is NOT antisymmetric." +"\n")
[] is antisymmetric! [(1, 1)] is antisymmetric! [(1, 2)] is antisymmetric! [(2, 1)] is antisymmetric! [(2, 2)] is antisymmetric! [(1, 1), (1, 2)] is antisymmetric! [(1, 1), (2, 1)] is antisymmetric! [(1, 2), (2, 1)] is NOT antisymmetric. [(1, 1), (2, 2)] is antisymmetric! [(1, 2), (2, 2)] is antisymmetric! [(2, 1), (2, 2)] is antisymmetric! [(1, 1), (1, 2), (2, 1)] is NOT antisymmetric. [(1, 1), (1, 2), (2, 2)] is antisymmetric! [(1, 1), (2, 1), (2, 2)] is antisymmetric! [(1, 2), (2, 1), (2, 2)] is NOT antisymmetric. [(1, 1), (1, 2), (2, 1), (2, 2)] is NOT antisymmetric.
A relation $R$ on a set $A$ is called an
transitive if
I followed the definition exactly for this one, the implementation (with 3 loops!) is costly--$O(n^3)$.
def transitive(r, A): '''(list, list) -> boolean''' for u in A: for v in A: for w in A: if (rsearch(r, (u, v)) and rsearch(r, (v, w))): # (u, w) must be in the relation if (not rsearch(r, (u, w))): return False # loop completed without returning, so the relation is transitive return True
# testtransitive([(1, 2), (2, 3), (1, 3)], A)
True
transitive([(1, 2), (2, 1), (2, 2)], A) # 1 -> 2 and 2 -> 1, but 1 -> 1 is missing!
False
Can I do better?
Spoiler: not really.
def transitive2(r): # for every pair (u, v) for pair in r: # take u and v u = pair[0] v = pair[1] # iterate through the relation looking for a pair with a first element = v for v_pair in r: if (v_pair[0] == v): # take second element from that pair (v, x) x = v_pair[1] # iterate through the relation again, looking for the pair (u, x) if (not rsearch(r, (u, x))): # if the pair is not found, the relation cannot be transitive return False # if function hasn't returned, the relation must be transitive return True
# the same teststransitive2([(1, 2), (2, 3), (1, 3)])
True
transitive2([(1, 2), (2, 1), (2, 2)])
False
It is possible that
transitive2() is more efficient than
transitive(), but it is hard to tell and the logic is definitely more convoluted.
For all possible relations on the set $A$, which are transitive?
for relation in pAB: if (transitive(relation, A)): print(str(relation) + " is transitive!" +"\n") else: print(str(relation) + " is NOT transitive." +"\n")
[] is transitive! [(1, 1)] is transitive! [(1, 2)] is transitive! [(2, 1)] is transitive! [(2, 2)] is transitive! [(1, 1), (1, 2)] is transitive! [(1, 1), (2, 1)] is transitive! [(1, 2), (2, 1)] is NOT transitive. [(1, 1), (2, 2)] is transitive! [(1, 2), (2, 2)] is transitive! [(2, 1), (2, 2)] is transitive! [(1, 1), (1, 2), (2, 1)] is NOT transitive. [(1, 1), (1, 2), (2, 2)] is transitive! [(1, 1), (2, 1), (2, 2)] is transitive! [(1, 2), (2, 1), (2, 2)] is NOT transitive. [(1, 1), (1, 2), (2, 1), (2, 2)] is transitive!
A relation $R$ on a set $A$ is called an
equivalence relation if R is reflexive, symmetric, and transitive.
def equivalence(r, A): '''(list) -> boolean''' if (reflexive(r, A) and symmetric(r) and transitive(r, A)): return True else: return False
equivalence([(1, 1), (2, 2)], A)
True
equivalence([(1, 1)], A)
False
From a performance standpoint, this one is
super expensive!
For all possible relations on the set $A$, which are equivalence relations?
for relation in pAB: if (equivalence(relation, A)): print(str(relation) + " is an equivalence relation!" +"\n") else: print(str(relation) + " is NOT an equivalence relation." +"\n")
[] is NOT an equivalence relation. [(1, 1)] is NOT an equivalence relation. [(1, 2)] is NOT an equivalence relation. [(2, 1)] is NOT an equivalence relation. [(2, 2)] is NOT an equivalence relation. [(1, 1), (1, 2)] is NOT an equivalence relation. [(1, 1), (2, 1)] is NOT an equivalence relation. [(1, 2), (2, 1)] is NOT an equivalence relation. [(1, 1), (2, 2)] is an equivalence relation! [(1, 2), (2, 2)] is NOT an equivalence relation. [(2, 1), (2, 2)] is NOT an equivalence relation. [(1, 1), (1, 2), (2, 1)] is NOT an equivalence relation. [(1, 1), (1, 2), (2, 2)] is NOT an equivalence relation. [(1, 1), (2, 1), (2, 2)] is NOT an equivalence relation. [(1, 2), (2, 1), (2, 2)] is NOT an equivalence relation. [(1, 1), (1, 2), (2, 1), (2, 2)] is an equivalence relation!
This turned out to be a good exercise. I think I have a better understanding of these properties and how to "prove" they are held by an arbitrary relation.
For me, the most interesting discovery was that every property except reflexive can be determined by looking exclusively at the relation itself. This was not obvious from reading the definitions, but it is something you learn when doing these by hand during an exam. For example, to determine if
{(1, 2), (2, 1), (2, 2)} is symmetric, the steps I find myself taking are:
In a way, the formal definition for symmetry is misleading.$$\forall \ u,v \in A \ \ \left((u, v) \in R \right) \rightarrow \left((v, u) \in R \right) \text{is true.}$$
I see "for all u, v in A" and think "for all possible pairs in the set $A \times A$...oh no, I have to generate a Cartesian Product". But really, this is never necessary. While the reflexive property does require looking at the actual set, each element is considered independently (so the Cartesian Product is not needed).
If it was up to me, I'd define symmetry like this$$ \begin{align} & \text{Let: u, v} \in A \\ & \forall \ (u,v) \in R \ \ \left((u, v) \in R \right) \rightarrow \left((v, u) \in R \right) \text{is true.} \end{align} $$
Which says: "for all ordered pairs in the relation, this implication must be true in order for the relation to be symmetric".
To finish this I should really identify the relations that are functions as well as their types (injective, subjective, bijective), but I think I'll save that for another notebook.
|
I'm studying the Principal Type (PT) Algorithm in Basic Simple Type Theory by J. Roger Hindley. One step to find the PT of a term is the Unification of types. The Robinson's Unification Algorithm uses a comparison procedure like as follow:
Comparison Procedure
Given a pair $(u, v)$ of types, write $u$ and $v$ as symbol-strings, say
$u \equiv s_1 ... s_m$ and $v \equiv t_1 ... t_n$ $(m, n >1)$
where each o f $s_1,... , s_m$, $t_1, ... , t_n$ is an occurrence of a parenthesis, arrow or variable.
If $u \equiv v$, state that $u \equiv v$ and stop.
If $u \not \equiv v$, choose the least $p < \min(m, n)$ such that $s_p \not \equiv t_p$; it is not hard to show that one of $s_p$, $t_p$, must be a variable and the other must be a left parenthesis or a different variable. Further, $s_p$ can be shown to be the leftmost symbol of a unique subtype $u^*$ of $u$. (If $s_p$ is a variable, $u^* \equiv s_p$.) Similarly $t_p$ is the leftmost symbol of a unique subtype $v^*$ of $v$. Choose one of $u^*, v^*$ that is a variable and call it $a$. (If both are variables, choose the one that is first in the sequence given in Definition 2A1.) Then call the remaining member of $(u^*, v^*)$ $\alpha$; the pair $(a, \alpha)$ is called the disagreement pair for $(u, v)$.
My issue is with the Note bellow:
3D5.1 Note To prove that $p$ exists in the case that $u \not \equiv v$ in the comparison procedure we must show that it is not possible to have
$t_1 ... t_n \equiv s_1....s_m t_{m + 1} ... t_n$
with $n > m$. This is left as a (rather dull) exercise for the reader.
I disagree with this note. I can image, say, $u \equiv a \rightarrow b$ and $v \equiv a \rightarrow b \rightarrow c$, where no $p$ is possible.
Maybe I'm missing something! How $p$ is always possible?
As asked, the definition of types is given as follow:
2A1 Definition (Types) An infinite sequence of type-variables is assumed to be given, distinct from the term-variables. Types are linguistic expressions defined thus:
i. each type-variable is a type (called an atom);
ii. if $\delta$ and $\rho$ are types then $(\delta \rightarrow \rho)$ is a type (called composite type).
2A1.1 Notation Type-variables are denoted by "a", "b", "c", "d", "e", "f", "g", with or without number-subscripts, and distinct letters denote distinct variables unless otherwise stated.
Arbitrary types are denoted by lower-case Greek letters except $\lambda$.
Parentheses will often (but not always) be omitted from types, and the reader should restore omitted ones in such a way that, for example,
$\rho \rightarrow \sigma \rightarrow \tau \equiv (\rho \rightarrow (\sigma \rightarrow \tau))$
This restoration rule is called association to the right.
|
No, you are confusing things and you are not doing what you are required to do: you are not required to integrate the derivative of $\arctan$, but $\arctan$ itself.
Start by finding the Taylor series of $\arctan x$ around $0$:
$$(\arctan x)' = \frac 1 {1 + x^2} = \sum _{n = 0} ^\infty (-x^2)^n \implies \arctan x = \int \sum _{n = 0} ^\infty (-x^2)^n \ \Bbb d x = \sum _{n = 0} ^\infty (-1)^n \frac {x^{2n+1}} {2n+1} + C .$$
Since $\arctan 0 = 0$, we get that $C=0$.
Next, plugging $2x$ instead of $x$ in the above series leads to
$$\arctan 2x = \sum _{n = 0} ^\infty (-1)^n 2^{2n+1} \frac {x^{2n+1}} {2n+1} .$$
Using this series we may write
$$\int \limits _0 ^{0.1} \arctan 2x \ \Bbb d x = \int \limits _0 ^{0.1} \sum _{n = 0} ^\infty (-1)^n 2^{2n+1} \frac {x^{2n+1}} {2n+1} \ \Bbb d x = \sum _{n = 0} ^\infty \frac {(-1)^n 2^{2n+1}} {2n+1} \int \limits _0 ^{0.1} x^{2n+1} \ \Bbb d x = \\\sum _{n = 0} ^\infty \frac {(-1)^n 2^{2n+1}} {2n+1} \frac {x^{2n+2}} {2n+2} \Bigg| _0 ^{0.1} = \sum _{n = 0} ^\infty \frac {(-1)^n 2^{2n+1}} {2n+1} \frac {(0.1)^{2n+2}} {2n+2} = \\0.02 \sum _{n = 0} ^\infty \frac {(-0.04)^n} {(2n+1)(2n+2)} .$$
Since the terms of this series decrease very quickly, it is enough to compute just the first 3 of it in order to obtain a satisfactory approximation:
$$0.02 \sum _{n = 0} ^2 \frac {(-0.04)^n} {(2n+1)(2n+2)} = 0.02 \left( \frac 1 2 - \frac {0.04} {12} + \frac {0.04^2} {30} \right) = 0.0099344 .$$
|
The 'hereditarily countable names' are as defined in Shelah's Proper and Improper Forcing, Chapter 3 Definition 4.1. Let $\mathbb{P}$ be a proper forcing notion and $\dot{Q}$ a $\mathbb{P}$-name such that $\Vdash_{\mathbb{P}}$ "$\dot{Q}$ is a proper forcing notion with set of elements $\check{\kappa}$ and maximal element $\check{0}$".
Let $A$ be the closure of {$\check{\alpha}$ : $\alpha < \kappa$} under the following 2 functions.
(1) given sequences ($p_n$ : $n \in \omega$) and ($\tau_n$ : $n \in \omega$), let $\tau$ be a name forced to be equal to: (i) $\tau_i$ where $i$ is the least $n$ satisfying $p_n \in \dot{G_\mathbb{P}}$, if such $i$ exists, and (ii) $\check{0}$, otherwise.
(2) given ($\tau_{m, n}$ : $m$, $n \in \omega$), let $\tau$ be: (i) the $\epsilon$-least element of $\dot{Q}$ such that for all $m \in \omega$, {$\tau_{m, n}$ : $n \in \omega$} is predense below $\tau$, if such an element exists, and (ii) $\check{0}$, otherwise.
My question is, is it true that if $p \in \mathbb{P}$ and $\sigma$ is a $\mathbb{P}$-name such that $p \Vdash \sigma \in \dot{Q}$, then there is a $\mathbb{P}$-name $\tau \in A$ such that $p \Vdash \tau \le \sigma$, or even better, $p \Vdash \tau = \sigma$? If so, why is that?
Much thanks in advance.
|
Answer
$ 1.52 \ h$
Work Step by Step
Using the equation for period, we find: $T = \sqrt{\frac{4\pi^2r}{a}}$ $T = \sqrt{\frac{4\pi^2(6.77\times10^6)}{8.73}}=5532\ s= 1.52 \ h$
You can help us out by revising, improving and updating this answer.Update this answer
After you claim an answer you’ll have
24 hours to send in a draft. An editorwill review the submission and either publish your submission or provide feedback.
|
Search
Now showing items 1-1 of 1
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
We have $Z= \cos(2\pi/5)+i \sin(2\pi/5)$. Prove that:
$(1+z+z^2)(1+z+z^3)(1+z+z^4)=z(1+z)$
I noticed that in each bracket contain $(1+z)$, so I let $t$ equal $(1+z)$ to simplify the equation into $(t+z^2)(t+z^3)(t+z^4)=zt$, but I don't know what to do next. I also tried to multiply the parentheses too but then the equation becomes crazier. I hope someone here could help me out. Thanks!
If you actually expand $(1+z+z^2)(1+z+z^3)(1+z+z^4) - z(1+z)$ out, you have:
$$z^9+z^8+2z^7+3z^6+4z^5+4z^4+4z^3+3z^2+2z+1$$ $$=5z^4+5z^3+5z^2+5z+5 \quad \text{(since $z^5=1$)}$$ $$=5 \left( \frac{z^5-1}{z-1} \right)$$ $$=0$$
Therefore $(1+z+z^2)(1+z+z^3)(1+z+z^4) = z(1+z)$.
|
This shows you the differences between two versions of the page.
— definable_principal_congruences [2010/08/20 20:21] (current)
jipsen created
Line 1: Line 1: + =====Definable principle congruences===== + + A (quasi)variety $\mathcal{K}$ of algebraic structures has \emph{first-order definable principal (relative) congruences} (DP(R)C) if + there is a first-order formula $\phi(u,v,x,y)$ such that for all + $\mathbf{A}\in\mathcal{K}$ we have $\langle x,y\rangle\in\mbox{Cg}_{\mathcal{K}}(u,v)\iff \mathbf{A}\models \phi(u,v,x,y)$. + + Here + $\theta=\mbox{Cg}_{\mathcal{K}}(u,v)$ denotes the smallest (relative) congruence that identifies the elements + $u,v$, where "relative" means that $\mathbf{A}//\theta\in\mathcal{K}$. + + === Properties that imply DP(R)C === + + [[Equationally def. pr. cong.|Equationally definable principal (relative) congruences]] + + === Properties implied by DP(R)C ===
|
Simple Helmholtz equation¶
Let’s start by considering the modified Helmholtz equation on a unit square, \(\Omega\), with boundary \(\Gamma\):
for some known function \(f\). The solution to this equation will be some function \(u\in V\), for some suitable function space \(V\), that satisfies these equations. Note that this is the Helmholtz equation that appears in meteorology, rather than the indefinite Helmholtz equation \(\nabla^2 u + u = f\) that arises in wave problems.
We transform the equation into weak form by multiplying by an arbitrary test function in \(V\), integrating over the domain and then integrating by parts. The variational problem so derived reads: find \(u \in V\) such that:
Note that the boundary condition has been enforced weakly by removing the surface term resulting from the integration by parts.
We can choose the function \(f\), so we take:
which conveniently yields the analytic solution:
However we wish to employ this as an example for the finite element method, so lets go ahead and produce a numerical solution.
First, we always need a mesh. Let’s have a \(10\times10\) element unit square:
from firedrake import *mesh = UnitSquareMesh(10, 10)
We need to decide on the function space in which we’d like to solve the problem. Let’s use piecewise linear functions continuous between elements:
V = FunctionSpace(mesh, "CG", 1)
We’ll also need the test and trial functions corresponding to this function space:
u = TrialFunction(V)v = TestFunction(V)
We declare a function over our function space and give it the value of our right hand side function:
f = Function(V)x, y = SpatialCoordinate(mesh)f.interpolate((1+8*pi*pi)*cos(x*pi*2)*cos(y*pi*2))
We can now define the bilinear and linear forms for the left and right hand sides of our equation respectively:
a = (dot(grad(v), grad(u)) + v * u) * dxL = f * v * dx
Finally we solve the equation. We redefine u to be a function holding the solution:
u = Function(V)
Since we know that the Helmholtz equation is symmetric, we instruct PETSc to employ the conjugate gradient method:
solve(a == L, u, solver_parameters={'ksp_type': 'cg'})
For more details on how to specify solver parameters, see the section of the manual on solving PDEs.
Next, we might want to look at the result, so we output our solution to a file:
File("helmholtz.pvd").write(u)
This file can be visualised using paraview.
We could use the built in plot function of firedrake by calling
plot to plot a surface graph. Before that,matplotlib.pyplot should be installed and imported:
try: import matplotlib.pyplot as pltexcept: warning("Matplotlib not imported")try: plot(u)except Exception as e: warning("Cannot plot figure. Error msg: '%s'" % e)
For a contour plot, it could be plotted by adding an additional key word argument:
try: plot(u, contour=True)except Exception as e: warning("Cannot plot figure. Error msg: '%s'" % e)
Don’t forget to show the image:
try: plt.show()except Exception as e: warning("Cannot show figure. Error msg: '%s'" % e)
Alternatively, since we have an analytic solution, we can check the \(L_2\) norm of the error in the solution:
f.interpolate(cos(x*pi*2)*cos(y*pi*2))print(sqrt(assemble(dot(u - f, u - f) * dx)))
A python script version of this demo can be found here.
|
I am not really familiar with the topic, thus I am looking for some references about the following problem.
Let $s>0$ be a positive real and let $p\in(1,+\infty)$. We define the Bessel Potential spaces on $\mathbb{R}^n$ via the Fourier transform $\mathcal{F}$ as follows. $$ H^{s,p}(\mathbb{R}^n)=\{f\in L^p(\mathbb{R^n}):\mathcal{F}^{-1}(1+|\xi|^2)^{\frac{s}{2}}\mathcal{F}f\in L^p(\mathbb{R^n})\}. $$
In the case of $s\in\mathbb{Z}^+$ it holds that $H^{s,p}(\mathbb{R}^n)=W^{s,p}(\mathbb{R}^n)$ where the latter is the standard Sobolev space.
Moreover, the Bessel potential spaces can be obtained via complex interpolation from the standard Sobolev spaces of integer order. Therefore the spaces $H^{s,p}(\mathbb{R}^n)$ are also referred as Fractional Sobolev Space.
Let now $\Omega\subset \mathbb{R}^n$ an open subset (bounded or unbounded), we can define $$ H^{s,p}(\Omega)=\{f\in L^p(\Omega): \exists g\in H^{s,p}(\mathbb{R}^n): g\big|_{\Omega}=f \} $$
with the norm $\|f\|_{H^{s,p}(\Omega)}=\inf\{\|g\|_{H^{s,p}(\mathbb{R}^n)}:g\big|_{\Omega}=f\}$.
Therefore, my question is. Let $k$ be in $\mathbb{Z}^+$. Are the spaces $H^{s,p}(\Omega)$ interpolation spaces between $W^{k,p}(\Omega)$ and $W^{k+1.p}(\Omega)$ if $k<s<k+1$ when $\Omega$ is an unbounded Lipschitz domain with noncompact boundary?
In particular, if $T$ is bounded linear operator $T: W^{k,p}(\Omega)\to W^{k,p}(\Omega)$ and $T:W^{k+1,p}(\Omega)\to W^{k+1,p}(\Omega)$, do we have the boundedness $T:H^{s,p}(\Omega)\to H^{s,p}(\Omega)$ for every $s\in[k,k+1]?$
Can you please suggest me some references?
|
I'm currently revising for a vibrations and waves module that I am taking as part of my physics degree.
One of the our final questions involved finding equations for the displacements of the two masses in this system as a superposition of their normal modes:
I found the equations of motion for each mass to be: $$ \ddot{x_a} + \gamma\dot{x_a} + x_a(\omega_0^2+\omega_s^2) = x_b\omega_s^2\\\ddot{x_b} + \gamma\dot{x_b} + x_b(\omega_0^2+\omega_s^2) = x_a\omega_s^2\\Here: \omega_0^2 = \frac{g}{l}~~\omega_s^2=\frac{k}{m}~~\gamma=\frac{b}{m}$$ Here I let $ q_1 = x_a+x_b~and~q_2 = x_a-x_b: $ $$\ddot{q_1} + \gamma\dot{q_1} + q_1\omega_0^2=0\\\ddot{q_2} + \gamma\dot{q_2} + q_2(\omega_0^2+2\omega_s^2)=0 $$ From here I can't see where to go. I did attempt substituting in a general solution such as $q_1 = C_1 \cos(\omega t)$ but I get a mixture of sines and cosines and I can't solve it for anything useful.
Any help would be great as this is the last topic that I need to learn! Thanks, Sean.
|
Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity
Department of Mathematics, South China University of Technology, Guangzhou 510640, China
$\begin{cases}\tag{*}n_t+u·\nabla n = \nabla ·(d(c)\nabla n)-\nabla ·(χ (c) n\nabla c)+a n-bn^2, &x∈ Ω, ~~t>0, \\ c_t+u·\nabla c = Δ c+ n-c,&x∈ Ω, ~~t>0, \\ u_t+ u·\nabla u = Δ u-\nabla P+n\nabla φ,&x∈ Ω, ~~t>0, \\\nabla · u = 0& x∈ Ω, \ t>0, \end{cases}$
$Ω\subset \mathbb{R}^2$
$a≥0$
$b>0$
$d(c)$
$χ(c)$
$(d(c), χ (c))∈ [C^2([0, ∞))]^2$
$d(c), χ(c)>0$
$c≥0$
$d'(c)<0$
$\lim\limits_{c\to∞}d(c) = 0$
$\lim\limits_{c\to∞} \frac{χ (c)}{d(c)}$
$\lim\limits_{c\to∞}\frac{d'(c)}{d(c)}$
$\lim\limits_{c\to∞}d(c) = 0$
$d(c)$
$(n, c, u)$
$(\frac{a}{b}, \frac{a}{b}, 0)$
$b>\frac{K_0}{16}$
$K_0 = \max\limits_{0≤c ≤∞}\frac{|χ(c)|^2}{d(c)}$ Mathematics Subject Classification:Primary: 35A01, 35B40, 35K55, 35Q92, 92C17. Citation:Hai-Yang Jin. Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7) : 3595-3616. doi: 10.3934/dcds.2018155
References:
[1]
N. Bellomo, A. Bellouquid, Y. S. Tao and M. Winkler,
Towards a mathematical theory of Keller-Segel models of pattern formation in biological tissues,
[2] [3] [4]
M. Chae, K. Kang and J. Lee,
Global existence and temporal decay in Keller-Segel models coupled to fluid equations,
[5]
A. Chertock, K. Fellner, A. Kurganov, A. Lorz and P. A. Markowich,
Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: A high-resolution numerical approach,
[6] [7]
T. Ciéslak and C. Stinner,
Finite-time blowup and global-in-time unbounded solutions to a parabolic-parabolic quasilinear Keller-Segel system in higher dimensions,
[8]
T. Ciéslak and C. Stinner,
New critical exponents in a fully parabolic quasilinear Keller-Segel and applications to volume filling models,
[9]
M. DiFrancesco, A. Lorz and P. A. Markowich,
Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: global existence and asymptotic behavior,
[10]
R. J. Duan, A. Lorz and P. A. Markowich,
Global solutions to the coupled chemotaxis-fluid equations,
[11] [12]
M. Eisenbach, A note on global existence for the chemotaxis-Stokes model with nonlinear diffusion. Chemotaxis, Imperial College Press, London, 2004.Google Scholar
[13] [14]
X. Fu, H. Tang, C. Liu, J. D. Huang, T. Hwa and P. Lenz,
Stripe formation in bacterial system with density-suppressed motility,
[15]
Y. Giga,
Solutions for semilinear parabolic equations in
J. Differential Equations, 61 (1986), 186-212.
doi: 10.1016/0022-0396(86)90096-3.
Google Scholar
[16]
Y. Giga and H. Sohr,
Abstract
J. Funct. Anal., 102 (1991), 72-94.
doi: 10.1016/0022-1236(91)90136-S.
Google Scholar
[17] [18] [19] [20] [21] [22] [23]
W. Jäger and S. Luckhaus,
On explosions of solutions to a system of partial differential equations modeling chemotaxis,
[24]
H. Y. Jin, Y. J. Kim and Z. A. Wang, Boundedness, stabilization and pattern formation driven by density-suppressed motility,
[25] [26] [27]
O. Ladyzhenskaya, V. Solonnikov and N. Uralceva,
[28] [29] [30] [31] [32] [33] [34] [35]
M. M. Porzio and V. Vespri,
Hölder estimates for local solutions of some doubly nonlinear degenerate parabolic equations,
[36] [37]
Y. Tao and M. Winkler,
Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity,
[38]
Y. Tao and M. Winkler,
Locally bounded global solutions in a three-dimensional chemotaxis-Stokes system with nonlinear diffusion,
[39]
Y. S. Tao and M. Winkler,
Large time behavior in a multidimensional chemotaxis-haptotaxis model with slow signal diffusion,
[40]
Y. S. Tao and M. Winkler, Blow-up prevention by quadratic degradation in a two-dimensional Keller-Segel-Navier-Stokes system
[41]
Y. S. Tao and M. Winkler,
Boundedness and decay enforced by quadratic degradation in a three-dimensional chemotaxis-fluid system,
[42]
Y. Tao and M. Winkler,
Effects of signal-dependent motilities in a Keller-Segel-type reaction-diffusion system,
[43]
I. Tuval, L. Cisneros, C. Dombrowski, C. W. Wolgemuth, J. O. Kessler and R. E. Goldstein, Bacterial swimming and oxygen transport near contact lines,
[44] [45] [46]
M. Winkler,
Global large-data solutions in a chemotaxis-(Navier-)Stokes system modeling cellular swimming in fluid drops,
[47] [48] [49]
M. Winkler,
Global weak solutions in a three-dimensional chemotaxis-Navier-Stokes system,
[50]
M. Winkler,
Global existence and slow grow-up in a quasilinear Keller-Segel system with exponentially decaying diffusivity,
[51]
C. Yoon and Y. J. Kim,
Global existence and aggregation in a Keller-Segel model with Fokker-Planck diffusion,
show all references
References:
[1]
N. Bellomo, A. Bellouquid, Y. S. Tao and M. Winkler,
Towards a mathematical theory of Keller-Segel models of pattern formation in biological tissues,
[2] [3] [4]
M. Chae, K. Kang and J. Lee,
Global existence and temporal decay in Keller-Segel models coupled to fluid equations,
[5]
A. Chertock, K. Fellner, A. Kurganov, A. Lorz and P. A. Markowich,
Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: A high-resolution numerical approach,
[6] [7]
T. Ciéslak and C. Stinner,
Finite-time blowup and global-in-time unbounded solutions to a parabolic-parabolic quasilinear Keller-Segel system in higher dimensions,
[8]
T. Ciéslak and C. Stinner,
New critical exponents in a fully parabolic quasilinear Keller-Segel and applications to volume filling models,
[9]
M. DiFrancesco, A. Lorz and P. A. Markowich,
Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: global existence and asymptotic behavior,
[10]
R. J. Duan, A. Lorz and P. A. Markowich,
Global solutions to the coupled chemotaxis-fluid equations,
[11] [12]
M. Eisenbach, A note on global existence for the chemotaxis-Stokes model with nonlinear diffusion. Chemotaxis, Imperial College Press, London, 2004.Google Scholar
[13] [14]
X. Fu, H. Tang, C. Liu, J. D. Huang, T. Hwa and P. Lenz,
Stripe formation in bacterial system with density-suppressed motility,
[15]
Y. Giga,
Solutions for semilinear parabolic equations in
J. Differential Equations, 61 (1986), 186-212.
doi: 10.1016/0022-0396(86)90096-3.
Google Scholar
[16]
Y. Giga and H. Sohr,
Abstract
J. Funct. Anal., 102 (1991), 72-94.
doi: 10.1016/0022-1236(91)90136-S.
Google Scholar
[17] [18] [19] [20] [21] [22] [23]
W. Jäger and S. Luckhaus,
On explosions of solutions to a system of partial differential equations modeling chemotaxis,
[24]
H. Y. Jin, Y. J. Kim and Z. A. Wang, Boundedness, stabilization and pattern formation driven by density-suppressed motility,
[25] [26] [27]
O. Ladyzhenskaya, V. Solonnikov and N. Uralceva,
[28] [29] [30] [31] [32] [33] [34] [35]
M. M. Porzio and V. Vespri,
Hölder estimates for local solutions of some doubly nonlinear degenerate parabolic equations,
[36] [37]
Y. Tao and M. Winkler,
Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity,
[38]
Y. Tao and M. Winkler,
Locally bounded global solutions in a three-dimensional chemotaxis-Stokes system with nonlinear diffusion,
[39]
Y. S. Tao and M. Winkler,
Large time behavior in a multidimensional chemotaxis-haptotaxis model with slow signal diffusion,
[40]
Y. S. Tao and M. Winkler, Blow-up prevention by quadratic degradation in a two-dimensional Keller-Segel-Navier-Stokes system
[41]
Y. S. Tao and M. Winkler,
Boundedness and decay enforced by quadratic degradation in a three-dimensional chemotaxis-fluid system,
[42]
Y. Tao and M. Winkler,
Effects of signal-dependent motilities in a Keller-Segel-type reaction-diffusion system,
[43]
I. Tuval, L. Cisneros, C. Dombrowski, C. W. Wolgemuth, J. O. Kessler and R. E. Goldstein, Bacterial swimming and oxygen transport near contact lines,
[44] [45] [46]
M. Winkler,
Global large-data solutions in a chemotaxis-(Navier-)Stokes system modeling cellular swimming in fluid drops,
[47] [48] [49]
M. Winkler,
Global weak solutions in a three-dimensional chemotaxis-Navier-Stokes system,
[50]
M. Winkler,
Global existence and slow grow-up in a quasilinear Keller-Segel system with exponentially decaying diffusivity,
[51]
C. Yoon and Y. J. Kim,
Global existence and aggregation in a Keller-Segel model with Fokker-Planck diffusion,
[1]
Masaaki Mizukami.
Boundedness and asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity.
[2]
Masaaki Mizukami.
Improvement of conditions for asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity.
[3]
Shijie Shi, Zhengrong Liu, Hai-Yang Jin.
Boundedness and large time behavior of an attraction-repulsion chemotaxis model with logistic source.
[4]
Mengyao Ding, Wei Wang.
Global boundedness in a quasilinear fully parabolic chemotaxis system with indirect signal production.
[5] [6]
Kin Ming Hui, Soojung Kim.
Asymptotic large time behavior of singular solutions of the fast diffusion equation.
[7] [8]
Joana Terra, Noemi Wolanski.
Large time behavior for a nonlocal diffusion equation with
absorption and bounded initial data.
[9] [10] [11]
Youshan Tao, Lihe Wang, Zhi-An Wang.
Large-time behavior of a parabolic-parabolic chemotaxis model
with logarithmic
sensitivity in one dimension.
[12]
Youshan Tao, Michael Winkler.
A chemotaxis-haptotaxis system with haptoattractant remodeling: Boundedness enforced by mild saturation of signal production.
[13]
Peng Jiang.
Global well-posedness and large time behavior of classical solutions to the diffusion approximation model in radiation hydrodynamics.
[14]
Chunhua Jin.
Boundedness and global solvability to a chemotaxis-haptotaxis model with slow and fast diffusion.
[15]
Pan Zheng, Chunlai Mu, Xiaojun Song.
On the boundedness and decay of solutions for a chemotaxis-haptotaxis system with nonlinear diffusion.
[16]
Geonho Lee, Sangdong Kim, Young-Sam Kwon.
Large time behavior for the full compressible
magnetohydrodynamic flows.
[17]
Pan Zheng.
Global boundedness and decay for a multi-dimensional chemotaxis-haptotaxis system with nonlinear diffusion.
[18]
Jiashan Zheng.
Boundedness of solutions to a quasilinear higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion.
[19]
Sachiko Ishida.
Global existence and boundedness for chemotaxis-Navier-Stokes systems
with position-dependent sensitivity in 2D bounded domains.
[20]
Dongfen Bian, Boling Guo.
Global existence and large time
behavior of solutions to the electric-magnetohydrodynamic equations.
2018 Impact Factor: 1.143
Tools Metrics Other articles
by authors
[Back to Top]
|
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these place. If you have some place online where you would like to receive feedback, you should probably also add link to that. — Martin Sleziak1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seem to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to particular search query? For example in Google I am able to use link such as: google.com/search?q=approach0+xyz Feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what would I expect. Which means I have to type every query. Possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that as nowadays several pages uses LaTex syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have took a brief look at your feedback, I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your in feedback in our chat. — Wei Zhong1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information on my posted answers. Will reply to you shortly in chat. — Wei Zhong29 secs ago
We are open-source project hosted on GitHub: http://github.com/approach0Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only a documentation for developers (approach0.xyz/docs) hopefully this project will accelerate its releasing process when people get involved. But I will list this as a important TODO before publishing approach0.xyz . At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too, IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 consider them not structurally identical, but you can use wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, enter $\frac \qvar{x} \qvar{y} $ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation, technologically the way you mentioned that Google is using, is a HTTP GET method, but for mathematics, GET request may be not appropriate since it has structure in a query, usually developer would alternatively use a HTTP POST request, with JSON encoded. This makes developing much more easier because JSON is a rich-structured and easy to seperate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use a computer command line 'curl' to get search results from particular query link (you can actually see that in browser, but it is in developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for user to refer to a query, I will write this point in project TODO and improve this later. (just need some extra efforts though)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha document ranked top, different symbols such as "a", "b" ranked after exact match.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help user to input greek symbols even if they do not remember how to spell.
@MartinSleziak Yes, you can get, greek letters are tokenized to the same thing as normal alphabets.
@MartinSleziak As for integrals upper bounds, I think it is a problem on a JavaScript plugin approch0 is using, I also observe this issue, only thing you can do is to use arrow key to move cursor to the right most and hit a '^' so it goes to upper bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange. This is a very small number, but will index more posts/pages when search engine efficiency and relevance is tuned.
@MartinSleziak As I mentioned, the indices is too small currently. You probably will get what you want when this project develops to the next stage, which is enlarge index and publish.
@MartinSleziak Thank you for all your suggestions, currently I just hope more developers get to know this project, indeed, this is my side project, development progress can be very slow due to my time constrain. But I believe its usefulness and will spend my spare time to develop until its publish.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question which software one uses should be a valid poll while the question which book one uses is not. — quid7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there will be community poll and somebody suggests question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment is simply playful remark pointing out that there is plenty of "polls" of this type on the main (although ther should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen for the first time on TeX.SE. The poll there was concentrated on TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will be happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
|
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the fappendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
|
...sort of... Two years ago, I was a staunch defender of the retirement of the Tevatron, the collider near Chicago, Illinois. The reason was that it just wasn't competitive anymore. The lower energy and amount of collisions relatively to the LHC translated to a much smaller probability of a legitimate discovery per unit time – which also means a much lower expected number of discoveries per dollar. The actual shape of the Wilson Hall differs from the fictitious Hilson/Higgson Hall above. I think that people gradually understood that people like me were right and it was pointless to keep on running the Tevatron and these days, everyone agrees. For the properties of the \(125\GeV\) Higgs boson, the Tevatron was said to be "roughly competitive" with the LHC. Below, we will see it's not quite the case: it was weaker, too. But when it comes to the phenomena at the energy frontier that the LHC is probing these days – and still confirming the Standard Model as of today – one may estimate that one day of the data from the Tevatron would provide us with less information than one second of the data from the LHC. The ratio of the strengths of signals approaches a million or so, mostly because the lower energy at the Tevatron just couldn't get there. Running the Tevatron along with the LHC is pointless. The Tevatron was shut down in September 2011, after 25 years. Now, almost 18 months later, the major detector collaborations at the Tevatron, CDF and D∅, finally teamed up and evaluated all of their data about the possible Higgs boson near the mass interval we know to be relevant:
They looked at the decays in the following channels:\[
\eq{
H&\to b\bar b,\\
H&\to W^+W^-,\\
H&\to Z^0Z^0,\\
H&\to \tau^+\tau^-,\\
H&\to \gamma\gamma.
}
\] and concluded that everything they see is consistent with the Standard Model Higgs at \(m=125\GeV\).
That's nice but it's not a real discovery. The Tevatron with \(20/{\rm fb}\) (aggregate) of the collisions at \(\sqrt{s}=2\TeV\) just couldn't do better.
I can't resist to point out that it's not just the brute force of the collider. Even the physicists seem to be slower. The LHC Collaborations are already flooding the market with lots of papers using almost all the 2012 collisions – which were accumulated up to November 2012 or so. It may take five months.
Eighteen months that the Tevatron folks needed just seems too much, especially when they already know what result they're likely to get. I don't suggest that they should adjust their data to the expectation based on the LHC's work; but I do think that the amount of verification and cross-checks may be a bit less extensive because the \(125\GeV\) Higgs boson claim is no longer an extraordinary hypothesis.
On the other hand, I do appreciate that the Tevatron Higgs work could have been slower also because of the lack of motivation i.e. because it was no longer a priority – the Higgs boson had already been discovered by someone else. Still, I don't like many other things about the paper – it doesn't show any graph that would make the significance 3.1 sigma for the right Higgs mass manifest and that would allow us to estimate how precisely their observation of the mass is (from the width of the bump).
At any rate, congratulations to the Fermilab and good riddance, too.
Twenty years ago, people would still write assorted papers dreaming about another powerful American collider, the SSC. The titles would usually mention the center-of-mass energy \(\sqrt{s}=40\TeV\). Two decades later, we had to get more modest by a factor of five but thank God at least for what we have!
The LHC is being upgraded which will last for two more years. It will be back at \(13\TeV\) in Spring 2015.
|
I am currently dealing with a problem concerning beamforming, where two "jointly stationary zero-mean white noise processes" form the input of an adaptive system. One of those processes resembles the actual signal $s[n]$, the other is an interference signal $v[n]$. Is it clear from the stated definition above that they are independent from each other, i.e. $E(XY)=E(X)E(Y)$? Why?
Stationarity (WSS in particular) does not imply independence, even if they are white noises.
That's to say, there exist (at least one) jointly stationary zero mean white random processes which are also dependent. A simple example is this.
Consider a zero mean, wide sense stationary (WSS), white, real random process $x[n]$ with auto-correlation sequence (ACS), $r_{xx}[k] = \sigma_x^2 \delta[k]$, which is passed through a linear time invariant (LTI) filter with real impulse response $h[n] = M \delta[n]$, to produce the output $y[n] = h[n] \star x[n]$ :
$$ y[n] = \sum_k h[k] x[n-k] = M x[n]$$
Now it can be shown that $y[n]$ will also be zero-mean, WSS white noise with its ACS given by $$ r_{yy}[k] = h[k] \star h[-k] \star r_{xx}[k] = M^2 \sigma_x^2 \delta[k] $$,
and further, they are jointly WSS and white, as their cross-correlation depends on the lag $k=m-n$; $$ \mathcal{E}\{x[n] y[m] \} = r_{yx}[k] = h[k] \star r_{xx}[k] = M \sigma_x^2 \delta[k] $$
However it can be shown that they are not independent, since the necessary condition for their independence does not hold: $$ \mathcal{E}\{x[n] y[m] \} \neq \mathcal{E}\{x[n] \} \mathcal{E}\{y[m] \} $$
Where the left hand side is the cross-correlation sequence $r_{yx}[k] = M \sigma_x^2 \delta[k] $, as defined above, which is not zero for all $k$, whereas the right hand side is identical to zero for all $n,m$ as both processes are zero-mean WSS.
Hence these two jointly WSS and white random processes $x[n]$ and $y[n]$ cannot be independent and are in fact dependent, in line with intuition as the filtering creates a dependency between them.
|
Is just it a coincidence that $$\frac{0.5}{ \cos^2(30)} = \frac{\tan(30)}{\cos(30)} $$ However $$\frac{0.5}{ \cos^2(13) } \neq \frac{\tan(13)}{\cos(13)} $$ is not equal ? And if not does anyone know a reason why they just so happen to be equatable?
It's simply because $\sin(30^\circ)=1/2$.
$$\frac{\tan(30)}{\cos(30)}=\frac{\frac{\sin(30)}{\cos(30)}}{\cos(30)}=\frac{\sin(30)}{\cos^2(30)}=\frac{0.5}{\cos^2(30)}$$since $\sin(30^\circ)=0.5$. The other equation would work if instead of $0.5$, you chose $\sin(13)\approx0.225$ as the numerator on the LHS.
Rewrite the right side: $$ \begin{align} \frac{0.5}{\cos^2(30^\circ)}&=\frac{\tan(30^\circ)}{\cos(30^\circ)}\\ &=\frac{\sin(30^\circ)}{\cos^2(30^\circ)}\\ &=\frac{0.5}{\cos^2(30^\circ)} \end{align} $$
If we use $x$ as a variable, we get the following:
$$\begin{align} & \frac {0.5}{\cos^2x} = \frac {\tan x}{\cos x} \\ \implies& \frac {1}{2} \cos x = \cos^2 x \tan x\\ \implies& \cos x \tan x\ = \frac {1}{2} \\ \implies& \cos x \frac {\sin x}{\cos x} = \frac {1}{2}\\ \implies& \sin x\ = \frac {1}{2}\ \end{align}$$
Thus the only solutions to the above equation are $x = 30^\circ \pm 360^\circ k$ or $x = 150^\circ \pm 360^\circ k$ where $k$ is any natural number.
|
Let $Q(x_1,\dots,x_n)=X'PX$ be a quadratic form with all non-negative and integral coefficients given by $$Q(x_1,\dots,x_n)=\sum_{i=1}^cf_i^+(x_1,\dots,x_n)g_i^+(x_1,\dots,x_n)$$ where $c$ is the length of the expression where both $f_i^+,g_i^+$ are both linear in each variable with all non-negative and integral coefficients. Call minimal value of $c$ among all such expressions as the length $L(Q)$.
Note if rank of $Q$ is $R(Q)$, then $Q(x_1,\dots,x_n)$ can be given by $$Q(x_1,\dots,x_n)=\sum_{i=1}^{R(Q)}f_i(x_1,\dots,x_n)g_i(x_1,\dots,x_n)$$ where both $f_i,g_i$ are both linear in each variable with any coefficients.
Note that $R(Q)\leq L(Q)$ holds true.
Is either $L(Q)=O(R(Q)^\alpha)$ or $O(2^{(\log_2R(Q))^\alpha})$ for an universal constant $\alpha>1$ true as well?
Atleast if one assumes $f_i^+g_i^+=Q_i=X'P_iX$ where each $P_i$ is a rank one matrix, can we show this? In this case, we will get $L_1(Q)$ as minimum length where $1$ stands for rank $1$ decomposition of $P$ in $Q=X'PX$ (that is $P=\sum_{i=1}^{L_1(Q)}P_i$) and $R(Q)\leq L(Q)\leq L_1(Q)$. We would then want to show $$L_1(Q)=O(R(Q)^\alpha)\mbox{ or }O(2^{(\log_2R(Q))^\alpha})$$ for an universal constant $\alpha>1$ true as well?
|
A quadratic expression is completely factorizable if and only if its discriminant is positive. Given a quadratic expression of the form #ax^2+bx+c#, the discriminant #\Delta# is defined as #b^2-4ac#. In your case, we have #a=20#, #b=13# and #c=2#. For this values, we have #\Delta=9#, which means that we can factor the expression finding two solutions #x_1# and #x_2#, and thus writing #20x^2+13x+2=(x-x_1)(x-x_2)#.
To find the solutions, we have the formula
#x_{1,2}=\frac{-b\pm \sqrt(\Delta)}{2a}#. Since #\Delta=9#, its square root equals 3. Plugging the values, we have the two solutions #x_1={-13+3}/{40}=-1/4# and #x_2={-13-3}/{40}=-2/5#.
The factorization is thus #(x+1/4)(x+2/5)#
|
Ex.11.1 Q2 Constructions Solution - NCERT Maths Class 10 Question
Construct a triangle of sides \(4 \;\rm{cm}\), \(5 \;\rm{cm}\) and \(6\; \rm{cm}\) and then a triangle similar to it whose sides are \(\begin{align}\frac{2}{3}\end{align}\) of the corresponding sides of the first triangle.
Text Solution
What is known?
Sides of the triangle and the ratio of corresponding sides of \(2\) triangles.
What is unknown?
Construction.
Reasoning: Draw the line segment of largest length \(6\, \rm{cm}\). Measure \(5 \;\rm{cm}\) and \(4 \;\rm{cm}\) separately and cut arcs from \(2\) ends of the line segment such that they cross each other at one point. Connect this point from both the ends. Then draw another line which makes an acute angle with the given line (\(6\; \rm{cm}\)). Divide the line into \((m + n)\) parts where \(m\)and \(n\)are the ratio given. Two triangles are said to be similar if their corresponding angles are equal. They are said to satisfy Angle-Angle-Angle (AAA) Axiom. Basic proportionality theorem states that, “If a straight line is drawn parallel to a side of a triangle, then it divides the other two sides proportionally".
Steps:
Steps of constructions: (i) Draw \(BC = 6 \text{cm.}\) With \(B\) and \(C\) as centres and radii \(5 \;\rm{cm}\) and \(4\,\rm{cm}\) respectively draw arcs to intersect at \(A.\) \({\rm{\Delta ABC}}\) is obtained. (ii) Draw ray \(BX\) making an acute angle with \(BC.\) (iii) Mark \(3\) ( {\(3> 2\) in the ratio \(\begin{align}\frac{2}{3}\end{align}\)points \({{{B}}_{{1}}}{{,}}\,\,{{{B}}_{{2}}}{{,}}\,\,{{{B}}_{{3}}}\) on \(BX\) such that \(\,{{B}}{{{B}}_{{1}}}{{ = }}{{{B}}_{{1}}}{{{B}}_{{2}}}{{ = }}{{{B}}_{{2}}}{{{B}}_{{3}}}\) . (iv) Join \(\,{{{B}}_{\rm{3}}}{{C}}\) and draw the line through \({B_2}\) (2nd point where \(2 < 3\) in the ratio \(\begin{align}\frac{2}{3}\end{align}\)) parallel to \(B_3C\) meeting \(BC\) at \({{C'.}}\) (v) Draw a line thorough \(\,{{C'}}\) parallel to \(CA\) to meet \(BA\) at \(A’.\) Now \(\,{{\Delta A'BC'}}\) is the required triangle similar to \(\,{{\Delta ABC}}\) where
\[\begin{align}\frac{{{{BC'}}}}{{{{BC}}}}&{{ = }}\frac{{{{BA'}}}}{{{\rm{BA}}}}{\rm{ = }}\frac{{{{C'A'}}}}{{{\rm{CA}}}}\\&{{ = }}\frac{{{2}}}{{{3}}}\end{align}\]
Proof:
In \(\Delta B B_{3} C, B_{2} C\) is parallel to \(\,{{{B}}_{{3}}}{{C}} .\)
Hence by Basic proportionality theorem,
\[\begin{align}\frac{{{{{B}}_{{2}}}{{{B}}_{{2}}}}}{{{{B}}{{{B}}_{{2}}}}}&{{ = }}\frac{{{{C'C}}}}{{{{BC'}}}}\\&{{ = }}\frac{{{1}}}{{{2}}}\end{align}\]
Adding \(1\),
\[\begin{align}\frac{{{{C'C}}}}{{{{BC'}}}}{{ + 1}}& = \frac{{{1}}}{{{2}}}{{ + 1}}\\\frac{{{{C'C + BC'}}}}{{{{BC'}}}}{{ }}&=\frac{{{3}}}{{{2}}}\\\,\,\,\,\,\,\,\,\,\,\,\,\,\,\frac{{{{BC}}}}{{{{BC'}}}}{{}}&= \frac{{{3}}}{{{2}}}\\\,\,\,\,\,\text{(or)}\,\,\,\,\,\,\frac{{{{BC'}}}}{{{{BC}}}}{{ }}&=\frac{{{2}}}{{{3}}} & & {{ \ldots (1)}}\end{align}\]
Consider \({\rm{\Delta BA'C'}}\) and \({\rm{\Delta BAC}}\)
\(\angle \text{A }\!\!'\!\!\text{ BC }\!\!'\!\!\text{ }=\angle \text{ABC}\) (Common)
\(\angle \text{BA }\!\!'\!\!\text{ C }\!\!'\!\!\text{ }=\angle \text{BAC}\) (Corresponding angles \(∵\) \(\,\,\text{C }\!\!'\!\!\text{ }A'||\,\text{CA}\) )
\(\angle \text{BA }\!\!'\!\!\text{ C }\!\!'\!\!\text{ }=\angle \text{BCA}\) (Corresponding angles \(∵\) \( \,\,\text{C }\!\!'\!\!\text{ }A'||CA \) )
Hence by AAA axiom, \({{\Delta BA'C' \sim \Delta BAC}}\,\,\)
Corresponding sides are proportional
\[\begin{align}\frac{{{{BA'}}}}{{{{BA}}}}{{ = }}\frac{{{{C'A'}}}}{{{{CA}}}}{{ = }}\frac{{{{BC'}}}}{{{{BC}}}}{{ = }}\frac{{{2}}}{{{3}}}\,\,\left( {{\text{from}}\,{{(1)}}} \right)\end{align}\]
|
Ex.5.3 Q6 Arithmetic Progressions Solution - NCERT Maths Class 10 Question
The first and the last term of an AP are \(17\) and \(350\) respectively. If the common difference is \(9,\) how many terms are there and what is their sum?
Text Solution What is Known?
\(a,{\rm{ }}l\), and \(d\)
What is Unknown?
\(n\) and \({S_n}\)
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\) or \({S_n} = \frac{n}{2}\left[ {a + l} \right]\), and \(n \rm{th}\) term of an AP is\(\,{a_n} = a + \left( {n - 1} \right)d\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms and \(l\) is the last term.
Steps:
Given,
First term, \(a = 17\) Last term, \(l = 350\) Common difference, \(d = 9\)
We know that \(n\rm{th}\) term of AP, \(\,l = {a_n} = a + \left( {n - 1} \right)d\)
\[\begin{align}350& = 17 + \left( {n - 1} \right)9\\333 &= \left( {n - 1} \right)9\\\left( {n - 1} \right) &= 37\\n &= 38\end{align}\]
Sum of \(n\) terms of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left( {a + l} \right)\\{S_{38}} &= \frac{{38}}{2}\left( {17 + 350} \right)\\
\, &= 19 \times 367\\ &= 6973\end{align}\]
Thus, this A.P. contains \(38\) terms and the sum of the terms of this A.P. is \(6973.\)
|
Answer
$\color{blue}{s=\dfrac{11\pi}{6}}$
Work Step by Step
Note that $\dfrac{\pi}{6}$ is a special angle and $\sin{(\dfrac{\pi}{6})}=\frac{1}{2}$. RECALL: An angle and its reference angle have either the same trigonometric values or they differ only in signs. Since $s$ must be in $\left[\dfrac{3\pi}{2}, 2\pi\right]$, then the angle must terminate in Quadrant IV. Note that the angle of $\dfrac{11\pi}{6}$ is in Quadrant IV and its reference angle is $\dfrac{\pi}{6}$. Sine is negative in Quadrant IV. Thus, $\sin{(\frac{11\pi}{6})} = -\frac{1}{2}$ Therefore, if $\sin{s} = -\frac{1}{2}$ and $s$ is in $\left[\dfrac{3\pi}{2}, 2\pi\right]$, then $\color{blue}{s=\dfrac{11\pi}{6}}$.
|
Consider the typical WZ model with global symmetry $U(1)\times U(1)_R$. Usually we write the superpotential $W$ as
$$W = \frac{M}{2} \Phi^2 + \frac{\lambda}{3} \Phi^3$$
but you have simply done some rescaling, which is ok. I am writting it like this to agree with the usual convention. Then, we require that the, so to be, renormalized superpotential is still a holomorphic function and transforms correctly under the global transformations I quote above. For these requirements to hold the superpotential takes the form
$$ W = \frac{M}{2} \Phi^2 \,\, f \left( \frac{\lambda \Phi^3}{3} \right) $$
Now what happens in the weak coupling limit of the theory? The quantum superpotential should be approaching the classical one, so Taylor expand the function $f$ as
$$ W = \frac{M}{2} \Phi^2 \,\, \left( 1 + \frac{2}{3}\frac{\lambda \Phi}{m} + \text{higher corrections} \right) = \frac{M}{2} \Phi^2 \,\, f \left( \frac{\lambda \Phi^3}{3} \right)+ \text{higher corrections} $$ Finally, the superpotential should be well behaved as $M \to 0$ constraining W to not having negative powers of $M$. It is obvious then that no higher terms are generated at quantum level and thus no counterterms to absorb any divergences of $M$ and $\lambda$. This is also the reason for the phrase "holomorphicity puts constraints in renormalization". Most of these stuff can be found in Terning's book "Modern Supersymmetry: Dynamics and Duality"
|
This is an interesting question, and although I don't know a rigorous answer, we can discuss some typical cases.
Usually, the inverse exists, but the cases where this inverse does not exist are not necessarily pathological (sound models can have the problem that the inverse does not exist).
For standard field theories (say, $\phi^4$, O(N) models, classical spins models, ...), generically the inverse exists, and this can be shown order by order in a loop expansion (I don't know if this has been proven at all order, but in standard textbooks, this is shown to order 1 or 2). However, the inverse will not exist necessarily for all $\phi_{cl}$, especially in broken symmetry phases. Indeed, an ordered phase is characterized by $$\bar\phi_{cl}=\lim_{J\to 0 } \phi_{cl}[J]=\lim_{J\to 0 }W'[J]\neq 0 ,$$where $\bar \phi_{cl}$ is the equilibrium value of the order parameter. Therefore, you cannot inverse the relationship $\phi_{cl}[J]$ for $\phi_{cl}\in [0,\bar \phi_{cl}]$ ($\phi_{cl}[J]$ generically increases when $J$ increases).
Furthermore, there are cases where the inverse is simply not defined, because $\phi_{cl}[J]={\rm const}$ for all $J$. This is usually the case when the field has no independent dynamics without a source. For instance, if you take a single quantum spin at zero temperature, the only dynamics is given by external magnetic field (here in the $z$ direction) $$\hat H= -h.\sigma_z.$$With $h>0$, the ground state is always $|+\rangle$, and the "classical field" $\phi_{cl}(h)=\langle \sigma_z\rangle=1/2$ for all $h$, and the Gibbs free energy (the Legendre transform of the free energy with respect to $h$, which is essentially the effective action) does not exist.This post imported from StackExchange Physics at 2014-04-13 14:08 (UCT), posted by SE-user Adam
|
Have found that in congruence arithmetic, division will not work unless the above condition is met.
A usual proof is given below, & want to algebraically find out the significance of the given condition:
Let $\exists m \in \mathbb{Z+}, \exists a, b,c \in \mathbb{Z}$. If $ac\equiv bc \pmod {m}$ and $(c, m)=1$, then $a\equiv b \pmod {m}$.
Proof. Because $ac \equiv bc \pmod {m}, m \mid (ac - bc) = c(a - b)$. As $(c, m)= 1$, so $m \mid (a- b)$. So, $a \equiv b \pmod m$.
My incomplete reasoning is:
If $(c,m)\ne1$, then there would be prime factors in common to $c,m$, let $p_1^1$. If so, then $c=p_1k, m = p_1l, (k,l)=1$. Then $p_1k \mid ap_1k \equiv bp_1k \pmod {p_1l} \implies a-b\pmod{{l\over k}}$.
I want to correlate this reasoning to the invalid division example:
$2\mid 14 \equiv 8 \pmod 6 \implies 7 \equiv 4 \pmod 6$.
On comparing, I see that my reasoning falters, where it divides the modulus too. There is no division possible for modulus, as it just forms the equivalence class and declares that the given two quantities ($a,b$) lie in the same equivalence class. Suppose, as a hypothetical case if the modulus is also divided, then no issues and leads to correct division. In the example above also, that hypothetical division would lead to $7 \equiv 4 \pmod 3.$
Edit Have found that my so-called 'hypothetical division' is not so hypothetical at all. In fact, it is completely valid, if the point of view is changed to see the modulus as another number (i.e., apart from $a$) in the linear combination of the form $ac-my = bc \implies ap_1kx - p_1ly = bc.$ This view is used in the solution of congruence equations when $(c,m)\ne 1$, as stated by Beachy, here and also here (in Theorem 1.3.5), among other places.
|
I'm taking a stochastic processes class, and we looked at the example of Gambler's ruin with infinite target, i.e. the gambler stops when he reaches 0 fortune or N, in the limit of N going to infinity.
For a finite target, in the case of $p=q$, the probability to ruin and expected time to ruin for starting fortune $a$ are given by:
$$P_a = 1 - \frac{a}{N}$$
$$T_a = a(N-a)$$
As $N\rightarrow \infty$, $P_a \rightarrow 1$ and $T_a \rightarrow \infty$. Now there seems to be something very wrong with the probability measure on the set of possible trajectories. In calculating the probability to ruin, the set of trajectories going to infinity is of measure zero, but in calculating the stopping time, it seems that the set of trajectories going to infinity have been assigned some positive measure. So what is going wrong when taking the limit $N\rightarrow \infty$?
When N is finite, the number of possible trajectories is countably infinite, is this still the case in the limit $N\rightarrow \infty$?
|
This question already has an answer here:
Im taking my first course in QFT and has stumbled upon something that I do not understand.
Given the Yang Mills lagrangian
$$\mathcal{L} = -\frac{1}{4}F^{a}_{\mu \nu}F^{a\mu \nu}$$
with $F^{\mu \nu} =F^{a\mu \nu} \frac{\sigma^{a}}{2}$ (the pauli matrices). How can I determine the number of unphysical and physical degrees of freedom of this theory?
I know this Lagrangian describes massless spin(1) gauge bosons $A^{\mu}$, which means (I think) that the gauge boson has 2 physical degrees of freedom. However, I do not understand how to count all the remaining degrees of freedom. I suspect it is related to that there are 3 generators for SU(2), although I do not know how to make the connection.
I hope that my question is clear (=
|
Considering the FRW metric with perturbations; how can I calculate the Einstein tensor (without a very very disgusting expression which comes from the variation of the difference between the Ricci tensor minus the metric tensor times the curvature)?
The problem concerns standard gravitational perturbation theory. Instead of expanding around say Minkowski space, one expands around the FLRW background.
Using the gauge symmetries and scalar-vector-tensor decomposition, one can reduce the form of the perturbed metric to an incredibly simple form, namely,
$$g_{\mu\nu} = g^{\mathrm{FLRW}}_{\mu\nu} + h_{\mu\nu}= a(t)^2 \begin{pmatrix} 1+2\Psi & 0 & 0 & 0\\ 0& 2\Phi -1 & 0 & 0\\ 0& 0 & 2\Phi -1 & 0\\ 0& 0& 0 & 2\Phi -1 \end{pmatrix}$$
One approach to finding the Einstein equations is noting to first order $\delta R_{\mu\nu} = -\frac12 \Delta_L h_{\mu\nu}$ where $\Delta_L$ is the Lichnerowicz operator. Alternative, one can in this case simply plug in the perturbed metric.
Defining the Hubble parameter, $\mathcal H = \dot a a^{-1}$, we have $G_{00} = 3\mathcal H^2 + 2\nabla^2 \Phi - 6\mathcal H \Phi'$. The spatial part mixed with the time component is,
$$G_{0i} = 2\partial_i(\Phi' + \mathcal H \Psi)$$
and finally the spatial part - the messiest - is,
$$G_{ij} = -(2\mathcal H' +\mathcal H^2)\delta_{ij} + \partial_i \partial_j (\Phi - \Psi)$$ $$ + \left[\nabla^2 (\Psi - \Phi) + 2\Phi'' + 2(2\mathcal H' + \mathcal H^2)(\Phi + \Psi) + 2\mathcal H \Psi' + 4\mathcal H \Phi' \right]\delta_{ij}.$$
Compared to the perturbation equations in full generality, even with gauge fixing, this is a relatively manageable expression.
It can be further simplified depending on the scenario. Ignoring anisotropic stress, $\Phi = \Psi$ which greatly reduces the equations and in some instances gives us only a Laplace equation to solve.
The situation doesn't get any better than this. In fact, having done perturbation theory of solutions to string theory, I can say the situation can be a
lot worse. General relativity is horrendously non-linear, there's no avoiding that.
|
Let's construct of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the all possible examples simplest possible example. These heuristics consist of giving the simplest possible values to all expressions that drop out of a preliminary analysis. This turns out to be the textbook example. Preliminary analysis
This requires only a little bit of analysis based on definitions. The solution is of only secondary interest:
the main objective is to develop insights to help us understand the results intuitively.
First observe that Jensen's Inequality (or the Cauchy-Schwarz Inequality) implies that for a positive random variable $X$, $E[X]E[1/X] \ge 1$, with equality holding if and only if $X$ is "degenerate": that is, $X$ is almost surely constant. When $X$ is a negative random variable, $-X$ is positive and the preceding result holds with the inequality sign reversed. Consequently,
any example where $E[1/X]=1/E[X]$ must have positive probability of being negative and positive probability of being positive.
The insight here is that any $X$ with $E[X]E[1/X]=1$ must somehow be "balancing" the inequality from its positive part against the inequality in the other direction from its negative part. This will become clearer as we go along.
Consider any nonzero random variable $X$. An initial step in formulating a definition of expectation (at least when this is done in full generality using measure theory) is to decompose $X$ into its positive and negative parts, both of which are positive random variables:
$$\eqalign{Y &= \operatorname{Positive part}(X) = \max(0, X);\\Z &= \operatorname{Negative part}(X) = -\min(0, X).}$$
Let's think of $X$ as a
mixture of $Y$ with weight $p$ and $-Z$ with weight $1-p$ where $$p=\Pr(X\gt 0),\ 1-p = \Pr(X \lt 0).$$ Obviously $$0 \lt p \lt 1.$$ This will enable us to write expectations of $X$ and $1/X$ in terms of the expectations of the positive variables $Y$ and $Z$.
To simplify the forthcoming algebra a little, note that uniformly rescaling $X$ by a number $\sigma$ does not change $E[X]E[1/X]$--but it does multiply $E[Y]$ and $E[Z]$ each by $\sigma$. For positive $\sigma$, this simply amounts to selecting the units of measurement of $X$. A negative $\sigma$ switches the roles of $Y$ and $Z$. Choosing the sign of $\sigma$ appropriately we may therefore suppose $$E[Z]=1\text{ and }E[Y] \ge E[Z].\tag{1}$$
Notation
That's it for preliminary simplifications. To create a nice notation, let us therefore write
$$\mu = E[Y];\ \nu = E[1/Y];\ \lambda=E[1/Z]$$
for the three expectations we cannot control. All three quantities are positive. Jensen's Inequality asserts
$$\mu\nu \ge 1\text{ and }\lambda \ge 1.\tag{2}$$
The Law of Total Probability expresses the expectations of $X$ and $1/X$ in terms of the quantities we have named:
$$E[X] = E[X\mid X\gt 0]\Pr(X \gt 0) + E[X\mid X \lt 0]\Pr(X \lt 0) = \mu p - (1-p) = (\mu + 1)p - 1$$
and, since $1/X$ has the same sign as $X$,
$$E\left[\frac{1}{X}\right] = E\left[\frac{1}{X}\mid X\gt 0\right]\Pr(X \gt 0) + E\left[\frac{1}{X}\mid X \lt 0\right]\Pr(X \lt 0) = \nu p - \lambda(1-p) = (\nu + \lambda)p - \lambda.$$
Equating the product of these two expressions with $1$ provides an essential relationship among the variables:
$$1 = E[X]E\left[\frac{1}{X}\right] = ((\mu +1)p - 1)((\nu + \lambda)p - \lambda).\tag{*}$$
Reformulation of the Problem
Suppose the parts of $X$--$Y$ and $Z$--are
any positive random variables (degenerate or not). That determines $\mu, \nu,$ and $\lambda$. When can we find $p$, with $0 \lt p \lt 1$, for which $(*)$ holds?
This clearly articulates the "balancing" insight previously stated only vaguely: we are going to hold $Y$ and $Z$ fixed and hope to find a value of $p$ that appropriately balances their relative contributions to $X$. Although it's not immediately evident that such a $p$ need exist, what is clear is that it depends only on the moments $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$. The problem thereby is reduced to relatively simple algebra--all the analysis of random variables has been completed.
Solution
This algebraic problem isn't too hard to solve, because $(*)$ is at worst a quadratic equation for $p$ and the governing inequalities $(1)$ and $(2)$ are relatively simple. Indeed, $(*)$ tells us the product of its roots $p_1$ and $p_2$ is
$$p_1p_2 = (\lambda - 1)\frac{1}{(\mu+1)(\nu+\lambda)} \ge 0$$
and the sum is
$$p_1 + p_2 = (2\lambda + \lambda \mu + \nu)\frac{1}{(\mu+1)(\nu+\lambda)} \gt 0.$$
Therefore both roots must be positive. Furthermore, their average is less than $1$, because
$$ 1 - \frac{(p_1+p_2)}{2} = \frac{\lambda \mu + \nu + 2 \mu \nu}{2(\mu+1)(\nu+\lambda)} \gt 0.$$
(By doing a bit of algebra, it's not hard to show the larger of the two roots does not exceed $1$, either.)
A Theorem
Here is what we have found:
Given
any two positive random variables $Y$ and $Z$ (at least one of which is nondegenerate) for which $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$ exist and are finite. Then there exist either one or two values $p$, with $0 \lt p \lt 1$, that determine a mixture variable $X$ with weight $p$ for $Y$ and weight $1-p$ for $-Z$ and for which $E[X]E[1/X]=1$. Every such instance of a random variable $X$ with $E[X]E[1/X]=1$ is of this form.
That gives us a rich set of examples indeed!
Constructing the Simplest Possible Example
Having characterized
all examples, let's proceed to construct one that is as simple as possible.
For the negative part $Z$, let's choose a degenerate variable--the very simplest kind of random variable. It will be scaled to make its value $1$, whence $\lambda=1$. The solution of $(*)$ includes $p_1=0$, reducing it to an easily solved linear equation: the only positive root is
$$p = \frac{1}{1+\mu} + \frac{1}{1+\nu}.\tag{3}$$
For the positive part $Y$, we obtain nothing useful if $Y$ is degenerate, so let's give it some probability at just two distinct positive values $a \lt b$, say $\Pr(X=b)=q$. In this case the definition of expectation gives
$$\mu = E[Y] = (1-q)a + qb;\ \nu = E[1/Y] = (1-q)/a + q/b.$$
To make this even simpler, let's make $Y$ and $1/Y$ identical: this forces $q=1-q=1/2$ and $a=1/b$. Now
$$\mu = \nu = \frac{b + 1/b}{2}.$$
The solution $(3)$ simplifies to
$$p = \frac{2}{1+\mu} = \frac{4}{2 + b + 1/b}.$$
How can we make this involve simple numbers? Since $a\lt b$ and $ab=1$, necessarily $b\gt 1$.
Let's choose the simplest number greater than $1$ for $b$; namely, $b=2$. The foregoing formula yields $p = 4/(2+2+1/2) = 8/9$ and our candidate for the simplest possible example therefore is
$$\eqalign{\Pr(X=2) = \Pr(X=b) = \Pr(Y=b)p = qp = \frac{1}{2}\frac{8}{9} = \frac{4}{9};\\\Pr(X=1/2) = \Pr(X=a) = \Pr(Y=a)p = qp = \cdots = \frac{4}{9};\\\Pr(X=-1) = \Pr(Z=1)(1-p) = 1-p = \frac{1}{9}.}$$
This is the very example offered in the textbook.
|
17 0 1. Homework Statement
Find the Fourier transform of [tex] H(x-a)e^{-bx}, [/tex] where H(x) is the Heaviside function.
2. Homework Equations
[tex] \mathcal{F}[f(t)]=\frac{1}{2 \pi} \int_{- \infty}^{\infty} f(t) \cdot e^{-i \omega t} dt [/tex]
Convolution theory equations that might be relevant:
[tex] \mathcal{F}[f(t) \cdot g(t)] = \mathcal{F}[f(t)] * \mathcal{F}[g(t)] [/tex]
[tex] \mathcal{F}[e^{\alpha t} \cdot f(t)] = \mathcal{F}[f(t+\alpha t)] [/tex]
Derivative of the Heaviside function:
[tex] H'(t) = \delta(t) [/tex]
where [itex] \delta(t) [/itex] is the Dirac Delta function.
3. The Attempt at a Solution
Using the first relevant equation, and assuming the Heaviside function simply changed the boundary conditions from a to infinity:
[tex] \mathcal{F}[H(x-a)e^{-bx}]=\frac{1}{2 \pi} \int_{a}^{\infty} e^{-bx} \cdot e^{-i \omega t} dt [/tex]
[tex] = - \frac{1}{2 \pi} \left[\frac{i e^{-bx-i \omega t}}{\omega} \right]_{a}^{\infty} = - \frac{i e^{-bx-i \omega a}}{\sqrt{2\pi} \omega} [/tex]
This was not right so I tried to take the integral of the Heaviside function via integration by parts:
[tex] \mathcal{F}[f(t)]=\frac{1}{2 \pi} \int_{- \infty}^{\infty}H(x-a)e^{-bx-i \omega t} dt [/tex]
[tex] =\frac{1}{2 \pi} \left[\frac{-H(x-a)ie^{-bx-i \omega t}}{\omega} \right]_{-\infty}^{\infty} - \int_{\infty}^{\infty} \delta(x-a)e^{-bx-i \omega t} [/tex]
At this point I could go on with the integration however I'm unsure if it'll lead to the right answer or if what I'm doing makes sense. I have considered using the last equation in part of the convolution theory section, to get an integral of perhaps [itex] H(x-a-i \omega t) [/itex] but that doesn't seem right either.. I am unsure how to take the Fourier transformation of the Heaviside function, any help will be gladly appreciated.
|
The extremal solution of a boundary reaction problem
1.
Departamento de Ingeniería Matemática, CMM (UMI CNRS 2807), Universidad de Chile, Casilla 170/3, Correo 3, Santiago, Chile
2.
LAMFA CNRS UMR 6140, Université de Picardie Jules Verne, 33 rue Saint-Leu 80039Amiens Cedex 1, France
3.
Universidade Estadual de Campinas, IMECC, Departamento de Matemática, Caixa Postal 6065, CEP 13083-970, Campinas, SP, Brazil
$\Delta u = 0$ in $ \Omega$, $\qquad \frac{\partial u}{\partial \nu} =\lambda f(u)$ on $\Gamma_1, \qquad u = 0$ on $\Gamma_2$
where $\lambda>0$, $f(u) = e^u$ or $f(u) = (1+u)^p$, $\Gamma_1$, $\Gamma_2$ is a partition of $\partial \Omega$ and $\Omega\subset \mathbb R^N$. We determine sharp conditions on the dimension $N$ and $p>1$ such that the extremal solution is bounded, where the extremal solution refers to the one associated to the largest $\lambda$ for which a solution exists.
Mathematics Subject Classification:26D07, 35J25, 35J6. Citation:Juan Dávila, Louis Dupaigne, Marcelo Montenegro. The extremal solution of a boundary reaction problem. Communications on Pure & Applied Analysis, 2008, 7 (4) : 795-817. doi: 10.3934/cpaa.2008.7.795
[1] [2]
Guillaume Warnault.
Regularity of the extremal solution for a biharmonic problem with general nonlinearity.
[3]
José Francisco de Oliveira, João Marcos do Ó, Pedro Ubilla.
Hardy-Sobolev type inequality and supercritical extremal problem.
[4]
Jagmohan Tyagi, Ram Baran Verma.
Positive solution to extremal Pucci's equations with singular and gradient nonlinearity.
[5]
Canghua Jiang, Kok Lay Teo, Ryan Loxton, Guang-Ren Duan.
A neighboring extremal solution for an optimal switched impulsive control problem.
[6]
Baishun Lai, Qing Luo.
Regularity of the extremal solution for a fourth-order elliptic problem with singular nonlinearity.
[7]
Carlo Morosi, Livio Pizzocchero.
On the constants in a Kato inequality for the Euler and Navier-Stokes equations.
[8] [9] [10] [11]
Vladimir Lubyshev.
Precise range of the existence of positive solutions of a nonlinear, indefinite in sign Neumann problem.
[12]
Aibin Zang.
Kato's type theorems for the convergence of Euler-Voigt equations to Euler equations with Drichlet boundary conditions.
[13]
Changliang Zhou, Chunqin Zhou.
Extremal functions of Moser-Trudinger inequality involving Finsler-Laplacian.
[14] [15] [16]
Leszek Gasiński, Liliana Klimczak, Nikolaos S. Papageorgiou.
Nonlinear noncoercive Neumann problems.
[17]
Albert Clop, Daniel Faraco, Alberto Ruiz.
Stability of Calderón's inverse conductivity problem in the plane for discontinuous conductivities.
[18]
Balázs Boros, Josef Hofbauer, Stefan Müller, Georg Regensburger.
Planar S-systems: Global stability and the center problem.
[19] [20]
Zhong Tan, Jianfeng Zhou.
Higher integrability of weak solution of a nonlinear problem arising in the electrorheological fluids.
2018 Impact Factor: 0.925
Tools Metrics Other articles
by authors
[Back to Top]
|
Let $F$ be a Maass cusp form for $\mathrm{SL}(3,\mathbb{Z})$ (level 1 trivial character).
Let $g$ be a Maass cusp form for $\Gamma_0(N)$ with character $\chi$ mod $N$. For convenience, you may assume $\chi$ is primitive mod $N$. You may also take $g$ to be an Eisenstein series $E(z,s,\chi)=\sum_{\gamma\in \Gamma_\infty\backslash \Gamma_0(N)}\overline{\chi}(\gamma) \Im(\gamma z)^s$.
If $g$ were level 1 (full modular group $\mathrm{SL}(2,\mathbb{Z})$), then we know that the Rankin-Selberg $L$-function of $F\times g$ is a nice integral \[\text{gamma factors}\cdot L(s,F\times g)=\int_{\mathrm{SL}(2,\mathbb{Z})\backslash \mathbb{H}} F\left(\begin{pmatrix}z&\\&1\end{pmatrix}\right)g(z)\det(z)^{s-1}\, \frac{dx \,dy}{y^2}.\]
The standard unfolding technique applies nicely to the Fourier-Whittaker expansion of $F$. See page 372 of Goldfeld, Dorian. Automorphic forms and L-functions for the group $\mathrm{GL}(n,\mathbb{R})$. Vol. 99. Cambridge University Press, 2006.
However, when $g$ has higher level with character, I don't know how to define the integral. That is my question.
Also, remind that if $f$ and $g$ are automorphic forms on $\mathrm{GL}(2)$ with characters $\chi_f$ and $\chi_g$ we can define their Rankin-Selberg $f\times \bar g$ by using some Eisenstein series $E(z,s, \chi_f\bar{\chi_g})$ to balance the characters of $f\bar g$.
And for my question, adelically, there is definite answer. But I am looking for something computable in classic settings.
|
Consider I am given two functions of one random variable each for example x=cos(at),y=rect(bt) where a and b are random variables.And I am given Probability density function for a and b then if I am asked if the two functions are independent or not so, I want to confirm that before proceeding I will have to convert the pdf of a and b to pdf of x and y or can I directly proceed with pdf of a and b by proceeding I mean checking if joint probability density is equal to product of marginal probability density.
The canonical definition of independence of two random variables $X$ and $Y$ is
$X$ and $Y$ are called independent random variables if for every choice of Borel sets $B_1, B_2$, the
events$\{X \in B_1\}$ and $\{Y \in B_2\}$ are independent events, that is, $$P\{X \in B_1, Y \in B_2\} = P\{X \in B_1\}P\{Y \in B_2\} \tag{1}$$
If you don't know what Borel sets are, rest assured that every set of real numbers you have encountered (and many more that you have never even dreamt of) is a Borel set. Choosing $B_1 = \{x\colon x \leq u\}$ and $B_2 = \{y\colon y \leq u\}$, Eq. $(1)$ tells us that$$P\{X\leq u, Y \leq v\} {=} P\{X\leq u\}P\{Y \leq v\}\tag{2}$$which can also be expressed as$$F_{X,Y}(u,v) = F_X(u)F_Y(v).\tag{3}$$It can be proved that if $(3)$ holds for
all real numbers $u$ and $v$, then $(1)$ also holds and so $(3)$ is usually taken as the operational definition of independence of $X$ and $Y$:
$X$ and $Y$ are called independent random variables if $$F_{X,Y}(u,v) = F_X(u)F_Y(v) ~\text{for all}~u, v \in \mathbb R.\tag{4}$$
If $g(\cdot)$ and $h(\cdot)$ are real-valued (measurable) functions, then the random variables $W = g(X)$ and $Z = h(Y)$ are independent whenever $X$ and $Y$ are independent. This is because the
events $\{W \leq u\}$ and $\{Z \leq v\}$ are the same as the events $\{X \in B_1\}$ and $\{Y \in B_2\}$ respectively where $B_1$ and $B_2$ are the pre-images of the sets $\{x\colon x \leq u\}$ and $\{y\colon y \leq v\}$ respectively under the maps $g(\cdot)$ and $h(\cdot)$. That is,\begin{align}B_1 &= \{w\colon g(w) \leq u\},\\B_2 &= \{z\colon h(z) \leq v\},\end{align}and from $(1)$, we know that $\{X \in B_1\}$ and $\{Y \in B_2\}$ are independent events. Thus, we have that $$P\{W \leq u, Z \leq v\} = F_{W,Z}(u,v) = P\{W \leq u\}P\{Z \leq v\} = F_{W}(u)F_{Z}(v)~\text{for all}~u, v \in \mathbb R$$and so $g(X)$ and $h(Y)$ are also independent random variables.In summary,
Functions of independent random variables are independent random variables.
It happens that if $X$ and $Y$ are independent then so will their functions $g(X)$ and $h(Y)$ be; but not $g(X,Y)$ and $h(X,Y)$.
|
The Monge-Ampère equation¶
The Monge-Ampère equation provides the solution to the optimal transportation problem between two measures. Here, we consider the case where the target measure is the usual Lebesgue measure, and the template measure is \(f(x)d^nx\), both defined on the same domain. Then, in two dimensions, the optimal transportation plan is given by
where \(u\) satisfies the the Monge-Ampère equation
where \(I\) is the identity matrix, and \(D^2\) is the Hessian matrix of second derivatives, subject to the boundary conditions \(\frac{\partial u}{\partial n}=0\).
Here we follow the approach of [LP13], namely to use the mixed formulation
where \(\sigma\) is a \(2\times 2\) tensor.
Written in weak form, our problem is to find \((u, \sigma) \in V\times \Sigma = W\) such that
This is called a nonvariational discretisation since the PDE is not in a divergence form. Note that we have dropped the boundary terms that vanish due to the boundary condition. To proceed in the discretisation, we simply choose \(V\) to be a continuous degree-k finite element space, and \(\Sigma\) to be the \(2 \times 2\) tensor continuous finite element space of the same degree. Since we have Neumann boundary conditions, this variational problem has a null space consisting of the constant functions in \(V\).
For Dirichlet boundary conditions, [Awa14] proved that this algorithm converges when \(k>1\). Note that the Jacobian system arising from Newton is only elliptic when \(I + \sigma\) is positive-definite; it is observed that positive-definiteness is preserved by Newton iteration and hence we must be careful to choose an appropriate initial guess. This is one of the reasons why we have set things up with \(I + \sigma\) here instead of \(\sigma\) as is more conventional for these equations, since then \(u=0\) is an appropriate initial guess. This setup also makes the application of the weak boundary conditions easier.
We now proceed to set up the problem in Firedrake using a square mesh of quadrilaterals.
from firedrake import *n = 100mesh = UnitSquareMesh(n, n, quadrilateral=True)
We construct the quadratic function space for \(u\),
V = FunctionSpace(mesh, "CG", 2)
and the function space for \(\sigma\).
Sigma = TensorFunctionSpace(mesh, "CG", 2)
We then combine them together in a mixed function space.
W = V*Sigma
Next, we set up the source function, which must integrate to the areaof the domain. Note how in the integration of the
Constantone, we must explicitly specify the domain we wish to integrate over.
x, y = SpatialCoordinate(mesh)fexpr = exp(-(cos(x)**2 + cos(y)**2))f = Function(V).interpolate(fexpr)scaling = assemble(Constant(1, domain=mesh)*dx)/assemble(f*dx)f *= scalingassert abs(assemble(f*dx)-assemble(Constant(1, domain=mesh)*dx)) < 1.0e-8
Now we build the UFL expression for the variational form. We will use the nonlinear solve, so the form needs to be a 1-form that depends on a Function, w.
v, tau = TestFunctions(W)w = Function(W)u, sigma = split(w)n = FacetNormal(mesh)I = Identity(mesh.geometric_dimension())L = inner(sigma, tau)*dxL += (inner(div(tau), grad(u))*dx - (tau[0, 1]*n[1]*u.dx(0) + tau[1, 0]*n[0]*u.dx(1))*ds)L -= (det(I + sigma) - f)*v*dx
We must specify the nullspace for the operator. First we define a constant nullspace,
V_basis = VectorSpaceBasis(constant=True)
then we use it to build a nullspace of the mixed function space \(W\).
nullspace = MixedVectorSpaceBasis(W, [V_basis, W.sub(1)])
Then we set up the variational problem.
u_prob = NonlinearVariationalProblem(L, w)
We need to set quite a few solver options, so we’ll put them into a dictionary.
sp_it = {
We’ll only use stationary preconditioners in the Schur complement, so we can get away with GMRES applied to the whole mixed system
# "ksp_type": "gmres",
We set up a Schur preconditioner, which is of type “fieldsplit”. We also need to tell the preconditioner that we want to eliminate \(\sigma\), which is field “1”, to get an equation for \(u\), which is field “0”.
# "pc_type": "fieldsplit", "pc_fieldsplit_type": "schur", "pc_fieldsplit_0_fields": "1", "pc_fieldsplit_1_fields": "0",
The “selfp” option selects a diagonal approximation of the A00 block.
# "pc_fieldsplit_schur_precondition": "selfp",
We just use ILU to approximate the inverse of A00, without a KSP solver,
# "fieldsplit_0_pc_type": "ilu", "fieldsplit_0_ksp_type": "preonly",
and use GAMG to approximate the inverse of the Schur complement matrix.
# "fieldsplit_1_ksp_type": "preonly", "fieldsplit_1_pc_type": "gamg",
Finally, we’d like to see some output to check things are working, and to limit the KSP solver to 20 iterations.
# "ksp_monitor": None, "ksp_max_it": 20, "snes_monitor": None }
We then put all of these options into the iterative solver,
u_solv = NonlinearVariationalSolver(u_prob, nullspace=nullspace, solver_parameters=sp_it)
and output the solution to a file.
u, sigma = w.split()u_solv.solve()File("u.pvd").write(u)
An image of the solution is shown below.
A python script version of this demo can be found here.
References
|
This shows you the differences between two versions of the page.
partial_semigroups [2018/07/23 16:57]
jipsen
partial_semigroups [2018/08/04 17:55] (current)
jipsen
Line 6: Line 6: A \emph{partial semigroup} is a structure $\mathbf{A}=\langle A,\cdot\rangle$, where A \emph{partial semigroup} is a structure $\mathbf{A}=\langle A,\cdot\rangle$, where - $\cdot$ is a \emph{partial binary operation}, i.e., $\cdot: A\times A\to A+\{*\} )$ and + $\cdot$ is a \emph{partial binary operation}, i.e., $\cdot: A\times A\to A+\{*\}$ and $\cdot$ is \emph{associative}: $(x\cdot y)\cdot z\ne *$ or $x\cdot (y\cdot z)\ne *$ imply $(x\cdot y)\cdot z=x\cdot (y\cdot z)$. $\cdot$ is \emph{associative}: $(x\cdot y)\cdot z\ne *$ or $x\cdot (y\cdot z)\ne *$ imply $(x\cdot y)\cdot z=x\cdot (y\cdot z)$. Line 19: Line 19: ====Basic results==== ====Basic results==== + Partial semigroups can be identified with [[semigroups with zero]] since for any partial semigroup $A$ we can define a semigroup $A_0=A\cup\{0\}$ (assuming $0\notin A$) + and extend the operation on $A$ to $A_0$ by $0x=0=x0$ for all $x\in A$. Conversely, given a semigroup with zero, say $B$, define a partial semigroup + $A=B\setminus\{0\}$ and for $x,y\in A$ let $xy=*$ if $xy=0$ in $B$. These two maps are inverses of each other. + However, the category of partial semigroups is not the same as the category of semigroups with zero since the morphisms differ. ====Properties==== ====Properties====
|
We know that: $$\frac 1{2!}+\frac 1{3!}+\frac 1{4!}+\frac 1{5!}+\frac 1{6!}+\cdots =e-2\approx0.71828$$ But I am getting the above sum as $1,$ as shown below: \begin{align} S & = \frac 1{2!}+\frac 1{3!}+\frac 1{4!}+\frac 1{5!}+\frac 1{6!}+\cdots \\[10pt] & = \frac 1{2!} + \frac {3-2}{3!} +\fra...
Perhaps the chaos-theory and chaotic-systems tags should be merged? The second has no official description but I can hardly imagine what the difference should be.
The list of proposals on the 2016 thread that are still open: Proposal to rename the "adjoint" tag Proposal to join the "chaos theory" and "chaotic systems" tags Proposal to change the name of the "divisors" tag Proposal to make the "compactification" tag a synonym of the "compactness" tag Pr...
Please merge chaotic-systems and chaos-theory. I am active in these tags and the respective field and I fail to see a meaningful difference between them, let alone a need for a distinction. This already got 10 upvotes last year.
I propose creating relation-composition tag and making it a synonym of function-composition. I think that if composition of functions is important enough to have its own tag, then so is composition of relations. But it would probably be better to have both topics under the same tag. We definitel...
Maple shows that $$ \sum_{0 \le k \le m} \frac{2^k}{(k+1)} = -i/2\pi -2\,{2}^{m} \left( 1/4\,{\it \Phi} \left( 2,1,m \right) -1 /4\,{m}^{-1}-1/2\, \left( m+1 \right) ^{-1} \right) $$ where $\Phi$ denotes Lerch's transcendent. How can we prove this? I have checked a few books but haven't got a cl...
maybeit's clearer than having relations just as a synonym an mentioned in the tag-wiki...?)
« first day (2004 days earlier) ← previous day next day → last day (676 days later) »
|
I am reading a paper, and I came across the following definition of sinc interpolation.
Warning. I don't have a strong background in signal processing. Also, I have no clue what that bar on $\bar{F}$ means. I don't know why that would be the conjugate. Would it actually be the conjugate? Context. In this part, for a given kernel $f_k$ of size $k$, we want to expand $F_k = DFT\{f_k\}$ to $F_N = DFT\{f_N\}$ have size $N$, without going to the spatial domain, padding and back to the spectral domain.
Let's discuss on a 1D scenario for notation simplification.
$$ G(u) = F(u) \ast \left [ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \frac{sin(\frac{\pi u k}{N})}{sin(\frac{\pi u}{N})} \right ] $$
Q1. I am assuming that $G(u)$ and $F(u)$ are discrete, having the respective sizes of $N$ and $k$. Is this assumption right?
Q2. Would that second term actually be the $sinc$ kernel, i.e. would the expression below be true? If not, what is their between that term and the sinc function? $$ sinc(u) = \frac{sin(u)}{u} = \left [ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \frac{sin(\frac{\pi u k}{N})}{sin(\frac{\pi u}{N})} \right ] $$
Another way I know to avoid using the DFT to compute $F_N$, is by representing $f_N$ in terms of $f_k$ as
$$f_n = \sum_{i=0}^{k-1} \delta(n-i) f_i$$
Thus, having that $$DFT\{\delta[n]\} = 1$$
and $$ DFT\{x[n-n_o]\} = e^{-j \frac{2 \pi}{N} k n_o} DFT\{x[n]\}$$
Hence $$F_N(k) = \sum_{n_o=0}^{k-1} e^{-j \frac{2 \pi}{N} k n_o } f_{n_o}$$
I am not sure if that the convolution on the frequency domain $G(u)$, and the impulse based composition $F_N(k)$ are similar approaches of doing the same thing. I did notice the similarity between
$$ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \sim e^{\frac{-j 2 \pi k}{N}n_o} $$
But, if this is the case, then,
$$ \frac{sin(\frac{\pi u k}{N})}{sin(\frac{\pi u}{N})} \sim F\{\delta(k-n_o)\} $$
Q3. Are these the same approaches? Is there any relation between these expressions? $$\frac{k-1}{2} \sim n_o$$
|
This feat is notorious for its poor wording. The “+100%” phrasing is completely unique within D&D 3.5e as far as I know, for example. Ultimately, I can’t imagine any other interpretation here than adding again the number subtracted from your attack rolls, and it does have the nice feature of specifying the “normal” damage from Power Attack which means that features like the frenzied berserker’s supreme power attack that already give one-handed weapons 2:1 returns don’t get doubled to 4:1, but instead go to the 3:1 you would normally expect from D&D’s multiplication rules.
But then there is the line you haven’t quoted:
If you use this tactic with a two-handed weapon, you instead triple the extra damage from Power Attack.
No bizarre “+100%” in sight! But also we have lost the useful reference to “normal” and now it is multiplying “the extra damage from Power Attack,” whatever that is for you. This is going to get us in trouble, you can just tell already.
So you are tripling the extra damage—not tripling the penalty applied. The problem here, well the first problem here, is that “the extra damage from Power Attack” is “twice the number subtracted from your attack rolls” when attacking two-handed. Worse, since “the extra damage from Power Attack” is calculated as twice the penalty, but isn’t itself subject to any multiplier, arguably the repeated-multiplication rules don’t apply, and that gets you a 2×3=6 rather than 1+(2−1)+(3−1)=4. So instead of 2:1 returns on Power Attack, you get
6:1 returns on Power Attack. Or maybe you get 5:1; it’s impossible to say since it’s worded so poorly. Plus, ya know, I suspect what they meant to do was give you 3:1 returns, but of course they didn’t say that.
And that would combine quite nicely with, say, the supreme power attack feature of the frenzied berserker, who was getting 4:1 returns to begin with. Now they’re arguably getting 8:1.
On top of those issues, this is
only the Power Attack bonus damage. The result is added to the rest of your damage, and that gets you your full damage... which might be multiplied again, e.g. with valorous. This effectively multiplies your multiplier, which is exactly what the multiplication rules try to avoid, but since two different things are being multiplied, the multiplication rules don’t actually come into play.
So for the example: 2d6+1 damage from the weapon itself, +6 for Strength, and the −6 attack penalty for maximum Power Attack results in double that for +12 damage from Power Attack without Leap Attack. Thus 2d6+19 is the baseline for all interpretations, and
valorous doubles that for 4d6+38.
With the 6:1 returns, we are instead looking at Power Attack bonus of +36 (six times the penalty, triple “the extra damage from Power Attack” which would have been +12). Using 5:1 brings that down to +30, which is somewhat better, but not, ya know, great, when what they probably meant was +18. Note that +36 is nearly what
valorous was giving the entire attack before. Now with valorous, we’re looking at a total of 4d6+ 66—of which, 52 comes from Power Attack.
It may not be a bad idea to try to eliminate the multiplication of a multiplier here through houserule, but note that the Power Attack bonus damage isn’t the only case of this: the bonus damage due to Strength also has a multiplier, +1½×, which is
also being doubled by valorous. This, unlike Leap Attack, has strong precedent in the rules. The “fix” would be to apply the multiplication rule individually to all sources of damage, like so:
\begin{array}{r}2 \times ( && 2\text{d}6 && +1 && +1\tfrac{1}{2}\times 4 && +3\times 2\times 6 & ) \\= && 2\times 2\text{d}6 && + 2\times 1 && + 2\times 1\frac{1}{2}\times 4 && + 2\times 3\times 2\times 6 \\= & [1 \\ && +\left(2-1\right) \\ & ] & \times 2\text{d}6 & +[1 \\ && && +\left(2-1\right) \\ && & ] & \times 1 & +[1 \\ && && && +\left(2-1\right) \\ && && && +\left(1\frac{1}{2}-1\right) \\ && && & ] & \times 4 & +[1 \\ && && && && +\left(2-1\right) \\ && && && && +\left(3-1\right) \\ && && && && +\left(2-1\right) \\ && && && & ] & \times 6 \\= && 2\times 2\text{d}6 && +2\times 1 && +2\frac{1}{2}\times 4 && +5\times 6 \\= && 4\text{d}6 && +2 && +10 && +30 \\= && && && && 4\text{d}6+42 \\\end{array}
But this is very-definitely a houserule, and I’m not convinced that it
is good (I mean, good luck calculating that for every attack!), even though it “enforces” the idea that you’re not supposed to get to mulitply multipliers.
|
There are a number of imprecisions in your question, mostly having to do with confusing the Lie group and its Lie algebra. I suppose this will make it hard to read the mathematical literature. Having said that, the first volume of Kobayashi and Nomizu is probably the canonical reference.
Let me try to summarise. Let me assume that $H$ is connected.
The structure of the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ of the Lie algebra $\mathfrak{g}$ of $G$ into the Lie algebra $\mathfrak{h}$ of $H$ and the complement $\mathfrak{m} = \mathrm{Span}(\lbrace Y_a\rbrace)$, says that you have a
reductive homogeneous space. Such homogeneous spaces have a canonical invariant connection and hence a canonical notion of covariant derivative.
The map $G \to G/H$ defines a principal $H$-bundle. Your $\Omega$ is a local section of this bundle. On $G$ you have the left-invariant Maurer-Cartan one-form $\Theta$, which is $\mathfrak{g}$-valued. You can use $\Omega$ to pull back $\Theta$ to $G/H$: it is a locally defined one-form on $G/H$ with values in $\mathfrak{g}$. For matrix groups, it is indeed the case that $\Omega^*\Theta = \Omega^{-1}d\Omega$, but you can in fact use this notation for most computations without worrying too much.
Decompose $\Omega^{-1}d\Omega$ according to the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$:$$
\Omega^{-1} d\Omega = \omega + \theta
$$where $\omega$ is the $\mathfrak{h}$ component and $\theta$ is the $\mathfrak{m}$ component. It follows that $\theta$ defines pointwise an $H$-equivariant isomorphism from the tangent space to $G/H$ and $\mathfrak{m}$, with $H$ acting on $\mathfrak{m}$ by the restriction to $H$ of the adjoint action of $G$ on $\mathfrak{g}$ and $H$ acting on $G/H$ via the linear isotropy representation. This means that $\theta$ is a
soldering form.
On the other hand $\omega$ is $\mathfrak{h}$-valued and defines a connection one-form. You can check that if you change the parametrisation $\Omega$, then $\omega$ does transform as a connection under the local $H$-transformations.
This then allows you to differentiate sections of homogeneous vector bundles on $G/H$, such as tensors. In your notation and assuming that $\psi$ is a section of one such bundle, associated to a representation $\rho$ of $H$, the covariant derivative would be$$
\nabla \psi = d\psi + \rho(\omega)\psi~,
$$where I also denote by $\rho$ the representation of the Lie algebra of $H$.
The Maurer-Cartan structure equation satisfied by $\Theta$ is$$
d\Theta = - \tfrac12 [\Theta,\Theta]
$$and this pulls back to $G/H$ to give the following equations$$
d\theta + [\omega,\theta] = -\tfrac12 [\theta,\theta]_{\mathfrak{m}}
$$and$$
d\omega + \tfrac12 [\omega,\omega] = - \tfrac12 [\theta,\theta]_{\mathfrak{h}}
$$which say that the torsion $T$ and curvature $K$ of $\omega$ are given respectively by$$
T = -\tfrac12 [\theta,\theta]_{\mathfrak{m}} \qquad\mathrm{and}\qquad K = - \tfrac12 [\theta,\theta]_{\mathfrak{h}}.
$$
One thing to keep in mind is that in general $\nabla$ will not be the Levi-Civita connection of any invariant metric, since it has torsion. (If (and only if) the torsion vanishes, you have a (locally) This post has been migrated from (A51.SE)
symmetric space.) If you are interested in the Levi-Civita connection of an invariant metric, then you have to modify the invariant connection by the addition of a contorsion tensor which kills the torsion. The details are not hard to work out.
|
I'm studying Riemann-Darboux integration. I'm trying to prove the following rather intuitive notion for integrals. Please let me know if you find any errors in this proof, as I'm self-studying this topic.
Theorem: Suppose $f$ is Riemann-Darboux integrable on $[a,b]$. Let $c\in(a,b)$. Then, $f$ is Riemann-Darboux integrable on the intervals $[a,c]$ and $[c,b]$. Attempted Proof: Since $f$ is Riemann-Darboux integrable on $[a,b]$, for arbitrary $\epsilon>0$, there exists a partition $P$ of $[a,b]$ such that $U(f,P)-L(f,P)<\epsilon$.
Let $n_p,n_{p*}$, and $n_{p'}$ be the number of partition parts in $P$, $P^*$, and $P'$, respectively. Also, let $m_i=\inf_{x\in[x_{i-1},x_i]}f(x)$ and $M_i=\sup_{x\in[x_{i-1},x_i]}f(x)$.
Consider the partition of $[a,c]$ given by $P^*=P\cap[a,c]$. Then, $$U(f,P^*)-L(f,P^*)=\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i\le\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i+ \sum_{n_{p*}+1}^{n_p}(M_i-m_i)\Delta x_i=\sum_{i=1}^{n_p}(M_i-m_i)\Delta x_i=U(f,P)-L(f,P)<\epsilon$$ Therefore, $f$ is integrable on $[a,c]$.
Next, consider the partition $[a,b]$ given by $P'=P\cap[c,b]$. Then,
$$U(f,P')-L(f,P')=\sum_{i=n_{p*}+1}^{n_{p'}}(M_i-m_i)\Delta x_i\le\sum_{i=1}^{n_{p*}}(M_i-m_i)\Delta x_i+ \sum_{n_{p*}+1}^{n_p}(M_i-m_i)\Delta x_i=\sum_{i=1}^{n_p}(M_i-m_i)\Delta x_i=U(f,P)-L(f,P)<\epsilon$$ Therefore, $f$ is integrable on $[c,b]$. $\square$
Any and all feedback, or alternative proofs are appreciated. I love to see different arguments to expand my skill set.
|
It is a well-known result that the modular function $1728J(\tau) := \frac{1728E_4(\tau)^3}{E_4(\tau)^3-E_6(\tau)^2}$ has integral values if $\tau$ has class number 1 - for example at $\tau_{163}:=\frac{1+i\cdot\sqrt{163}}{2}$ you get $1728J(\tau_{163})=-640320^3$.
Now define the quasi-modular function $s_2(\tau):=\frac{E_4(\tau)}{E_6(\tau)}\cdot\left(E_2(\tau)-\frac{3}{\pi Im(\tau)}\right)$.
Then I have looked at all 13 class 1 discriminants and have verified numerically that $$M(\tau):=s_2(\tau)\cdot(1728J(\tau)-1728)=\frac{1728E_4(\tau)E_6(\tau)}{E_4(\tau)^3-E_6(\tau)^2}\left(E_2(\tau)-\frac{3}{\pi Im(\tau)}\right)$$ also has integral values for all these $\tau_N$.
For example $M(\tau_{163})=-2^{13}\cdot3^5\cdot5\cdot7\cdot11\cdot19\cdot23\cdot29\cdot127\cdot181$.
My Question:How can I prove that $M(\tau)\in\mathbb Z$ for all $\tau$ with class number 1?Or where can I find a proof for it? Edit: After the answer of @Zavosh (thank you!!!) it remains to prove this question. Who can help?
|
First off, the fact that the board actually blocks the sun light going into the house may have cooled down the house itself (same effect as a solar screen). Since this question is about how the pressure and temperature will be changed after installing the Eco-Cooler air conditioner (the bottle board solely), I will give the following analysis.
From the Gay-Lussac Law that \begin{align}\frac{P_1}{T_1}=\frac{P_2}{T_2},\end{align}the ratio between pressure and temperature is a constant for a given approximately fixed volume. When one installs the bottles on the windows, the shape of the bottles increases the air pressure before entering the room. This can be understood in the following manner: we assume the wind is entering an almost closed house that every cross section over the course of the bottle pipe is approximately under the force balanced/equilibrium condition that $$F_a=S_aP_a\approx S_bP_b$$ for two arbitrary cross sections $S_a$ and $S_b$. Since the bottleneck area is the smallest, pressure there may be the biggest before entering the house. In this process, however, the temperature may not be changed since it is contacting with the outdoor environment constantly.
When the air blows into the room, the pressure gets decreased immediately to the normal or even maybe below-normal (depending on the actual room pressure) atmosphere pressure as there is no bottleneck-shaped pipe limiting its volume. As a result of the equation given above, the temperature goes down immediately inside of the room.
One important note regarding the other answers and the valid condition of equation (2) above -- I have noticed other answers have been focusing on the opening of chimney and other windows, but here I don't need to assume that condition, and indeed I think the other outlets of the house should keep closed to prevent heat exchange from those openings.
Firstly, we shouldn't focus on whether there is a chimney or outlets on the other side of the house to explain the temperature change due to the bottle board installation. Because before or after the bottle board is installed, the other chimney or windows are always there if there were any, they shouldn't be the cause of the change of temperature -- it is the installation of the bottle inlets generate the temperature change. Secondly, since the house is relatively large compared to the openings from the video, air flow will experience a friction effect when it enters the house (all the other answers haven't considered this effect). In other words, you can imagine the house is approximately a closed volume that it will give a resistance on the entering air and reduce its entering speed. Therefore, the air flow going through the bottle pipes will be compressed at the bottle neck position and the speed of entering the house is lower than the case that it enters an completely open space. This validates the condition of equation (2) which is any cross section of the air flow over the bottle path is approximately in a force balanced/equilibrium condition. Thirdly, a chimney may help to let warm air go out and the houses shown in the video are made by wood and are not air-tight indeed, but it is actually important to keep other big windows of the house closed in practice to prevent the hot air entering the house to raise the temperature again! The video shows the room temperature can be $5^\circ$ lower than outside. If you keep other big windows open, it is very easy to rebalance the room temperature back to high again. This is the same common requirement as we turn on air conditioner in summer, and makes the point 2 above even valid. Obviously, other answers may have ignored this common knowledge -- instead of analyzing how the bottle board helps but to argue about there must be openings to let air flow freely go in and out of the house to make the cooling process possible if at all.
Feasibility and conditions to make it work: We see from the video that there is a $5^\circ$ temperature difference. We can assume that the outside temperature is about $30^\circ C$ or $T_1=303K$; and the temperature inside is about $25^\circ C$ or $T_2=298K$. Therefore, the pressure raised at the bottleneck relative to the normal house pressure is \begin{align}\eta=\frac{P_1}{P_2}=\frac{T_1}{T_2}\approx 1.017,\end{align}which is about $1.7\%$ of pressure increase. From equation (2), since the bottleneck cross section is a lot of smaller than the intake area, the ideal pressure increase can be a lot more than $1.7\%$ when Equ. (2) is completely an equation. Considering Equ. (2) becomes a complete equation only when the bottleneck is completely closed from the house side, which is not totally true, and the house is constantly exchanging heat from other non-ideal channels with the environment, we may find the $5^\circ$ temperature decrease is possible from a rough estimation. To make the Eco-Cooler work well, it is crucial to have a good insulation condition of the house and make sure all the other windows/openings of the house closed to make the constrain well satisfied. However, if there is no air flowing into the house from the bottles, this Eco-cooler may not work well from the pressure-temperature transitions, but it may still work to some extent by blocking the sunshine from shining into the house.
A similar rule governs the case that when you evaporate water inside of an open room, in which the volume of the water vapor is increased from water and the chemical energy of water vapor from liquid water is also changed so that in the end water vapor will absorb heat from the air. Hopefully this helps your understanding on the power of physics laws.
|
In my talk at 2018 Chinese Mathematical Logic Conference, I asked if \((V,\subset,P)\) is epsilon-complete, namely if the membership relation can be recovered in the reduct. Professor Joseph S. Miller approached to me during the dinner and pointed out that it is epsilon-complete. Let me explain how.
Theorem
Let \((V,\in)\) be a structure of set theory, \((V,\subset,P)\) is the structure of the inclusion relation and the power set operation, which are defined in \((V,\in)\) as usual. Then \(\in\) is definable in \((V,\subset,P)\).
Proof.
Fix a set \(x\). Define \(y\) to be the \(\subset\)-least such that
\[\forall z \big((z\subset x\wedge z\neq x)\rightarrow P(z)\subset y\big).\]
Actually, \(y=P(x)-\{x\}\), so \(\{x\}= P(x) – y\). Since set difference can be defined from subset relation and \((V,\subset,\{x\})\) can define \(\in\), we are done.
\(\Box\)
Here is another argument figured out by Jialiang He and me after we heard Professor Miller’s Claim.
Proof. Since \(\in\) can be defined in \((V,\subset,\bigcup)\) (see the slides). Fix a set \(A\), it suffices to show that we can define \(\bigcup A\) from \(\subset\) and \(P\).
Let \(B\) be the \(\subset\)-least set such that there is \(c\), \(B=P(c)\) and \(A\subset B\). Note that
\[ \bigcap\big\{P(d)\bigm|A\subset P(d)\big\}= P\big(\bigcap\big\{d\bigm|A\subset P(d)\big\}\big). \] Therefore, \(B\) is well-defined. Next, we show that \[ \bigcap\big\{d\bigm|A\subset P(d)\big\}=\bigcup A. \] Clearly, \(A\subset P(\bigcup A)\). This proves the direction from left to right. For the other direction, if \(x\) is in an element of \(A\), then it is in an element of \(P(d)\) given \(A\subset P(d)\), i.e. it is an element of such \(d\).
Therefore \(\bigcup A\) is the unique set whose power set is \(B\).
\(\Box\)
|
METHOD $1$:
We can proceed to evaluate the integral if we invoke Generalized Functions. To that end, we write the inverse Fourier Transform representation for $f$ as
$$f(t)=\frac{1}{2\pi}\int_{- \infty}^{ \infty}\frac{e^{j\omega t}}{a+j\omega}\,d\omega \tag 1$$
Then, note that the derivative $f'$ is given by
$$f'(t)=\frac{1}{2\pi}\int_{- \infty}^{ \infty}\frac{j\omega\,e^{j\omega t}}{a+j\omega}\,d\omega \tag 2$$
Adding $(2)$ and $a$ times $(1)$ reveals that
$$\begin{align}f'(t) +af(t)&=\frac{1}{2\pi}\int_{- \infty}^{ \infty}e^{j\omega t}\,d\omega \\\\&=\delta(t) \tag 3\end{align}$$
where $\delta(t)$ is the Dirac Delta Distribution. Solution to the ODE expressed in $(3)$ is
$$f(t)=e^{-at}h(t)$$
as was to be shown!
METHOD $2$:
We begin by writing the integral representation for $f(t)$ as the Cauchy Principal Value
$$f(t)=\frac{1}{2\pi}\lim_{R\to \infty}\int_{-R}^{ R}\frac{e^{j\omega t}}{a+j\omega}\,d\omega \tag 4$$
Note that we need not evaluate the integral in terms of its Cauchy Principal Value. Inasmuch as the improper integral converges, its Cauchy Principal Value converges to the same value.
Now, we define a new function $f_R(t)$ as
$$f_R(t)=\oint_C \frac{e^{jzt}}{a+jz}\,dz$$
where the closed contour $C$ is comprised of the real line segment from $z=-R$ to $z=+R$ and for $t>0$ ($t<0$) the semicircle $C_R$ of radius $R$ in the upper (lower) half plane.
Jordan's Lemma guarantees that as $R\to \infty$, the contribution of the integral from the integration over $C_R$ goes to zero. Therefore, we have $f(t)=\lim_{R\to \infty}f_R(t)$.
Now, we note that since $a>0$ that the only singularity of $\frac{e^{jzt}}{a+jz}$ is a $z=ja$. Thus, from the Residue Theorem we have
$$f(t)=\begin{cases}2\pi i\text{Res}\left(\frac{e^{jzt}}{a+jz},z=ja\right)=e^{-at}&, t>0\\\\0&,t<0\end{cases}$$
as expected!
|
Let $E \to F$ be a morphism of cohomology theories defined on finite CW complexes. Then by Brown representability, $E, F$ are represented by spectra, and the map $E \to F$ comes from a map of spectra. However, it is possible that the map on cohomology theories is zero while the map of spectra is not nullhomotopic. In other words, the homotopy category of spectra does not imbed faithfully into the category of cohomology theories on finite CW complexes. This is due to the existence of phantom maps:
Let $f: X \to Y$ be a map of spectra. It is possible that $f$ is not nullhomotopic even if for every finite spectrum $F$ and map $F \to X$, the composite $F \to X \stackrel{f}{\to} Y$ is nullhomotopic. Such maps are called phantom maps. For an explicit example, let $S^0_{\mathbb{Q}} = H\mathbb{Q}$ be the rational sphere. This is obtained as a filtered (homotopy) colimit of copies of $S^0$ and multiplication by $m$ maps. The universal coefficient theorem shows that there are nontrivial maps $S^0_{\mathbb{Q}} \to H \mathbb{Z}[1]$; in fact they are parametrized by $\mathrm{Ext}^1(\mathbb{Q}, \mathbb{Z}) \neq 0$. However, these restrict to zero on any of the terms in the filtered colimit (each of which is a copy of $S^0$).
In other words, the distinction between flat and projective modules is in some sense an algebraic analog of the existence of phantom maps. Given a flat non-projective module $M$ over some ring $R$, then there is a nontrivial map in the derived category $M \to N[1]$ for some module $N$. Now $M$ is a filtered colimit of finitely generated projectives -- Lazard's theorem -- and the map $M \to N[1]$ is "phantom" in that it restricts to zero on each of these finitely generated projectives (or more generally for any compact object mapping to $M$). So it should not be too surprising that phantom maps of spectra exist and are interesting.
Now spectra are analogous to the derived category of $R$-modules, but spectra also come with another adjunction: $$ \Sigma^\infty, \Omega^\infty: \mathcal{S}_* \leftrightarrows \mathcal{Sp}$$ between pointed spaces and spectra. They thus come with another distinguished class of objects, the suspension spectra. (Random question: what is the analog of a suspension spectrum in algebra?)
Definition: A map of spectra $X \to Y$ is hyperphantom if for any suspension spectrum $T$ (let's interpret that loosely to include desuspensions of suspension spectra), $T \to X \to Y$ is nullhomotopic.
In other words, a map of spectra is hyperphantom if the induced natural transformation on cohomology theories of spaces (not necessarily finite CW ones!) is zero.
Is it true that a hyperphantom map is nullhomotopic? Rudyak lists this as an open problem in "On Thom spectra, orientability, and cobordism." What is the state of this problem?
|
Is there an ordinal $\alpha$ such that $ZF$ believes that $V_{\alpha}$ is a model of $ZF$? (If it is problematic to state this since we have to check infinitely many axioms at once, formalize logic in $ZF$. ) If $\alpha > \omega$ is a limit ordinal, then $V_{\alpha}$ is a model of $ZF - R$, where $R$ stands for the axiom of replacement. The reflection principle tells us that $ZF$ knows that the set $S$ of ordinals $\alpha$ such that $R$ holds relative to $V_{\alpha}$ is unbounded. So is there some limit ordinal in $S$, thus answering the question? What is the smallest $\alpha$ such that $V_{\alpha}$ is a model of $ZF$, if it exists? I already know that if $\kappa$ is a strongly inaccessible cardinal, then $V_{\kappa}$ is a model of ZF, but the existence of such $\kappa$ is independent from $ZF$.
You cannot prove that there is such an ordinal, but (under a suitable large cardinal assumption) it is consistent that there is such an ordinal.
If you could prove that there was such an ordinal, then you will have proved Con(ZF) in ZF, contrary to the incompleteness theorem.
Another way to see it is: if there were such an ordinal, let $\alpha$ be the least ordinal with $V_\alpha\models$ZF. Thus, $V_\alpha$ is a model of ZF having no $\beta$ with $V_\beta\models$ZF, since the $V_\beta$ of $V_\alpha$ is the same as the $V_\beta$ of $V$.
However, if $\kappa$ is an inaccessible cardinal, then $V_\kappa\models$ZFC. In fact, there are many smaller $\alpha\lt\kappa$ with $V_\alpha\models$ZFC, and so the consistency strength of having an $\alpha$ with $V_\alpha\models$ZFC is strictly lower than an inaccessible, if it is consistent.
Your remark about using the Reflection Theorem to get $\alpha$ with $V_\alpha$ with Replacement is not quite right. The Replacement Axiom is a
scheme of axioms, an infinite list of axioms, and the Reflection idea will only produce $\alpha$ with $V_\alpha$ satisfying any one (or finitely many) of them. But we cannot get a model of the whole scheme this way.
Lastly, it is interesting to note that every nonstandard model $M$ of ZF, having a nonstandard $\omega$, will have a $V_\alpha$ that is a model of ZF as viewed from outside $M$. The reason is that for any finite collection of the ZF axioms, we may apply the Reflection Theorem as you indicated to get a $V_\alpha^M$ satisfying them, but then since the $\omega$ of $M$ is nonstandard and $M$ cannot identify its standard cut, it follows by overspill that there must be some nonstandard finite set of ZF axioms in $M$ that $M$ thinks is satisfied in one of its $V_\alpha^M$. But since this includes all the standard axioms, we have thus obtained a $V_\alpha^M$ satisfying the true ZF as viewed externally.
Joel has thoroughly answered the question, but let me point out a related result that I think deserves to be better known. It's due to Montague and Vaught ["Natural models of set theories" Fund. Math. 47 (1959) 219-242]. Suppose there is an inaccessible cardinal, and let $\delta$ be the first one. Define $\alpha$ to be the first ordinal such that $V_\alpha$ is a model of ZF. (As Joel pointed out, $\alpha<\delta$.) Let $\beta$ be the first ordinal such that $V_\beta$ is elementarily equivalent to $V_\delta$ (i.e., they satisfy the same first-order sentences in the language of set theory). Let $\gamma$ be the first ordinal such that $V_\gamma$ is an elementary submodel of $V_\delta$ (i.e., any elements of $V_\gamma$ satisfy the same first-order formulas in $V_\gamma$ as in $V_\delta$). Then the theorem of Montague and Vaught says that $\alpha<\beta<\gamma<\delta$ (with all inequalities strict).
|
I'm working to derive kinetic energy flux for fluids. I could not find a derivation online. I know from literature the correct answer is $ \phi_{kin} = (1/2) \rho v^3 $.
The specific context is in snapshots of fluids, so its okay to assume constant acceleration.
I being with the definition of kinetic energy
$$ E_k = \frac{1}{2} m v^2 \, .$$
We can assume constant acceleration between each snapshot (each time we can view the fluid). We are interested in calculating the kinetic energy flux of a system with one point, molecule or pixel and how it moves between snapshots.
We begin by placing a single particle in a box. It has energy only due to kinetic energy. As it has kinetic energy, from our previous definition of kinetic energy it must be moving.
Consider this particle in a box moving in an arbitrary direction as depicted in the figure below.
We shrink the cube so the particle must pass through it over the duration of the snapshot and measure the flux once the particle has moved through a face of the box. The box has edges which are $\epsilon$ wider then the diameter of the particle.
We describe the particle (with a vector field ) by a Dirac delta with an associated $E_k$ scalar. The vector field therefore looks like
$$ \underline{F_{k}} = E_K \ \delta^3(\underline{r} - \underline{r'}) $$
But - this isn't actually a vector, because the Dirac delta is a scalar. But, if I include a vector, it screws up the result.
The kinetic energy flux is defined as
$$\phi_k = \oint_S \underline{F_{k}} \cdot \underline{\hat{n}} \,dS$$
We assume the point particle does not go through an edge, so we arbitrarily take the x-y face.
$$ \phi_{k} = \int_{x-\epsilon}^{x+\epsilon} \int_{y-\epsilon}^{y+\epsilon} E_k \delta^3(\underline{r} - \underline{r'}) \cdot \underline{\hat{n}} dx dy $$
Taking the normal which will get rid of one dimension. Thus our integral becomes
$$ \phi_{k} = \int_{x-\epsilon}^{x+\epsilon} \int_{y-\epsilon}^{y+\epsilon} E_k \delta^2(\underline{r} - \underline{r'}) \cdot \underline{\hat{n}} dx dy $$
Again, I think the fault is in the above step--I can't just get rid of one dimension of the Dirac delta by using the dot product with the normal as an excuse. (also, I'm dotting something that isn't a vector with a vector)--but if I don't do that, I'll end up with a 3D Dirac delta in a 2D integral, which seems dodgy
We invoke the property of the Dirac delta
$$ \int_{a-\epsilon}^{a+\epsilon} f(x) \delta(x-a) dx = f(a) $$
And we are left with
$$ \phi_{k} = E_{k} \int_{x-\epsilon}^{x+\epsilon} \int_{y-\epsilon}^{y+\epsilon} \delta^2(\underline{r} - \underline{r'}) \cdot \underline{\hat{n}} dx dy = E_{k} $$
Our flux therefore is $\phi_{k} = E_{k} = \frac{1}{2} \ m \ (v_{2}^2 - v_{1}^2) $.
We can set the initial velocity to zero and consider the $E_{k}$ in each snapshot at that instant. Thus
$$ E_k = \frac{1}{2} \ m v^{2} $$
This is valid for a single particle. Extrapolating to a fluid and feeding in the density rather than mass, we find it:
$$ F_{kin} = \frac{1}{2} \rho v^3 $$
Which is the correct expression. (but with a flawed derivation).
|
It runs some code and puts me back into typing mode, creating a file called step.cA for awesome wrote:Try runningdrc wrote:ntzfind keeps tossing errorsand tell me what happens. There's probably some error there that isn't getting displayed.
Code: Select all
g++ ntzfind-setup.cpp -o ntzfind-setup
Current rule interest: B2ce3-ir4a5y/S2-c3-y
That makes no sense. Are you sure you're only telling me what happens when you run the one command I told you?drc wrote:It runs some code and puts me back into typing mode, creating a file called step.cA for awesome wrote:Try runningdrc wrote:ntzfind keeps tossing errorsand tell me what happens. There's probably some error there that isn't getting displayed.
Code: Select all
g++ ntzfind-setup.cpp -o ntzfind-setup
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X 1. It should be printing outputs and partialsGUYTU6J wrote: I have two questions 1.How can I know that the zfind is running? 2.I'm using the initial rows generator from the top of the 3rd page of this topic,but it fails...Why?
1.5 Your search failed because it's width is 16, remember that the width is the width for each side, as it is searching for symmmetric spaceships. Max width 10
I dunno number 2, I dont use save and load.
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Happened to me too. Fix is to NOT run ntzfind compile. Type in each commanddrc wrote:ntzfind keeps tossing errors:
Code: Select all
$ ./ntzfind-compile.sh b2i34c/s2-i3 ./ntzfind-compile.sh: line 2: ./ntzfind-setup: No such file or directory ntzfind.c:7:18: fatal error: step.c: No such file or directory #include "step.c"
Code: Select all
$ ./ntzfind-setup.cpp b2i34c/s2-i3 ./ntzfind-setup.cpp: line 1: //Yes,: No such file or directory ./ntzfind-setup.cpp: line 5: //I: No such file or directory ./ntzfind-setup.cpp: line 6: class: command not found ./ntzfind-setup.cpp: line 7: $'public:\r': command not found ./ntzfind-setup.cpp: line 8: std::string: command not found ./ntzfind-setup.cpp: line 8: $'\r': command not found ./ntzfind-setup.cpp: line 9: syntax error near unexpected token `{}' '/ntzfind-setup.cpp: line 9: ` Transition(){};Windows 7, applied patches.
Code: Select all
$ ./ntzfind.c ./ntzfind.c: line 1: //Save: No such file or directory ./ntzfind.c: line 2: $'\r': command not found ./ntzfind.c: line 8: $'\r': command not found ./ntzfind.c: line 10: $'\r': command not found ./ntzfind.c: line 11: int: command not found ./ntzfind.c: line 11: $'\r': command not found ./ntzfind.c: line 12: uint32_t: command not found ./ntzfind.c: line 12: $'\r': command not found ./ntzfind.c: line 13: uint16_t: command not found ./ntzfind.c: line 13: $'\r': command not found ./ntzfind.c: line 14: int: command not found ./ntzfind.c: line 14: $'\r': command not found ./ntzfind.c: line 15: char: command not found ./ntzfind.c: line 15: $'\r': command not found ./ntzfind.c: line 16: $'\r': command not found ./ntzfind.c: line 17: int: command not found ./ntzfind.c: line 17: $'\r': command not found ./ntzfind.c: line 18: $'\r': command not found ./ntzfind.c: line 19: syntax error near unexpected token `(' '/ntzfind.c: line 19: `void plong(long a){
exactly
First, compile ntzfind setup
Code: Select all
g++ ntzfind-setup.cpp -o ntzfind-setup
Code: Select all
./ntzfind-setup b2i34c/s2-i3
Code: Select all
gcc ntzfind.c -O3 -o ntzfind
Code: Select all
./ntzfind p1337 k193
EDIT:That only happens when the width is 10
Here's my first 5 partials
Code: Select all
x = 154, y = 45, rule = B3/S2316$18b2o3b2o3b2o14bo3b2o3bo16bo3b2o3bo20bo3b2o3bo19bo3b2o3bo$18b3obo2bob3o13bobobo2bobobo14bobobo2bobobo18bobobo2bobobo18bo3b2o3bo$17b2o10b2o12bobobo2bobobo14bobobo2bobobo18bobobo2bobobo18bo3b2o3bo$20bob4obo15b2o3b2o3b2o15bo3b2o3bo20bo3b2o3bo$16b2o2bob4obo2b2o11b2o8b2o73bo4b2o4bo$19b2o6b2o13bobo8bobo15bo6bo48b2obo2b2o2bob2o$19b4o2b4o14b4o4b4o14b2obo4bob2o18b4o4b4o19bo6bo$20b2o4b2o16bo2bo2bo2bo14bo3bo4bo3bo16bo2b3o2b3o2bo14bo3bo6bo3bo$48b2o19bo2bo4bo2bo45bo2b2ob4ob2o2bo$18bo3bo2bo3bo11b3ob8ob3o11bob2o6b2obo16b3o3b2o3b3o17bo2b4o2bo$17b2obobo2bobob2o11b2obobo2bobob2o15b2o4b2o20bo2b6o2bo17b2o8b2o$16b2o3bo4bo3b2o10b3ob6ob3o45bob4obo22bo4bo$16b3ob2o4b2ob3o10b2o10b2o12b2o3bo2bo3b2o18bo8bo19b2ob4ob2o$20bo2b2o2bo40b2o2bo4bo2b2o17bo10bo18b3ob2ob3o$17b2o3bo2bo3b2o11bobob6obobo14bobo4bobo22b2o2b2o25b2o$42b2o10b2o12b2o2b2o2b2o2b2o17b2ob2o2b2ob2o16bob3o4b3obo$68bobo2bo2bo2bobo16bo12bo15bo2b3o2b3o2bo$97bo14bo14bo3bo4bo3bo$130b3o2b3o$126bob2o8b2obo$126bo3bo6bo3bo!
An awesome gun firing cool spaceships:
Yes,but my programme still can't successfully read it.Sokwe wrote: Is initrows.txt in the same folder as zfind.exe?
An awesome gun firing cool spaceships:
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Wait, shouldn't lookup tables be for ntzfind only? Use normal zfind for life! I suggest not doing width 10, do something like 6 or 7.GUYTU6J wrote:But my search comes to an end with "Building lookup tables" everytime... EDIT:That only happens when the width is 10 Here's my first 5 partials
Code: Select all
x = 154, y = 45, rule = B3/S23 16$18b2o3b2o3b2o14bo3b2o3bo16bo3b2o3bo20bo3b2o3bo19bo3b2o3bo$18b3obo2b ob3o13bobobo2bobobo14bobobo2bobobo18bobobo2bobobo18bo3b2o3bo$17b2o10b 2o12bobobo2bobobo14bobobo2bobobo18bobobo2bobobo18bo3b2o3bo$20bob4obo 15b2o3b2o3b2o15bo3b2o3bo20bo3b2o3bo$16b2o2bob4obo2b2o11b2o8b2o73bo4b2o 4bo$19b2o6b2o13bobo8bobo15bo6bo48b2obo2b2o2bob2o$19b4o2b4o14b4o4b4o14b 2obo4bob2o18b4o4b4o19bo6bo$20b2o4b2o16bo2bo2bo2bo14bo3bo4bo3bo16bo2b3o 2b3o2bo14bo3bo6bo3bo$48b2o19bo2bo4bo2bo45bo2b2ob4ob2o2bo$18bo3bo2bo3bo 11b3ob8ob3o11bob2o6b2obo16b3o3b2o3b3o17bo2b4o2bo$17b2obobo2bobob2o11b 2obobo2bobob2o15b2o4b2o20bo2b6o2bo17b2o8b2o$16b2o3bo4bo3b2o10b3ob6ob3o 45bob4obo22bo4bo$16b3ob2o4b2ob3o10b2o10b2o12b2o3bo2bo3b2o18bo8bo19b2ob 4ob2o$20bo2b2o2bo40b2o2bo4bo2b2o17bo10bo18b3ob2ob3o$17b2o3bo2bo3b2o11b obob6obobo14bobo4bobo22b2o2b2o25b2o$42b2o10b2o12b2o2b2o2b2o2b2o17b2ob 2o2b2ob2o16bob3o4b3obo$68bobo2bo2bo2bobo16bo12bo15bo2b3o2b3o2bo$97bo 14bo14bo3bo4bo3bo$130b3o2b3o$126bob2o8b2obo$126bo3bo6bo3bo!
I'm using zfind 2.0.Which is the newest version of "normal zfind"?Saka wrote: Wait, shouldn't lookup tables be for ntzfind only? Use normal zfind for life! I suggest not doing width 10, do something like 6 or 7.
An awesome gun firing cool spaceships:
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Reduce width then. 6 or 7GUYTU6J wrote:I'm using zfind 2.0.Which is the newest version of "normal zfind"?Saka wrote: Wait, shouldn't lookup tables be for ntzfind only? Use normal zfind for life! I suggest not doing width 10, do something like 6 or 7. You mean search on width 6 or 7?But how about the version of programme?Saka wrote: Reduce width then. 6 or 7
An awesome gun firing cool spaceships:
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Yes. I don't know what you mean by version of the program. Just search on lower widths. For an example, try:GUYTU6J wrote:You mean search on width 6 or 7?But how about the version of programme?Saka wrote: Reduce width then. 6 or 7
Code: Select all
./zfind p10 k1 w6 l500
Those are partials, even though they look like sparks, they are still partials. Ignore.GUYTU6J wrote:And what are these? Sorry I don't know the difference between ntzfind and "normal zfind"Saka wrote: Yes. I don't know what you mean by version of the program. Just search on lower widths. For an example, try:It should find the copperhead.
Code: Select all
./zfind p10 k1 w6 l500
And this example can't work because of the lack of symmetry type.
Those short partials are annoying
An awesome gun firing cool spaceships:
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Oh GodGUYTU6J wrote: Sorry I don't know the difference between ntzfind and "normal zfind"
ntzfind is for non-totalistic rules like salad or justfriends
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Sorry, wrong.GUYTU6J wrote: And this example can't work because of the lack of symmetry type.
Code: Select all
./zfind p10 k1 w6 v l500
The rest
Code: Select all
x = 131, y = 35, rule = B3/S237b2o20b2o21b2o21b2o20b2o23b2o$5bo4bo16bo4bo17bo4bo17bo4bo16bo4bo20b4o$4bo6bo15bo4bo17bo4bo17bo4bo16bo4bo19b6o$3bo8bo14bo4bo17bo4bo17bo4bo16bo4bo18b2o4b2o$3bo3b2o3bo14bo4bo17bo4bo17bo4bo16bo4bo18b2o4b2o$6bo2bo15bo2bo2bo2bo13bo2bo2bo2bo13bo2bo2bo2bo12bo2bo2bo2bo$5b2o2b2o14b3ob2ob3o13b3ob2ob3o13b3ob2ob3o12b3ob2ob3o16b3o2b3o$6bo2bo15bo8bo13bo8bo13bo8bo12bo8bo15b4o2b4o$4b3o2b3o109bo2bo$4b2ob2ob2o14b2o4b2o15b2o4b2o14bobo4bobo13b2o4b2o14b2o10b2o$6b4o15bo8bo13bo8bo13b10o12b3o4b3o12bo4bo4bo4bo$4bo6bo79b2o2bo4bo2b2o11bo12bo$4bo6bo14bob4obo14bo8bo15bo4bo11bo4bo4bo4bo10bo12bo$2b2obo4bob2o13bob2obo14b2obo4bob2o36bo4bo17b3o4b3o$2b2o8b2o33b2o3b2o3b2o16b2o19b4o20bo4bo$4bo6bo13b3o4b3o12b12o10bo12bo9bobo6bobo16b6o$2b3obo2bob3o11bo8bo16b4o13bob2o2bo2bo2b2obo9bo8bo16bo6bo$2bo3b4o3bo10bo2bo4bo2bo33b3ob2o2b2ob3o12bo4bo18bo6bo$2bo2bo4bo2bo9bobob2o2b2obobo13bo4bo16bo6bo15bob2obo17bobo4bobo$2b2ob2o2b2ob2o8bo3bo6bo3bo33b2obo2bob2o13bo6bo17b8o$2bobobo2bobobo8bo4bo4bo4bo11bobo2bobo14bo2bo2bo2bo11bo2bo4bo2bo15b2o4b2o$3b2obo2bob2o13b2o4b2o14b3o4b3o34bobo6bobo$3b2obo2bob2o11bo10bo12bo8bo13b2o6b2o12bo8bo16b2o4b2o$bo2bobo2bobo2bo8b3o8b3o12bo6bo13bob2ob2ob2obo10bo10bo14bob6obo$b2obo6bob2o8b2o10b2o11b2obo2bob2o12bo2b2o2b2o2bo8bo2bo8bo2bo16b2o$b2o10b2o7b2o12b2o9bo2bo4bo2bo11b2obo4bob2o12b2o4b2o15bobo2b2o2bobo$2obo8bob2o6b3ob8ob3o8b5o4b5o11bobo4bobo12b10o14b2obo4bob2o$26b8o12bob2o6b2obo13b2o2b2o13b3o6b3o11bo3bo2b2o2bo3bo$23b2o3b4o3b2o9bo2b2ob2ob2o2bo9bo3bo4bo3bo7b2ob2o2b2o2b2ob2o16b2o$24bo10bo9b2obobo4bobob2o9bo2bo4bo2bo33bo2bo8bo2bo$48bo8bo12bob2o4b2obo8b2o2bo6bo2b2o9b3o4b2o4b3o$47bo2b2o2b2o2bo14b2o2b2o16bo4bo17b4o2b4o$45b2o2b8o2b2o8b2ob2o4b2ob2o7b3ob3o2b3ob3o9b2o2bobo2bobo2b2o$96b4o15b2o2b2o4b2o2b2o$94bo6bo17bobo2bobo!
An awesome gun firing cool spaceships:
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X Post results somewhere elseGUYTU6J wrote:Oh I lost some partials before "CPU time: 2006.234375 seconds" because my Command Prompt didn't show it The rest
Code: Select all
Partials
The last question:if I want to make this as the head of a c/8 spaceship
Code: Select all
x = 12, y = 4, rule = B3/S232bo6bo$bobo4bobo$bobo4bobo$2ob2o2b2ob2o!
Code: Select all
..o.....o.....o.....o.....o.....o.....o.....o....o.o...o.o...o.o...o.o...o.o...o.o...o.o...o.o..
Finally it fails.Why?
An awesome gun firing cool spaceships:
There is absolutely no way that partial could move at c/8. To verify this try putting the first two rows into JLS phase by phase until you get an error, then you will see why it fails. If you want to search for a frontend like this I suggest you use WLS or JLS to generate a partial with this SL as one phase and then once you have a reasonable partial you can try the same process again. Note that you need a certain number of rows in the partial when using the initrow generating script such that the two rows generated are part of a valid partial. For this case you need at least 8 rows if starting with the front row.GUYTU6J wrote:Oh sorry. The last question:if I want to make this as the head of a c/8 spaceshipI select the left half of the first row and gengerate initrows,then I got this
Code: Select all
x = 12, y = 4, rule = B3/S23 2bo6bo$bobo4bobo$bobo4bobo$2ob2o2b2ob2o!I open my zfind and enter"b3/s23 p8 k1 w6 v 1500 e initrows.txt"
Code: Select all
..o... ..o... ..o... ..o... ..o... ..o... ..o... ..o... .o.o.. .o.o.. .o.o.. .o.o.. .o.o.. .o.o.. .o.o.. .o.o..
Finally it fails.Why?
Code: Select all
x = 16, y = 35, rule = B3/S237b2o$6b4o$5b6o$4b2o4b2o$4b2o4b2o2$4b3o2b3o$3b4o2b4o$6bo2bo$b2o10b2o$o4bo4bo4bo$bo12bo$bo12bo$3b3o4b3o$5bo4bo$5b6o$4bo6bo$4bo6bo$3bobo4bobo$4b8o$4b2o4b2o2$4b2o4b2o$3bob6obo$7b2o$2bobo2b2o2bobo$2b2obo4bob2o$o3bo2b2o2bo3bo$7b2o$o2bo8bo2bo$3o4b2o4b3o$3b4o2b4o$2o2bobo2bobo2b2o$2o2b2o4b2o2b2o$4bobo2bobo!
Code: Select all
.......o......o...............................................oo......oo.....o.......o.......................o.......ooo.....o..
Code: Select all
zfind.exe b3/s23 p8 k1 w8 v 1500 e initrows.txt
An awesome gun firing cool spaceships:
This works as expected for me with zfind 2.0 - the longest partial found is the original partial. In what way does your search fail?GUYTU6J wrote:I tried again today. My original partial:My initial rows:
Code: Select all
<snip> rleMy input:
Code: Select all
.......o ......o. ........ ........ ........ ........ ........ ......oo ......oo .....o.. .....o.. ........ ........ .....o.. .....ooo .....o..It still failed...Does zfind 2.0 support initrows?
Code: Select all
zfind.exe b3/s23 p8 k1 w8 v 1500 e initrows.txt
Note that the 1500 is ignored in your command because of a missing 'l' or 'm'.
It only shows"Load from file initrows.txt failed"wildmyron wrote: This works as expected for me with zfind 2.0 - the longest partial found is the original partial. In what way does your search fail? Note that the 1500 is ignored in your command because of a missing 'l' or 'm'.
I'm sure my zfind.exe is put into the folder where the initrows.txt is generated.
An awesome gun firing cool spaceships:
This is not strictly required. What is important is that the initrows.txt file is in the current working directory of your shell. (I guess you are using Microsoft cmd console?)GUYTU6J wrote:It only shows"Load from file initrows.txt failed"wildmyron wrote: This works as expected for me with zfind 2.0 - the longest partial found is the original partial. In what way does your search fail? Note that the 1500 is ignored in your command because of a missing 'l' or 'm'. I'm sure my zfind.exe is put into the folder where the initrows.txt is generated.
Alternatively you could try specifying the full path to the file, like so if in cmd prompt:
Code: Select all
zfind b3s23 p8 k1 w8 v e "C:\Program Files\Golly dir\My Scripts\initrows.txt"
Wow,that works.Thanks a lot!wildmyron wrote: This is not strictly required. What is important is that the initrows.txt file is in the current working directory of your shell. (I guess you are using Microsoft cmd console?) Alternatively you could try specifying the full path to the file, like so if in cmd prompt:
Code: Select all
zfind b3s23 p8 k1 w8 v e "C:\Program Files\Golly dir\My Scripts\initrows.txt"
An awesome gun firing cool spaceships:
|
The answer is in the negative.
Let $f$ and $g$ be two upper densities (in the sense of the OP), and let $\alpha \in [0,1]$ and $q \in [1,\infty[$. Then the function $$h := (\alpha f^q + (1-\alpha) g^q)^{\frac{1}{q}}$$is an upper density too (in particular, condition (F3) follows from Minkowski's inequality, which is why we need $q \ge 1$).
Next, fix a set $X \subseteq 2\cdot\mathbf N^+$, let $x := 2f(X)$, $y := 2g(X)$ and $Y := 2 \cdot \mathbf N^+ + 1$, and suppose to a contradiction that $h$ is ``weakly additive'' (that is, $h(A \cup B) = h(A) + h(B)$ for all disjoint $A, B \subseteq \mathbf N^+$ such that $B$ is an (infinite) arithmetic progression), regardless of the actual values of the parameters $\alpha$ and $q$. Then, also $f$ and $g$ are weakly additive, and using that $f(Y) = g(Y)=\frac{1}{2}$, we obtain $$\begin{split}2h(X \cup Y) & = 2(\alpha (f(X \cup Y))^q + (1-\alpha) (g(X \cup Y))^q)^{\frac{1}{q}} \\ & = 2(\alpha (f(X) + f(Y))^q + (1-\alpha) (g(X) + g(Y))^q)^{\frac{1}{q}} \\& = (\alpha (x+1)^q + (1-\alpha)(y+1)^q)^{\frac{1}{q}}\end{split}$$and$$\begin{split}h(X) + h(Y) & = (\alpha (f(X))^q + (1-\alpha) (g(X))^q)^{\frac{1}{q}} + \frac{1}{2} \\ & = (\alpha x^q + (1-\alpha)y^q)^{\frac{1}{q}}+ \frac{1}{2}\end{split}$$which, together with $h(X \cup Y) = h(X) + h(Y)$, yields$$ (\alpha (x+1)^q + (1-\alpha)(y+1)^q)^{\frac{1}{q}} = (\alpha x^q + (1-\alpha)y^q)^{\frac{1}{q}} + 1.$$ On the other hand, an appropriate choice of $f$, $g$ and $X$ makes it possible to have $x$ equal to zero while $y$ takes any prescribed value in the interval $[0,1]$: This can be achieved, for instance, by letting $f$ be the upper asymptotic density (on $\mathbf N^+$), $g$ the upper Banach density, and $X$ a suitable subset of the intersection, $S$, of $\bigcup_{n \ge 1} [\![2^n, 2^n + n]\!]$ and $2 \cdot\mathbf N^+$, and by considering that (i) the upper asymptotic density of $S$ is $0$, (ii) the upper Banach density of $S$ is $\frac{1}{2}$, (iii) the upper asymptotic and upper Banach densities are upper densities, and (iv) upper densities have the strong, and hence the weak, Darboux property (by the main theorem here).
Accordingly, we should have$$(\alpha + (1-\alpha)(y+1)^q)^{\frac{1}{q}} = (1-\alpha)^{\frac{1}{q}}y + 1$$for all $\alpha, y \in [0,1]$ and $q \in [1,\infty[$, which, however, is blatantly false. []
Added later. If you assume $\alpha = \frac{1}{2}$ and $q = 2$ in the last displayed equation, you don't even need to know that the upper Banach density has the weak Darboux property, since then you end up with the equation $$\sqrt{1 + (y+1)^2} = y + \sqrt{2},$$which has a unique solution for $y \in \bf R$ (namely, $y = 0$).
|
Any Hamiltonian can be written in the form you give$$H=\sum_i\varepsilon_i|i\rangle\langle i|$$as long as the eigenstates form a basis. This is still true for a many-body system. So for a single qubit you'd have$$H_1=\varepsilon_0|0\rangle\langle 0|+\varepsilon_1|1\rangle\langle 1|$$And for two it'd look like$$H_2=\varepsilon_{00}|00\rangle\langle 00|+\varepsilon_{01}|01\rangle\langle 01|+\varepsilon_{10}|10\rangle\langle 10|+\varepsilon_{11}|11\rangle\langle 11|$$This representation is completely general and so offers no real insight into anything. If you assume all of your qubits are governed by the same Hamiltonian and independent of the others (noninteracting) then you can write the joint Hamiltonian as a sum over each individual one like you have above. This is generally the situation of interest.$$H=^{(*)}\sum_j H_j=\sum_j\sum_i\varepsilon_i|i\rangle_j\langle i|_j$$
*Here $H_j$ means $H_j\otimes_{k\neq j} \mathbb{I}_k$ and $|i\rangle_j\langle i|_j=|i\rangle_j\langle i|_j\otimes_{k\neq j} \mathbb{I}_k$ i.e. they just act on qubit $j$
So for a two qubit system you'd have, fully written out:$$H_2=\varepsilon_0|0\rangle\langle 0|\otimes\mathbb{I}+\varepsilon_1|1\rangle\langle 1|\otimes\mathbb{I}+\varepsilon_0\mathbb{I}\otimes|0\rangle\langle 0|+\varepsilon_1\mathbb{I}\otimes|1\rangle\langle 1|$$
Or to more closely match your notation (which is not useful when written out in full, I'd only use it to compactify expressions)$$H_2=\varepsilon_0|0\rangle_1\langle 0|_1\otimes_2\mathbb{I}_2+\varepsilon_1|1\rangle_1\langle 1|_1\otimes_2\mathbb{I}_2+\varepsilon_0|0\rangle_2\langle 0|_2\otimes_1\mathbb{I}_1+\varepsilon_1|1\rangle_2\langle 1|_2\otimes_1\mathbb{I}_1$$
|
Ex.10.2 Q3 Circles Solution - NCERT Maths Class 10 Question
If tangents \(PA\) and \(PB\) from a point \(P\) to a circle with center \(O\) are inclined to each other at angle of \(80^\circ,\) then \(\angle {POA}\) is equal to
(A) \(50^\circ\)
(B) \(60^\circ\)
(C) \(70^\circ\)
(D) \(80^\circ\)
Text Solution What is Known?
\(PA\) and \(PB\) are the tangents from \(P\) to a circle with center \( O.\) Tangents are inclined to each other at angle of \(80^\circ\)
What is Unknown?
\(\angle {POA}\)
Reasoning: The lengths of tangents drawn from an external point to a circle are equal Tangent at any point of a circle is perpendicular to theradius through the point of contact. Steps:
In \(\triangle {OAP}\) and in \(\triangle {OBP}\)
\(OA = OB\) (radii of the circle are always equal)
\(AP = BP\) (length of the tangents)
\(OP = OP\) (common)
Therefore, by \(SSS\) congruency \(\Delta {OAP} \cong \Delta {O B P} \)
SSS congruence rule: If three sides of one triangle are equal to the three sides of another triangle, then the two triangles are congruent.
If two triangles are congruent then their corrresponding pars are equal.
Hence,
\(\angle POA = \angle POB \\\angle OPA = \angle OPB\)
Therefore, \(OP\) is the angle bisector of \(\angle {APB}\) and \(\angle {AOB}\)
\[\begin{align} \therefore \quad \angle {O P A}&= \angle O P B\\& = \frac { 1 } { 2 } ( \angle A P B ) \\ & = \frac { 1 } { 2 } ( 80 ) \\ & = 40 ^ { \circ } \end{align}\]
By angle sum property of a triangle, In \(\Delta {OAP}\)
\[\angle {A} + \angle {POA} + \angle {OPA} = 180 ^ { \circ }\]
From the figure,
\({OA} \, \bot \, {AP}\) (Theorem 10.1: The tangent at any point of a circle is perpendicular to the radius through the point of contact.)
\[\begin{align} \therefore \quad \angle {A} &= 90 ^ { \circ } \\ 90 ^ { \circ } + \angle { POA } + 40 ^ { \circ } &= 180 ^ { \circ } \\ 130 ^ { \circ } + \angle { POA }& = 180 ^ { \circ } \\ \angle { POA } & = 180 ^ { \circ } - 130 ^ { \circ } \\ \angle { POA } & = 50 ^ { \circ } \end{align}\]
The correct Option is
A
|
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing data point for calculating time correlation, you can run two exactly the simulation in parallel separated by the time lag dt. Then there is no need to store all snapshot and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
Because I don't know how the result is indexed in time
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
|
Preprints (rote Reihe) des Fachbereich Mathematik Refine Keywords Brownian motion (2) (remove)
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
296
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time intervalhas an average density of order three with respect to thegauge function t^2 log(1/t). This is a surprising resultas it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average densityof order two fails to exist and prove that the density distributions, or lacunarity distributions, of order threeof the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
|
Currently going through a video on Counting Minimum Cuts by Tim Roughgarden. $(A_{i},B_{i}) = \big((A_{1},B_{1}), ..., (A_{t},B_{t})\big) \forall i \in \Bbb{R}$ $P\big((A_{i},B_{i})\big) \geq \frac{1}{\begin{pmatrix} n \\ 2 \end{pmatrix}} = p$, which I interpret as the lower bound on the probability of having at least one minimal cut. In the problem set that follows two answers A and B are highlighted as being correct. I understand why A is correct; but am puzzled why B is also marked as correct. A: For every graph $G$ with $n$ nodes and every min cut $(A,B)$ (I am assuming same thing as $(A_{i},B_{i})$) $P\big((A,B)\big) \geq p$. B There exists a graph $G$ with $n$ nodes and a min cut $(A,B)$ (again assuming same thing as $(A_{i},B_{i})$) of $G$ such that $P\big((A,B)\big) \leq p$.
I don't understand what you mean by "$(A_i,B_i) = ((A_1,B_1),\ldots,(A_t,B_t))$", an obviously false statement. Perhaps you meant $(A_i,B_i) \in \{(A_1,B_1),\ldots,(A_t,B_t)\}$?
I don't quite understand your interpretation of the statement $P((A_i,B_i)) \geq p := 1/\binom{n}{2}$. Here is the correct interpretation:
For any minimum cut $C$, the probability that Karger's algorithm outputs $C$ is at least $p := 1/\binom{n}{2}$.
This is exactly what A states.
For B, you need to give an example of a graph which satisfies $P(C) \leq 1/\binom{n}{2}$ for all cuts $C$. One such example is a cycle, an example you were probably shown in class.
|
In the Schwarzschild spacetime with metric in standard Schwarzschild coordinates
$$ds^2=\rho(r)dt^2-\rho(r)^{-1}dr^2-r^2d\Omega^2,\quad \rho(r)=1-\dfrac{2GM}{r},$$
we have a coordinate singularity at $r = 2GM$. This is the event horizon, and is a surface of the spacetime manifold $M$, which, although here defined by coordinates, is certainly independent of any coordinate system.
Now in a more "conceptual language" if two observers start far away from the black hole, and one falls towards the central singularity, as soon as he passes $r = 2GM$, in other words, as soon as it crosses the horizon, the one that stayed will completely loose contact with him. A physicist I know used this to define a "causal horizon".
Now, this is an imprecise language. How can we characterize a causal horizon like this in general?
I believe the way would be on the definition of reference frames as timelike unit vector fields - namely, congruence of observers. In that case, we could say that $\mathscr{H}$ acts as a causal horizon for the reference frame $Z$ when:
The causal horizon $\mathscr{H}$ splits $M$ into two regions $M_{\text{in}}$ and $M_{\text{out}}$ with $\mathscr{H}$ acting as a boundary between both.
The reference frame $Z$ is supported in either one of the regions. In other words, its observers are located inside one of the regions.
This seems to be on the way, but I believe something more is required, related to the causality structure. Probably we should require $M_{\text{in}}$ and $M_{\text{out}}$ to be causally disconnected in the sense that for every $p\in M_{\text{in}}$ we have $J^{\pm}(p)\cap M_{\text{out}}=\emptyset$ and for every $p\in M_{\text{out}}$ we have $J^{\pm}(p)\cap M_{\text{in}}=\emptyset$.
Is that it? Is my approach correc to what should be a causal horizon in the sense of the "conceptual" example of Schwarzschild coordinates I described?
Is there a standard approach to this, different than this one?
|
When reading the prefaces of many books devoted to the theory of inequalities, I found one thing repeatedly stated: Inequalities are used in all branches of mathematics. But seriously, how important are they? Having finished a standard freshman course in calculus, I have hardly ever used even the most renowned inequalities like the Cauchy-Schwarz inequality. I know that this is due to the fact that I have not yet delved into the field of more advanced mathematics, so I would like to know just how important they are. While these inequalities are usually concerned with a finite set of numbers, I guess they must be generalised to fit into subjects like analysis. Can you provide some examples to illustrate how inequalities are used in more advanced mathematics?
I have a feeling that what you actually seek are examples of "famous" inequalities being put into good use and not just the notion of inequality as a general attribute.
For, if we stick to the notion of inequality in general, a prime example of why that is essential is in defining the real line.
Dedekind cuts that define the reals are partitions of the ordered field $\Bbb{Q}$. If you do not have an ordering (inequality relationships) amongst it's members, you cannot define $\Bbb{R}$ in this way.
For an example of an inequality as a "formula" , consider the $LM$-inequality, used in complex analysis, that gives an upper bound for a contour integral, thus having a variety of applications.
If f is a complex-valued, continuous function on the contour $\Gamma$, the arc length of $\Gamma$ is $l(\Gamma)=L$ and the absolute value of $f$, $|f(z)|$ is bounded by a constant $M$ $\forall$ $z$ on $\Gamma$, then it holds that $$\int_\Gamma|f(z)|dz\leq ML$$
Inequalities are extremely useful in mathematics, especially when we deal with quantities that we do not know exactly what they equate too. For example, let $p_n$ be the $n$-th prime number. We have no nice formula for $p_n$. However, we do know that $p_n \leq 2^n$. Often, one can solve a mathematical problem, by estimating an answer, rather than writing down exactly what it is. This is one way inequalities are very useful.
There are a lot of inequalities in mathematics that are more or less important, for a list you can see here.
It is not simple to establish a rank of importance for them, but I think that the most important is the triangle inequality. In the simplest form this inequality states that, for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This captures a fundamental character of the notion of distance that agree with our intuition in the euclidean space. But it can be generalized to more abstract spaces (as the spaces of functions in functional analysis) so that, also in these spaces we can define a notion of distance.
To prove the triangle inequality in these spaces we need some other inequalities and the more relevant are the Holder and Minkowski inequalities that are used to prove when in a vector space we can define a norm, and, from this norm, a distance.
I think they're most important because of limits. I'm sure you've done limits in your calculus class. Limits are extremely important in maths - they're not just used to define derivatives and integrals.
There's a whole branch of mathematics called analysis which deals with limits. Sometime in the 18th century, mathematicians tried to understand calculus more rigorously and they came up with a formal definition of a limit.
$\lim \limits_{n \rightarrow \infty} a_n = a \iff \forall \varepsilon > 0, \exists N \in \mathbb{N} \ s.t. |a_n - a| < \varepsilon $
(In simple words, if you have a sequence of numbers say $ \dfrac{1}{2}, \dfrac{3}{4}, \dfrac{7}{8}, \dfrac{15}{16}...$ which tend to a number (1 in the previous example), after a certain point ($N$) all points within a certain range ($\varepsilon$) of the limit.)
Because of this, analysis is all about inequalities - including the triangle inequality and the Cauchy-Schwartz. They're very useful.
There are other places such as in Computer Science, when you define the order of growth of an algorithm (Big-O notation), and Operations Research where you use them to put certain constraints on maximisation/minimisation problems, e.g. find the best portfolio of investments, given that you may at most invest $1000. (The last one requires a good knowledge of theoretical probability - which is basically analysis and inequalities.)
|
Today at 2 p.m., I will talk at Fudan Logic Seminar about natural reducts of set theory models.
Lecture: HGW2403, T 18:30-21 Section: HGW2403, R 18:30-20 Instruction for the final paper
You can choose one of the following topics.
An introduction to one specific forcing notion from this list (for Cohen forcing you can introduce an application not discussed in our course) and explain how it works. On one or more issues concerning the meta-theory of forcing. For example, why and how we can talk about an “outer model” or a “object” not in our universe, why we can and why we need to assume the existence of a transitive model, how to account forcing arguments as purely constructive methods, etc. Problem set 01 Let \(\pi\) be the canonical interpretation of PA into ZF. Can we prove “for each arithmetic formula \(\varphi\), if ZFC \(\vdash\pi(\varphi)\), then PA \(\vdash\varphi\)”? Prove it if we can, explain it if we cannot. Assume ZF \(\vdash\varphi^L\) for each formula \(\varphi\in\Sigma\), and \(\Sigma\vdash\psi\). Show that ZF \(\vdash\psi^L\). Why Con(ZF) does not imply there is a countable transitive model of ZF? Problem set 02
Kunen’s set theory (2013) Exercise I.16.6 – I.16.10, I.16.17.
Problem set 03
Kunen’s set theory (2013) Exercise II.4.6, 4.8.
Jech’s set theory (2002) Exercise 7.1, 7.3 – 7.5, 7.13, 7.16, 7.18 – 7.20, 7.22 – 7.33.
Problem set 04 Let \(M^\mathbb{B}\) be a Boolean valued model. Prove the following statements are valid in \(M^\mathbb{B}\). \(\forall y\big(\forall x\varphi(x)\rightarrow\varphi(y)\big)\). \(\forall x(\varphi\rightarrow\psi)\rightarrow\forall x\varphi\rightarrow\forall x\psi\). \(\alpha\rightarrow\forall x\alpha\), \(x\) does not occur in \(\alpha\) freely.
Jech’s set theory (2002) Exercise 14.12.
Problem set 05 Let \(\sigma\) be a \(\mathbb{B}\)-name. Show that \[ |\!|\exists x\in\sigma~\varphi(x)|\!| = \sum_{\xi\in\textrm{dom}\sigma}\sigma(\xi)\cdot|\!|\varphi(\xi)|\!|.\] For any partial order \(\mathbb{P}\), there is a separative partial order \(\mathbb{Q}\) and a surjection \(h:\mathbb{P}\to\mathbb{Q}\) such that \(x\leq y\) implies \(h(x)\leq h(y)\); \(x\) and \(y\) are compatible in \(\mathbb{P}\) if and only if \(h(x)\) and \(h(y)\) are compatible in \(\mathbb{Q}\).
Such \(\mathbb{Q}\) is unique up to isomorphism. We call it the
separative quotientof \(\mathbb{P}\).
Jech’s set theory (2002) Exercise 14.1, 14.9, 14.14, 14.16. Lemma 14.13.
Lecture: HGX205, M 18:30-21
Section: HGW2403, F 18:30-20 Exercise 01 Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What you have assumed? Define strategyand winning strategyfor modal evaluation games. Prove Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\). Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy.
And all exercises for Chapter 2 (see page 23,
open minds) Exercise 02 Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic. Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models. We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”. Prove that the relation links every world \(x\) in \(M\) to the equivalent class \([x]\) is a bisimulation between the original model and its bisimulation contraction.
And exercises for Chapter 3 (see page 35,
open minds): 1 (a) (b), 2. Exercise 03 Prove that modal formulas (under possible world semantics) have ‘Finite Depth Property’.
And exercises for Chapter 4 (see page 47,
open minds): 1 – 3. Exercise 04 Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\). Prove the following statements. “For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”. “For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”. Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of Henkin model.
And exercises for Chapter 5 (see page 60,
open minds): 1 – 5. Exercise 05
Exercises for Chapter 6 (see page 69,
open minds): 1 – 3. Exercise 06 Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas.
Exercises for Chapter 7 (see page 88,
open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In the clause (b) of exercise 4, “completeness” should be “correctness”. Exercise 07 Show that there are infinitely many non-equivalent modalities under T. Show that GL + Idis inconsistent and Unproves GL. Give a complete proof of the fact: In S5, Every formula is equivalent to one of modal depth \(\leq 1\).
Exercises for Chapter 8 (see page 99,
open minds): 1, 2, 4 – 6. Exercise 08 Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] hold for any valuation \(V\) and \(V’\). Define a \(p\)- morphismbetween \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuation \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w\vDash\Sigma.\]
Exercises for Chapter 9 (see page 99,
open minds). Exercise the last
Exercises for Chapter 10 and 11 (see page 117 and 125,
open minds).
Niconico and bilibili are the two leading streaming video sharing service provider who also feature overlaid comments called danmaku (弾幕).
Continue reading “The idea of a decentralized danmaku (弾幕) and subtitles service”
欢迎在评论区提交有关本书的勘误与修改意见。 Continue reading “《数理逻辑:证明及其限度》勘误”
欢迎在评论区提交有关本书的勘误与修改意见。
1620年11月11日,载着英格兰分离教派清教徒的五月花号终于决定停靠在现马赛诸塞州的鳕鱼角。这不是这片新大陆第一批英语殖民者,当然也不是最后一批。这些殖民者或因为坚持独特的宗教理想或为逃避政治迫害或存粹为了投机而选择离开了欧洲的成熟社会,以寻求各自心中的理想之国。得益于大西洋的阻隔,这群人可以摆脱部分历史束缚,相对自由地开始新的社会实验。十八世纪以来发源于欧洲的启蒙思想和理论在这里找到了理想的实验环境。 Continue reading “加密数字货币、信任与虚拟身份”
|
The seminal paper referred to is "Syntactic Analysis and Operator Precedence" (1963), which describes the operator precedence algorithm still used by many simple expression parsers today.
The basic approach described by Floyd was not exactly new. It was described by Edsger Dijkstra in 1961; Dijsktra's procedure was a pragmatic, special-purpose algorithm used to parse Algol-60, commonly referred to as the "Shunting Yard Algorithm". Floyd's contribution was to formalise and extend the idea, making it into a practical general-purpose parsing technique.
Floyd's approach to parsing was highly influential on another young parsing investigator of the time, Donald Knuth, whose 1965 paper "On the Translation of Languages from Left to Right" introduced the LR(k) parsing algorithm, capable of parsing a much larger set of languages than operator precedence parsing. However, it was not until 1969 when Frank deRemer invented the computationally manageable LALR(1) algorithm described in "Practical Translators for LR(k) languages" that Knuth's discovery moved out of the theoretical realm and into practical compiler construction.
Floyd's 1961 paper "An algorithm for coding efficient arithmetic operations", cited in the OP, was not about parsing ("inserting brackets properly") but rather about efficiently ordering the computation of the parsed expression so as to maximize the use of scarce CPU resources, based on the very limited CPU architectures common at the time. Modern CPU architectures demand different algorithms, but parts of Floyd's 1961 algorithm can be found in simple optimizations for minimising the stack size in stack-based virtual machines (or, equivalently, the number of temporaries required in three-address-code).
For reference, a quick summary of Floyd's operator precedence algorithm. The algorithm takes as input a
precedence grammar $G$, which is a context-free grammar with the property that no right-hand side contains two consecutive non-terminals. (This definition allows unit productions and null productions; in practice, these would usually be removed before proceeding since they do not participate in the parsing algorithm.)
As usual, we will write $G = \langle N, T, P, S\rangle$, where $N$ is the set of
nonterminal symbols, $T$ the set of terminal symbols, $P \subset N \times (N \cup T)^*$ the set of productions and $S \in N$ the target (start) symbol. We write $\alpha \Rightarrow \beta$ if $\alpha$ derives $\beta$ (that is, $\alpha = \alpha_1 A \alpha_3$ and $\beta = \alpha_1\alpha_2\alpha_3$, where $A\rightarrow\alpha_2 \in P$). Also, we write $\Rightarrow^+$ and $\Rightarrow^*$ for the transitive closure and reflexive transitive closure of $\Rightarrow$.
Then we define three
precedence relations over $T$: $\lessdot$, $\doteq$ and $\gtrdot$ , defined as follows:
$a \lessdot b \text{ if } A\leftarrow\alpha a B \beta \in P \text{ and either } B\Rightarrow^+b\gamma \text{ or } B\Rightarrow^+Cb\gamma$
$a \gtrdot b \text{ if } A\leftarrow\alpha B b \beta \in P \text{ and either } B\Rightarrow^+\gamma a \text{ or } B\Rightarrow^+\gamma aC$
$a \doteq b \text{ if either } A\leftarrow\alpha a b \beta \in P \text{ or } A\leftarrow\alpha a C b \beta \in P $
For an intuition about these symbols, think of a derivation being written with $\langle$ and $\rangle$ surrounding the strings replacing each nonterminal. Then $a\lessdot b$ holds if there is such a derivation where $b$ follows $a$ after a sequence of $\langle$s possibly followed by a single nonterminal. (Note that the
operator property guarantees that there could not be two consecutive nonterminals.) In other words, ignoring nonterminals, $b$ is the first terminal in a reduction following $a$. $\gtrdot$ is analogous: $b$ is the first terminal following a reduction whose last terminal was $a$. (Again, the operator property guarantees that $b$ is not preceded by a nonterminal, since a nonterminal cannot follow a reduction.) $a\doteq b$ holds if $a$ and $b$ are consecutive terminals in a production, possibly separated by a (single) nonterminal.
Now, if the three precedence relations $\lessdot$, $\gtrdot$ and $\doteq$ are disjoint, we call $G$ an
operator precedence grammar.
For simplicity, we will parse the augmented grammar
$$G' = \langle N\cup\{S'\}, T\cup\{\#\},P\cup S'\rightarrow S\#, S'\rangle$$
where $\#$ is some symbol not in $T$. It's easy to verify that $a\lessdot \#$ for every $a \in T$ and that there is no other precedence relation involving $\#$. We also add a $\#$ to the end of the input. Then we can parse the input as follows:
Create a parser stack $S$ consisting only of the symbol $\#$.
For each input symbol $b$ in turn, from left to right:
While $TOP(S) \gtrdot b: \;POP(S)$ If $b = \#$ and $top(S) = \#$, $ACCEPT$ the input If $TOP(S) \lessdot b \text{ or } TOP(S) \doteq b: \;PUSH(S, b)$. Otherwise, $REJECT$ the input.
In practice, we'll actually want to record the derivations discovered by this algorithm. If we $POP$ the stack in step 2.1, the popped symbols (both terminals and nonterminals) are accumulated into $X$, and just before we continue with step 2.2, we find the production in $P$ whose right-hand side is $X$, and output that production as a derivation step. If there is more than one such production, then we need some auxiliary algorithm to decide which one to use; usually, we sidestep the problem by requiring that the right-hand sides be unique. (This is also why it's common to eliminate unit and null productions, because the Floyd algorithm provides no clue whatsoever about when to insert their derivation steps.)
Like many others, I think Floyd's contribution to parsing theory was fundamental. But it is important not to be seduced by the beautiful simplicity of his algorithm. As Floyd himself noted, it suffers from two important issues:
It is not sufficient to parse many programming languages, whose grammars cannot conveniently be written as operator grammars with unique right-hand sides or do not yield disjoint precedence relations.
The fact that a string is accepted by the algorithm does not guarantee that the string is accepted by the grammar. Many syntactically invalid strings are accepted and must be filtered out using other algorithms.
Floyd's later work, including his investigation into bounded-context grammars, sought to overcome these shortcomings.
|
Preprints (rote Reihe) des Fachbereich Mathematik Refine
320
Power-ordered sets are not always lattices. In the case of distributive lattices we give a description by disjoint of chains. Finite power-ordered sets have a polarity. We introduct the leveled lattices and show examples with trivial tolerance. Finally we give a list of Hasse diagrams of power-ordered sets.
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L}\) = \((L; \land, \lor)\) is generated by the identity function id \(id(x)=x\) and the constant functions \(c_a (x) = a\) (for every \(x \in L\)), \(a \in L\) by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\)is a polynomial function then \(\mathcal{L}\) is called orderpolynomially complete. In this paper we give a new characterization of finite order-polynomially lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is finite projective geometry. If \(\mathcal{L}\) is simple modular lattice of infinite length then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
220
Hyperidentities (1992)
The concept of a free algebra plays an essential role in universal algebra and in computer science. Manipulation of terms, calculations and the derivation of identities are performed in free algebras. Word problems, normal forms, system of reductions, unification and finite bases of identities are topics in algebra and logic as well as in computer science. A very fruitful point of view is to consider structural properties of free algebras. A.I. Malcev initiated a thorough research of the congruences of free algebras. Henceforth congruence permutable, congruence distributive and congruence modular varieties are intensively studied. A lot of Malcev type theorems are connected to the congruence lattice of free algebras. Here we consider free algebras as semigroups of compositions of terms and more specific as clones of terms. The properties of these semigroups and clones are adequately described by hyperidentities. Naturally a lot of theorems of "semigroup" or "clone" type can be derived. This topic of research is still in its beginning and therefore a lot öf concepts and results cannot be presented in a final and polished form. Furthermore a lot of problems and questions are open which are of importance for the further development of the theory of hyperidentities.
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operator \(S\) of subalgebras, \(H\) of homomorphic irnages and \(P\) of direct products are studied. Derived groupoids from the groupoid \(N or (x,y)\) = \(x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart for solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
253-2
Order-semi-primal lattices (1994)
310
Locally Maximal Clones II (1999)
211-2
304
It is proved that if a finite non-trivial quasi-order is nota linear order then there exist continuum many clones, whichconsist of functions preserving the quasi-order and containall unary functions with this property. It is shown that, fora linear order on a three-element set, there are only 7 suchclones
336
Hyperquasivarieties (2003)
211-1
|
I believe you're not going to find exactly what you want in Lie, because he never formalized flows (or finite transformations) and their commutation as you do. Maybe the closest would be this, from
Über Differentialinvarianten, Math. Ann. 24 (1884) 537-578:
... erhalten wir folgenden Fundamentalsatz, den ich 1872 entdeckt habe:
Satz 3. Enthält eine kontinuierliche Gruppe die beiden infinitesimalen Transformationen:
$$
Bf=\sum\xi_\varkappa\frac{\partial f}{\partial x_\varkappa}
\quad\textit{und:}\quad
Cf=\sum\eta_\varkappa\frac{\partial f}{\partial x_\varkappa},
$$
so enthält sie ebenfalls die infinitesimale Transformation:
$$
\sum_i(B\eta_i-C\xi_i)\frac{\partial f}{\partial x_i},
$$
deren Symbol bekanntlich auf die beiden äquivalenten Formen:
$$
B(C(f)) - C(B(f)) = (B, C)
$$
gebracht werden kann.
As you can see, his definition of the bracket of vector fields is always as the commutator of the derivations they define on functions (something that goes back to Jacobi). What this
Satz states, then, is that the finite transformations (or flow) generated by the infinitesimal commutator $(B,C)$ belong to the group generated by (the flows of) $B$ and $C$. Not surprisingly, Lie's proof is by expanding the flows to second order.
Lie may or may not have stated this
Satz elsewhere before 1884, but I doubt he ever wrote a formula for, much less definition of, the bracket as limit of commutators of finite transformations.
Correction Robert Bryant has now found an 1891 reference where Lie (or at least Engel) indeed commutes finite transformations. See his reply and the comments there.
Update As to your question of who (esp. first) expressed the bracket as a derivative of commutators of flows: I don't know (my impression is that these things developed slowly in a sort of consensus). As a data point though, one might argue that the formula$$[V,T]=\frac{d}{ds}\frac{d}{dt}e^{-sV}e^{tT}e^{sV}\Bigr|_{s=t=0}$$ is on p. 240 of Poincaré, Sur les groupes continus, Trans. Cambridge Philos. Soc. 18 (1900) 220-255.
Further update Trotter's formula that you also mention now is indeed called "Lie-Trotter" by e.g. Chernoff [1968,1974] or Chorin et al. [1978]. The latter write (sic):
... the equation $dx/dt=Ax+Bx$ leads to the 1875 formula of S. Lie [38]:
$$
\exp\{A+B\} = \lim_{n\to\infty}(\exp\{A/n\}\exp\{B/n\})^n.\tag{$*$}
$$
This and the related formula
$$
\exp\{[A,B]\} = \lim_{n\to\infty}(
\exp\bigl\{\frac{-B}{\sqrt n}\bigr\}
\exp\bigl\{\frac{-A}{\sqrt n}\bigr\}
\exp\bigl\{\frac{B}{\sqrt n}\bigr\}
\exp\bigl\{\frac{A}{\sqrt n}\bigr\})\tag{$**$}
$$
occur in the theory of Lie groups.
...
[38] Lie, S., and Engel, F., Theorie der Transformationsgruppen, 3 Vols., Teubner, Leipzig, 1888.
The problem is that [38] is not from 1875, nor does it contain anything remotely like formula ($*$) (I am ready to bet a lot of money). I may be wrong but until someone finds that elusive 1875 paper, I would tend to date ($*$) and ($**$) from around von Neumann [1929, p. 19].
|
Ex.6.5 Q2 Triangles Solution - NCERT Maths Class 10 Question
\({PQR}\) is a triangle right angled at \(P\) and \(M\) is a point on \(QR\) such that \(PM \bot QR\). Show that \({{(\text{PM})}^{2}}\) = \({QM. MR}\)
Diagram
Text Solution Reasoning:
As we know if a perpendicular is drawn from the vertex of the right angle of a right triangle to the hypotenuse then triangles on both sides of the perpendicular are similar to the whole triangle and to each other.
Steps:
In \(\,\Delta PQR\,;\,\angle QPR={{90}^{0}}\) and \(PM\bot QR\)
In \(\,\Delta PQR\,\) and \(\,\Delta MQP\,\)
\(\angle Q P R=\angle Q M P=90^{\circ}\)
\(\angle P Q R=\angle M Q P \) (Common Angles)
\(\begin{align} {\Rightarrow} \quad \Delta P Q R \sim \Delta M Q P\end{align}\) (AA Criterion)......-(1)
In \(\Delta PQR\) and \(\Delta MPR\)
\(\angle QMP=\angle PMR={{90}^{0}}\)
\(PRQ= RPM\) (Common Angle)
\(\begin{align}\Rightarrow\quad \,\Delta \,PQR&\sim{\ }\Delta \,MPR\end{align}\) (AA Criterion).........(2)
From (1) and (2)
\(\Delta MQP\sim{\ }\Delta MPR\)
\(\begin{align}\frac{P M}{M R}&=\frac{Q M}{P M} \text{[Comparing sides opposite to equal angles]}\\&{\Rightarrow\quad P M^{2}=Q M \cdot M R}\end{align}\)
|
Holt-Winters Filtering
Computes Holt-Winters Filtering of a given time series. Unknown parameters are determined by minimizing the squared prediction error.
Keywords ts Usage
HoltWinters(x, alpha = NULL, beta = NULL, gamma = NULL, seasonal = c("additive", "multiplicative"), start.periods = 2, l.start = NULL, b.start = NULL, s.start = NULL, optim.start = c(alpha = 0.3, beta = 0.1, gamma = 0.1), optim.control = list())
Arguments x An object of class
ts
alpha $alpha$ parameter of Holt-Winters Filter. beta $beta$ parameter of Holt-Winters Filter. If set to
FALSE, the function will do exponential smoothing.
gamma $gamma$ parameter used for the seasonal component. If set to
FALSE, an non-seasonal model is fitted.
seasonal Character string to select an
"additive"(the default) or
"multiplicative"seasonal model. The first few characters are sufficient. (Only takes effect if
gammais non-zero).
start.periods Start periods used in the autodetection of start values. Must be at least 2. l.start Start value for level (a[0]). b.start Start value for trend (b[0]). s.start Vector of start values for the seasonal component ($s_1[0] \ldots s_p[0]$) optim.start Vector with named components
alpha,
beta, and
gammacontaining the starting values for the optimizer. Only the values needed must be specified. Ignored in the one-parameter case.
optim.control Optional list with additional control parameters passed to
optimif this is used. Ignored in the one-parameter case.
Details
The additive Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = a[t] + h b[t] + s[t - p + 1 + (h - 1) \bmod p],$$ where $a[t]$, $b[t]$ and $s[t]$ are given by $$a[t] = \alpha (Y[t] - s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] -a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] - a[t]) + (1-\gamma) s[t-p]$$
The multiplicative Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = (a[t] + h b[t]) \times s[t - p + 1 + (h - 1) \bmod p].$$ where $a[t]$, $b[t]$ and $s[t]$ are given by $$a[t] = \alpha (Y[t] / s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] - a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] / a[t]) + (1-\gamma) s[t-p]$$ The data in
x are required to be non-zero for a multiplicative model, but it makes most sense if they are all positive.
The function tries to find the optimal values of $\alpha$ and/or $\beta$ and/or $\gamma$ by minimizing the squared one-step prediction error if they are
NULL (the default).
optimize will be used for the single-parameter case, and
optim otherwise.
For seasonal models, start values for
a,
b and
s are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function
decompose) on the
start.periods first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend-models (no seasonal component), start values for
a and
b are
x[2] and
x[2] - x[1], respectively. For level-only models (ordinary exponential smoothing), the start value for
a is
x[1].
Value An object of class
"HoltWinters", a list with components:
fitted A multiple time series with one column for the filtered series as well as for the level, trend and seasonal components, estimated contemporaneously (that is at time t and not at the end of the series). x The original series alpha alpha used for filtering beta beta used for filtering gamma gamma used for filtering coefficients A vector with named components
a, b, s1, ..., spcontaining the estimated values for the level, trend and seasonal components
seasonal The specified
seasonalparameter
SSE The final sum of squared errors achieved in optimizing call The call used References
C. C. Holt (1957) Forecasting trends and seasonals by exponentially weighted moving averages,
ONR Research Memorandum, Carnegie Institute of Technology 52.
P. R. Winters (1960) Forecasting sales by exponentially weighted moving averages,
Management Science 6, 324--342. See Also Aliases HoltWinters print.HoltWinters residuals.HoltWinters Examples
library(stats)
<testonly>od <- options(digits = 5)</testonly>require(graphics)## Seasonal Holt-Winters(m <- HoltWinters(co2))plot(m)plot(fitted(m))(m <- HoltWinters(AirPassengers, seasonal = "mult"))plot(m)## Non-Seasonal Holt-Wintersx <- uspop + rnorm(uspop, sd = 5)m <- HoltWinters(x, gamma = FALSE)plot(m)## Exponential Smoothingm2 <- HoltWinters(x, gamma = FALSE, beta = FALSE)lines(fitted(m2)[,1], col = 3)<testonly>options(od)</testonly>
Documentation reproduced from package stats, version 3.3, License: Part of R 3.3
|
Answer
There is one value of $\theta$ as solution to the equation: $$\{0^\circ\}$$
Work Step by Step
$$\sec\frac{\theta}{2}=\cos\frac{\theta}{2}$$ over interval $[0^\circ,360^\circ)$ 1) Find corresponding interval for $\frac{\theta}{2}$ The interval for $\theta$ is $[0^\circ,360^\circ)$, which can also be written as the inequality: $$0^\circ\le\theta\lt360^\circ$$ Therefore, for $\frac{\theta}{2}$, the inequality would be $$0^\circ\le\frac{\theta}{2}\lt180^\circ$$ Thus, the corresponding interval for $\frac{\theta}{2}$ is $[0^\circ,180^\circ)$. 2) Now we examine the equation: $$\sec\frac{\theta}{2}=\cos\frac{\theta}{2}$$ Here we have both cosine and secant functions. It would be beneficial if we can change $\sec\frac{\theta}{2}$ into a cosine function, using the identity $\sec x=\frac{1}{\cos x}$ Thus, $$\frac{1}{\cos\frac{\theta}{2}}=\cos\frac{\theta}{2}$$ ($\cos\frac{\theta}{2}\ne0$) Multiply both sides with $\cos\frac{\theta}{2}$: $$\cos^2\frac{\theta}{2}=1$$ $$\cos\frac{\theta}{2}=\pm1$$ With $\cos\frac{\theta}{2}=1$, over interval $[0^\circ,180^\circ)$, there is 1 value whose sine equals $1$, which are $\{0^\circ\}$ With $\cos\frac{\theta}{2}=-1$, over interval $[0^\circ,180^\circ)$ (which does not include $180^\circ$), there is no value whose sine equals $-1$. Combining 2 cases, only 1 value has been found out, meaning $$\frac{\theta}{2}=\{0^\circ\}$$ It follows that $$\theta=\{0^\circ\}$$ This is the solution set of the equation.
|
Cosines
The law of cosines is a statement about a general triangle that relates the lengths of its sides to the cosine of one of its angles.
If we have the following triangel
The following holds true
$$\\ a^{2}=b^{2}+c^{2}-2bc\: cos\: A\\ b^{2}=a^{2}+c^{2}-2ac\: cos\: B\\ c^{2}=a^{2}+b^{2}-2ab\: cos\: C\\$$
Example
If we have a triangle and we are given that C = 30° (this example is also shown in our video lesson) then find c if
a = 12 and b = 16.
To solve this we use the third equation and plug our values into it:
$$c^{2}=12^{2}+16^{2}-2\cdot 12\cdot 16\: cos\: 30^{\circ}$$
$$c^{2}\approx 67.446$$
$$c=\sqrt{67.446}=8.21$$
Video lesson
The example above in videoformat
|
In a paper that I am reading there is a following step:
Let $X$ be a Banach space and let $(x_k) \subset X$ be a normalized sequence that converges weakly to $0$. Then $\overline{co}(x_k)$ is a weakly compact set.
(notice that $\overline{co}(x_k)$ denotes the norm-closure of the convex hull of $(x_k)$.)
I think that I managed to prove the claim, but I had to do a lot of manual checking. My line of thought is given below.
My question is: Can this be proved more directly than I did (assuming that my proof is without error)? For example, is closed convex hull of a weakly compact set always weakly compact? (similar question was asked here, but in a bit different context, so it does not seem to apply to this situation: Convex hulls of compact sets).
I reasoned as follows:
$\{x_k \mid k \in \mathbb{N} \}$ is an weakly compact set. I checked that given a family $(y_{\alpha}) \subset co(x_k)$, it contains an weakly convergent subfamily, which weakly converges to an element in $\{ \sum_{k=1}^{\infty} {\alpha_k x_k \mid (\alpha_k) \in B_{\ell_1} }\}$.
I observed that given any family $(y_{\alpha})_{\alpha \in I} \subset \overline{co}(x_k)$, I can construct a family $(z_{\alpha, \epsilon})_{\alpha \in I, \epsilon > 0} \subset {co}(x_k)$ such that $\forall \alpha \in I, \epsilon > 0$ we have that $\lVert y_{\alpha} - z_{\alpha, \epsilon} \rVert < \epsilon$. By defining order for family $z_{\alpha, \epsilon}$ in such a way that $(\alpha_1, \epsilon_1) \leq (\alpha_2, \epsilon_2) \Leftrightarrow \alpha_1 \leq \alpha_2$ and $\epsilon_1 \geq \epsilon_2$, I can verify that (assuming that I did not make a mistake): \begin{equation*} (y_{\alpha}) \text { converges weakly to } w \Leftrightarrow (z_{\alpha, \epsilon}) \text{ converges weakly to } w \end{equation*}
Since $(y_{\alpha}) \subset \overline{co}(x_k)$, then $(z_{\alpha, \epsilon})_{\alpha \in I, \epsilon > 0} \subset {co}(x_k)$. The latter family contains an weakly convergent subfamily $(z_{\beta}')$, which converges to an element $c \in \{ \sum_{k=1}^{\infty} {\alpha_k x_k \mid (\alpha_k) \in B_{\ell_1} }\} \subset \overline{co}(x_k)$. Therefore the original family $(y_{\alpha}) \subset \overline{co}(x_k)$ can also be shown to have a subfamily $(y_{\gamma}')$ which converges weakly to the same element $c$.
Therefore $\overline{co}(x_k)$ is a weakly compact set.
|
Clayton Shonkwiler
Colorado State University
09.09.17
/denton17
this talk!
Jason Cantarella
U. of Georgia
Tom Needham
Ohio State
Gavin Stewart
NYU
Laney Bowden
CSU
Andrea Haynes
CSU
Aaron Shukert
CSU
W. S. B. Woolhouse,
Educational Times 18 (1865), p. 189
J. J. Sylvester,
Educational Times 18 (1865), p. 68
W. S. B. Woolhouse,
The Lady's and Gentleman's Diary 158 (1861), p. 76
Probability \(p\)
\(p\)
\(p\)
\(p\)
\(p\)
Uh oh!
Suppose \(AB\) is the longest side. Then
\(\mathbb{P}(\text{obtuse})=\frac{\pi/8}{\pi/3-\sqrt{3}/4} \approx 0.64\)
But if \(AB\) is the
second longest side,
\(\mathbb{P}(\text{obtuse}) = \frac{\pi/2}{\pi/3+\sqrt{3}/2} \approx 0.82\)
Proposition [Portnoy]: If the distribution of \((x_1,y_1,x_2,y_2,x_3,y_3)\in\mathbb{R}^6\) is spherically symmetric (for example, a standard Gaussian), then
\(\mathbb{P}(\text{obtuse}) = \frac{3}{4}\)
Consider the vertices \((x_1,y_1),(x_2,y_2),(x_3,y_3)\) as determining a single point in \(\mathbb{R}^6\).
For example, when the
vertices of the triangle are chosen from independent, identically-distributed Gaussians on \(\mathbb{R}^2\).
Choose three vertices uniformly in the disk:
\(\mathbb{P}(\text{obtuse})=\frac{9}{8}-\frac{4}{\pi^2}\approx 0.7197\)
Choose three vertices uniformly in the square:
\(\mathbb{P}(\text{obtuse})=\frac{97}{150}-\frac{\pi}{40}\approx 0.7252\)
reentrant
J.J. Sylvester,
Educational Times, April 1864
J.J. Sylvester,
Phil. Trans. R. Soc. London 154 (1864), p. 654, footnote 64(b)
W.S.B. Woolhouse,
Mathematical Questions with Their Solutions VII (1867), p. 81
A. De Morgan,
Trans. Cambridge Phil. Soc. XI (1871), pp. 147–148
W.S.B. Woolhouse,
Mathematical Questions with Their Solutions VI (1866), p. 52
C.M. Ingleby,
Mathematical Questions with Their Solutions V (1865), p. 82
G.C. De Morgan,
Mathematical Questions with Their Solutions V (1865), p. 109
W.S.B. Woolhouse,
Mathematical Questions with Their Solutions VI (1866), p. 52
W.S.B. Woolhouse,
Mathematical Questions with Their Solutions VIII (1868), p. 105
\(\mathbb{P}(\text{reflex})=\frac{1}{3}\)
\(\mathbb{P}(\text{reflex})=\frac{35}{12\pi^2}\approx 0.296\)
Theorem [Blaschke, 1917]
\(\frac{35}{12\pi^2}\leq\mathbb{P}(\text{reflex})\leq\frac{1}{3}\)
J.M. Wilson,
Mathematical Questions with Their Solutions V (1866), p. 81
W.A. Whitworth,
Mathematical Questions with Their Solutions VIII (1868), p. 36
Report on J.J. Sylvester’s presentation of his paper “On a Special Class of Questions on the Theory of Probabilities” to the British Association for the Advancement of Science, 1865
Are these questions really about choosing random points, or are they actually about choosing random polygons?
How would you choose a polygon “at random”?
— Stephen Portnoy,
Statistical Science 9 (1994), 279–284
The space of all \(n\)-gons should be a (preferably compact) manifold \(P_n\) with a transitive isometry group. We should use the left-invariant metric on \(P_n\), scaled so vol\((P_n)=1\). Then the Riemannian volume form induced by this metric is a natural probability measure on \(P_n\), and we should compute the volume of the subset of \(n\)-gons satisfying our favorite condition.
Spoiler: \(P_n \simeq G_2\mathbb{R}^n\)
Let \(e_1, \ldots , e_n\) be the edges of a planar \(n\)-gon with total perimeter 2. Choose \(z_1, \ldots , z_n\) so that \(z_k^2 = e_k\). Let \(z_k = a_k + i b_k\).
The polygon is closed \(\Leftrightarrow e_1 + \ldots + e_n = 0\)
\(\sum e_k =\sum z_k^2 = \left(\sum a_k^2 - \sum b_k^2\right) + 2i \sum a_k b_k\)
The polygon is closed \(\Leftrightarrow \|a\|=\|b\|\) and \(a \bot b\)
Since \(\sum |e_k| = \sum a_k^2 + \sum b_k^2 = \|a\|^2 + \|b\|^2\), we see that \((a,b) \in V_2(\mathbb{R}^n)\), the Stiefel manifold of 2-frames in \(\mathbb{R}^n\).
Proposition: Rotating \((a,b)\) in the plane it spans rotates the corresponding \(n\)-gon twice as fast. Corollary [Hausmann–Knutson]
The Grassmannian \(G_2(\mathbb{R}^n)\) is (almost) a \(2^n\)-fold covering of the space of planar \(n\)-gons of perimeter 2.
Definition [w/ Cantarella & Deguchi]
The
symmetric measure on \(n\)-gons of perimeter 2 up to translation and rotation is the pushforward of Haar measure on \(G_2(\mathbb{R}^n)\).
Therefore, \(SO(n)\) acts transitively on \(n\)-gons and preserves the symmetric measure.
Polygons in \(\mathbb{R}^3\) correspond to points in \(G_2(\mathbb{C}^n)\) and the analog of the squaring map is the Hopf map.
Polygons in this model have the same scaling behavior as ring polymers like bacterial DNA.
This model was used by the Tezuka lab to verify the synthesis of novel polymer topologies, including \(K_{3,3}\).
A triangle corresponds to an element of \(G_2(\mathbb{R}^3)\), which is a plane in \(\mathbb{R}^3\) with ON basis \((a,b)\)...
or, equivalently, the line through
\(p=a\times b\)
so
Law of Cosines
Perimeter \(2\)
The rotations are natural transformations of the sphere, and the corresponding action on triangles is natural.
\(c=1-z^2\) fixed
\(z\) fixed
\(C(\theta) = (\frac{z^2+1}{2}\cos 2\theta, z \sin 2\theta)\)
The equal-area-in-equal-time parametrization of the ellipse
The right triangles are exactly those satisfying
\(a^2+b^2=c^2\) & permutations
Since \(a=1-x^2\), etc., the right triangles are determined by the quartic
\((1-x^2)^2+(1-y^2)^2=(1-z^2)^2\) & permutations
\(x^2 + x^2y^2 + y^2 = 1\), etc.
\(\mathbb{P}(\text{obtuse})=\frac{1}{4\pi}\text{Area} = \frac{24}{4\pi} \int_R d\theta dz\)
But now \(C\) has the parametrization
And the integral reduces to
By Stokes’ Theorem
\(\frac{6}{\pi} \int_R d\theta dz=\frac{6}{\pi}\int_{\partial R}z d\theta = \frac{6}{\pi}\left(\int_{z=0} zd\theta + \int_C zd\theta \right)\)
\(\left(\sqrt{\frac{1-y^2}{1+y^2}},y,y\sqrt{\frac{1-y^2}{1+y^2}}\right)\)
\(\frac{6}{\pi} \int_0^1 \left(\frac{2y}{1+y^4}-\frac{y}{1+y^2}\right)dy\)
Theorem [with Cantarella, Needham, Stewart]
With respect to the symmetric measure on triangles, the probability that a random triangle is obtuse is
\(\frac{3}{2}-\frac{3\ln 2}{\pi}\approx0.838\)
convex
reflex/reentrant
self-intersecting
A polygon corresponds to \(\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ \vdots & \vdots \\ a_n & v_n \end{bmatrix}\), where \((a_k + i b_k)^2 = e_k\).
\((a_k,b_k)\mapsto(-a_k,-b_k)\) doesn’t change the polygon, so \((\mathbb{Z}/2\mathbb{Z})^n\) acts trivially on polygons.
\(S_n \le SO(n)\) permutes rows and hence edges.
The hyperoctahedral group \(B_n = (\mathbb{Z}/2\mathbb{Z})^n \rtimes S_n = S_2 \wr S_n\) acts by isometries on \(G_2(\mathbb{R}^n)\) and permutes edges.
\(|B_n| = 2^n n!\); e.g., \(|B_3| = 2^3 3! = 48\).
\(S_n \le SO(n)\) permutes rows and hence edges.
\(S_n \le SO(n)\) permutes rows and hence edges.
Upshot: We can count how many elements of the permutation orbit of a single element of \(\mathcal{D}_4\) are convex, reflex, and self-intersecting.
The (empirical)
flag mean of \(\mathcal{D}_4\).
The flag mean of
is (the plane spanned by) the first two left singular vectors of the matrix
Theorem [with Cantarella, Needham, Stewart]
Convex, reflex, and self-intersecting quadrilaterals are all equiprobable.
Theorem [with Cantarella, Needham, Stewart]
With respect to
any measure on \(n\)-gon space which is invariant under permuting edges, the fraction of convex \(n\)-gons is exactly \(\frac{2}{(n-1)!}\).
Symmetric triangles
Isosceles
triangles
Degenerate
triangles
Theorem [with Bowden, Haynes, Shukert]
The least symmetric triangle has side length ratio
Theorem [with Bowden, Haynes, Shukert]
The least symmetric
obtuse triangle has side length ratio Theorem [with Bowden, Haynes, Shukert]
The least symmetric
acute triangle has side length ratio Example Theorem [w/ Cantarella, Grosberg, Kusner]
The expected total curvature of a random space \(n\)-gon is exactly
\(\frac{\pi}{2}n + \frac{\pi}{4} \frac{2n}{2n-3}\)
Polygons in \(\mathbb{R}^3\) correspond to points in \(G_2(\mathbb{C}^n)\) and the analog of the squaring map is the Hopf map.
Observation [Needham]: The flag mean of the equilateral six-edge trefoils is Theorem*: The flag mean of the positive Grassmannian is the convex regular \(n\)-gon. Observation [Needham]: The flag mean of the equilateral six-edge positive trefoils is Theorem [Baxter]
The expected number of vertices on the convex hull of a random planar \(n\)-gon is
Conjecture
The expected perimeter of the convex hull of a random planar \(n\)-gon is
Conjecture vs. sample average (100,000 samples)
Conjecture
The expected perimeter of the convex hull of a random planar \(n\)-gon is
Funding: Simons Foundation
J. Cantarella, T. Deguchi, and C. Shonkwiler
Communications on Pure and Applied Mathematics 67
(2014), 1658–1699
J. Cantarella, T. Needham, C. Shonkwiler, and G. Stewart
arXiv: 1702.01027
L. Bowden, A. Haynes, C. Shonkwiler, and A. Shukert
arXiv:1708.01559
J. Cantarella, A. Grosberg, R. Kusner, and C. Shonkwiler
American Journal of Mathematics 137 (2015), 411–438
|
I know that it isn't necessary that cross product automaton for two minimal DFAs be minimal, but according to my analysis, if two DFAs do not have any common string then their cross product of minimal DFAs would be minimal. What should I do to prove this?
No.
Counterexample:
Let alphabet $\Sigma = \left\{ 0, 1 \right\}$, languages $L_1 = \left\{ w0 \,\middle|\, w \in \Sigma^\ast \right\}$ (i.e. the last digit is $0$), $L_2 = \left\{ w1 \,\middle|\, w \in \Sigma^\ast \right\}$ (i.e. the last digit is $1$). Note that $L_1 \cap L_2 = \emptyset$.
Then the following DFAs $M_1$ and $M_2$ are minimal for $L_1$ and $L_2$ respectively:
And the cross product of $M_1$ and $M_2$ (for the intersection of languages) is:
But this is not minimal for the empty language. A minimal DFA for the language is:
In fact, cross-product DFA of $M_1$ and $M_2$ recognizes an intersection of languages $L(M_1) \cap L(M_2)$ (if a state of cross-product DFA is accepting when the original two states are both accepting on original DFAs). So in this case a generated cross-product DFA always recognizes the empty language. Since a minimal DFA for the empty language has only one state and cross-product DFA has ((#states of $M_1$) × (#states of $M_2$)) states, almost all cross-product DFAs are non-minimal.
Also, even though if you define a state of cross-product DFA is accepting when either of the original two states is accepting, the above $(L_1, L_2)$ is a counterexample. Since $L_1 \cup L_2 = \Sigma^\ast$, the minimal DFA has only one state.
No, the Cartesian product (a.k.a. cross product in the question) of two minimal automaton may not be minimal.
Here are two simple counterexamples.
Note that a DFA with only one state will either accept all words if its start state is also an accept state or accept no words otherwise.
Let alphabet $\Sigma = \left\{ a\right\}$. Let $E$ be a minimal DFA such that $L(E)$ is not empty nor all words. The number of states in $E$ must be greater than 1. Let $O$ be a minimal DFA such that $L(O)$ is the complement of $L(E)$. The number of states in $O$ must be greater than 1.
An automaton $P$ which is an Cartesian product automaton of $E$ and $O$ must have at least 2x2=4 states.
Suppose $P$ is constructed for the intersection of $L(E)$ and $L(O)$, i.e. the empty language. Its equivalent minimal DFA has 1 state, the start state, which is not an accept state.
Suppose $P$ is constructed for the union of $L(E)$ and $L(O)$, i.e., $\{a\}^*$. its equivalent minimal DFA has 1 state, the start state, which is also an accept state.
Please note the Cartesian product automaton of two DFA is not defined uniquely usually since it is allowed to choose different sets of accept states. This is the implicit view as in the book
Introduction to the theory of computation by Michael Sipser.
Let $Q_A,Q_B$ be the number of states in $A,B$, respectively. The number of states in the product automaton is $Q_A Q_B$. Since $L(A) \cap L(B) = \emptyset$, the minimal automaton for $L(A) \cap L(B) = \emptyset$ contains a single state. This can only happen if $Q_A = Q_B = 1$, in which case $A,B \in \{ \emptyset, \Sigma^* \}$. We conclude that the product automaton is minimal exactly in the following three cases:
$A = B = \emptyset$. $A = \emptyset$, $B = \Sigma^*$. $A = \Sigma^*$, $B = \emptyset$.
|
Difference between revisions of "De Bruijn-Newman constant"
(→Threads)
(→Threads)
Line 94: Line 94:
* [https://terrytao.wordpress.com/2018/04/17/polymath15-eighth-thread-going-below-0-28/ Polymath15, eighth thread: going below 0.28], Terence Tao, Apr 17, 2018.
* [https://terrytao.wordpress.com/2018/04/17/polymath15-eighth-thread-going-below-0-28/ Polymath15, eighth thread: going below 0.28], Terence Tao, Apr 17, 2018.
* [https://terrytao.wordpress.com/2018/05/04/polymath15-ninth-thread-going-below-0-22/ Polymath15, ninth thread: going below 0.22?], Terence Tao, May 4, 2018.
* [https://terrytao.wordpress.com/2018/05/04/polymath15-ninth-thread-going-below-0-22/ Polymath15, ninth thread: going below 0.22?], Terence Tao, May 4, 2018.
+
== Other blog posts and online discussion ==
== Other blog posts and online discussion ==
Revision as of 20:21, 6 September 2018
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The
Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math]. Rigorous asymptotics that show that [math]H_t(x+iy)[/math] whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. Contents [math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]. In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018. Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018. Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018. Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018. Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018. Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018. Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Writeup Test problem Zero-free regions
See Zero-free regions.
Wikipedia and other references Bibliography [A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009. [B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994),pp. 609-630. [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467. [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914 [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
|
NTS ABSTRACTSpring2019
Return to [1]
Contents Jan 23
Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar. Jan 24
Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Jan 31
Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindel\"of Hypothesis, and work of Bettin, Chandee, and Radziwi\l\l. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities. Feb 7
Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).
Feb 14
Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant which parametrizes elliptic curves with two torsions (X_0(2)) has some interesting properties, some similar to that of the j-invariants, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is a unit sometime. In this talk, I will briefly describe some of the properties. This is joint work with Hongbo Yin and Peng Yu. Feb 28
Brian Lawrence Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh. March 7
Masoud Zargar Sections of quadrics over the affine line Abstract: Abstract: Suppose we have a quadratic form Q(x) in d\geq 4 variables over F_q[t] and f(t) is a polynomial over F_q. We consider the affine variety X given by the equation Q(x)=f(t) as a family of varieties over the affine line A^1_{F_q}. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over F_q((1/t)). Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari March 14
Elena Mantovan p-adic automorphic forms, differential operators and Galois representations A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieites. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross.
This talk is based joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.
March 28
Adebisi Agboola Relative K-groups and rings of integers Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture. April 4
Adebisi Agboola Hecke L-functions and $\ell$ torsion in class groups Abstract: The canonical Hecke characters in the sense of Rohrlich form a
set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.
|
The definition of $f(n) = O(g(n))$ (for $n \to \infty$) is that there are $N_0, c$ such that for $N \geq N_0$ it is $f(n) \leq c g(n)$.
In your case, pick e.g. $N_0 = 2$, then you have $10 n^3 + 3 n < 10 n^3 + 3 n^3 = 13 n^3$, and $c = 13$ works.
The definition of $f(n) = \Omega(g(n))$ (for $n \to \infty$) is that there are $N_0, c$ such that for $N \geq N_0$ it is $f(n) \geq c g(n)$.
Pick e.g. $N_0 = 5$, so $10 n^3 + 3 n > 10 n^3$, and $c = 10$ works.
Now, $f(n) = \Theta(g(n))$ if both $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$, and you are done.
Note the $N_0$, $c$ don't have to be the same (usually at least $c$ is different).
|
At tree level, the Kähler potential is given by (neglecting complex structure)
$K = -\ln(-\mathrm{i}(\tau - \bar{\tau})) - 2\ln(V_{CY})$
where $V_{CY} = \frac{1}{6} \kappa_{abc}t^at^bt^c$ ia the the two cycle volume.
In some literature this is written in form of Kähler moduli variables as $V_{CY}=-i(\rho_a - \bar{\rho_a})t^a $ where $\rho = b + i\tau$. $\tau$ here is 4 cycle modulus.
In some other literature this is given as $V_{CY}=-3i(\rho - \bar{\rho}) $ where $\rho = b + ie^{4u}$. $u$ fixes the volume of Calabi-Yau.
So my question is are these two equivalent? Would the $\rho\bar\rho$ component of the Kähler metric be the same?
This post imported from StackExchange Physics at 2015-04-25 19:21 (UTC), posted by SE-user sol0invictus
|
As described in the Rich Output tutorial, the IPython display system can display rich representations of objects in the following formats:
This Notebook shows how you can add custom display logic to your own classes, so that they can be displayed using these rich representations. There are two ways of accomplishing this:
_repr_html_ when you define your class.
This Notebook describes and illustrates both approaches.
Import the IPython display functions.
from IPython.display import ( display, display_html, display_png, display_svg)
Parts of this notebook need the matplotlib inline backend:
%matplotlib inlineimport numpy as npimport matplotlib.pyplot as plt
The main idea of the first approach is that you have to implement special display methods when you define your class, one for each representation you want to use. Here is a list of the names of the special methods and the values they must return:
_repr_html_: return raw HTML as a string
_repr_json_: return a JSONable dict
_repr_jpeg_: return raw JPEG data
_repr_png_: return raw PNG data
_repr_svg_: return raw SVG data as a string
_repr_latex_: return LaTeX commands in a string surrounded by "$".
As an illustration, we build a class that holds data generated by sampling a Gaussian distribution with given mean and standard deviation. Here is the definition of the
Gaussian class, which has a custom PNG and LaTeX representation.
from IPython.core.pylabtools import print_figurefrom IPython.display import Image, SVG, Mathclass Gaussian(object): """A simple object holding data sampled from a Gaussian distribution. """ def __init__(self, mean=0.0, std=1, size=1000): self.data = np.random.normal(mean, std, size) self.mean = mean self.std = std self.size = size # For caching plots that may be expensive to compute self._png_data = None def _figure_data(self, format): fig, ax = plt.subplots() ax.hist(self.data, bins=50) ax.set_title(self._repr_latex_()) ax.set_xlim(-10.0,10.0) data = print_figure(fig, format) # We MUST close the figure, otherwise IPython's display machinery # will pick it up and send it as output, resulting in a double display plt.close(fig) return data def _repr_png_(self): if self._png_data is None: self._png_data = self._figure_data('png') return self._png_data def _repr_latex_(self): return r'$\mathcal{N}(\mu=%.2g, \sigma=%.2g),\ N=%d$' % (self.mean, self.std, self.size)
Create an instance of the Gaussian distribution and return it to display the default representation:
x = Gaussian(2.0, 1.0)x
You can also pass the object to the
display function to display the default representation:
display(x)
Use
display_png to view the PNG representation:
display_png(x)
display and
display_png. The former computes
Create a new Gaussian with different parameters:
x2 = Gaussian(0, 2, 2000)x2
You can then compare the two Gaussians by displaying their histograms:
display_png(x)display_png(x2)
Note that like
display functions multiple times in a cell.
When you are directly writing your own classes, you can adapt them for display in IPython by following the above approach. But in practice, you often need to work with existing classes that you can't easily modify. We now illustrate how to add rich output capabilities to existing objects. We will use the NumPy polynomials and change their default representation to be a formatted LaTeX expression.
First, consider how a NumPy polynomial object renders by default:
p = np.polynomial.Polynomial([1,2,3], [-10, 10])p
Polynomial([ 1., 2., 3.], [-10., 10.], [-1, 1])
Next, define a function that pretty-prints a polynomial as a LaTeX string:
def poly_to_latex(p): terms = ['%.2g' % p.coef[0]] if len(p) > 1: term = 'x' c = p.coef[1] if c!=1: term = ('%.2g ' % c) + term terms.append(term) if len(p) > 2: for i in range(2, len(p)): term = 'x^%d' % i c = p.coef[i] if c!=1: term = ('%.2g ' % c) + term terms.append(term) px = '$P(x)=%s$' % '+'.join(terms) dom = r', $x \in [%.2g,\ %.2g]$' % tuple(p.domain) return px+dom
This produces, on our polynomial
p, the following:
poly_to_latex(p)
'$P(x)=1+2 x+3 x^2$, $x \\in [-10,\\ 10]$'
You can render this string using the
Latex class:
from IPython.display import LatexLatex(poly_to_latex(p))
However, you can configure IPython to do this automatically by registering the
Polynomial class and the
plot_to_latex function with an IPython display formatter. Let's look at the default formatters provided by IPython:
ip = get_ipython()for mime, formatter in ip.display_formatter.formatters.items(): print('%24s : %s' % (mime, formatter.__class__.__name__))
image/png : PNGFormatter application/pdf : PDFFormatter text/html : HTMLFormatter image/jpeg : JPEGFormatter text/plain : PlainTextFormatter text/markdown : MarkdownFormatter application/json : JSONFormatter application/javascript : JavascriptFormatter text/latex : LatexFormatter image/svg+xml : SVGFormatter
The
formatters attribute is a dictionary keyed by MIME types. To define a custom LaTeX display function, you want a handle on the
text/latex formatter:
ip = get_ipython()latex_f = ip.display_formatter.formatters['text/latex']
The formatter object has a couple of methods for registering custom display functions for existing types.
help(latex_f.for_type)
Help on method for_type in module IPython.core.formatters: for_type(typ, func=None) method of IPython.core.formatters.LatexFormatter instance Add a format function for a given type. Parameters ----------- typ : type or '__module__.__name__' string for a type The class of the object that will be formatted using `func`. func : callable A callable for computing the format data. `func` will be called with the object to be formatted, and will return the raw data in this formatter's format. Subclasses may use a different call signature for the `func` argument. If `func` is None or not specified, there will be no change, only returning the current value. Returns ------- oldfunc : callable The currently registered callable. If you are registering a new formatter, this will be the previous value (to enable restoring later).
help(latex_f.for_type_by_name)
Help on method for_type_by_name in module IPython.core.formatters: for_type_by_name(type_module, type_name, func=None) method of IPython.core.formatters.LatexFormatter instance Add a format function for a type specified by the full dotted module and name of the type, rather than the type of the object. Parameters ---------- type_module : str The full dotted name of the module the type is defined in, like ``numpy``. type_name : str The name of the type (the class name), like ``dtype`` func : callable A callable for computing the format data. `func` will be called with the object to be formatted, and will return the raw data in this formatter's format. Subclasses may use a different call signature for the `func` argument. If `func` is None or unspecified, there will be no change, only returning the current value. Returns ------- oldfunc : callable The currently registered callable. If you are registering a new formatter, this will be the previous value (to enable restoring later).
In this case, we will use
for_type_by_name to register
poly_to_latex as the display function for the
Polynomial type:
latex_f.for_type_by_name('numpy.polynomial.polynomial', 'Polynomial', poly_to_latex)
Once the custom display function has been registered, all NumPy
Polynomial instances will be represented by their LaTeX form instead:
p
p2 = np.polynomial.Polynomial([-20, 71, -15, 1])p2
Rich output special methods and functions can only display one object or MIME type at a time. Sometimes this is not enough if you want to display multiple objects or MIME types at once. An example of this would be to use an HTML representation to put some HTML elements in the DOM and then use a JavaScript representation to add events to those elements.
IPython 2.0 recognizes another display method,
_ipython_display_, which allows your objects to take complete control of displaying themselves. If this method is defined, IPython will call it, and make no effort to display the object using the above described
_repr_*_ methods for custom display functions. It's a way for you to say "Back off, IPython, I can display this myself." Most importantly, your
_ipython_display_ method can make multiple calls to the top-level
display functions to accomplish its goals.
Here is an object that uses
display_html and
display_javascript to make a plot using the Flot JavaScript plotting library:
import jsonimport uuidfrom IPython.display import display_javascript, display_html, displayclass FlotPlot(object): def __init__(self, x, y): self.x = x self.y = y self.uuid = str(uuid.uuid4()) def _ipython_display_(self): json_data = json.dumps(list(zip(self.x, self.y))) display_html('<div id="{}" style="height: 300px; width:80%;"></div>'.format(self.uuid), raw=True ) display_javascript(""" require(["//cdnjs.cloudflare.com/ajax/libs/flot/0.8.2/jquery.flot.min.js"], function() { var line = JSON.parse("%s"); console.log(line); $.plot("#%s", [line]); }); """ % (json_data, self.uuid), raw=True)
import numpy as npx = np.linspace(0,10)y = np.sin(x)FlotPlot(x, np.sin(x))
|
I am trying to reconstruct a signal from a noisy speech using an MMSE algorithm proposed long time ago by Ephraim and Malah (1984). After going through the algorithm, I got a matrix A which represents the magnitude of the reconstructed signal. With the help of this group, I now know that I need to use this magnitude with the phase of the noisy signal in order to recover the signal. Now the problem is that after doing an
ifft, I get complex numbers.
Does this mean there is a mistake in implementing the algorithm? I read somewhere that I have to have symmetry in my input to an
ifftfunction, how do I ensure such symmetry is applied here?
Here is my MATLAB command and the equation
A = A_hat.*exp(1i*nu);x_new = ifft(A);
\begin{align} \hat A_k&=\Gamma\left(1.5\right)\frac{\sqrt{v_k}}{\gamma_k}M\left(-0.5;1;-v_k\right)R_k\\ &=\Gamma\left(1.5\right)\frac{\sqrt{v_k}}{\gamma_k}\exp\left(-\frac{v_k}{2}\right)\left[\left(1+v_k\right)I_0\left(\frac{v_k}{2}\right)+ v_k I_1\left(\frac{v_k}{2}\right)\right]R_k.\tag{7} \end{align}
|
Consider the probability space $(\Omega, {\cal B}, \lambda)$ where $\Omega=(0,1)$, ${\cal B}$ is the Borel sets, and $\lambda$ is Lebesgue measure.
For random variables $W,Z$ on this space, we define the Ky Fan metric by
$$\alpha(W,Z) = \inf \lbrace \epsilon > 0: \lambda(|W-Z| \geq \epsilon) \leq \epsilon\rbrace.$$
Convergence in this metric coincides with convergence in probability.
Fix the random variable $X(\omega)=\omega$, so the law of $X$ is Lebesgue measure, that is, ${\cal L}(X)=\lambda$.
Question:For any probability measure $\mu$ on $\mathbb R$, does there exist a random variable $Y$ on $(\Omega, {\cal B}, \lambda)$ with law $\mu$ so that $\alpha(X,Y) = \inf \lbrace \alpha(X,Z) : {\cal L}(Z) = \mu\rbrace$ ? Notes:
By Lemma 3.2 of Cortissoz, the infimum above is $d_P(\lambda,\mu)$: the Lévy-Prohorov distance between the two laws.
The infimum is achieved if we allowed to choose both random variables. That is, there exist $X_1$ and $Y_1$ on $(\Omega, {\cal B}, \lambda)$ with ${\cal L}(X_1) = \lambda$, ${\cal L}(Y_1) = \mu$, and $\alpha(X_1,Y_1) = d_P(\lambda,\mu)$. But in my problem, I want to fix the random variable $X$.
Why the result may be true:the space $L^0(\Omega, {\cal B}, \lambda)$ is huge. There are lots of random variables with law $\mu$. I can't think of any obstruction to finding such a random variable. Why the result may be false:the space $L^0(\Omega, {\cal B}, \lambda)$ is huge. A compactness argument seems hopeless to me. I can't think of any construction for finding such a random variable.
|
I think it's unlikely that there's something really new in the observations Two days ago, the Daily Mail (plus colleagues) has excited many readers by the following esoteric article: Mystery of the 'spooky' pattern in the universe: Scientists find that supermassive black holes are ALIGNEDThe Very Large Telescope has found some weird pattern in the locations of quasars and the rotation of the central supermassive black holes. And these patterns are far-reaching – seem to correlate objects that are billions of light years away from each other, i.e. distances comparable to the size of the visible Universe. This is just an artist's depiction of an alignment.
The probability that such patterns emerge by chance – according to the current models with their probability distributions defining chance – is said to be 1 percent.
A Belgian team has used the VLT to look at 93 quasars as they existed almost 10 billion years ago. The quasars seem to belong to lines – which shouldn't be the case according to the current theories of structure formation, they say. And the direction of the large black holes' angular momenta seem to coincide despite their cosmological distances.
It's said to be "very unlikely" (1%) that such patterns emerge by chance. I find this claim a bit controversial and requiring verification. After all, the usual pictures of the "webs" associated with the structure formation don't look too different. Our state-of-the-art models of structure formation surely imply non-uniformities of the distribution of galaxies and some filaments and voids in the structure. The apparent fact that the researchers seem to be silent – and maybe even ignorant – about this elementary insight seems to reduce their credibility in my eyes.
I feel that the observation that the angular momentum of distant black holes has the same direction could also follow from a realistic emulation, after all. Note that almost all the celestial objects' spin as well as the orbital angular momentum (around the Sun) in the Solar System have the same direction; Venus' spin is the famous exception. This is not shocking: the material from which the Solar System was born had some angular momentum and this conserved angular momentum had to be divided to various terms, and it's not shocking that they were proportional to each other i.e. equally directed.
The research was described in a September 2014 preprint
And just the nearly infinitesimal probability that some of these things could be right was enough for me to write this blog post.
The paper – even its abstract – is more accurate than the media echoes and says that the quasar polarization vectors seem to be surprisingly orthogonal to the surrounding large-scale structure. And if that's not the case and the directions are (nearly) parallel, the emission line width seems to be significantly smaller than for the prevailing orthogonal ones. These two comments may possibly imply that the quasar spins are parallel to the host large-scale structures. In other words, the quasars seem to be spinning around the axes that are drawn in the sky with these quasar dots.
Well, that's nice if true but I don't find it shocking at all. These "lines" result from the gravitational clumping of material close to an axis. This clumping eliminates most of the orbital angular momentum and puts almost all the angular momentum to the spin. And the vanishing of the orbital angular momentum is of course mathematically equivalent to the statement that the surviving spinning objects (quasars) are sitting on the axis of the spin! ;-)
You know, if you want \(\vec L=\vec r \times p\) to carry no contribution in the direction of \(\vec S\), you want \(\vec L \cdot \vec S = 0\) and \(\vec r = \alpha \vec S\) is a solution! I don't believe that the structure formation models (or modelers) don't realize this obvious fact.
|
Background
I have a generative model for a process that can be described as follows:
$$ y = t(x, w) + e $$
where $x$ and $y$ observations of a set of random variables which are related by a non-linear transformation function $t$, parameterised by the unknown parameters to be estimated given by $w$. $e$ is the normally distributed error term with 0 mean and a diagonal covariance matrix given by $\sigma^{-1} I$ where $I$ is the identity matrix and $\sigma$ is the global noise precision.
So, assuming independence along each pixel of the image, I have the following likelihood term as a product over the individual pixels $i$:
$$ P(y|x, w, \sigma) = \prod_{i} (\frac{\sigma}{2\pi})^{\frac{1}{2}} \exp^{-\frac{1}{2}e_i \sigma e_i} $$
Each $w_i$ parameter is a 3-dimensional parameter (for each spatial dimension) to be estimated.
KL-Divergence and M-Projections:
I am using Expectation propagation (EP) to estimate the posterior distribution and EP has a M-projection step which projects a distribution onto a simpler approximating distribution which in my case is a multivariate normal distribution over my parameters $w$. The way EP works is by exploiting the factored form of the likelihood term i.e. it starts with an approximating distribution $Q$ (multivariate normal over $w$) and then replaces each ith factor by the exact term and projecting this distribution on the current estimate of the Q.
For example, in this problem I replace one ith term from my likelihood expression to form the distribution to project as: $P(y|x_i, w_i, \sigma) q_{j \neq i}(w_j)$ where the first term is the exact factor and $q_{j \neq i}(w_j)$ is the approximated distribution with the influence of the ith term removed. Let us call this distribution $U$
According to the literature, I need to find the moments $E_U[w]$. So, I need to match the first and second order moments i.e. compute these expectations for my parameters using the distribution $U$. However, I am completely lost as to how to do this. Can someone give me some suggestion on how I should proceed?
|
Let A be a non-empty set and $f : A → A$ be a function.
Prove that f has a left inverse in $F_{A}$ if and only if f is injective (one-to-one).
$\leftarrow$ assume f is injective then $\forall x\in A \space \space \space \space \space \space \space \space \space f(x) \in A $ such that if $f(x)=f(y) $ then $ x=y$
something something $g(f(x)) = x \space \space \space \space \forall x\in A$
$\rightarrow$ assume f has a left inverse in $F_{A}$ then $\forall x\in A$
$g(f(x)) = x$ something says that x must be one to one?
Im really confused by this question First of all f must be a bijection if it is one to one from $ A \to A $ is it not?
Can someone help me out with this proof?
|
$\def\Hadw{\mathop{\rm Hadw}}$This is true for finite graphs, and false for (not necessarily connected) infinite graphs. Right now I do not know what happens for infinite connected graphs.
1. Each component $G_1\subseteq G$ corresponds to an isolated vertex $v_{G_1}$ in $\Hadw(G)$ and a component $\Hadw(G_1)\setminus \{v_{G_1}\}$; this component is empty if $G_1$ consists of an isolated vertex. For finite graphs, this means that we find the number of isolated vertices and the Hadwiger graphs of other components; so the problem reduces to the same problem for finite connected graphs. For infinite graphs, this leads to the counterexample. Let $G$ be a union of a countable number of components none of which is an isolated vertex, and let $H$ be $G$ augmented with an isolated vertex; then $G\not\cong H$ but $\Hadw(G)\cong \Hadw(H)$.
The rest part is devoted to the reconstruction of a connected graph $G$ by its Hadwiger graph.
2. In this case, we will show a bit more, namely:
Knowing $\Hadw(G)$, we can find all the vertices in $\Hadw(G)$ corresponding to the vertices of $G$ (then the induced graph is isomorphic to $G$).
We proceed by the induction on $|V(G)|$. If $|V(G)|=1$ then the statement is obvious.
Any vertex $P\in V(\Hadw(G))$ has degree 1 exactly if $P=V(G)\setminus \{v\}$, where $v\in V(G)$ is not a cut vertex; in this case $\{v\}$ is its only neighbor. Thus we may reconstruct all the vertices of $\Hadw(G)$ which correspond to non-cut vertices of $G$ (notice that there is at least one such vertex!).
Let $T=\{v\}\in V(\Hadw(G))$ be one of such vertices. Denote by $N$ the set of all neighbors of $v$ in $G$; denote $$ L=\{X\in V(\Hadw(G)): (\{v\}\cup N)\subseteq X\}.$$Notice that $L$ is nonempty.
Consider now the distances $d(S,T)$ from every vertex $S\in V(\Hadw(G))$ (distinct from $T$ and $V(G)$) to $T$.
(i) If $v\notin S$ but $S\cap N\neq\varnothing$ then $d(S,T)=1$.
(ii) If $S\cap(\{v\}\cup N)=\varnothing$ then $d(S,T)=2$ due to a path in $G$ from $v$ to $S$; moreover, in this case $S$ has a neighbor in $L$.
(iii) If $v\in S$ but $N\not\subseteq S$ (that is, $v\in S$ but $S\notin L$) then $d(S,T)=2$ due to any vertex in $N\setminus S$; but in this case $S$ has no neighbor in $L$.
(iv) If $S\in L$ then $d(S,T)=3$ since the distance from every neighbor of $S$ to $T$ is 2.
Thus we can reconstruct the set $L$ (due to the distance 3 from $T$), and then set of all $S$ such that $v\in S$ but $S\notin L$ (due to the distance 2 from $T$ and non-existence of a neighbor in $L$). Thus we have reconstructed all $S\in V(\Hadw(G))$ containing $v$. Now we can remove all these vertices obtaining the graph $\Hadw(G-\{v\})$ for which the induction assumption is applicable.
|
I read in HC Verma that if we are observing the motion of a body from a rotating frame and the body under the observation is NOT in motion with respect to our frame then the centrifugal force is the sufficient pseudo force for the analysis of the motion. But if the body under observation is in motion with respect to our frame than some extra pseudo forces other than centrifugal forces are required for the analysis of the motion of the object in our frame. Please explain this additional pseudo force's reason and details.
I show you a typical situation where the centrifugal force is not enough to explain the dynamics.
Suppose the reference frame $K$ is rotating around the $z$ axis of an inertial reference frame $K_0$ with $\omega = \Omega {\bf e}_z$ and $\Omega>0$ constant.
Consider a point of matter $P$ at rest in $K_0$ far from the origin. In $K$, that point is seen rotating with $\omega'= -\Omega {\bf e}_z$ (as the rotation is around that fixed axis we can identify the axes ${\bf e}_z$ and the corresponding ${\bf e}_z'$ at rest in $K$).
If $m$ is the mass of $p$, the motion in $K$ needs a
centripetal force: $${\bf F}= -m \Omega^2 \vec{OP}\:.\quad(1)$$ This force can only be due to pseudoforces, as no real force acts on $P$ (or, equivalently, the sum of real forces acting on $P$ vanishes), since $P$ stays at rest in the inertial reference frame $K_0$.However, if only the centrifugal pseudoforce were present it would not be enough as it is:$${\bf f}_{centrifugal} = + m \Omega^2 \vec{OP}\:.$$To fulfil (1) another force with inverse direction and double magnitude is necessary.
It is the
Coriolis' (pseudo)force:$${\bf f}_{Coriolis} = -2 m \omega \times {\bf v}_P\:,$$where $\omega$ is the angular velocity of $K$ with respect to $K_0$ and ${\bf v}_P$ is the velocity of $P$ in $K$. One has:$${\bf v}_P = -\omega \times \vec{OP} = - \Omega {\bf e}_z \times \vec{OP}\:.$$Therefore:$${\bf f}_{Coriolis} = 2 m \Omega^2 {\bf e_z} \times ({\bf e}_z \times \vec{OP})= - 2m \Omega^2 \vec{OP}\:,$$so that$${\bf f}_{Coriolis} + {\bf f}_{centrifugal} = -m \Omega^2 \vec{OP}$$as requested by (1).
Dropping the requirement of uniform rotation, and assuming that $\omega$ of $K$ respect to $K_0$ is arbitrary and that $K$ may also translate with respect to $K_0$, the complete set of pseudoforces acting on a point $P$ in $K$ is given by the following added four terms (the third one is noting but the centrifugal force written into a more general form): $$- m {\bf a}_O - m\omega \times (\omega \times \vec{PO}) - 2m \omega \times {\bf v}_P - m \dot{\omega} \times \vec{OP}\:,$$ where ${\bf a}_O$ is the acceleration of the origin $O$ of $K_0$ computed in $K$, ${\bf v}_P$ is the velocity of $P$ in $K$, $\dot{\omega}$ is the time derivative of $\omega$, which is, as before, the angular velocity of $K$ in $K_0$.
Another psuedo-force is the
Azimuthal Force (Sometimes called Euler's force) it arises when there is angular accleration. It's vector equation is $$F_{Azimuthal}=\mathbf{r} \times \mathbf{\alpha}$$, where $\mathbf{\alpha}$ is the angular accleration vector.
|
I am interested to understand the current theoretical status of Landauer's Principle and related ideas. I am looking for key papers and results in the subject.I will highlight one key paper.An improved Landauer principle with finite-size correctionsAbstract:Landauerʼs principle relates...
I know that by physics definition if displacement is zero, work is zero.However, if I push an object 5 m to the east, and then move to the other side of the object and push it 5 m back to the west. I think in this case I have always done positive work on the object and hence the total work...
Hello all:I am asking for your personal experience , did you were able to work normally with your PhD studies ? What kind of work ?Is it easier with theoretical PhD or an experimental one ?BestH.B.
I feel like it would go on the side of the energy the object has where it starts - an object dropped off a cliff would be modeled U - W = K but an object thrown upwards from ground level would beK - W = U. I am not sure though.
So the equation for work is W = F * sF = m * a, so W = m * a * sTransferring this to units of measurement gives us: J = kg * m * s-2 * mOr simplified: J = kg * m2 * s-2Transferring back to units of quantity: W = m * v2How can that be correct? Obviously Ekin = 1/2 * m * v2. Where did that...
Hello all! First post here, and I apologize in advance if this question belongs elsewhere- I'll learn as I go.I am interested in knowing more about energy and performing work. In particular, if I have an object I would like to lift, how much energy I need to expend to do that.Take for...
I have the definition of change in internal energy.$$ \Delta U = Q - W $$I can get the work by$$ W = \int_{V_1}^{V_2} p dV = p \Delta V $$however the pressure isn't constant so this wont do.## W ## is work done by the gas and ## Q ## is amount of heat energy brought into the system.I'm...
1. Homework StatementThe system is released from rest with no slack in the cable and with the spring stretched 225 mm. Determine the distance s traveled by the 3.2-kg cart before it comes to rest (a) if m approaches zero and (b) if m = 2.5 kg. Assume no mechanical interference and no friction...
HiI have the educational background to do a few jobs I have seen posted as "data scientist" or "data engineer". However, I keep getting told that to get a job like that with my background, in mathematics, I need to post code examples on Github. However, I always feel like my ideas are either...
1. Homework StatementA well has a diameter of 6 m and a height of 15 m.2/3 of volume of the well is filled with water.A pump can vacate 1/3 of volume of water from the well in 7 minutes 20 seconds.What is the power of the pump?2. Homework EquationsW=mgh, P=W/t3. The Attempt at a Solution...
1. Homework StatementA ball of mass m=0.300 kg is connected by a strong massless rod of length L = 0.800 m to a pivot and held in place with the rod vertical. A wind exerts constant force F to the right on the ball as shown below. The ball is released from rest. The wind makes it swing up to...
hello,I had made an DIY alpha type Stirling engine for my physics project and now I have to write an report about the relationship between the heat given to the engine and the motion of the wheel.I had searched a lot about Stirling engines and I learned about work, energy, efficiency...
1. Homework StatementA sledge loaded with bricks has a total mass of 18.0 kg and is pulled at constant speed by a rope inclined at 20.0° above the horizontal. The sledge moves a distance of 20.0 m on a horizontal surface. The coefficient of kinetic friction between the sledge and surface is...
1. Homework Statementa) A point charge + q is placed at the origin. By explicitly calculating the relevant line integral, determine how much external work must be done to bring another point charge + q from infinity to the point r2= aŷ ? Consider the difference between external work and...
The problem is asking me to find the final speed of a 1100 kg car traveling at 24 m/s through 18m of mud, where the resistive force on the car is 17000 N.I don't actually know how to go about doing this, so any pointers in the right direction would be super helpful.
Is there an important difference between total work and external work?My knowledge would be that total work a.k.a. net work on a system would be equal to the change in kinetic energy of that system and equal to the line integral of the net force on the system dotted with the differential...
I'm trying to determine the work done by a person as they pull a luggage up a ramp. The ramp has a height of 5 m and the distance the person walks up is 20 m. The weight of the bag is also 10 kg.I am trying to compare the work done by pulling the luggage up a ramp to carrying an equally heavy...
Work W done by moving the object with force F for distance s is W = Fs.When I move the same object the same distance but with twice the acceleration, doesthe work done gets also doubled?By F=ma, doubling the acceleration yields m*2*a = 2F -> 2Fs = 2W.I've mostly read, that if I want to...
1. Homework StatementYou are driving with your car (of total mass: 1.2tonnes) with a speed of v=50km/h, until you see an obstacle.a) What is the kinetic energy of the car?b) When you start to brake, there is still 15m until the obstacle. What must be the size of the friction coefficient...
1. Homework StatementWe shot a projectile with mass ##m## and velocity ##v_0## with angle ##\phi## it collide with a box with mass ##M## at the maximum height of its path. Then, they both start to move with another speed. (We define ##t=0## at this time) (Completely Inelastic Collision). The...
1. Homework StatementQuestion from Fundamentals of Physics (Halliday, Resnick, Walker)This figure below shows a cord attached to a cart that can slide along a frictionless horizontal rail aligned along an x axis. The left end of the cord is pulled over a pulley, of negligible mass and...
1. Homework StatementPlease look at the attached screenshot.This problem is really confusing for me and I can't seem to make much sense out of it.2. Homework EquationsEi = Ef3. The Attempt at a SolutionAs you can see, I did get (a). (The other checkmarks, I guessed — there were...
I am trying to solve problems where I calculate work do to force along paths in cylindrical and spherical coordinates.I can do almost by rote the problems in Cartesian: given a force ##\vec{F} = f(x,y,z)\hat{x} + g(x,y,z)\hat{y}+ h(x,y,z)\hat{z}## I can parametricize my some curve ##\gamma...
1. Homework StatementAn object with mass 100 kg moved in outer space. When it was at location < 9,-24,-4 > its speed was 3.5 m/s. A single constant force < 250,400,-170 > N acted on the object while the object moved from location < 9,-24,-4 > m to location < 15,-17,-8 > m. Then a different...
1. Homework Statementyou push a box out of a carpeted room and along a hallway with a waxed linoleum floor. While pushing the crate 2 m out of the room you exert a force of 34 N; while pushing it 6 m along the hallway you exert a force of 13 N. To slow it down you exert a force of 40 N through...
1. Homework StatementA force varies with time according to the expression F=aΔt, where a = 2.0 N/s.From this information, can you determine the work done on a particle that experienced this force over a displacement of 0.50 m?2. Homework EquationsW = F*dVf = Vo + aΔtF = ma3. The...
1. Homework StatementDerive an expression for the change of temperature of a solid material that is compressed adiabatically and reversible in terms of physical quantities.(The second part of this problem is: The pressure on a block of iron is increased by 1000 atm adiabatically and...
|
Hey guys! I built the voltage multiplier with alternating square wave from a 555 timer as a source (which is measured 4.5V by my multimeter) but the voltage multiplier doesn't seem to work. I tried first making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler for example and the voltage starts from like 6V and starts to go down around 0.1V per second.
Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec.
But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them.
So what did the guys in the EE chat say...
The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you...
A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it.
Hi all! There is theorem that links the imaginary and the real part in a time dependent analytic function. I forgot its name. Its named after some dutch(?) scientist and is used in solid state physics, who can help?
The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names...
I have a weird question: The output on an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V...and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V right? Since the voltage doubler will output in DC.
I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I check it after that and the astable multivibrator works.
I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weight 1 billion tons....
something so "simple" turns out to be hard as duck
In peskin book of QFT the sum over zero point energy modes is an infinite c-number, fortunately, it's experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zro pt energy is the same as the ground state, isn't it?
If so, always it is possible to substract a finite number (higher exited state for e.g.) from this zero point enrgy (which is infinte), it follows that, experimentally we always obtain infinte spectrum.
@AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics
I have recently come up with a design of a conceptual electromagntic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ...
I rememeber that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array "
In peskin book of QFT the sum over zero point energy modes is an infinite c-number, fortunately, it's experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zro pt energy is the same as the ground state, isn't it? If so, always it is possible to substract a finite number (higher exited state for e.g.) from this zero point enrgy (which is infinte), it follows that, experimentally we always obtain infinte spectrum.
@ACuriousMind What confuses me is the interpretation of Peskin to this infinite c-number and the experimental fact
He said, the second term is the sum over zero point energy modes which is infnite as you mentioned. He added," fortunately, this energy cannot be detected experm., since the experiments measure only the difference between from the ground state of H".
@ACuriousMind Thank you, I understood your explanations clearly. However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite and this is the confusion) and a higher level.
It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale.
according to the author, the energy differenc is always infinite according to two fact. the first is, the ground state energy is infnite, secondly, the energy differenc is defined by substituting a higher level energy from the ground state one.
@enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: Causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization
The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms i.e. dividing by zero to get infinities also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles, overall it's an infinite constant added to the particle that we throw away just as in QFT
@bolbteppa I understand the idea that we need to drop such terms to be in consistency with experiments. But i cannot understand why the experiment didn't predict such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
|
我亲爱的领导让我花一周多的时间看了四篇论文,于是就有了这篇文章。
input image piece-wise planar segmentation reconstructed depthmap texture-mapped 3D model ScanNet [1,3,4] SYNTHIA [2,3] Cityscapes [2] NYU Depth Dataset [1,3,4] Labeling method ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. annotated with 3D camera poses, surface reconstructions, and instance-level semantic segmentations.
SYNTHIA: The SYNTHetic collection of Imagery and Annotations. 8 RGB cameras forming a binocular 360º camera, 8 depth sensors Cityscapes: Benchmark suite and evaluation server for pixel-level and instance-level semantic labeling. video frames / stereo / GPS / vehicle odometry
NYU Depth Dataset: is recorded by both the RGB and Depth cameras from the Microsoft Kinect. Dense multi-class labels with instancenumber (cup1, cup2, cup3, etc). Raw: The raw rgb, depthand accelerometerdata as provided by the Kinect. Toolbox: Useful functions for manipulating the data and labels.
Difficulty in detect planes from the 3D point cloud by using
J-Linkage method.
(c-d): Plane fitting results generated by J-Linkage with δ = 0.5 and δ = 2, respectively.
ScanNet: 1. Fit plans to a consolidated mesh (merge planes if (normal diff < 20° && distance < 5cm) 2. Project plans back to individual frames
SYNTHIA: 1. Manually draw a quadrilateral region 2. Obtain the plane parameters and variance of the distance distribution 3. Find all pixels that belong to the plane by using the plane parameters and the variance estimate
Cityscapes: 1. “planar” = {ground, road, sidewalk,parking, rail track, building, wall, fence, guard rail, bridge, and terrain} 2. Manually label the boundary of each plane using polygons [CVPR 2018] Liu, Chen, et al. Washington University in St. Louis, Adobe. The first deep neural architecture for piece-wise planar depthmap reconstruction from a RGB image.
DRN: Dilated Residual Networks (2096 channels) CRF: Conditional Random Field Algorithm
Step Loss Plane parameter: $$L^P=\sum_{i=1}^{K^*}min_{j\in[1,K]}\Vert P_i^*-P_j \Vert_2^2 \;\;\; (K = 10)$$ Plane segmentation: softmax cross entropy $$L^M=\sum_{i=1}^{K+1}\sum_{p \in I}(1(M^{*(p)}=i)log(1-M_i^{(p)}))$$ Non-planar depth: ground-truth <==> predicted depthmap $$L^D=\sum_{i=1}^{K+1}\sum_{p\in I}(M_i^{(p)}(D_i^{(p)}-D^{*(p)})^2)$$ - $M^{(p)}\text{: probability of p belonging to the } i^{th} \text{ plane ;}\\ D^{(p)} \text{: depth value at pixel }p \text{ ;}\\ \text{*: GT .}$ [ECCV 18] Fengting Yang and Zihan Zhou Pennsylvania State University. Recovering 3D Planes from a Single Image. Propose a novel plane structure-induced loss
Step Loss Plane loss $$L_{reg}(S_{i})=\sum_{q}^{}-z(q)\cdot log(p_{plane}(q))-(1-z(q))\cdot log(1-p_{plane}(q))$$ Loss $$L=\sum_{i=1}^{n}\sum_{j=1}^m\left(\sum_{q}S_{i}^{j}(q)\cdot \vert(n_{i}^{j})^{T}Q-1\vert\right)+\alpha \sum_{i=1}^{n}L_{reg}(S_{i})$$
[CVPR2019] Liu, Chen, et al.
NVIDIA, Washington University in St. Louis, SenseTime, Simon Fraser University
[CVPR 2019] Yu, Zehao, et al.
ShanghaiTech University, The Pennsylvania State University Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding
Step Loss Segmentation: balanced cross entropy $$L_{S}=-(1-w)\sum_{i\in\mathcal{F}}^{}\log p_{i}-w\sum_{i\in\mathcal{B}}^{}\log(1-p_{i})$$ Embedding: discuiminative loss $$L_{E}=L_{pull}+L_{push}$$ Per-pixel plane: L1 loss $$ L_{PP}=\frac{1}{N}\sum_{i=1}^{N}\vert n_{i}-n^{*}_{i}\ \vert $$ Instance Parameter: $$L_{IP}=\frac{1}{N\tilde{C}}\sum_{j=1}^{\tilde{C}}\sum_{i=1}^{N}S_{ij}\cdot\vert n_{j}^{T}Q_{i}-1\vert $$ Loss $$L=L_{S}+L_{E}+L_{PP}+L_{IP}+…$$ Embedding: associative emvedding (End-to-End Learning for Joint Detection and Grouping) ;
An image can contain an arbitrary number of instances The labeling is permutation-invariant: it does not matter which specific label an instance gets, as long as it is different from all otherinstance labels.
$$L_{E}=L_{pull}+L_{push}$$
$$where$$
$$L_{pull}=\frac{1}{C}\sum_{c=1}^{C}\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\max\left(\lVert\mu_{c}-x_{i}\rVert-\delta_{\textrm{v}},0\right)$$
$$
L_{push}=\frac{1}{C(C-1)}\mathop{\sum_{c_{A}=1}^{C}\sum_{c_{B}=1}^{C}}_{c_{A}\neq c_{B}}\max\left(\delta_{\textrm{d}}-\lVert\mu_{c_{A}}-\mu_{c_{B}}\rVert,0\right) $$ Here, $C$ is the number of clusters $C$ (planes) in the ground truth, $N_c$ is the number of elements in cluster $c$, $x_i$ is the pixel embedding, $μ_c$ is the mean embedding of the cluster $c$, and $δ_v$ and $δ_d$ are the margin for “pull” and “push” losses, respectively. Instance Parameter Loss:
$$L_{IP}=\frac{1}{N\tilde{C}}\sum_{j=1}^{\tilde{C}}\sum_{i=1}^{N}S_{ij}\cdot\vert n_{j}^{T}Q_{i}-1\vert$$ $S\text{: instance segmentation map}\\n_{j}\text{: predicted plane param}\\Q_i\text{: the 3D point at pixel } i $
|
RD Sharma Solutions Class 8 Chapter 3 Exercise 3.9
Using square root table, find the square roots of the following:
1.) 7
Answer:
From the table, we directly find that square root of 7 is 2.646.
2.) 15
Answer:
Using the table to find \(\sqrt{3}\)
\(\sqrt{15}\)
= 1.732 x 2.236 = 3.873
3.) 74
Answer:
Using the table to find \(\sqrt{2}\)
\(\sqrt{74}\)
= 1.414 x 6.083 = 8.602
4.) 82
Answer:
Using the table to find \(\sqrt{2}\)
\(\sqrt{82}\)
= 1.414 x 6.403 = 9.055
5.) 198
Answer:
Using the table to find \(\sqrt{2}\)
\(\sqrt{198}\)
= 1.414 x 3 x 6.403 = 14.070
6.) 540
Answer:
Using the table to find \(\sqrt{3}\)
\(\sqrt{540}\)
= 2 x 3\(\sqrt{3}\)
7.) 8700
Answer:
Using the table to find \(\sqrt{3}\)
\(\sqrt{8700}\)
= 1.414 x 6.403 x 10 = 93.27
8.) 3509
Answer:
Using the table to find \(\sqrt{29}\)
\(\sqrt{3509}\)
= 11 x 5.3851 = 59.235
9.) 6929
Answer:
Using the table to find \(\sqrt{41}\)
\(\sqrt{6929}\)
= 13 x 6.403 = 83.239
10.) 25725
Answer:
Using the table to find \(\sqrt{3}\)
\(\sqrt{25725}\)
= 1.732 x 5 x 7 x 2.646 = 160.41
11.) 1312
Answer
Using the table to find \(\sqrt{2}\)
\(\sqrt{1312}\)
= 2 x 2 x 1.414 x 6.4031 = 36.222
12.) 4192
Answer:
\(\sqrt{4192} = \sqrt{2\times 2\times 2\times 2\times 2\times 131}\)
= \(2\times 2\sqrt{2}\times \sqrt{131}\)
The square root of 131 is not listed in the table. Hence, we have to apply long division to find it.
Substituting the values:
= 2 x 2 x 11.4455 (using the table to find \(\sqrt{2}\)
13.) 4955
Answer:
On prime factorization:
4955 is equal to 5 x 991, which means that \(\sqrt{4955}= \sqrt{5}\)
The square root of 991 is not listed in the table; it lists the square roots of all the numbers below 100. Hence, we have to manipulate the number such that we get the square root of a number less than 100. This can be done in the following manner:
\(\sqrt{4955}= \sqrt{49.55 \times 100}\)
Now, we have to find the square root of 49.55.
We have: \(\sqrt{49} = 7\, and\, \sqrt{50} = 7.071\)
Their difference is 0.071. Thus, for the difference of 1 (50 – 49), the difference in the values of the square roots is 0.071.
For the difference of 0.55, the difference in the values of the square roots is:
0.55 x 0.0701 = 0.03905
\(∴ \sqrt{49.55} = 7 + 0.03905 = 7.03905\)
Finally, we have:
\(\sqrt{49.55} = \sqrt{49.55}\times 10 = 7.03905 x 10 = 70.3905\)
14.) \(\frac{99}{144}\)
Answer:
\(\sqrt{\frac{99}{144}} = \frac{\sqrt{3\times 3\times 11}}{\sqrt{144}}\)
= \(\frac{3 \sqrt{11}}{12}\)
15.) \(\frac{57}{169}\)
Answer:
\(\sqrt{\frac{57}{169}} = \frac{\sqrt{3 \times 19}}{\sqrt{169}}\)
= \(\frac{1.732 \times 4.3589}{13}\)
16.) \(\frac{101}{169}\)
\(\sqrt{\frac{101}{169}} = \frac{\sqrt{101}}{\sqrt{169}}\)
The square of 101 is not listed in the table. This is because the table lists the square roots of all the numbers below 100.
Hence, we have to manipulate the number such that we get the square root of a number less than 100.
This can be done in the following manner:
\(\sqrt{101} = \sqrt{1.01 \times 100} = \sqrt{1.01}\times 10\)
Now, we have to find the square root of 1.01.
We have:
\(\sqrt{1} = 1 and \sqrt{2} = 1.414\)
Their difference is .414.
Thus, for the difference of 1(2 -1 ), the difference in the values of the square roots is .414.
For the difference of .01, the difference in the values of the square roots is:
0.1 x 0.414 = 0.00414
\(∴ \sqrt{1.01} = 1 + .00414 = 1.00414\)
\(\sqrt{101}=\sqrt{1.01}\times 10 = 1.00414 \times 10 = 10.0414\)
Finally,
\(\sqrt{\frac{101}{169}}\)
This value is really close to the one from the key answer.
17.) 13.21
Answer:
From the square root table, we have:
\( \sqrt{13} \)
Their difference is 0.136.
Thus, for the difference of 1 (14 – 13), the difference in the values of the square roots is 0.136.
For the difference of 0.21, the difference in the values of their square roots is:
0.136 x 0.21 = 0.2856
\(∴ \sqrt{13.21}= 3.606 + 0.02856 \approx 3.635\)
18.) 21.97
Answer:
We have to find \(\sqrt{21.97}\)
From the square root table, we have:
\(\sqrt{21} = \sqrt{3}\times \sqrt{7} = 4.583 and \sqrt{22} = \sqrt{2}\times \sqrt{11} = 4.690\)
Their difference is 0.107.
Thus, for the difference of 1 (22 – 21), the difference in the values of the square roosts is 0.107.
For the difference of 0.97, the difference in the values of their square roots is:
0.107 x 0.97 = 0.104
\(∴ \sqrt{21.97} = 4.583 + 0104 \approx 4.687\)
19.) 110
Answer:
\(\sqrt{110} = \sqrt{2}\times \sqrt{5}\times \sqrt{11}\)
= 1.414 x 2.236 x 3.317 (Using the square root table to find all the square roots) = 10.488
20.) 1110
Answer:
\(\sqrt{1110} = \sqrt{2}\times \sqrt{3}\times \sqrt{5}\times \sqrt{37}\)
= 1.414 x 1.732 x 2.236 x 6.083 (using the table to find all the square roots) = 33.312
21.) 11.11
Answer:
We have:
\(\sqrt{11} = 3.317 and \sqrt{12}= 3.464\)
Their difference is 0.1474.
Thus, for the difference of 1 (12 – 11), the difference in the values of the square roots is 0.1474.
For the difference of 0.11, the difference in the values of the square roots is:
0.11 x 0.1474 = 0.0162
\(∴ \sqrt{11.11} = 3.3166 + 0.0162= 3.328\approx 3.333\)
22.) The area of a square field is 325 m
2. Find the appropriate length of one side of the field.
Answer:
The length of one side of the square field will be the square root of 325.
\(∴ \sqrt{325} = \sqrt{5\times 5\times 13}\)
= 5 x \(\sqrt{13}\)
= 5 x 3.605 = 18.030
Hence, the length of one side of the field is 18.030 m.
23.) Find the length of a side of a square, whose area is equal to the area of a rectangle with sides 240m and 70 m.
Answer:
The area of the rectangle = 240 m x 70 m =16800m
2
Given that the length of the square is equal to the area of the rectangle.
Hence, the area of the square will also be 16800 m
2.
The length of one side of a square is the square root of its area.
\(∴ \sqrt{16800} = \sqrt{2\times 2\times 2\times 2\times 2\times 3\times 5\times 5\times 7}\)
= 2 x2 x 5 \(\sqrt{2\times 3\times 7}\)
= 20 \(\sqrt{42}\)
Hence, the length of one side of the square is 129.60 m.
|
I don't understand what you say about derivatives and I will assume that you want the following result.
Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ be a function such that for every $a, b \in \mathbb{R}$, $x \mapsto f(x,b)$ and $y \mapsto f(a,y)$ are polynomial functions; then $f$ is a polynomial in two variables.
This is not obvious because we don't have a good representation of $f$. The idea of the proof is to show that $f$ coincides with a polynomial at sufficiently many points.
1) There exists an infinite set $I \subset \mathbb{R}$ and an integer $N$ such that for any $a,b \in I$, the polynomials $y \mapsto f(a,y)$ and $x \mapsto f(x,b)$ are of degree bounded by $N$. This follows from the fact that $\mathbb{R}$ is not countable: if $K_n$ is the set of $z \in \mathbb{R}$ such that $x\mapsto f(x,z)$ and $y \mapsto f(z,y)$ are of degree bounded by $n$, then $\cup_{n\in \mathbb{N}} K_n = \mathbb{R}$ and one of the $K_n$ must be infinite (in fact uncountable but I don't need it).
2) Let $I$ and $N$ be as in the previous point. Let $z_1,\dots,z_{N+1}$ be $N+1$ arbitrary elements in $I$. I claim that there exists a polynomial $Q$ in two variables, of degree in $x$ and $y$ at most $N$ such that $Q$ takes the same values than $f$ in all points of the form $(z_i,z_j)$, for $1 \leq i,j \leq N+1$. Indeed, this is the analog of Lagrange interpolation in two variables. This polynomial $Q$ is defined by:
$$ Q(x,y) = \sum_{i=1}^{N+1} \sum_{j=1}^{N+1} f(z_i,z_j) \prod_{i' \neq i} \prod_{j' \neq j} \frac{(x-z_{i'})(y-z_{j'})}{(z_i-z_{i'})(z_j-z_{j'})}.$$
3) I claim that $f(x,y) = Q(x,y)$ everywhere. First, $y \mapsto f(z_i,y)$ and $y \mapsto Q(z_i,y)$ are both polynomial of degree bounded by $N$, which coincide in $N+1$ points. This shows that $f = Q$ on sets of the form $z_i \times \mathbb{R}$. Now, take any $y$ in $I$. Then $x \mapsto f(x,y)$ and $x \mapsto Q(x,y)$ are polynomial of degree bounded by $N$ and they are equal for $x$ equal to one of the $z_i$. So they are equal everywhere. This shows that $f = Q$ on $\mathbb{R} \times I$. Finally, consider an arbitrary $x \in \mathbb{R}$. Then $y \mapsto f(x,y)$ and $y \mapsto Q(x,y)$ are both polynomial, equal when $y \in I$. Since $I$ is infinite, they are equal everywhere. This concludes the proof of $Q = f$.
|
I'm encontering fat Cantor sets for the first time, and I found the formula for the length of the complement on Wikipedia (you know, like you do) as
$\mu(I \setminus C_\alpha) = \alpha \sum_{n=1}^\infty (2\alpha)^n = \frac{\alpha}{1-2\alpha}$.
This makes sense and is a pretty intuitive extension of the normal Cantor set formula, but
also it seems like I can make it arbitrarily large for an $\alpha$ arbitrarily close to $\frac{1}{2}$, which doesn't make much sense for a subset of $[0,1]$. That said, given the construction of the set, I'm not sure why I'm not allowed to take intervals of length $3^{-n} < \alpha^n < 2^{-n}$ and still use this formula to compute the length of the complement.
I feel like I'm missing someting small and silly, so if anyone can set me right on this, I'd appreciate it.
The set $C_\alpha$ is defined by saying that at the $n$th step, you remove a middle interval of length $\alpha^n$ from each of the remaining intervals. This only makes sense if each of the remaining intervals actually has length at least $\alpha^n$. That will be false for sufficiently large $n$ for any $\alpha>1/3$, and so $C_\alpha$ is only defined for $\alpha\leq 1/3$.
For an explicit example, if $\alpha=0.4$, then we first remove an interval of length $0.4$ to leave $[0,0.3]$ and $[0.7,1]$. For subsequent steps, let's keep track of just the leftmost of our intervals. We next remove intervals of length $0.16$ to leave $[0,0.07]$ as our leftmost interval. Then we remove intervals of length $0.064$ to leave $[0,0.003]$ as our leftmost interval. Next we need to remove intervals of length $0.0256$, but we can't, because our remaining intervals have length $0.003$!
Note that in fact you can see that the intervals will always be long enough iff the sum $ \sum_{n=1}^\infty \alpha(2\alpha)^n$ is at most $1$, since each new term we add represents the total length of intervals we need to remove at the next step, and there is enough length left iff the resulting partial sum is still at most $1$. So that's where the bound $\alpha\leq 1/3$ comes from: when $\alpha=1/3$, the sum converges to $1$, and any larger $\alpha$ makes it greater than $1$ so the intervals eventually become too small.
The formula doesn't hold for $a > 1/3$. If you try to remove more than $2^n (1/3)^{n+1}$ at each step $n$, then you will remove the entire interval $[0,1]$ after finitely many steps.
For example, consider $a = 2/5$.
At $n=0$ we remove $2^0 (2/5)^1 = 2/5$. The remaining length is $3/5 = 0.6$.
At $n=1$ we remove $2^1 (2/5)^2 = 0.32$. The remaining length is $0.6 - 0.32 = 0.28$.
At $n=2$ we remove $2^2 (2/5)^3 = 0.256$. The remaining length is $0.28 - 0.256 = 0.024$.
At $n=3$ we remove $2^3 (2/5)^4 = 0.2048$. Oops, no we don't, because that's more than the remaining length.
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be an useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
|
Let $M$ be a compact finite-dimensional manifold and $f\colon M\to M$ be a diffeomrphism. By $P_n(f)$ we denote the number of periodic points of $f$ with period $n$, that is, the number of fixed points of $f^n$.Katok Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Publications Mathématiques de l'IHÉS 51, no. 1 (1980), pp. 137-173 showed that the topological entropy of a $\mathcal{C}^{1+\varepsilon}$ diffeomorphism of a compact surface obeys the following inequality: $$\limsup_{n\to\infty} \frac{\log P_n(f)}{n}\ge h(f).$$He also conjectured (see paragraph 5 page 141 of the article cited above) that this inequality holds in any dimension generically in the $\mathcal{C}^1$-topology. In other words:
There is a dense $G_\delta$ set $\mathcal{G} \subset \text{Diff}^1(M)$ such that for any $f\in\mathcal{G}$ one has$$\limsup_{n\to\infty} \frac{\log P_n(f)}{n}\ge h(f).$$Is this conjecture still open? I would be grateful for any references.
Let $M$ be a compact finite-dimensional manifold and $f\colon M\to M$ be a diffeomrphism. By $P_n(f)$ we denote the number of periodic points of $f$ with period $n$, that is, the number of fixed points of $f^n$.Katok Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Publications Mathématiques de l'IHÉS 51, no. 1 (1980), pp. 137-173 showed that the topological entropy of a $\mathcal{C}^{1+\varepsilon}$ diffeomorphism of a compact surface obeys the following inequality: $$\limsup_{n\to\infty} \frac{\log P_n(f)}{n}\ge h(f).$$He also conjectured (see paragraph 5 page 141 of the article cited above) that this inequality holds in any dimension generically in the $\mathcal{C}^1$-topology. In other words:
Nice question. I think it has not been directly adressed in the literature, but combining known results it seems that a positive answer to that question can be given. Still some details must be carried out (which I did not) to be sure.
The main point is that $C^1$-
far from homoclinic tangencies the result must be true due to the existence of a maximal entropy measure (see here, here, here and references therein) where the splitting has one dimensional center (and so Katok type results can be carried out, see here). There is a subtelty here which is that the central exponent may be a priori equal to zero, in which case one would like to use the genericity hypothesis to show that the estimate holds (I think the concept of principal simbolic extension in the above references should be helpful for this). Here there might be some work to be done.
In the open set where homoclinic tangencies are dense, results of Kaloshin and from Bonatti-Diaz-Fisher suggest that there should be superexponential growth of periodic orbits, but some care must be taken since the precise statements do not work for this case. On the other hand, since what is needed is way less than superexponential growth, I don't see this part as being the hard one.
|
Ex.12.2 Q11 Areas Related to Circles Solution - NCERT Maths Class 10 Question
A car has two wipers which do not overlap. Each wiper has a blade of length \(\text{25 cm}\) sweeping through an angle of \(115^\circ. \)Find the total area cleaned at each sweep of the blades.
Text Solution What is known?
A car has \(2\) wipers which do not overlap. Each wiper has a blade length \(= 25 \,\rm{cm}\) and sweeps through an angle \(\left( \theta \right) = {150^\circ}\)
What is unknown?
Total area cleaned at the sweep of the blade of the \(2\) wipers.
Reasoning:
Visually it is clear that
Area cleaned at the sweep of blades of each wiper \(=\) Area of the sector with angle \(115^\circ\)at the centre and radius of the circle \(\text{25 cm}\)
Since there are \(2\) wipers of same blade length and same angle of sweeping. Also there is no area of overlap for the wipers.
\(\therefore \;\)Total area cleaned at each sweep of the blades \(=\) \(\begin{align}2\ \times\end{align}\) Area cleaned at the sweep of each wiper.
Steps:
Area cleaned at the sweep of blades of each wiper = Area of the sector of a circle with radius \(\text {25 cm}\) and of angle \(\begin{align}115^{\circ}\end{align}\)
\[\begin{align}& = \frac{\theta }{{{{360}^\circ }}} \times \pi {r^2}\\& = \frac{{{{115}^\circ }}}{{{{360}^\circ }}} \times \pi \times 25 \times 25\\& = \frac{{23}}{{72}} \times 625\pi \end{align}\]
Since there are \(2\) identical blade length wipers
\(\therefore\;\)Total area cleaned at each sweep of the blades
\[\begin{align}& = 2 \times \frac{{23}}{{72}} \times 625\pi\\ & = 2 \times \frac{{23}}{{72}} \times \frac{{22}}{7} \times 625\\& = \frac{{23 \times 11 \times 625}}{{18 \times 7}}\\ &= \frac{{158125}}{{126}}\,{\text{c}}{{\text{m}}^{\text{2}}}\\ & = 1254.96\,\,\,{\text{c}}{{\text{m}}^{\text{2}}}\end{align}\]
|
If water evaporates at room temperature because a small percentage of the molecules have enough energy to escape into the air, then why does a kitchen counter with a small amount of water eventually evaporate completely when at room temperature?
As your small percentage of molecules with high enough kinetic energy evaporates, the remaining liquid water cools down. But in doing so, it drains heat from its surroundings and thus stays at room temperature (or close to it), so there is still some fraction of molecules that can evaporate, and they do so, and more heat is transferred from the surroundings, and so it continues until all water is gone.
This happens because the rate of evaporation is higher than the rate of condensation.
$$\ce{ H2O (l) <=>> H2O (g) }$$
This is also due to the fact that you have an open system: matter and energy can be exchanged with its surroundings. The evaporated water can evaporate from the glass and condense somewhere else.
The water on the surface does not exist in isolation it is in contact with the air and with the surface. Random higher energy molecules in the surface and in the air will add energy by collision to the water molecules leading some of them to escape the liquid (evaporate).
This is why evaporating water leads to cooling of the air and surfaces around it.
Let's say, $q\in {]}0,100{]}$ is the minimal net percentage of the volume (or of the mass) that evaporates at any second $t$ (for every $t>0$). Saying "net", we assume that more water molecules leave the kitchen counter than return, and that the fraction of the molecules leaving the surface in relation to the number of molecules getting back has a constant, positive lower bound. (Other answers explain why it is likely to be so in kitchen conditions.)
Then, at most $100-q$ percent are left per time unit. So, after $t$ time units, the amount of water left will be at most $\mathrm{a_0}\Bigl(\frac{100-q}{100}\Bigr)^t$, where $a_0$ is the initial amount. Since $100-q<100$, we obtain $$\lim_{t\to+\infty}\,\mathrm{a_0}\Bigl(\frac{100-q}{100}\Bigr)^t \ = \ 0\,.$$ In particular, after a certain point of time, the amount of water will be lower than the minimal possible amount (the volume of one $\mathrm{H}_2\mathrm{O}$ molecule or its mass, simplified, of course).
If the assumption made is not valid (say, due to great humidity somewhere in Asia), the result would be wrong: the water will NOT fully evaporate.
(An aside has to be made. Note that the above mathematical treatment is a gross simplification. To get a more realistic evaporation model, we should take into account that the evaporation happens only from the surface, and not from the whole volume, and that both the surface and the volume change with time. Also, bear in mind that even within one second, the evaporation rate changes.)
Water always evaporates when above 0 degrees Celsius at normal atmospheric pressure.
Which means when above 0 deg there always are molecules with high enough energies to leave the liquid phase.
protected by orthocresol♦ Oct 16 '17 at 15:23
Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
Would you like to answer one of these unanswered questions instead?
|
Markdown is a lightweight markup language with plain text formatting syntax. It is designed so that it can be converted to HTML and many other formats using a tool by the same name.
from Wikipedia
Introducing the markdown formatting ⛷
# h1## h2### h3standard
*Italic type***Bold**~~Negative~~
Fold the long sentences.
<details><summary>Boostnote is a notepad corresponding to markdown notation, which is a tool for organizing and sharing information.</summary>- Features - <br>· Search function to find memos in one shot· Supports markdown notation <br>· Support for Mac, Windows, Linux, iOS, Android <br>· Export and import to Plain text (.txt), Markdown (.md) format <br>· Supports PDF saving <br>· Can be used offline <br>· Synchronize to dropbox etc. with setting <br>· Supports theme colors and numerous fonts <br></details>
- List 1- List 2* List 3
Put a text on the left and a url on the right.
[Boostnote](https://boostnote.io)
- [x] Task 1- [ ] Task 2
> Quotation> Quotation Quotation
* * ****---

Render: function () { Return ( <div className="commentBox"> <h1> Comments </h1> <CommentList data={this.state.data} /> <CommentForm onCommentSubmit={this.handleCommentSubmit} /> </div> );}
|Fruits|Price||:--|:--||Apple|1$||Grapes|4$||Orange|2$||Lemon|1$||Peach|3$||Melon|20$|
These are the basic markdown formatting.
In addition to above, you can also complex formatting like following in Boostnote.
Mathematical formatting.
$$$\mathrm{e}^{\mathrm{i}\theta} = \cos(\theta) + \mathrm{i}\sin(\theta)$$$
st=>start: Start:>http://www.google.com[blank]e=>end:>http://www.google.comop1=>operation: My Operationsub1=>subroutine: My Subroutinecond=>condition: Yes or No?:>http://www.google.comio=>inputoutput: catch something…st->op1->condcond(yes)->io->econd(no)->sub1(right)->op1
Title: Here is a titleA-> B: Normal lineB -> C: Dashed lineC -> D: Open arrowD -> A: Dashed open arrow
That's all. Enjoy Markdown ;)
|
As Robert writes, the answer is positive for
round circles, which follows from the fact that Möbius transformations map circles to circles (and you can map any circle contained in the unit disc to a circle with its centre at 0).
As Neil mentions, the answer is negative for arbitrary curves. Indeed, this is trivial if the curves in question are not analytic. If the curves are analytic, but not round circles, then you
can extend the conformal map across the boundary to a larger annulus by Schwarz reflection, but not to the whole disc - again, because Möbius transformations map circles to circles, and any conformal isomorphism between discs is Möbius!
Finally, to respond to the new version of the question raised in your comment - Suppose that your annulus has modulus M, and let $\psi$ be a conformal map that takes the interior of $C_2$ to the unit disc,
normalised in such a way that $\psi^{-1}(0)\notin \Omega$. (This assumption was missing from your question, but without it it is clear that you cannot say anything.)
Consider the annulus $A := 1/\psi(\Omega)$. This is a doubly connected region of modulus M, separating the unit circle from $\infty$. Let $R>1$ be the smallest number such that $A$ omits a point of modulus $R$; wlog this point is $R$ itself. What you are asking for is a lower bound on $R$.
It is well-known that, among all such annuli, the
Grötzsch annulus, which is the complement of the closed unit disc and the segment $[R,\infty)$, has the largest modulus, which we denote $M(R)$. (See Ahflors, "Conformal Invariants".) Hence we see that $M(R)\geq M$.
In other words, let $\rho>1$ be such that $M(\rho) = M$. (Such $\rho$ exists because $M$ is clearly monotone and continuous, and tends to $0$ and $R\to 1$ and to $\infty$ as $R\to\infty$.) Then $R\geq \rho$, and hence$$\psi(\Omega) \subset D(0,1/\rho),$$which answers your question. Clearly this bound is sharp, by construction.
There are explicit estimates for the modulus of the Grötzsch annulus; see e.g. Ahlfors's book cited above.
Another interpreation of your question can be given when your inner boundary $C_1$ is not a round circle, but a
quasicircle. (This is true whenever the curve is smooth.) In this case, you can extend the conformal isomorphism to the round annulus to a map that will in general not be conformal, but at least be quasiconformal, with constants depending on the constants in the quasicircle definition. Quasiconformal maps satisfy some nice properties, so again these will give you some control over the image of your curve.
|
Imagine a decreasing sequence of (positive) radii $r_1 > r_2 > r_3 > \cdots$ and a series of nested circles $C_1 \supset C_2 \supset C_3 \supset \cdots$ with these radii, initially each resting on the $x$-axis tangent at $(0,0)$. Each is assigned a rolling speed $s_1, s_2, s_3, \ldots$, the amount of arc length that $C_i$ rolls on the inside of $C_{i-1}$ per unit time. (Positive $s_i$ represents clockwise spinning of $C_i$; negative, counterclockwise.) $C_1$ rolls on the $x$-axis, which can be considered $C_0$.
Here is an example, with $(r_1, r_2, r_3)=(1, \frac{1}{2}, \frac{1}{4})$and $(s_1, s_2, s_3)=(1,2,3)$, with the track of a point onthe third circle highlighted:
Call the curve that is the track of the $n$-th circle a rolling epicycle curve, or just a rolling curve.My question is:
Q1.What is the class of functions on some interval $[0,X]$ that can be approximated by some rolling curve?
Say that a function $f(x)$ is
approximated by a rolling curveif, for any $\epsilon > 0$, a curve may be foundthat remains within an $\epsilon$-tube around $f$.(One can easily substitute other reasonable definitions ofapproximation.)To be specific, we could insist that $f(0)=0$ and the rollingcurve tracks the innermost circle's point that initiallytouches $(0,0)$.
Here is a more random example of four circles ofradii $(1, \frac{3}{4}, \frac{2}{3}, \frac{1}{2})$, withgreen tracking the fourth circle:
There is considerable flexibility, but it seems difficult to control. To pose a more specific version of Q1:
Q2.Can a straight line through $(0,0)$ be approximated on a given interval $[0,X]$?
Wondering about the power of Ptolemaic epicycles led me to this question (although I realize the rolling constraint renders my question different). Thanks for insights!
Addendum. As per J.M.'s request, here is an animated GIF forthe three-circle example (which may or may not animate, depending on your browser settings):
|
I know that a Type II error is where H1 is true, but H0 is not rejected.
Question
How do I calculate the probability of a Type II error involving a normal distribution, where the standard deviation is known?
Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization. It only takes a minute to sign up.Sign up to join this community
In addition to specifying $\alpha$ (probability of a type I error), you need a fully specified hypothesis pair, i.e., $\mu_{0}$, $\mu_{1}$ and $\sigma$ need to be known. $\beta$ (probability of type II error) is $1 - \textrm{power}$. I assume a one-sided $H_{1}: \mu_{1} > \mu_{0}$. In R:
> sigma <- 15 # theoretical standard deviation> mu0 <- 100 # expected value under H0> mu1 <- 130 # expected value under H1> alpha <- 0.05 # probability of type I error# critical value for a level alpha test> crit <- qnorm(1-alpha, mu0, sigma)# power: probability for values > critical value under H1> (pow <- pnorm(crit, mu1, sigma, lower.tail=FALSE))[1] 0.63876# probability for type II error: 1 - power> (beta <- 1-pow)[1] 0.36124
Edit: visualization
xLims <- c(50, 180)left <- seq(xLims[1], crit, length.out=100)right <- seq(crit, xLims[2], length.out=100)yH0r <- dnorm(right, mu0, sigma)yH1l <- dnorm(left, mu1, sigma)yH1r <- dnorm(right, mu1, sigma)curve(dnorm(x, mu0, sigma), xlim=xLims, lwd=2, col="red", xlab="x", ylab="density", main="Normal distribution under H0 and H1", ylim=c(0, 0.03), xaxs="i")curve(dnorm(x, mu1, sigma), lwd=2, col="blue", add=TRUE)polygon(c(right, rev(right)), c(yH0r, numeric(length(right))), border=NA, col=rgb(1, 0.3, 0.3, 0.6))polygon(c(left, rev(left)), c(yH1l, numeric(length(left))), border=NA, col=rgb(0.3, 0.3, 1, 0.6))polygon(c(right, rev(right)), c(yH1r, numeric(length(right))), border=NA, density=5, lty=2, lwd=2, angle=45, col="darkgray")abline(v=crit, lty=1, lwd=3, col="red")text(crit+1, 0.03, adj=0, label="critical value")text(mu0-10, 0.025, adj=1, label="distribution under H0")text(mu1+10, 0.025, adj=0, label="distribution under H1")text(crit+8, 0.01, adj=0, label="power", cex=1.3)text(crit-12, 0.004, expression(beta), cex=1.3)text(crit+5, 0.0015, expression(alpha), cex=1.3)
To supplement caracal's answer, if you are looking for a user-friendly GUI option for calculating Type II error rates or power for many common designs including the ones implied by your question, you may wish to check out the free software, G Power 3.
|
Search
Now showing items 1-10 of 33
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN=5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at 𝑠NN‾‾‾‾√=5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
The experiments that have been clearly accepted require at least three "flavors" of neutrinos, namely the \(SU(2)\) partners of the charged leptons called \(e^\pm, \mu^\pm,\tau^\pm\) that are called\[
\nu_e,\quad \nu_\mu, \quad \nu_\tau. \] The Greek letter \(\nu\) ("nu") stands for a "neutrino" and it hasn't been copyrighted by an artist yet. A "neutrino" is a word invented by Enrico Fermi. This word may be translated from Italian to English as a "small stupid Italian neutral thing" (you see that the Italian language is more concise). More precisely, the three mass eigenstates of these three neutrino species are some linear superpositions of the \(SU(2)\) partners of the charged lepton mass eigenstates – and it's the mass eigenstates that we call \(e,\mu,\tau\). The required mixing – the unitary transformation mapping the partners of the charged lepton eigenstates to the neutrino mass eigenstates – is called the PMNS matrix and it is mostly analogous to the CKM matrix for quarks. ATLAS contest, off-topic:Out of 521 competitors, your humble correspondent is back in top ten right now.
A charged lepton such as the electron/positron is described by a complex 4-component Dirac spinor which is why we encounter both electrons and positrons (electrons' antiparticles) and each of them may be spinning up and down. Recall that \(2\times 2 = 4\).
However, the known neutrinos have a correlation between the helicity and their being antimatter: the neutrinos we may produce and see are always left-handed while the antineutrinos are right-handed. That's why a two-component Majorana (or, less appropriately, Weyl) spinor is enough for the description of a neutrino flavor. \(SO(10)\) and higher grand unified theories – and their stringy extensions – like to predict right-handed neutrinos, too. They are likely to exist because the existence of heavy right-handed neutrinos is capable of explaining the low mass of the known neutrinos via the "seesaw mechanism".
However, the masses of the "mostly right-handed" and "mostly left-handed" neutrino eigenstates are so different that the "in principle" 4-component Dirac spinor for neutrinos and antineutrinos (a spinor relevant if the right-handed neutrinos exist which is uncertain) is effectively split into two 2-component Majorana (or Weyl) spinors, anyway. At low energies, only the single well-known Majorana (or Weyl) effectively 2-component spinor is needed to explain all the known observations. If the additional, massive Majorana 2-component part of the Dirac spinor doesn't exist, the nonzero mass of neutrinos implies that the neutrino and the antineutrino are really the same particle and must be able to oscillate in between each other. (However, the knowledge of the angular momentum prohibits the most generic "transmutations" of this sort, anyway: a left-handed neutrino/antineutrino has to stay left-handed.)
At any rate, at least three flavors of the neutrinos – electron, muon, and tau neutrinos – are definitely needed to explain the known data from neutrino experiments, including the neutrino oscillation data. You may see the observed values of the neutrino mass matrix parameters here. All the three mixing angles (12,23,31) between the three flavors are known to be significant – i.e. the mixing angles are not substantially different from 45 degrees which is why the simplified association of the neutrino eigenstates with the electron, muon, and tau is really bad in Nature. The discovery of a nonzero, and pretty large, angle between the flavors 13 was only made recently – and it was rightfully interpreted as a cute confirmation of a prediction by F-theory models in string theory.
(The existence of the nonzero 12 and 23 mixing angles has been known for 2-3 decades from the solar neutrino oscillations and the atmospheric oscillations, respectively, in this order.)
And the three mass eigenvalues are such that two of them are close to each other – with the difference of their \(m^2\) equal to the square of \(8.5\meV\) or so – while the third eigenvalue is substantially different – its \(m^2\) differs from the \(m^2\) of any of the previous two by something like \(50\meV\) squared. The binary question whether the third, special neutrino eigenstate is heavier or lighter than the other two (the "straight/reverted hierarchy" question) remains unanswered so far, and so do the absolute values of the masses (all the eigenvalues of \(m^2\) may be shifted by a universal, small enough constant, and the predictions for the experiments will remain largely unchanged because the oscillations are the only information about the masses that may be measured so far, and the character of these oscillations only depends on the differences of squared masses).
The existence of at least three flavors of neutrinos is necessary to be consistent with the basic, unquestionable experimental constraints – the solar and atmospheric neutrino oscillations were enough to make this conclusion. That hasn't stopped people from considering the possibility that there exist additional flavors of neutrinos. The canonical name of the hypothetical fourth neutrino flavor of neutrinos is the "sterile neutrino" because it would be a neutrino that doesn't routinely have \(SU(2)_W\) sex with a charged lepton, unlike its three colleagues. This general "disconnectedness" with the sex symbols of the charged world (I apologize to the quarks for talking them down but they're the best fermionic guys of color) would make them much less visible.
If you have never had any \(SU(2)_W\) sex, don't cry; I haven't experienced it, either. But some of your quarky friends may know what it is all about.
As far as I can say, the existence of additional neutrino flavors isn't really motivated by any deep physics ideas but it is not banned, either. It's a possibility that we shouldn't
a prioriexclude. It's compatible with the general principles of effective field theories as well as the string theory model building. And sometimes, experimental hints may be abused as an empirical justification for sterile neutrinos. A recent example is the \(3.5\keV\) X-ray line seen in the Cosmos that has been interpreted as a possible sign of sterile neutrinos (or moduli or axion or axinos, among other possible corpuscular explanations of the signal which may be a fluke, anyway) employed as particles of dark matter. This was some background. Let's return to today.
As the Symmetry Magazine pointed out in a new article today,
Everything on the right side of the black curve – in the subtly pink dotted region – has been excluded by MINOS. Previous experiments with names listed in the graphs have falsified the appropriately colored regions in the graph. You see that only MINOS was able to address the lower portions of the graph. You see that the \(y\)-axis in which MINOS was able to penetrate deeply is labeled as\[
\frac{\Delta m_{43}^2}{\eV^2}
\] which means the difference of squared masses between the hypothetical sterile neutrino and the third-generation fertile neutrino in the units of the squared electronvolt. In effect, MINOS was able to exclude a sterile neutrino that differs from the third fertile neutrino by less than \(0.1\eV\) (I actually mean that the squared masses differ by less than this constant squared) if the quantity describing the mixing angle,\[
\sin^2(2\theta_{24}),
\] is larger than \(0.1\) or so. Note that the squared sine is maximized for angles such as \(\pm 45^\circ\) and \(\pm 135^\circ\) where it is equal to one. MINOS isn't able to exclude a sterile neutrino that is made nearly invisible by too weak interactions – by too weak mixing with the known fertile neutrinos – and like any experiment, it isn't able to exclude neutrinos that are "really close" in mass to the known neutrinos flavor.
But MINOS is still able to exclude sterile neutrinos that would be much more closely attached – on the mass axis – to the known neutrino. Where's the key to this magical superior power of this experiment? Well, the experiment studies muon neutrinos produced at the Fermilab in Illinois that are being sent to a mine in Minnesota. So these almost invisible particles have a lot of space to go from the state "I" through states "J,K,L" to the state "M" – each transition by a letter is 125 miles long which makes the total distance 500 miles.
I hope that the American readers won't be to picky about the auxiliary "J,K,L" states in between Illinois and Minnesota, especially if they are ready to show me Ukraine on the map of Pakistan. All U.S. patriots who could be picky about the U.S. geography: Please be satisfied with the degree of respect towards America that I have just proven by having used the stupid medieval units of "miles". ;-)
MINOS is able to address the hypothesis about new neutrino species whose mass is very close to the fertile neutrinos' mass because those 500 miles is enough to allow these hypothetical neutrinos to oscillate even if the oscillations are really slow – i.e. even if the squared mass differences are really low.
An impartial appraisal of the importance of this new experimental result
As I have said, I don't find the existence of additional neutrino flavors well motivated but I think that the hypothesis about their existence is "conceivable" and cannot be immediately eliminated. On the other hand, if additional sterile neutrino flavors existed, I would find it rather unlikely that their mass is very close to the mass of the known fertile neutrino species. Such a proximity could be made more likely by a mechanism but I am not aware of any natural or otherwise justified mechanism of this sort.
So even though the MINOS experiment could have excluded some portions of the parameter space that were "undecidable" by the previous, competing experiments, I would say that the prior probability that Nature has chosen these – now excluded – portions of the parameter space was relatively low. The posterior probability is even lower than that but because the probability was already low before the experiment and it stayed low, the (remarkable) experiment hasn't changed too much, I would say.
Neutrinoless double-\(\beta\) decay
One more comment about today's news from the neutrino experiments. EXO-200 has improved limits on the neutrinoless double-\(\beta\) decay of germanium-76 and xenon-136. The improvement is four-fold. The half-lives are longer than \(1.1\times 10^{25}\) years or so. That allows to place the absolute neutrino masses, assuming that they are Majorana particles identical to their antiparticles, below \(0.2-0.4\eV\).
See Nature for the original expert publication and SLAC for a semi-popular review.
|
KAUST-IAMCS Workshop on Modeling and Simulation of Wave Propagation and Applications – King Abdullah University of Science and Technology (KAUST) Thuwal, Saudi Arabia Conference Center (Building 19), Conference Halls 1 & 2 Fatma Zohra Nouri, Badji Mokhtar University (Algeria) Mathematical Modeling and Brain Tumor Growth Abstract
Mathematical tumor growth models have started to attract attention from the medical image analysis community in last years. These models could help better understanding of the mechanical influence and the diffusion process of gliomas. For the clinical applications, they would provide tools to identify the invaded areas that are not visible in the MR images in order to better adapt the resection in surgery or the irradiation margins in radiotherapy. As one of the most important goals, they would give the opportunity to identify from patent images some model parameters that could help characterizing the tumor and perhaps predict its future evolution.
Research conducted on brain tumor growth modeling can be coarsely classified into two large groups:
Microscopic models Observations in the microscopic scale as a result formulate the growth phenomena at the cellular level Macroscopic models Observations at the macroscopic scale like the ones provided by medical images, formulation of the average behavior of tumor cells and their interactions with underlying tissue structures, which are visible at this scale of observation (e.g., MRI, MR-DTI), detecting real boundaries (grey matter, white matter, bones...)
Almost all diffusive macroscopic models use the reaction-diffusion formalism. This formalism models the invasive tumor by adding a diffusion team to simple solid tumor growth models, which formulates proliferation of cells. However, because of the different nature of the brain tissues (see Figure 1), the change of tumor cell density at a point \(u\left(x\right)\) should be described by an anisotropic diffusion and a nonlinear reaction process. We propose: \[\begin{cases} \frac{\partial u}{\partial t} = \nabla \cdot \left( D \left( x \right)\nabla u \right) + \rho \ldotp R \left( u \right) \\ D \left( x \right) \nabla u \cdot \vec{\eta} = 0 \end{cases}\]
The infiltration of tumor cells is explained by the diffusion process \(\nabla \left( D \left( x \right) \nabla u \right)\), which is characterized by the diffusion tensor \(D\). The proliferation of tumor cells are embedded in the reaction part \(\rho \ldotp R \left( u \right)\) with the motosis right \(\rho\) and \(R \left( u \right)\) a nonlinear function. The Neumann boundary conditions dictate that tumor cells will not pass through the skull nor the ventricles. (See numerical results in Figure 2).
References A. Giese, L. Kluwe, B. Laube, H. Meissner, M. Berens, and M. Westphal, . Migration of human glioma cells on myelin. Neurosurgery, 38. A. Giese, O. Clatz, M. Sermesant, P. Bondiau, H. Delingette, S. Warfield, G. Malandain, and N. Ayache, . Realistic simulation of the 3d growth of brain tumors in mr images coupling diffusion with biomechanical deformation. IEEE TMI, 24. E. Konukoglu, O. Clatz, P. Bondiau, M. Sermesant, H. Delingette, and N. Ayache, . Towards an identification of tumor growth parameters from time series of images. Miccai . F.Z. Nouri in collaboration with C. Bell, E. Chang, A. Foss, L. Hazelwood, J. O'Flaherty, C. Please, G. Richardson, B. Gorilla, A. Setchi, R. Shipley, J. Siggers, M. Tindall, and J. Ward, Mechanisms and localised treatment for complex heart rhythm disturbances. UK MMSG Cardiac Arrhythmias .
|
Let say a periodic signal waveform $v(t)$ can be modeled as:
$v(t) = f(\phi) A \sin(2\pi ft + \psi)$
where $A$ is a constant amplitude, $f$ is signal frequency, $t$ is time and $\psi$ is a constant phase.
While $f(\phi)$ is $f = k \cos(\phi)$, where $k$ is a constant and $\phi$ is a function of time $t$.
Now the signal $v(t)$ is
corrected by removing the dependency to $\phi$ factor i.e. $v'(t) = v(t)/f(\phi)$.
My question: is the process getting $v'(t)$ from $v(t)$ considered signal waveform demodulation ? or undistortion ?
The generic term I can think safely would be signal waveform correction but it would be better if there is a precise terminology.
|
a70 wrote:
If m and n are positive integers and m * n = 40 , what is the value of the sum of m and n?
(1) The number of positive factors of m is twice the number of positive factors of n. (2) m has exactly 4 different positive factors
\(\left. \begin{gathered}
m,n\,\, \geqslant 1\,\,\,{\text{ints}}\,\, \hfill \\
mn = 40\, \hfill \\
\end{gathered} \right\}\,\,\,\,\, \Rightarrow \,\,\,\,m\,\,{\text{and}}\,\,n\,\,{\text{are}}\,\,{\text{pairs}}\,\,{\text{of}}\,{\text{positive}}\,\,{\text{factors}}\,\,{\text{of}}\,\,40\)
\(? = m + n\)
This is a perfect opportunity to present our "T diagram
" (see image attached), GMATH´s creation to help students find ALL positive factors of a given positive integer.
We all know that 40 = (2^3)*5 has 4*2 = 8 positive factors. The T technique
shows them explicitly and, more than that, in the corresponding pairs (in the four rows)!
We have put the number of positive factors of each positive factor of 40 in parentheses. A quick inspection in each pair of divisors shows us that:
\(\left( 1 \right)\,\,\,\, \Rightarrow \,\,\,\left( {m,n} \right) = \left( {{2^3},5} \right)\,\,\,\,\, \Rightarrow \,\,\,? = 13\,\,\,\,\, \Rightarrow \,\,\,{\text{SUFF}}.\,\,\,\,\,\,\)
\(\left( 2 \right)\,\,\, \Rightarrow \,\,\,\left\{ \begin{gathered}
\,\left( {m,n} \right) = \left( {2 \cdot 5,{2^2}} \right)\,\,\,\,\, \Rightarrow \,\,\,? = 14\,\,\,\,\, \hfill \\
\,\left( {m,n} \right) = \left( {{2^3},5} \right)\,\,\,\,\, \Rightarrow \,\,\,? = 13\,\,\,\, \hfill \\
\end{gathered} \right. \Rightarrow \,\,\,{\text{INSUFF}}.\)
The correct answer is therefore (A).
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
Attachments
GMATH_T_diagram_25Set18.gif [ 14.9 KiB | Viewed 921 times ]
_________________
Fabio Skilnik :: GMATH
method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
|
I am reading Rudin and I am very confused what a derivative is now. I used to think a derivative was just the process of taking the limit like this $$\lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$
But between Apostol and Rudin, I am confused in what sense total derivatives are derivatives.
Partial derivatives much more resemble the usual derivatives taught in high school
$$f(x,y) = xy$$
$$\frac{\partial f}{\partial x} = y$$
But the Jacobian doesn't resemble this at all. And according to my books it is a linear map.
If derivatives are linear maps, can someone help me see more clearly how my intuitions about simpler derivatives relate to the more complicated forms? I just don't understand where the limits have gone, why its more complex, and why the simpler forms aren't described as linear maps.
|
As this has obtained the required number of community votes, consider this an official PhysicsOverflow policy.
I see that this is a good point. After all, even the ArXiV
has lecture notes, etc., which are not really "original" in nature.
When reviewing things like blog articles, lecture notes, etc., there is no "originality" in them. This is not necessarily a problem since the formula for overall score is
\(x= \mathfrak{S} \exp\left( \sqrt[3]{\frac{\mathfrak{\times}}{5}} \right) \)
So, even if the originality is 0, which it usually is, the overall score will just be the accuracy score.
I don't think newspapers should get reviewed, for obvious reasons.
Not even blog articles. It is difficult to make an objective criteria between a real physics blog and a troll one. Some are extremely debatable. Also, many blogs are devoted to reviewing themselves.
Great blogs, such as TRF, are not necessarily devoted only to physics. TRF for example is also about politics, and so on. Now, the bot doesn't know that Barrack Obama is a politician, or that Joseph Polchinski is a physicist.
So, in conclusion, it is safe to import research papers, review papers, conference papers (from their proceedings), and seminars. The sources can be ArXiV, ViXrA, journal databases, proceedings, wherever. Direct uploads aren't supported yet.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.