Regularity of extremal solutions in fourth order nonlinear eigenvalue problems on general domains
1. Department of Mathematics, The University of British Columbia, Room 121, 1984 Mathematics Road, Vancouver, B.C., Canada V6T 1Z2
2. Dipartimento di Matematica, Università degli Studi "Roma Tre", 00146 Roma, Italy
3. Department of Mathematics, The University of British Columbia, Vancouver BC Canada V6T 1Z2
is smooth provided $N\leq 5$. If in addition $\liminf_{t \to +\infty}\frac{f(t)f''(t)}{(f')^2(t)}>0$, then $u$ is regular for $N\leq 7$, while if $\gamma := \limsup_{t \to +\infty}\frac{f(t)f''(t)}{(f')^2(t)}<+\infty$, then the same holds for $N < \frac{8}{\gamma}$. It follows that $u$ is smooth if $f(t) = e^t$ and $N \le 8$, or if $f(t) = (1+t)^p$ and $N< \frac{8p}{p-1}$. We also show that if $f(t) = (1-t)^{-p}$, $p>1$ and $p\ne 3$, then $u$ is smooth for $N \leq \frac{8p}{p+1}$. While these results are major improvements on what is known for general domains, they still fall short of the expected optimal results as recently established on radial domains, e.g., $u$ is smooth for $N \le 12$ when $f(t) = e^t$ [11], and for $N \le 8$ when $f(t) = (1-t)^{-2}$ [9] (see also [22]). Mathematics Subject Classification: Primary: 58E30, 58J05, 35J35; Secondary: 34B1. Citation: Craig Cowan, Pierpaolo Esposito, Nassif Ghoussoub. Regularity of extremal solutions in fourth order nonlinear eigenvalue problems on general domains. Discrete & Continuous Dynamical Systems - A, 2010, 28 (3): 1033-1050. doi: 10.3934/dcds.2010.28.1033
JDN 2457005 PST 11:52.
I would have preferred to write about something a bit cheerier (like the fact that by the time I write my next post I expect to be finished with my master’s degree!), but this is obviously the big news in economic policy today. The new House budget bill was unveiled Tuesday, and then passed in the House on Thursday by a narrow vote. It has stalled in the Senate thanks in part to fierce—and entirely justified—opposition by Elizabeth Warren, and so today it has been delayed in the Senate. Obama has actually urged his fellow Democrats to pass it, in order to avoid another government shutdown. Here’s why Warren is right and Obama is wrong.
You know the saying “You can’t negotiate with terrorists!”? Well, in practice that’s not actually true—we negotiate with terrorists all the time; the FBI has special hostage negotiators for this purpose, because sometimes it really is the best option. But the saying has an underlying kernel of truth, which is that once someone is willing to hold hostages and commit murder, they have crossed a line, a Rubicon from which it is impossible to return; negotiations with them can never again be good-faith honest argumentation, but must always be a strategic action to minimize collateral damage. Everyone knows that if you had the chance you’d just as soon put bullets through all their heads—because everyone knows they’d do the same to you.
Well, right now, the Republicans are acting like terrorists. Emotionally a fair comparison would be with two-year-olds throwing tantrums, but two-year-olds do not control policy on which thousands of lives hang in the balance. This budget bill is designed—quite intentionally, I’m sure—in order to ensure that Democrats are left with only two options: Give up on every major policy issue and abandon all the principles they stand for, or fail to pass a budget and allow the government to shut down, canceling vital services and costing billions of dollars. They are holding the American people hostage.
But here is why you must not give in: They’re going to shoot the hostages anyway. This so-called “compromise” would not only add $479 million in spending on fighter jets that don’t work and the Pentagon hasn’t even asked for, not only cut $93 million from WIC, a 3.5% budget cut adjusted for inflation—literally denying food to starving mothers and children—and dramatically increase the amount of money that can be given by individuals in campaign donations (because apparently the unlimited corporate money of Citizens United wasn’t enough!), but would also remove two of the central provisions of Dodd-Frank financial regulation that are the only thing that stands between us and a full reprise of the Great Recession. And even if the Democrats in the Senate cave to the demands just as the spineless cowards in the House already did, there is nothing to stop Republicans from using the same scorched-earth tactics next year.
I wouldn’t literally say we should put bullets through their heads, but we definitely need to get these Republicans out of office immediately at the next election—and that means that all the left-wing people who insist they don’t vote “on principle” need to grow some spines of their own and vote. Vote Green if you want—the benefits of having a substantial Green coalition in Congress would be enormous, because the Greens favor three really good things in particular: Stricter regulation of carbon emissions, nationalization of the financial system, and a basic income. Or vote for some other obscure party that you like even better. But for the love of all that is good in the world, vote.
The two most obscure—and yet most important—measures in the bill are the elimination of the swaps pushout rule and the margin requirements on derivatives. Compared to these, the cuts in WIC are small potatoes (literally, they include a stupid provision about potatoes). They also really aren’t that complicated, once you boil them down to their core principles. This is however something Wall Street desperately wants you to never, ever do, for otherwise their global crime syndicate will be exposed.
The swaps pushout rule says quite simply that if you’re going to place bets on the failure of other companies—these are called credit default swaps, but they are really quite literally a bet that a given company will go bankrupt—you can’t do so with deposits that are insured by the FDIC. This is the absolute bare minimum regulatory standard that any reasonable economist (or for that matter sane human being!) would demand. Honestly I think credit default swaps should be banned outright. If you want insurance, you should have to buy insurance—and yes, deal with the regulations involved in buying insurance, because those regulations are there for a reason. There’s a reason you can’t buy fire insurance on other people’s houses, and that exact same reason applies a thousandfold for why you shouldn’t be able to buy credit default swaps on other people’s companies. Most people are not psychopaths who would burn down their neighbor’s house for the insurance money—but even when their executives aren’t psychopaths (as many are), most companies are specifically structured so as to behave as if they were psychopaths, as if no interests in the world mattered but their own profit.
But the swaps pushout rule does not by any means ban credit default swaps. Honestly, it doesn’t even really regulate them in any real sense. All it does is require that these bets have to be made with the banks’ own money and not with everyone else’s. You see, bank deposits—the regular kind, “commercial banking”, where you have your checking and savings accounts—are secured by government funds in the event a bank should fail. This makes sense, at least insofar as it makes sense to have private banks in the first place (if we’re going to insure with government funds, why not just use government funds?). But if you allow banks to place whatever bets they feel like using that money, they have basically no downside; heads they win, tails we lose. That’s why the swaps pushout rule is absolutely indispensable; without it, you are allowing banks to gamble with other people’s money.
What about margin requirements? This one is even worse.
Margin requirements are literally the only thing that keeps banks from printing unlimited money. If there was one single cause of the Great Recession, it was the fact that there were no margin requirements on over-the-counter derivatives. Because there were no margin requirements, there was no limit to how much money banks could print, and so print they did; the result was a still mind-blowing quadrillion dollars in nominal value of outstanding derivatives. Not million, not billion, not even trillion; quadrillion. $1e15. $1,000,000,000,000,000. That’s how much money they printed. The total world money supply is about $70 trillion, which is 1/14 of that. (If you read that blog post, he makes a rather telling statement: “They demonstrate quite clearly that those who have been lending the money that we owe can’t possibly have had the money they lent.” No, of course they didn’t! They created it by lending it. That is what our system allows them to do.)
And yes, at its core, it was printing money. A lot of economists will tell you otherwise, about how that’s not really what’s happening, because it’s only “nominal” value, and nobody ever expects to cash them in—yeah, but what if they do? (These are largely the same people who will tell you that quantitative easing isn’t printing money, because, uh… er… squirrel!) A tiny fraction of these derivatives were cashed in in 2007, and I think you know what happened next. They printed this money and now they are holding onto it; but woe betide us all if they ever decide to spend it. Honestly we should invalidate all of these derivatives and force them to start over with strict margin requirements, but short of that we must at least, again at the bare minimum, have margin requirements.
Why are margin requirements so important? There’s actually a very simple equation that explains it. If the margin requirement is m, meaning that you must retain a portion m between 0 and 1 of the loans you make as reserves, the total amount of money supply that can be created from the current amount of money M is just M/m. So if margin requirements were 100%—full-reserve banking—then the total money supply is M, and therefore in full control of the central bank. This is how it should be, in my opinion. But usually m is set around 10%, so the total money supply is 10M, meaning that 90% of the money in the system was created by banks. But if you ever let that margin requirement go to zero, you end up dividing by zero—and the total amount of money that can be created is infinite.
To see how this works, suppose we start with $1000 and put it in bank A. Bank A then creates a loan; how big they can make the loan depends on the margin requirement. Let’s say it’s 10%. They can make a loan of $900, because they must keep $100 (10% of $1000) in reserve. So they do that, and then it gets placed in bank B. Then bank B can make a loan of $810, keeping $90. The $810 gets deposited in bank C, which can make a loan of $729, and so on. The total amount of money in the system is the sum of all these: $1000 in bank A (remember, that deposit doesn’t disappear when it’s loaned out!), plus the $900 in bank B, plus $810 in bank C, plus $729 in bank D. After 4 steps we are at $3,439. As we go through more and more steps, the money supply gets larger at an exponentially decaying rate and we converge toward the maximum at $10,000.
The original amount is M, and then we add M(1-m), M(1-m)^2, M(1-m)^3, and so on. That produces the following sum up to n terms (below is LaTeX, which I can’t render for you without a plugin, which requires me to pay for a WordPress subscription I cannot presently afford; you can copy-paste and render it yourself here):
\sum_{k=0}^{n} M (1-m)^k = M \frac{1 - (1-m)^{n+1}}{m}
And then as you let the number of terms grow arbitrarily large, it converges toward a limit at infinity:
\sum_{k=0}^{\infty} M (1-m)^k = \frac{M}{m}
To be fair, we never actually go through infinitely many steps, so even with a margin requirement of zero we don’t literally end up with infinite money. Instead, we just end up with nM, the number of steps times the initial money supply. Start with $1000 and go through 4 steps: $4000. Go through 10 steps: $10,000. Go through 100 steps: $100,000. It just keeps getting bigger and bigger, until that money has nowhere to go and the whole house of cards falls down.
Honestly, I’m not even sure why Wall Street banks would want to get rid of margin requirements. It’s basically putting your entire economy on the counterfeiting standard. Fiat money is often accused of this, but the government has both (a) the legitimate authority empowered by the electorate and (b) incentives to maintain macroeconomic stability, neither of which private banks have. There is no reason other than altruism (and we all know how much altruism Citibank and HSBC have—it is approximately equal to the margin requirement they are trying to get passed—and yes, they wrote the bill) that would prevent them from simply printing as much money as they possibly can, thus maximizing their profits; and they can even excuse the behavior by saying that everyone else is doing it, so it’s not like they could prevent the collapse all by themselves. But by lobbying for a regulation to specifically allow this, they no longer have that excuse; no, everyone won’t be doing it, not unless you pass this law to let them. Despite the global economic collapse that was just caused by this sort of behavior only seven years ago, they now want to return to doing it. At this point I’m beginning to wonder if calling them an international crime syndicate is actually unfair to international crime syndicates. These guys are so totally evil it actually goes beyond the bounds of rational behavior; they’re turning into cartoon supervillains. I would honestly not be that surprised if there were a video of one of these CEOs caught on camera cackling maniacally, “Muahahahaha! The world shall burn!” (Then again, I was pleasantly surprised to see the CEO of Goldman Sachs talking about the harms of income inequality, though it’s not clear he appreciated his own contribution to that inequality.)
And that is why Democrats must not give in. The Senate should vote it down. Failing that, Obama should veto. I wish he still had the line-item veto so he could just remove the egregious riders without allowing a government shutdown, but no, the Senate blocked it. And honestly their reasoning makes sense; there is supposed to be a balance of power between Congress and the President. I just wish we had a Congress that would use its power responsibly, instead of holding the American people hostage to the villainous whims of Wall Street banks.
Writing a computer program that handles a small set of data is entirely different from writing a program that takes a large number of input data. A program written to handle a big number of input data must be algorithmically efficient in order to produce the result in reasonable time and space. In this article, I discuss some of the basics of what the running time of a program is, how we represent running time, and other essentials needed for the analysis of algorithms. Please bear with me; the article is fairly long. I promise you will learn quite a few concepts here that will help you to cement a solid foundation in the field of design and analysis of algorithms.
Suppose you developed a program that finds the shortest distance between two major cities of your country. You showed the program to your friend and they asked you, “What is the running time of your program?”. You answered promptly and proudly, “Only 3 seconds”. It sounds more practical to state the running time in seconds or minutes, but is it sufficient to state the running time in time units like seconds and minutes? Did this statement fully answer the question? The answer is no. Measuring running time like this raises many other questions:
What is the speed of the processor of the machine the program is running on? What is the size of the RAM? What is the programming language? How experienced and skillful is the programmer? And much more.
In order to fully answer your friend’s question, you would have to say something like “My program runs in 3 seconds on an Intel Core i7 8-core 4.7 GHz processor with 16 GB memory and is written in C++14”. Who would answer this way? Of course, no one. Running time expressed in time units has so many dependencies: the computer being used, the programming language, the skill of the programmer, and so on. Therefore, expressing running time in seconds or minutes makes little sense in computer programming.
You are now convinced that “seconds” is not a good choice for measuring running time. Now the question is how we should represent the running time so that it is not affected by the speed of computers, programming languages, and the skill of the programmer. In other words, how should we represent the running time so that we can abstract all those dependencies away? The answer is simple: “input size”. To solve all of these dependency problems we represent the running time in terms of the input size. If the input size is $n$ (which is always positive), then the running time is some function $f$ of $n$, i.e. $$\text{Running Time} = f(n)$$ The value of $f(n)$ gives the number of operations required to process the input of size $n$. So the running time is the number of operations (instructions) required to carry out the given task. The function $f(n)$ is monotonically non-decreasing: if the input size increases, the running time also increases or remains constant. Some examples of running times are $n^2 + 2n$, $n^3$, $3n$, $2^n$, $\log n$, etc. Having this knowledge, if anyone asks you about the running time of your program, you would say “the running time of my program is $n^2$ (or $2n$, $n\log n$, etc.)” instead of “my program takes 3 seconds to run”. The running time is also called the time complexity.
Input size informally means the number of instances in the input. For example, if we talk about sorting, the size means the number of items to be sorted. If we talk about graph algorithms, the size means the number of nodes or edges in the graph.
In the previous section, I said that running times are expressed in terms of the input size ($n$). Three possible running times are $n$, $n^2$ and $n^3$. Among these three running times, which one is better? In other words, which function grows more slowly with the input size compared to the others? To find out, we need to analyze the growth of the functions, i.e. we want to find out how quickly the running time goes up as the input increases.
The easiest way of comparing different running times is to plot them and inspect the shape of the graphs. The following figure shows the graphs of $n$, $n^2$ and $n^3$ (the x-axis represents the size of the input and the y-axis represents the number of operations required, i.e. the running time).
Looking at the figure above, we can clearly see that the function $n^3$ grows faster than the functions $n$ and $n^2$. Therefore, running time $n$ is better than running times $n^2$ and $n^3$. One thing to note here is that the input size is very small; I deliberately used a small input size only to illustrate the concept. In computer science, especially in the analysis of algorithms, we do the analysis for very large input sizes.
Another way of checking whether a function $f(n)$ grows faster or slower than another function $g(n)$ is to divide $f(n)$ by $g(n)$ and take the limit as $n \to \infty$ as follows:
$$\lim_{n \to \infty}\frac{f(n)}{g(n)}$$
If the limit is $0$, $f(n)$ grows slower than $g(n)$. If the limit is $\infty$, $f(n)$ grows faster than $g(n)$. If the limit is a finite positive constant, the two functions grow at the same rate.
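To see the limit test in action, here is a minimal sketch in C (my own toy example, with $f(n) = n^2$ and $g(n) = n^3$):

#include <stdio.h>

/* The ratio f(n)/g(n) = n^2/n^3 = 1/n tends to 0 as n grows,
   so n^2 grows slower than n^3. */
int main(void) {
    for (double n = 10; n <= 1e7; n *= 10)
        printf("n = %10.0f   f(n)/g(n) = %.7f\n", n, (n * n) / (n * n * n));
    return 0;
}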
The table below shows common running times in algorithm analysis. The entries in the table are presented from slowest-growing to fastest-growing (best to worst).
Constant: $1$, $2$, $100$, $300$, …
Logarithmic: $\log n$, $5\log n$, …
Linear: $n$, $n + 3$, $2n + 3$, …
$n\log n$: $n\log n$, $2n\log n + n$, …
Polynomial: quadratic, cubic, or higher order
Exponential: $2^n$, $3^n$, $2^n + n^4$, …
Factorial: $n!$, $n! + n$, …
The figure below shows the graphical representations of these functions (running times).
Earlier I said that running times are expressed as $n^2 + 3n + 2$ or $3n$, etc. These are called the exact running time or exact complexity of an algorithm. We are rarely interested in the exact complexity of an algorithm; rather, we want to find an approximation in terms of upper, lower and tight bounds. The $O$ notation (Big-O) gives an upper bound on the exact complexity, $\Theta$ gives a tight bound on the exact complexity, and $\Omega$ gives a lower bound on the exact complexity. There are two other notations, $o$ (Little-o) and $\omega$ (Little-omega), which are slight variations of $O$ and $\Omega$. All these notations are described in detail below.
This notation is called Big Theta notation. Formally, $f(n)$ is $\Theta(g(n))$ if there exist constants $c_1$, $c_2$, and $n_0$ such that
$$0 \le c_1g(n) \le f(n) \le c_2g(n) \text{ for all } n \ge n_0$$ Example: Let $g(n) = n^2$ and $f(n) = 5n^2 + 3n$. We want to prove $f(n) = \Theta(g(n))$. That means we need three constants $c_1$, $c_2$ and $n_0$ such that $$c_1n^2 \le 5n^2 + 3n \le c_2n^2$$ Simplification gives $$c_1 \le 5 + 3/n \le c_2$$ If we choose $n_0 = 1$, the expression in the middle can never be smaller than 5, so any $c_1 \le 5$ works; choose $c_1 = 4$. Similarly, it can never be greater than 8, so any $c_2 \ge 8$ works; choose $c_2 = 9$. The expression now becomes $$4n^2 \le 5n^2 + 3n \le 9n^2 \text{ for all } n \ge 1$$ This proves $5n^2 + 3n$ is $\Theta(n^2)$. The graphs of $4n^2$, $5n^2 + 3n$ and $9n^2$ are shown below. The figure clearly shows that $5n^2 + 3n$ is sandwiched between $4n^2$ and $9n^2$. That is why we say big theta gives an asymptotic tight bound.
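You can also check the sandwich inequality numerically; a minimal sketch in C:

#include <stdio.h>

/* Check 4n^2 <= 5n^2 + 3n <= 9n^2 for a few values of n >= 1. */
int main(void) {
    for (long long n = 1; n <= 100000; n *= 10) {
        long long lo = 4 * n * n, mid = 5 * n * n + 3 * n, hi = 9 * n * n;
        printf("n = %6lld : %12lld <= %12lld <= %12lld  %s\n",
               n, lo, mid, hi, (lo <= mid && mid <= hi) ? "ok" : "FAIL");
    }
    return 0;
}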
More examples
$2n + 2 = \Theta(n)$ $4n^4 + 5n^2 + 10 = \Theta(n^4)$ $\frac{5}{25}n = \Theta(n)$ $2^n + n^{100} = \Theta(2^n)$ $n \ne \Theta(n^2)$ $\log n \ne \Theta(n)$ $7n^3 + 2 \ne \Theta(n^2)$
Coding Example: The following code for matrix addition runs in $\Theta(n^2)$.

// add two n x n matrices a and b into c
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        c[i][j] = a[i][j] + b[i][j];
// display the result
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        printf("%d ", c[i][j]);
Assume both matrices are square matrices of size $n \times n$. We need two for loops that each run $n$ times. Also assume each addition takes only a constant time $1$. The total time taken by those two loops is therefore $1\times n \times n = n^2$. Similarly, displaying the result takes another $n^2$ time. The total time is $$n^2 + n^2 = 2n^2$$ We can easily show $2n^2 = \Theta(n^2)$ using the technique discussed above.
This notation is also called Big Oh notation. Formally, $f(n)$ is $O(g(n))$ if there exist constants $c$ and $n_0$ such that
$$f(n) \le cg(n) \text{ for all $n \ge n_0$}$$
Big-O gives the Asymptotic Upper Bound of a function. $f(n) = O(g(n))$ means $g(n)$ defines the upper bound and $f(n)$ has to be less than or equal to $cg(n)$ for some value of $c$. Example: Let $g(n) = n^3$ and $f(n) = 50n^3 + 10n$. We want to prove $f(n)= O(g(n))$. To prove this, we need two constants $c$ and $n_0$ such that the following relation holds for all $n \ge n_0$: $$50n^3 + 10n \le cn^3$$ Simplification gives $$50 + \frac{10}{n^2} \le c$$ If we choose $n_0 = 1$, then the maximum value the left hand side expression can attain is 60. Choose $c = 61$ and we are done. We found two constants $c = 61$ and $n_0 = 1$; therefore we can write $50n^3 + 10n = O(n^3)$. The graphs of the functions $50n^3 + 10n$ and $61n^3$ are shown in the figure below. The graph clearly shows that the function $50n^3 + 10n$ is bounded from above by the function $61n^3$ for all values of $n \ge 1$.
More Examples
$1/2n^2 + 2n + 1 = O(n^2)$ $n = O(n\log n)$ $100000n = O(n^{1.00001})$ $\log n = O(n)$ $n = O(n^4)$ $n^4 = O(2^n)$ $n^2 \ne O(n)$ $3n + 4 \ne O(\log n)$
Code Example: The matrix addition code above also runs in $O(n^2)$ (please try to prove it yourself). The following code example runs in $O(n)$.

int search(int a[], int n, int item) {
    // return the index of item in array a of length n, or -1 if it is absent
    for (int i = 0; i < n; i++) {
        if (a[i] == item)
            return i;
    }
    return -1;
}
The function search returns the index of the item if the item is in the array, and -1 otherwise. The running time varies depending upon where in the array the item is located. If it is located in the very first position, the running time is 1 (best case), and if it is located in the last position, the running time is $n$ (worst case). But the running time can never go beyond $n$, so we can say that the worst-case running time of this algorithm is $O(n)$.
The running time of the above function could also be written as $O(n^2)$, since anything that is $O(n)$ is also $O(n^2)$, but we never write it this way. Once we know the running time cannot grow beyond $n$, we write $O(n)$.
This notation is called Big-Omega notation. Formally, $f(n)$ is $\Omega(g(n))$ if there exist constants $c$ and $n_0$ such that
$$f(n) \ge cg(n) \text{ for all $n \ge n_0$}$$
Big-$\Omega$ gives the Asymptotic Lower Bound of a function. $f(n) = \Omega(g(n))$ means $g(n)$ defines the lower bound and $f(n)$ has to be greater than or equal to $cg(n)$ for some value of $c$.
Example: Let $g(n) = n^2$ and $f(n) = 10n^2 + 14n + 10$. We want to prove $f(n)= \Omega(g(n))$. To prove this, we need two constants $c$ and $n_0$ such that the following relation holds for all $n \ge n_0$
$$10n^2 + 14n + 10 \ge cn^2$$ Simplification gives $$10 + \frac{14}{n} + \frac{10}{n^2} \ge c$$ For every $n \ge 1$ the left hand side expression is always greater than 10. Choose $c = 9$ and $n_0 = 1$ and we are done. Therefore we can write $10n^2 + 14n + 10 = \Omega(n^2)$. The graphs of the functions $10n^2 + 14n + 10$ and $9n^2$ are shown in the figure below. The graph clearly shows that the function $10n^2 + 14n + 10$ is bounded from below by the function $9n^2$ for all values of $n \ge 1$.
More Examples:
$n^{2.001} = \Omega(n^2)$ $5n^2 + 5 = \Omega(n^2)$ $n\log n = \Omega(n)$ $2^n = \Omega(n^{100})$ $n^2 \ne \Omega(n^3)$
Coding Example: Take any comparison-based sorting algorithm. The worst-case running time of every such algorithm is $\Omega(n\log n)$.
This notation is called Small Oh notation. We use o-notation to denote an upper bound that is not asymptotically tight. Formally, $f(n)$ is $o(g(n))$ if for every constant $c > 0$ there exists a constant $n_0$ such that
$$f(n) < cg(n) \text{ for all } n \ge n_0$$
The definitions of O-notation and o-notation are similar. The main difference is that in $f(n) = O(g(n))$, the bound $f(n) \le cg(n)$ holds for some constant $c > 0$, but in $f(n) = o(g(n))$, the bound $f(n) < cg(n)$ holds for all constants $c > 0$.
Examples:
$2n = o(n^2)$ $2n^2 + 5n \ne o(n^2)$
Alternatively, $f(n)$ is $o(g(n))$ if
$$\lim_{n \to \infty}\frac{f(n)}{g(n)} = 0$$
This notation is called Small Omega notation. We use $\omega$-notation to denote a lower bound that is not asymptotically tight. Formally, $f(n)$ is $\omega(g(n))$ if for every constant $c > 0$ there exists a constant $n_0$ such that
$$f(n) > cg(n) \text{ for all } n \ge n_0$$
Examples:
$n^2/2 = \omega(n)$ $n^3 + 2n^2 = \omega(n^2)$ $n\log n = \omega(n)$ $n^2/2 \ne \omega(n^2)$
Alternatively, $f(n)$ is $\omega(g(n))$ if
$$\lim_{n \to \infty}\frac{f(n)}{g(n)} = \infty$$ All the analysis we do of algorithms holds only for large inputs; it can be misleading when applied to small inputs. When your program has a small number of input instances, do not worry about the complexity; use the algorithm that is easier to code. Most people use $O$ notation instead of $\Theta$ notation even where $\Theta$ would be more appropriate. This is not wrong, because every running time that is $\Theta(g(n))$ is also $O(g(n))$.
Square Number
If a is the greatest 1-digit square number and b is the greatest even 1-digit number, what is the value of b + a : 0.5?
Huy Toàn 8A (TL) 23/05/2018 at 01:05
We have : x² = 6400
=> x = ±80
Checking both cases :
- x = 80 : 80 isn't a square number
- x = -80 < 0 : it isn't a square number
=> x isn't a square number
A rectangular rug is 3 times as long as it is wide. If it were 3 m shorter and 3 m wider, it would be a square. How long, in metres, is the rug?
Son Nguyen Cong 30/07/2017 at 15:42
We have this system \(\left\{{}\begin{matrix}a=3b\\a-3=b+3\end{matrix}\right.\) (where a is the length and b is the width)
\(\Rightarrow3b-3=b+3\)
\(\Rightarrow3b=b+6\)
\(\Rightarrow2b=6\)
\(\Rightarrow b=3\)
\(\Rightarrow a=9\)
So that the rug is 3m wide and 9m long.
Find a value of n that satisfies the following:
(a) \(\dfrac{n}{3}\) is a square number
(b) \(\dfrac{n}{5}\) is a cubic number. (A cubic number has the form \(a\times a\times a=a^3\).)
Phan Thanh Tinh Coodinator 27/03/2017 at 22:49
(a) Assume that the square number 16 = 4² is equal to \(\dfrac{n}{3}\), so n = 48
(b) Assume that the cubic number 27 = 3³ is equal to \(\dfrac{n}{5}\), so n = 135
A number n becomes a square number when 6 is subtracted from it. The sum of n and 19 is another square number. Find the value of n.
Phan Thanh Tinh Coodinator 27/03/2017 at 22:52
The difference between the 2 square numbers in the question is :
(n + 19) - (n - 6) = 25
Writing the squares as a² and b², we need a² - b² = (a - b)(a + b) = 25, so either a = 5 and b = 0, giving n - 6 = 0 and n = 6, or a = 13 and b = 12, giving n - 6 = 144 and n = 150.
Given that 24a + 96b is a square number, where a and b are positive integers, find the least value of a + b.
Phan Thanh Tinh Coodinator 24/04/2017 at 16:46
24a + 96b = 4(6a + 24b)
4 is a square number, so 24a + 96b is a square number only when 6a + 24b is a square number.
a, b are positive integers ⇒ a, b ≥ 1 ⇒ 6a + 24b ≥ 30
When a + b gets its least value, so does 6a + 24b. Hence, 6a + 24b = 36
⇒ 24b < 36 ⇒ 24b = 24 \(\Rightarrow\left\{{}\begin{matrix}6a=12\\b=1\end{matrix}\right.\Rightarrow\left\{{}\begin{matrix}a=2\\b=1\end{matrix}\right.\) ⇒ a + b = 3
So 3 is the least value of a + b
Find n such that 100 + n and 151 + n are both square numbers.
Phan Thanh Tinh Coodinator 27/03/2017 at 22:56
The difference between the 2 square numbers above is :
(151 + n) - (100 + n) = 51
Writing the squares as a² and b², we need (a - b)(a + b) = 51, so (a - b, a + b) = (3, 17) or (1, 51) : either the squares are 49 and 100, giving n = -51, or they are 625 and 676, giving n = 525.
It is given
1 x 2 x 3 x 4 + 1 = 25 = 5²
2 x 3 x 4 x 5 + 1 = 121 = 11²
3 x 4 x 5 x 6 + 1 = 361 = 19²
4 x 5 x 6 x 7 + 1 = 841 = 29²
5 x 6 x 7 x 8 + 1 = 1681 = 41²
Find 2006 x 2007 x 2008 x 2009 + 1
Phan Thanh Tinh Coodinator 23/04/2017 at 08:24
We have :
n(n + 1)(n + 2)(n + 3) + 1 = [n(n + 3)][(n + 1)(n + 2)] + 1
= (n² + 3n)(n² + 3n + 2) + 1 = (n² + 3n)² + 2(n² + 3n) + 1²
= (n² + 3n + 1)²
Replace n = 2006 into the expression, and we have :
2006 x 2007 x 2008 x 2009 + 1 = (2006² + 3 x 2006 + 1)² = 4030055²
Phan Thanh Tinh Coodinator 20/04/2017 at 22:08
The nth term is
\(\overline{\left(n\right)5}^2=\left(10n+5\right)^2=100n^2+100n+25=100n\left(n+1\right)+25\)
\(=\overline{\left[n\left(n+1\right)\right]25}\)
Example :
To find the 20th term, we calculate 20(20 + 1) = 420
So the answer is 42025
The number of students in Balmoral High School in 2008 was a square number. The number of students increased by 184 in 2009 and was still a square number. The school had x students in 2008.
Find x?
You should check the question. This problem is easy :
ab - ba = 0, so n² = 0. Then n = 0.
The pattern for the sum of consecutive even numbers beginning from 2 is as follows:
\(2=2=1\times2\)
\(2+4=6=2\times3\)
\(2+4+6=12=3\times4\)
\(2+4+6+8=20=4\times5\)
\(2+4+6+8+10=30=5\times6\)
\(⋮\)
Devise an expression for the sum of n consecutive even numbers.
Run my EDM 21/03/2017 at 21:17
Given a sum : 2 + 4 + 6 + ... + 2k
The number of terms is : \(\dfrac{2k-2}{2}+1=k\) ( terms )
So, the result of this sum is : \(\dfrac{k.\left(2k+2\right)}{2}=k.\left(k+1\right)\)
Example : 2 + 4 + 6 = 2 + 4 + 2·3, so k = 3; then 2 + 4 + 6 = k(k + 1) = 3·4 = 12.
Find the 2-digit number ab such that ab = 8(a + b).
»ﻲ†hïếu๖ۣۜGïลﻲ« 25/03/2017 at 19:06
We have : ab = 8(a + b)
=> 10a + b = 8a + 8b
=> 10a - 8a = 8b - b
=> 2a = 7b
Since a and b are digits with 2a = 7b ≤ 18, we get b = 2 and a = 7
So ab is 72
Problem statement: A thick target of 55Mn is irradiated with deuterons with current $I$ during a time $T$. Most of the reactions result in 56Mn, which then decays with half-life $t_{1/2}$. Calculate the number of active 56Mn nuclei at the end of the irradiation under the assumption that the deuterons have range $R$ and that the mean value of the cross-section over this range is $\sigma$.
Following the question there are some numbers given, and the range $R$ is given in units of mg/cm$^2$.
I cannot find a consistent explanation of what this range is defined as. In my understanding it is the maximum distance a particle can travel in a material before it runs out of energy. But the units make no sense.
I've done a similar calculation where I calculated the reaction rate, and solved the ODE $$\frac{dN}{dt} = (\text{reaction rate}) - \lambda N,$$ where $N$ is the number of active nuclei.
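For reference, with a constant reaction rate and $N(0) = 0$, this linear ODE has the standard solution
$$N(T) = \frac{(\text{reaction rate})}{\lambda}\left(1 - e^{-\lambda T}\right),$$
which saturates at $(\text{reaction rate})/\lambda$ once $T$ is much longer than the half-life.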
Some explanation of this range and how to use it in this calculation is appreciated.
Introduction to Monte Carlo Methods
Introduction
Two major classes of numerical problems that arise in data analysis procedures are optimization and integration problems. It is not always possible to analytically compute the estimators associated with a given model, and we are often led to consider numerical solutions. One way to address that problem is to use simulation. Monte Carlo estimation refers to simulating hypothetical draws from a probability distribution in order to calculate significant quantities of that distribution.
The basic idea of Monte Carlo consists of writing the integral as an expected value with respect to some probability distribution, which is then approximated using the method of moments estimator ($E[g(X)] \approx \overline{g(X)} = \dfrac{1}{n}\sum g(X_{i})$).
If we have a continuous function $g(\theta)$ that we want to integrate over the interval $(a,b)$, we can rewrite our integral as an expected value with respect to a uniform distribution $U \sim U[a,b]$, that is:
$$I = \int_{a}^{b} g(\theta)\,d\theta = (b-a)\int_{a}^{b} g(\theta)\dfrac{1}{b-a}\,d\theta = (b-a)E[g(U)]$$
Using the method of moments estimator, our integral approximation is:
$$\widehat{I} = (b-a)\dfrac{1}{n}\sum_{i=1}^{n} g(u_{i})$$
where the $u_{i}$ are simulated values from a uniform distribution.
Example 1: Exponential integral approximation
1) Given the function $f(x)= e^{x}$, the integral over the interval [3,5] is:
$$\int_{3}^{5} e^{x}\,dx = e^{5} - e^{3} \approx 128.33$$
The Monte Carlo approximation of the integral is:
$$\widehat{I} = (5-3)\dfrac{1}{n}\sum_{i=1}^{n} e^{x_{i}}$$
library(knitr) # needed for kable()

# Declaring the desired function
f = function(x){return(exp(x))}
# Declaring the absolute error function
error = function(x,y){return(abs(x-y))}
# The actual integral answer
ans = exp(5)-exp(3)
set.seed(6971)
# number of iterations
n = 10^2
# simulated uniform data
x = runif(n,3,5)
# Monte Carlo approximation
MCa = (5-3)*mean(f(x))
# Approximation error
e = error(ans,MCa)
rest = data.frame(n = n, MCapprox = MCa, error = e)
set.seed(6971)
for(k in 3:6){
  n = 10^k
  x = runif(n,3,5)
  mca = (5-3)*mean(f(x))
  rest = rbind(rest, c(n, mca, error(ans,mca)))
}
kable(rest, digits = 5, align = 'c',
      caption = "Integral Monte Carlo approximation results",
      col.names = c("Number of simulations","Monte Carlo approximation","Error approximation"))
Generalized Monte Carlo approximation
In the general case, the integral approximation for a given density $f$ is:
$$I = \int g(\theta)\,d\theta = \int \dfrac{g(\theta)}{f(\theta)}f(\theta)\,d\theta = E_{f}\left[\dfrac{g(\theta)}{f(\theta)}\right]$$
An algorithm for construction of $\widehat{I}$ can be described by the following steps:
1) Generate $\theta_1, \theta_2, \ldots, \theta_n$ from the distribution $f$
2) Calculate:
$$\dfrac{g(\theta_k)}{f(\theta_k)}, \quad k = 1, \ldots, n$$
3) Obtain the sample mean: $$\widehat{I} = \dfrac{1}{n}\sum_{k=1}^{n}\dfrac{g(\theta_k)}{f(\theta_k)}$$
In the next chunk, the simple Monte Carlo approximation function is presented to show how the algorithm works, where a and b are the limits of integration (the uniform density parameters), n is the number of desired simulations, and f is the function that we want to integrate.
# The simple Monte Carlo function
MCaf = function(n,a,b,f){
  x = runif(n,a,b)
  MCa = (b-a)*mean(f(x))
  return(MCa)
}
# Example usage, reproducing Example 1: MCaf(10^4, 3, 5, f)
Monte Carlo methods in Bayesian data analysis
The main idea of Bayesian data analysis is fitting a model (such as a regression or a time series model) using a Bayesian inference approach. We assume that our parameters of interest have a theoretical distribution; this distribution (the posterior) is updated from the distribution of the observed data (the likelihood) and the previous or external information about our parameters (the prior distribution) by using Bayes' theorem.
$$p(\theta \mid X) \propto p(X \mid \theta)\,p(\theta)$$ Where:
$p(\theta \mid X)$ is the parameter posterior distribution.
$p(X \mid \theta)$ is the sampling distribution of the observed data (the likelihood).
$p(\theta)$ is the parameter prior distribution.
The main problem in the Bayesian approach is estimating the posterior distribution. Markov chain Monte Carlo (MCMC) methods generate a sample from the posterior distribution, and the expected values, probabilities or quantiles are then approximated using Monte Carlo methods. In the next two sections, we provide two examples of approximating probabilities and quantiles of a theoretical distribution. The procedures presented are the usual methodologies used in a Bayesian approach.
Example 2: Probability approximation of a gamma distribution
Let's suppose we want to calculate the probability that a random variable $\theta$ is between zero and 5, $P(0 < \theta < 5)$, where $\theta$ has a gamma distribution with parameters a = 2 and b = 1/3 ($\theta \sim Gamma(a = 2,b = 1/3)$), so the probability is:
$$P(0 < \theta < 5) = \int_{0}^{5}\dfrac{b^{a}}{\Gamma(a)}\theta^{a-1}e^{-b\theta}\,d\theta = E\left[I_{[0,5]}(\theta)\right]$$
where $I_{[0,5]}(\theta) = 1$ if $\theta$ belongs to the interval [0,5] and 0 otherwise. The idea of the Monte Carlo approximation is to count the number of simulated observations that fall in the interval [0,5] and divide by the total number of simulated values.
set.seed(6972)
# number of iterations
n = 10^2
# simulated gamma data
x = rgamma(n, shape = 2, 1/3)
# Monte Carlo approximation
MCa = mean(x <= 5)
# Approximation error
e = error(pgamma(5,2,1/3), MCa)
rest = data.frame(n = n, MCapprox = MCa, error = e)
for(k in 3:6){
  n = 10^k
  x = rgamma(n, shape = 2, 1/3)
  mca = mean(x <= 5)
  rest = rbind(rest, c(n, mca, error(pgamma(5,2,1/3), mca)))
}
kable(rest, digits = 5, align = 'c',
      caption = "Probability Monte Carlo approximation results",
      col.names = c("Number of simulations","Monte Carlo approximation","Error approximation"))
Example 3: Quantile approximation of a normal distribution
Let's suppose we want to calculate the 0.95 quantile of a random variable $\theta$ that has a normal distribution with parameters $\mu = 20$ and $\sigma = 3$ ($\theta \sim normal(\mu = 20,\sigma^{2} = 9)$), so the 0.95 quantile is the value $q_{0.95}$ such that:
$$P(\theta \le q_{0.95}) = 0.95$$
The main idea is to find the largest sample value that gives a probability equal to or less than 0.95; the Monte Carlo approximation estimates it using the quantile() function on the simulated data.
set.seed(6973)
# number of iterations
n = 10^2
# simulated normal data
x = rnorm(n, 20, 3)
# Monte Carlo approximation
MCa = quantile(x, 0.95)
# Approximation error
e = error(qnorm(0.95,20,3), MCa)
rest = data.frame(n = n, MCapprox = MCa, error = e)
for(k in 3:6){
  n = 10^k
  x = rnorm(n, 20, 3)
  mca = quantile(x, 0.95)
  rest = rbind(rest, c(n, mca, error(qnorm(0.95,20,3), mca)))
}
kable(rest, digits = 5, align = 'c',
      caption = "Quantile Monte Carlo approximation results",
      col.names = c("Number of simulations","Monte Carlo approximation","Error approximation"),
      row.names = FALSE)
Discussions and conclusions
Monte Carlo approximation methods offer an alternative tool for integral approximation and are a vital tool in the Bayesian inference approach, especially when we work with sophisticated and complex models. As seen in all three examples, Monte Carlo methods offer an excellent approximation, but they demand a huge number of simulations to bring the approximation error close to zero.
I'm trying to figure out how a vaccination model should be built to correlate with population density, and I'm having problems understanding the meaning of the results I receive when I apply the theory to the specific data I'm provided with.
Theory(i):
The initial phase of an outbreak of a disease can be described by an exponential growth model. The relevant equation is:
$(1)\frac{dI}{dt}=\beta n(1-q)I-\mu I$ where:
$n$ = the population density. Let us measure it in units of $km^{-2}$.
$I$ = the density of already infected individuals in the population; measured in the same units as $n$.
$q$ = the fraction of the population that is immune to the disease, either naturally of due to vaccination. Consequently, $1-q$ is the fraction of the population that is susceptible, i.e., at risk of getting infected. $q$ is a pure number between $0$ and $1$, and has no units.
$\beta$ = the transmission rate of the disease. It measures how easily and quickly the disease can be transmitted from an infected individual to a non-infected susceptible individual. $\beta$ includes within it both the rate at which encounters between infected and non-infected individuals occur, and the probability that such an encounter results in actual transmission of the disease. For the right-hand side of equation $(1)$ to have units of density per time, $\beta$ must have dimensions of $\frac{1}{time\times density}$, so let us measure it in units of $week^{-1}km^{2}$.
$\mu$ = the rate at which infected individuals are eliminated from the group of infected individual, either because they recover, or because they die. $\frac{1}{\mu}$ is the average duration of the infection, i.e., the average time that an individual remains infected before it either recovers or dies. Let us measure $\mu$ in units of $week^{-1}$.
This equation derives from the differential equation $(2)\ \frac{dN}{dt}=rN$, where $r$ is called the instantaneous rate of increase. It is easy to see that $I$ from equation $(1)$ is equivalent to $N$ from equation $(2)$, and therefore $r$ for equation $(1)$ is $(3)\ r=\beta n(1-q)-\mu$. When we look at equation $(3)$, we see two terms:
$\beta n(1-q)$: a positive term(ii)
$\mu$: a negative term
Minding the above, when $r=0$ there is no increase in the population(iii). From this, we can compute $q_{0}$, which is the minimum fraction of vaccinated/immune individuals in the population required to prevent the disease from spreading. From equation $(3)$ we can figure out that $q_{0}=1-\frac{\mu}{\beta n}$. Just as $q$, $q_{0}$ is supposed to be a pure number between $0$ and $1$.
Welcome to the desert of the real (my question):
Suppose we compare two countries with the following data:
Israel: $n=347\ km^{-2}$, $\beta=0.0015\ week^{-1}km^{2}$, $\mu=0.25\ week^{-1}$
Finland: $n=16\ km^{-2}$, $\beta=0.0015\ week^{-1}km^{2}$, $\mu=0.25\ week^{-1}$
When we look for $q_{0}$ for Israel we see that $q_{0}(Israel)=1-\frac{0.25}{0.0015\times347}=0.52=52\%$, while for Finland we see that $q_{0}(Finland)=1-\frac{0.25}{0.0015\times16}=-9.42=-942\%$. Assuming that we've got correct data in the first place, $q_{0}$ for Finland is a negative pure number, which is not between $0$ and $1$.
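A minimal numeric check of the two computations above (a quick sketch; the helper function is mine):

#include <stdio.h>

/* q0 = 1 - mu/(beta*n): minimum immune fraction required to stop the spread */
double q0(double beta, double n, double mu) {
    return 1.0 - mu / (beta * n);
}

int main(void) {
    printf("Israel:  q0 = %+.2f\n", q0(0.0015, 347.0, 0.25)); /* about +0.52 */
    printf("Finland: q0 = %+.2f\n", q0(0.0015, 16.0, 0.25));  /* about -9.42 */
    return 0;
}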
Do such, and similar results make any sense at all? Especially when they are not between the defined boundaries of the variable.
If they do make sense, what does it mean getting a negative results? How should it affect my vaccination policy?
Footnotes:
(i) Taken from my Populations Ecology lecture slides
(ii) Positive when looking at it from the epidemic point of view
(iii) Of infected individuals
Let $\mathcal{V}$ be the space of $C^r$ vector fields on a non-compact (smooth) manifold $M$. Being a subspace of $C^r(M, TM)$, it inherits the natural $C^r$ topology (i.e. the strong topology) of that space. Furthermore, since $C^r(M, TM)$ is Baire and $\mathcal{V}$ is closed in $C^r(M, TM)$, $\mathcal{V}$ is Baire too [see Hirsch, Differential Topology, Theorem 4.4].
If we further restrict $\mathcal{V}$ to only include the vector fields that induce locally uniformly bounded trajectories, is the restricted space still Baire?
Note: Let $f \in \mathcal{V}$ be a particular vector field and $\phi(x, t)$ be the flow obtained from the ODE $\dot{x} = f(x)$. Vector field $f$ induces locally uniformly bounded trajectories if $\displaystyle \sup_{x \in C \; , \; t \geq 0} \| \phi(x, t) \| < \infty$ for all compact $C \subseteq M$.
Some comments
The cited theorem implies the following: The space $\mathcal{V}$ has a complete metric, if we consider the weak topology instead of the strong topology. Furthermore, to prove that the restricted space is Baire in the strong topology, it is sufficient to show that the restricted space is closed in this weak topology. Therefore, if one can show that the restricted space is still complete (with the mentioned metric), it is necessarily (weakly) closed, and hence Baire in the strong topology.
An informal argument: If $M \subseteq \mathbb{R}^n$ for some $n > 0$, or (more generally) if $M$ is a uniform space, then the weak topology on $\mathcal{V}$ (i.e. the compact-open topology) coincides with the topology of compact convergence. Since the restriction criterion is formulated in terms of compact subsets of $M$, I suspect that the restricted space is closed under the weak topology. Thus, I suspect it is Baire. However, I wasn't able to turn this into a rigorous argument, and it is possible that this line of reasoning is incorrect.
Edited 1. Some suggestions are added at the end concerning Q2.
Edited 2. An "explanation" of spikes at $17^\circ$ is added at the very end...
Q1 I think that the answer to Q1 is positive provided the boundary of the tube is smooth. I'll consider the case of dimension $2$.
So, by our assumptions the light is propagating in the strip bounded by two smooth curves $L_+$ and $L_-$ that are both equidistant from the central curve $L$ at distance $\frac{1}{2}$. It is important that the whole strip is foliated by unit intervals orthogonal to all three curves; we call this foliation $F$.
Now consider our ray of light $R(t)$ propagating in the strip and introduce a function $\angle(t)$ that equals the angle between $R(t)$ and the orthogonal to $F$ in the direction of the tube. At the entrance of the tube the angle equals $0$.
Claim 1. For any moment $t$ we have $\angle(t)<\frac{\pi}{2}$.
Proof. Indeed, suppose that at some time $\angle(t)=\frac{\pi}{2}$. This means that $R(t)$ at this time goes in the direction of the foliation $F$. But since any segment of $F$ is a periodic ray in the strip, $R(t)$ must coincide with the segment, which is absurd. END.
So, we see now that $R(t)$ will always propagate in the strip in one direction. So the only possibility for the ray to stay forever in the strip is to accumulate at some point to a segment $F_0$ of the foliation $F$. Let me explain why this is impossible. The main idea is that this is impossible in the case the curve $L$ is a circle of radius $r>1$. In this case it is easy to check the statement. The statement for general $L$ roughly follows from the fact that $L$ can be approximated well by a circle at any point.
To spell out the above in more detail we can reduce the question to a question about billiards. Indeed, on the two-dimensional set of straight directed segments that join $L_+$ with $L_-$ there is a (partially defined) self-map, consisting of two consecutive reflections of the segment (first with respect to $L_-$, then with respect to $L_+$). All the segments of $F$ are fixed points of the map. We need to show that there is no point that tends to $F_0$ under infinite iteration of the map. This self-map has three properties: 1) it preserves an area form; 2) it fixes a segment (parametrizing the segments of $F$); 3) its linearisation is never the identity on the fixed segment.
These 3 properties are sufficient to deduce that everything roughly boils down to the following exercise:
Exercise. Consider a sequence $a_n$ such that $a_{n+1}=a_n(1-a_n)$, with $a_0$ positive and less than one. Then $\sum_i a_i=\infty$.
PS. I think that we can require the curves $L_+$, $L_-$ and $L$ to be only $C^3$-smooth, but the proof uses the fact that the curvature of $L$ is strictly larger than $1$. It is not obvious if this condition can be relaxed.
Q2 This is more of a suggestion rather than an answer. But this suggestion might help to get some clues to the answer. I would suggest you to make one more picture, namely, the picture of the Phase portrait - standard thing one does when dealing with a billiard. So, one only needs to consider the trajectory and for each reflection of the trajectory from the upper curve plot the point with two coordinates:
(angle of the ray; $x$-coordinate modulo $2$)
If you plot 1500 points, a certain shape will appear. Probably the points will fill a two-dimensional domain, but according to the histogram, the trajectory will avoid a large part of the phase portrait. This just reflects the fact that this billiard is not ergodic. I think that to understand why there are no rays with angles in $[19^\circ, 111^\circ]$ one should analyse the boundary of the shape that will appear. This boundary might correspond to some "quasi-periodic" trajectory(ies) of the billiard.
Further on Q2. I want to add a couple of remarks on Q2 that are rather superficial. So, from the experiment of Joseph we see that with some probability it turns out that the original trajectory is quasi-periodic, i.e. the segments constituting the trajectory land on a one-dimensional curve in the 2-dimensional space of all segments. This at least explains the appearance of spikes in the first histogram. Indeed, when you project a measure evenly distributed on a curve in the plane to the $x$ axis, the projected measure will have singularities at the points where the vertical lines $x=const$ are tangent to the curve.
Now, I guess, that in order to really answer the question one can indeed try to prove that the initial trajectory is quasi-periodic. The billiard is rather simple of course, but I don't know how hard it will be. And before you prove this, you cannot be sure that the trajectory is really quasi-periodic...
Let $D$ be a bounded simply connected region (open subset homeomorphic to the disc) in the plane, containing the origin. Suppose that for every line $L$ through the origin the intersection $L\cap\partial D$ consists of two points $z_1$ and $z_2$ such that $|z_1-z_2|=\mathrm{diam} D$. Does it follow that $D$ is a disc?
Following the suggestion of Benoît Kloeckner, moderator, I have deleted my later answer, then edited and appended this (originally partial) answer to make it complete.
Pick an arbitrary direction. Then draw the line $L$ through the origin, perpendicular to the chosen direction. Since $L$ intersects the boundary of $D$ at two points, say $z_1$ and $z_2$, such that the segment $\overline{z_1z_2}$ is a diameter of $D$, the lines perpendicular to this segment and passing through $z_1$ and $z_2$, respectively, bound a parallel strip containing $D$ - otherwise the diameter of $D$ would be greater than the distance from $z_1$ to $z_2$. This proves that the width of $D$ in every direction is equal to the diameter of $D$.
Remark: This also proves that the closure $\bar{D}$ of $D$ is convex, being the intersection of a family of strips. In fact, $\bar{D}$ is strictly convex, as every set of constant width must be.
Another remark: The same proof works in every dimension; just replace the arbitrary direction by an arbitrary hyperplane with respect to which we look at the width of $D$.
Thus far, this does not quite answer the question. Among all examples of convex bodies of constant width I know, only the ball has the property that all diameters have one common point. It remains to prove that no other such body exists.
It has been established that $\bar{D}$ is strictly convex, that is, every support line of $D$ contains exactly one boundary point of $D$. Also, each support line of $D$ has its "opposite" support line, forming a strip between them of width $d$ and containing $D$. Now, suppose $\bar{D}$ is not smooth. Specifically, let $x_0$ be a boundary point of $D$ at which there are two intersecting support lines. Then the corresponding opposite support lines touch $\bar{D}$ at points $x_1$ and $x_2$, respectively, such that each of the segments $\overline{x_0x_1}$ and $\overline{x_0x_2}$ is a diameter of $D$. But since all diameters of $D$ meet at a single point, namely at the origin, $x_0$ must be the origin, contrary to the assumption that the origin lies in the interior of $\bar{D}$. This, in view of Alexandre's comment at the end of his question, implies that $D$ is a circular disk.
By the way, it is not necessary to assume that the origin lies in the interior of $\bar{D}$, since this follows from the other assumptions. Namely, if the origin were a boundary point of $D$, then a line passing through it and penetrating the interior of $D$ would intersect the boundary of $D$ at another point, the two points forming a diameter. Then the line $L$ through the origin and perpendicular to the penetrating line would be a support line of $D$. But there should be another boundary point on $L$ at the diameter-distance from the origin - a clear contradiction: two perpendicular diameters meeting at their end points. Thus $D$ is a circle centered at the origin.
Final remark: The statement for the plane implies the same in higher dimensions, by taking all 2-dimensional cross-sections of the body through the origin: each of them is smooth and each satisfies the same assumptions on the diameters. Since all such cross-sections are congruent circles, the body is a ball.
I will show that $\partial D$ is a Jordan curve in the plane; maybe this will be of some help.
To do this, define a map $\phi : S^1 \to \Bbb R^2$ such that $\phi(v) = (\Bbb R_+ v) \cap \partial D$. Your conditions imply that this map is well-defined, that is, single-valued. It also immediately follows that $\phi$ is injective. Moreover, $\phi(S^1) = \partial D$. In fact, the inclusion $\partial D \subset \phi(S^1)$ is obvious by the assumption that any line through 0 intersects $\partial D$ in two points, opposite to each other with respect to 0. The inclusion $\phi(S^1) \subset \partial D$ holds because $(\Bbb R_+ v) \cap D$ must be an interval containing 0 (otherwise you get more than one intersection with $\partial D$), and $\phi(v)$ can be thought of as the supremum of this interval.
Next we show that $\phi$ is continuous by contradiction. Suppose that there is a sequence $v_n \in S^1$ which converges to $a\in S^1$ such that $\phi(v_n)$ does not converge to $\phi(a)$. This means that there is an open disk $U$ in $\Bbb R^2$ centered at $\phi(a)$ such that $\phi(v_n) \notin U$ for all $n$ (up to passing to a suitable subsequence). In this way you get a limit point of $\phi(v_n)$ that is different from $\phi(a)$, belongs to $\partial D$, but lies on the half-line $\Bbb R_+ a$, and this contradicts your assumptions. It follows that $\partial D$ is a Jordan curve.
Note that we didn't use the hypothesis that $D$ is homeomorphic to a disk (or that it is simply connected). This follows automatically from the Schoenflies theorem, since we have proved that $\partial D$ is a Jordan curve. Or, if you prefer, you can explicitly define a radial homeomorphism of the plane that sends the unit disk to $D$, by means of a rescaling of the embedding $\phi$.
Inspired by Wlodek Kuperberg's answer, I think I have a simple proof that your domain must be a circle.
As noticed by Wlodek, given any line $L$ through the origin, at both points of intersection between $L$ and $\partial D$ the line orthogonal to $L$ is a supporting line for $D$. Moreover $D$ is convex.
This means that the boundary curve of $D$ must be an integral curve of the vector field orthogonal to the directions issued from the origin, i.e. $\partial/\partial \theta$ in polar coordinates $(r,\theta)$. This shows that it must be a circle.
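To spell out the last step (a sketch, assuming momentarily that the boundary is differentiable; the edit below explains why Lipschitz regularity is enough): write the boundary in polar coordinates as $\gamma(\theta) = r(\theta)\, e_r(\theta)$, so that
$$ \gamma'(\theta) = r'(\theta)\, e_r(\theta) + r(\theta)\, e_\theta(\theta). $$
Since the support line at $\gamma(\theta)$ is orthogonal to the radial direction, the tangent $\gamma'(\theta)$ must be parallel to $e_\theta(\theta)$, forcing $r'(\theta) = 0$ for every $\theta$. Hence $r$ is constant and the boundary is a circle centered at the origin.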
Edit: this proof may look like it needs the boundary to be smooth, but really it doesn't: one only needs the fundamental theorem of calculus for the function that maps a direction from the origin to the distance between the corresponding boundary point and the origin. Convexity of the boundary is more than enough, since it ensures that this function is Lipschitz. |
Ok, so I'm looking at Ballentine's Quantum Mechanics right now, 7th reprint (2010). On page 363, he starts with 12.7 Adiabatic Approximation and quickly moves on to explain Berry's phase on page 365.
In equation $(12.90)$, he gives a formula for the time evolution of a certain, up to now seemingly "unimportant", phase, namely \begin{equation}\tag{1} \dot{\gamma}_n(t)=\iota\langle n(R(t))|\dot{n}(R(t))\rangle, \end{equation} where $|n(R(t))\rangle$ is the $n$-th eigenstate of the time-dependent Hamiltonian $\hat{H}(R(t))$ for some curve $R(t)$ in the parameter space.
Next, he states that we may rewrite this equation as \begin{equation}\tag{2} \dot{\gamma}_n(t)=\iota\langle n|\nabla_R\dot{n}\rangle \cdot \dot{R}(t). \end{equation} Comparing this to Berry's original equation $(4)$, which is \begin{equation}\tag{3} \dot{\gamma}_n(t)=\iota\langle n|\nabla_Rn\rangle \cdot \dot{R}(t), \end{equation} you might already see where my problem arises: the dot over the $n$, or the lack thereof. My intuition tells me that Berry is right and that it's just an error in Ballentine's book, which would kind of make sense, since Berry's paper is peer-reviewed, and I would interpret $\nabla_R n$ as $$|\dot{n}(R(t))\rangle=|\frac{\partial n}{\partial R^i}\frac{\partial R^i}{\partial t}\rangle=|\frac{\partial n}{\partial R^i}\rangle\dot{R^i}\equiv|\nabla_Rn\rangle\cdot\dot{R}.$$ But Ballentine's error is consistent: on the next page, we can see it 3 more times, and it is quite hard to believe that such an error occurs this persistently.
Could you please tell me whose side the error is on and whether my interpretation of $\nabla$ is right? |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The energy balance on the control volume reads
\[ Q = C_p\, \left({T_0}_2 - {T_0}_1 \right) \label{ray:eq:energy} \tag{1} \] The momentum balance reads \[ A\, ( P_1 - P_2 ) = \dot{m} \, ( V_2 - V_1) \label{ray:eq:momentum} \tag{2} \] The mass conservation reads \[ \rho_1 U_1 A = \rho_2 U_2 A = \dot{m} \label{ray:eq:mass} \tag{3} \] The equation of state reads \[ {P_1 \over \rho_1 \,T_1} = {P_2 \over \rho_2 \,T_2} \label{ray:eq:state} \tag{4} \] There are four equations with four unknowns, if the upstream conditions are known (or the downstream conditions are known). Thus, a solution can be obtained. One can notice that equations (2), (3), and (4) are similar to the equations that were solved for the shock wave. Thus, the resulting relations are the same as before, starting with the pressure ratio in equation (5).
Pressure Ratio
\[
\label{ray:eq:Pratio} \dfrac{P_2 }{ P_1} = \dfrac {1 + k\,{M_1}^{2} }{ 1 + k\,{M_2}^{2}} \tag{5} \]
The equation of state (4) can further assist in obtaining the temperature ratio as
\[ \dfrac{T_2 }{ T_1} = \dfrac{P_2 }{ P_1} \dfrac{\rho_1 }{ \rho_2} \label{ray:eq:Tratio} \tag{6} \] The density ratio can be expressed in terms of mass conservation as \[ \dfrac{\rho_1 }{ \rho_2} = \dfrac{U_2 }{ U_1} = \dfrac{ \dfrac{U_2 }{ \sqrt{k\,R\,T_2} } \sqrt{k\,R\,T_2} } { \dfrac{U_1 }{ \sqrt{k\,R\,T_1} } \sqrt{k\,R\,T_1} } = \dfrac{M_2 }{ M_1} \sqrt{ T_2 \over T_1} \label{ray:eq:rhoR1} \tag{7} \] or in simple terms as
Density Ratio
\[
\label{ray:eq:rhoR} \dfrac{\rho_1 }{ \rho_2} = \dfrac{U_2 }{ U_1} = \dfrac{M_2 }{ M_1} \sqrt{\dfrac{ T_2 }{ T_1}} \tag{8} \]
or substituting equations (5) and (8) into equation (6) yields
\[ {T_2 \over T_1} = {1 + k\,{M_1}^{2} \over 1 + k\,{M_2}^{2}}\, {M_2 \over M_1} \sqrt{ T_2 \over T_1} \label{ray:eq:t2t1a} \tag{9} \] Transferring the temperature ratio to the left hand side and squaring the results gives
Temperature Ratio
\[
\label{ray:eq:t2t1b} \dfrac{T_2 }{T_1} = \left[ \dfrac{1 + k\,{M_1}^{2} }{ 1 + k\,{M_2}^{2}} \right]^{2}\, \left(\dfrac{M_2 }{ M_1}\right)^{2} \tag{10} \]
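As a numerical companion to these ratios (a minimal sketch; the function name and the choice $k = 1.4$ for air are my own assumptions, not part of the text):

```python
import math

def rayleigh_ratios(M1: float, M2: float, k: float = 1.4):
    """Static property ratios between stations 1 and 2 on a Rayleigh line,
    from equations (5), (10), and (8): P2/P1, T2/T1, and rho1/rho2."""
    p_ratio = (1.0 + k * M1**2) / (1.0 + k * M2**2)    # eq. (5)
    t_ratio = p_ratio**2 * (M2 / M1)**2                # eq. (10)
    rho1_over_rho2 = (M2 / M1) * math.sqrt(t_ratio)    # eq. (8)
    return p_ratio, t_ratio, rho1_over_rho2

# Example: heating a subsonic flow from M1 = 0.3 toward M2 = 0.5
print(rayleigh_ratios(0.3, 0.5))
```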
The Rayleigh line exhibits two possible maxima: one for \(dT/ds = 0 \) and one for \(ds /dT =0\). The second maximum can be expressed as \(dT/ds = \infty\). The second law is used to find the expression for the derivative.
\[ {s_1 -s_2 \over C_p} = \ln {T_2 \over T_1} - \dfrac{k -1 }{ k}\, \ln {P_2 \over P_1} \label{ray:eq:2ndLaw} \tag{11} \] \[ \label{ray:eq:sndRlawEx} \dfrac{s_1 -s_2 }{ C_p} = 2\, \ln \left[ {({1 + k\,{M_1}^{2}) \over (1 + k\,{M_2}^{2} ) } {M_2 \over M_1}} \right] + {k -1 \over k} \ln \left[ {1 + k\,{M_2}^{2} \over 1 + k\,{M_1}^{2} } \right] \qquad \, \tag{12} \]
Let the initial condition \(M_1\) and \(s_1\) be constant and let the variable parameters be \(M_2\) and \(s_2\). A derivative of equation (12) results in
\[
\dfrac{1 }{ C_p } \dfrac{ds }{ dM} = \dfrac{2\, ( 1 - M^{2} ) }{ M\, (1 + k\,M^{2} )} \label{ray:eq:sndRlawExDrivative} \tag{13} \] Taking the derivative of equation (10) and letting the variable parameters be \(T_2\) and \(M_2\) results in \[ {dT \over dM} = constant \times { 1 - k\,M^{2} \over \left( 1 + k\,M^2\right)^{3} } \label{ray:eq:dTdM} \tag{14} \] Combining equations (13) and (14) by eliminating \(dM\) results in \[ {dT \over ds} = constant \times {M (1 - kM^2 ) \over ( 1 -M^2) ( 1 + kM^2 )^2 } \label{ray:eq:dTds} \tag{15} \] On the T–s diagram a family of curves can be drawn for a given constant. Yet for every curve several observations can be generalized. The derivative is equal to zero when \(1 - kM^2 = 0\), i.e. at \(M = 1 /\sqrt{k}\), or when \(M \rightarrow 0\). The derivative is equal to infinity, \(dT/ds = \infty\), when \(M = 1\). From thermodynamics, heating increases the entropy and cooling reduces it. Hence, when cooling is applied to a tube the velocity decreases, and when heating is applied the velocity increases. At the peculiar point \(M = 1/\sqrt{k}\), when additional heat is applied the temperature actually decreases; the derivative is negative, \(dT/ds < 0\), yet note that this point is not the choking point. The choking occurs only at \(M= 1\), because continuing past it by heating would violate the second law. The transition to supersonic flow occurs when the area changes, somewhat similarly to Fanno flow. Choking can be explained by the fact that an increase of energy must be accompanied by an increase of entropy, but the entropy of supersonic flow is lower (Figure 11.40), and therefore the transition is not possible (the entropy is maximal at \(M=1\)). It is convenient to refer everything to the value at \(M=1\).
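A quick numerical illustration of these two extrema (again a sketch, with $k = 1.4$ assumed), tabulating the state along the Rayleigh line via equations (5), (10), and (11) referenced to the sonic state:

```python
import numpy as np

k = 1.4
M = np.linspace(0.05, 3.0, 10_000)

# Ratios relative to the sonic (M = 1) reference state
T = M**2 * ((1 + k) / (1 + k * M**2))**2      # T/T*, from eq. (10)
P = (1 + k) / (1 + k * M**2)                  # P/P*, from eq. (5)
s = np.log(T) - (k - 1) / k * np.log(P)       # (s - s*)/Cp, from eq. (11)

print(M[np.argmax(T)])   # ~0.845 = 1/sqrt(k): the temperature maximum
print(M[np.argmax(s)])   # ~1.0: the entropy maximum (choking point)
```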
The equation (5) can be written between the choking point and any point on the curve.
Pressure Ratio
\[
\label{ray:eq:Pratioa} \dfrac{P^{*} }{ P_1} = {1 + k\,{M_1}^{2} \over 1 + k} \tag{16} \]
The temperature ratio is
Temperature Ratio
\[
\label {ray:eq:Tratioa} {T^{*} \over T_1} = {1 \over {M_1}^2} \left( {1 + k{M_1}^{2} \over 1 + k} \right)^{2} \tag{17} \]
The stagnation temperature can be expressed as
\[
\dfrac{{T_0}_1}{{T_0}^{*}} = \dfrac{T_1 \left( 1 + \dfrac{k -1 }{ 2}\, {M_1}^{2} \right)
}{ T^{*} \left( \dfrac{1 + k } {2} \right)}
\label{ray:eq:T0ratio2} \tag{18}
\]
or explicitly
Stagnation Temperature Ratio
\[
\label{ray:eq:T0ratio}
\dfrac{{T_0}_1}{{T_0}^{*}} = \dfrac{ 2\, ( 1 + k )\, {M_1}^{2} }{ (1 + k\,{M_1}^{2})^2}
\left( 1 + {k -1 \over 2} {M_1} ^2 \right) \tag{19}
\]
The stagnation pressure ratio reads
\[
\dfrac{{P_0}_1}{{P_0}^{*}} = \dfrac{P_1 \left( 1 + \dfrac{k -1 }{ 2}\, {M_1}^{2} \right)^{k \over k-1}
}{ P^{*} \left( \dfrac{1 + k }{ 2} \right)^{k \over k-1}}
\label{ray:eq:P0ratio2} \tag{20}
\]
or explicitly
Stagnation Pressure Ratio
\[
\label{ray:eq:P0ratio} \dfrac{{P_0}_1}{{P_0}^{*}} =
{\left({ 1 + k \over 1 + k\,{M_1}^2}\right)}
\left( { 1 + \dfrac{k-1}{2}\,{M_1}^2 \over {(1 + k) \over 2}} \right)^{\dfrac{k }{ k -1}} \tag{21}
\]
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
$\newcommand{\Ric}{\text{Ric}}$Let $M$ be a smooth closed oriented Riemannian surface.
I am searching for a reference (or a sketch of proof) for the following inequality:
$$ \int_M | \nabla V|^2 \ge \int_{M} \Ric(V,V)=\int_{M} K|V|^2, \tag{1}$$
for every vector field $V \in \Gamma(TM)$, where $ \nabla$ is the Levi-Civita connection, and the integration is against the Riemannian volume form. ($K$ is the Gauss curvature).
I guess some kind of Bochner identity is needed. I am also interested in knowing whether this inequality holds for manifolds of higher dimension.
BTW, specializing to the case of the round $2$-sphere, we get
$$ \int_{\mathbb{S}^2} | \nabla V|^2 \ge \int_{\mathbb{S}^2} |V|^2. \tag{2}$$
A proof of this specific case can be found here. |
Let's first have a look at the rectangular signal given as an example in your question. If you have a rectangle $s(t)$ in the time domain which is $1$ in the interval $[-T/2,T/2]$ and zero elsewhere, its Fourier transform is $S(f)=T\text{sinc}(Tf)$, where I use $\text{sinc}(x)=\sin(\pi x)/(\pi x)$. The value of its Fourier transform at $f=0$ equals $S(0)=T$, which corresponds to
$$\int_{-\infty}^{\infty}s(t)dt=T\tag{1}$$
Its time average (or mean, or DC value) is given by
$$\bar{s}=\lim_{T_0\rightarrow\infty}\frac{1}{T_0}\int_{-T_0/2}^{T_0/2}s(t)dt=0\tag{2}$$
It is clear that any function for which the integral in (1) is finite must have a DC value of zero. The integral in (1) is the value of the Fourier transform of the signal at DC, and this is probably what confuses you. The DC value of a signal and the value of its Fourier transform at DC are not the same thing. Any signal with a finite Fourier transform at DC has a DC value of zero, i.e. $\bar{s}=0$. Any signal with a non-zero DC value $\bar{s}\neq 0$ has a Dirac delta impulse in its Fourier transform at DC.
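A short numerical illustration of the distinction (a sketch; the window length and sampling step are arbitrary choices):

```python
import numpy as np

# Rectangle of width T = 1 sampled on a long window. Its time average tends
# to zero as the window grows, while the Riemann sum approximating the
# Fourier integral at f = 0 stays finite and equals T.
dt = 1e-3
t = np.arange(-500.0, 500.0, dt)                   # 1000 s observation window
s = ((t >= -0.5) & (t <= 0.5)).astype(float)       # rect pulse, T = 1

S0 = np.sum(s) * dt    # value of the Fourier transform at DC
s_bar = np.mean(s)     # DC (time-average) value over the window

print(S0)      # ~1.0   (finite: equals T)
print(s_bar)   # ~0.001 (-> 0 as the window length grows)
```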
If you write a signal as
$$s(t)=\bar{s}+\tilde{s}(t)$$
where $\bar{s}$ is the DC component as computed from (2), and, consequently, $\tilde{s}(t)$ has a DC component of zero, then its Fourier transform is
$$S(f)=\bar{s}\delta(f)+\tilde{S}(f)$$
where $\tilde{S}(0)$ is finite.
EDIT: Also note that when the Fourier transform of a signal $s(t)$ has a certain non-zero value at a frequency $f_0$, this does not entail that the signal has a pure sinusoidal component at that frequency. The same is true for DC. If the Fourier transform has a finite value at DC, the time-domain signal has no DC component; otherwise there would be a Dirac impulse at $f=0$, just as there would be a Dirac impulse at $f_0$ if the signal contained a sinusoid at that frequency. |
Given the range of rational numbers $x$ between $0$ and $2\pi$, what is the set of rational numbers $y = \cos(x)$?
I was inspired by the stackoverflow question Can $\cos(a)$ ever equal $0$ in floating point? (The irrational number $\frac{\pi}{2}$ does not translate well into a computer representation.)
I looked for rational cosines, and came up with the likes of $$ 0, \frac{\pi}{3},\frac{\pi}{2}, \pi, \frac{3\pi}{2}.$$ Following this rabbit hole, I wondered if there were any rational (floating point) numbers (besides $0$) that yielded rational cosines.
One respondent opened a different question, on english.stackexchange.com, What is the upper bound on “several”? which involves the size of the set in question. |
How to Calculate the Moment of Inertia of a Beam Section (Second Moment of Area)
Before we find the moment of inertia (or second moment of area) of a beam section, its centroid (or center of mass) must be known. For instance, if the moment of inertia of the section about its horizontal (XX) axis was required then the vertical (y) centroid would be needed first (Please view our Tutorial on how to calculate the Centroid of a Beam Section).
Before we start, if you were looking for our Free Moment of Inertia Calculator please click the link to learn more. This will calculate the centroid, moment of inertia and other results and even show you the step by step calculations! But for now, let’s look at a step-by-step guide and example of how to calculate moment of inertia:
Step 1: Segment the beam section into parts
When calculating the area moment of inertia, we must calculate the moment of inertia of smaller segments. Try to break them into simple rectangular sections. For instance, consider the I-beam section below, which was also featured in our Centroid Tutorial. We have chosen to split this section into 3 rectangular segments:
Step 2: Calculate the Neutral Axis (NA)
The Neutral Axis (NA) or the horizontal XX axis is located at the centroid or center of mass. In our Centroid Tutorial, the centroid of this section was previously found to be 216.29 mm from the bottom of the section.
Step 3: Calculate Moment of Inertia
To calculate the total moment of inertia of the section we need to use the “Parallel Axis Theorem”:
[math]
{I}_{total} = \sum{(\bar{I}_{i} + {A}_{i} {{d}_{i}}^{2})} \text{ where:}\\ \begin{align} \bar{I}_{i} &= \text{The moment of inertia of the individual segment about its own centroid axis}\\ {A}_{i} &= \text{The area of the individual segment}\\ {d}_{i} &= \text{The vertical distance from the centroid of the segment to the Neutral Axis (NA)} \end{align} [math]
Since we have split it into three rectangular parts, we must calculate the moment of inertia of each of these sections. It is widely known that the moment of inertia equation of a rectangle about its centroid axis is simply:
[math]
\bar{I}=\frac{1}{12}b{h}^{3} \text{ where:}\\\\ \begin{align} b &= \text{The base or width of the rectangle}\\ h &= \text{The height of the rectangle} \end{align} [math]
The moments of inertia of other shapes are often stated in the front/back of textbooks or in this guide of moment of inertia shapes. However, the rectangular shape is very common for beam sections, so it is probably worth memorizing.
Now we have all the information we need to use the “Parallel Axis Theorem” and find the total moment of inertia of the I-beam section. In our moment of inertia example:
[math]
\text{Segment 1:}\\ \begin{align} \bar{I}_{1} &= \tfrac{1}{12}(250)(38)^{3} = 1,143,166.667 {\text{ mm}}^{4}\\ {A}_{1} &= 250\times38 = 9500 {\text{ mm}}^{2}\\ {d}_{1} &= \left|{y}_{1} - \bar{y} \right| = \left|(38 +300 +\tfrac{38}{2}) - 216.29\right| = 140.71 \text{ mm}\\\\ \end{align} [math]
[math]
\text{Segment 2:}\\ \begin{align} \bar{I}_{2} &= \tfrac{1}{12}(25)(300)^{3} = 56,250,000 {\text{ mm}}^{4}\\ {A}_{2} &= 300\times25 = 7500 {\text{ mm}}^{2}\\ {d}_{2} &= \left|{y}_{2} - \bar{y}\right| = \left|(38 +\tfrac{300}{2}) - 216.29\right| = 28.29 \text{ mm}\\\\ \end{align} [math]
[math]
\text{Segment 3:}\\ \begin{align} \bar{I}_{3} &= \tfrac{1}{12}(150)(38)^{3} = 685,900 {\text{ mm}}^{4}\\ {A}_{3} &= 150\times38 = 5700 {\text{ mm}}^{2}\\ {d}_{3} &= \left|{y}_{3} - \bar{y}\right| = \left|\tfrac{38}{2} - 216.29\right| = 197.29 \text{ mm}\\\\ \end{align} [math]
[math]
\begin{align} \therefore {I}_{total} &= \sum{(\bar{I}_{i} + {A}_{i} {{d}_{i}}^{2})} \\ &= (\bar{I}_{1} + {A}_{1}{{d}_{1}}^{2}) + (\bar{I}_{2} + {A}_{2}{{d}_{2}}^{2}) + (\bar{I}_{3} + {A}_{3}{{d}_{3}}^{2})\\ &= (1,143,166.667 + 9500\times140.71^{2}) + (56,250,000 + 7500\times28.29^{2}) + (685,900 + 5700\times197.29^{2})\\ &= 474,037,947.7 {\text{ mm}}^{4}\\ {I}_{total} &= 4.74 \times 10^{8} {\text{ mm}}^{4} \end{align} [math]
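The whole procedure is easy to script. Here is a minimal Python sketch (the function name is mine; the segment dimensions are hard-coded from the example above):

```python
def beam_moi(segments, y_bar):
    """Total second moment of area about the neutral axis via the
    parallel axis theorem. Each segment is (b, h, y_centroid) in mm."""
    total = 0.0
    for b, h, y in segments:
        I_own = b * h**3 / 12.0   # rectangle about its own centroid axis
        A = b * h                 # segment area
        d = abs(y - y_bar)        # distance from segment centroid to NA
        total += I_own + A * d**2
    return total

# The I-beam from this example: (b, h, y_centroid) for each rectangle
segments = [(250, 38, 38 + 300 + 19),   # segment 1 (top flange)
            (25, 300, 38 + 150),        # segment 2 (web)
            (150, 38, 19)]              # segment 3 (bottom flange)

print(beam_moi(segments, y_bar=216.29))  # ~4.74e8 mm^4, matching the result
```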
So there you have our guide on calculating the area moment of inertia for beam sections. This result is critical in structural engineering and is an important factor in the deflection of a beam. We hope you enjoyed the tutorial and look forward to any comments you have.
BONUS: Using our Moment of Inertia Calculator
SkyCiv’s Account shows full calculations of moment of inertia. This interactive module will show you the step-by-step calculations of how to find moment of inertia:
Alternatively, you can see the results of our Free Moment of Inertia Calculator to check your work. This will calculate all the properties of your cross section and is a useful reference to calculate the Centroid, Area and Moment of Inertia of your beam sections! |
We study polynomials computed by depth five $\Sigma\wedge\Sigma\wedge\Sigma$ circuits, i.e., polynomials of the form $\sum_{i=1}^t Q_i$ where $Q_i = \sum_{j=1}^{r_i}\ell_{ij}^{d_{ij}}$, the $\ell_{ij}$ are linear forms, and $r_i$, $t\ge 0$. These circuits are a natural generalization of the well known class of $\Sigma\wedge\Sigma$ circuits and have received significant attention recently. We prove exponential lower bounds for the monomial $x_1\cdots x_n$ against the following sub-classes of $\Sigma\wedge\Sigma\wedge\Sigma$ circuits:
\begin{itemize} \item Depth four homogeneous $\Sigma\wedge\Sigma \wedge$ arithmetic circuits. \item Depth five $\Sigma\wedge\Sigma^{[\le n]}\wedge^{[\ge 21]}\Sigma$ and $\Sigma\wedge\Sigma^{[\le 2^{\sqrt{n}/1000}]}\wedge^{[\ge \sqrt{n}]}\Sigma$ arithmetic circuits where the bottom $\Sigma$ gate is homogeneous; \end{itemize} Our results show precisely how the fan-in of the middle $\Sigma$ gates, the degree of the bottom powering gates and the homogeneity at the bottom $\Sigma$ gates play a crucial role in the computational power of $\Sigma\wedge\Sigma\wedge\Sigma$ circuits.
Some of the bounds are improved.
The power symmetric polynomial on $n$ variables of degree $d$ is defined as
$p_d(x_1,\ldots, x_n) = x_{1}^{d}+\dots + x_{n}^{d}$. We study polynomials that are expressible as a sum of powers of homogeneous linear projections of power symmetric polynomials. These form a subclass of polynomials computed by depth five circuits with summation and powering gates (i.e., $ \sum\bigwedge\sum\bigwedge\sum$ circuits). We show $2^{\Omega(n)}$ size lower bounds for $x_1\cdots x_n$ against the following models: \begin{itemize} \item Depth five $\sum\bigwedge\sum^{\le n}\bigwedge^{\ge 21}\sum$ arithmetic circuits where the bottom $\sum$ gate is homogeneous; \item Depth four $\sum\bigwedge\sum^{\le n}\bigwedge$ arithmetic circuits. \end{itemize} Together with the ideas in [Forbes, FOCS 2015] our lower bounds imply deterministic $n^{\mathrm{poly}(\log n)}$ black-box identity testing algorithms for the above classes of arithmetic circuits. Our technique uses a measure that involves projecting the partial derivative space of the given polynomial to its multilinear subspace and then setting a subset of variables to $0$.
Title was incorrectly entered in the info. Changed it now.
It is commonly written in the literature that, due to its transforming in the adjoint representation of the gauge group, a gauge field is Lie-algebra valued and may be decomposed as $A_{\mu} = A_{\mu}^a T^a$. For $\mathrm{SU}(3)$ the adjoint representation is 8-dimensional, so objects transforming under the adjoint representation are $8 \times 1$ real Cartesian vectors, or $3 \times 3$ traceless Hermitian matrices via the Lie group adjoint map. The latter motivates writing $A_{\mu}$ in terms of generators, $A_{\mu} = A_{\mu}^a T^a$.
My first question is: this equation is said to be valid independent of the representation of the $T^a$ - but how can this be true? In some representation other than the fundamental representation, the $T^a$ will not be $3 \times 3$ Hermitian traceless matrices and thus will not contain the $8$ real parameters needed for transformation under the adjoint representation. But we know the gluon field transforms under the adjoint representation, so what is the error in this line of reasoning, which is suggestive of constraining the $T^a$ to be the Gell-Mann matrices?
Consider the following small computation: $$A_{\mu}^a \rightarrow A_{\mu}^b D_b^{\,\,a} \Rightarrow A_{\mu}^a t^a \rightarrow A_{\mu}^b D_b^{\,\,a}t^a$$ Now, since $Ut_bU^{-1} = D_b^{\,\,a}t^a$, we have $A_{\mu}^a t^a \rightarrow A_{\mu}^b (U t^b U^{-1}) = U A_{\mu}^b t^b U^{-1}$. The transformation law for $A_{\mu}$ is in fact $A_{\mu} \rightarrow UA_{\mu}U^{-1} - (i/g) (\partial_{\mu} U) U^{-1}$.
What is the error that amounts to these two formulae not being reconciled?
The latter equation doesn't seem to express the fact that the gluon field transforms in the adjoint representation. I was thinking that under $\mathrm{SU}(3)$ colour, since this is a global transformation, $U$ will be independent of spacetime, so the derivative term goes to zero - but is there a more general argument? |
Many have tried to convert NetHack units of measurement to a scale that would more or less correlate with real life. This attempt uses metabolism as the conversion factor. The result is approximately 7.5 seconds per turn, which seems to be within the proper order of magnitude. The translated NetHack time will be called Correlated NetHack Time, or CNT.
Calculation
According to the guidebook, divine intervention was certainly sufficient to influence the sleep of the chosen hero ("Strange dreams... haunted you in your sleep"). Therefore, it can be extrapolated that whilst in the dungeon (or Gehennom), magical or divine influence allowed the hero to not require regular sleep. This explains why no mandatory sleep occurs throughout the game.
Assuming, then, that the hero hacks continuously, and that hacking is hard work, the hero's caloric consumption can be calculated. Averaging the energy spent during combat and the energy spent leisurely exploring, the estimated caloric consumption is likely equivalent to that of a brisk walk for a human of 80 kg, or around 400 Cal/hour. (Note that the capital letter is significant here; 1 Calorie (1 Cal) is equal to 1000 calories (1000 cal) or 1 kilocalorie (1 kcal) [1].)
Hero consumes $ 400\text{ Cal}/\text{h} $ continuously (no sleep).
Since food rations are frequently in the hero's inventory from the start, it can be safely assumed that food rations are not magical products, but designed for normal mortal humans to eat three times daily with a daily consumption of 2000 Cal / day.
$ \frac{2000\text{ Cal}/\text{day}}{3\text{ rations}/\text{day}} = 667\text{ Cal}/\text{ration} $
Because the hero burns through food rations faster than normal humans, each ration can sustain the hero for a much shorter time.
$ \frac{667\text{ Cal}/\text{ration}}{400\text{ Cal}/\text{h}} = 1.\overline{6}\ \text{h}/\text{ration} $
In NetHack, one ration supplies 800 nutritional points, which lasts 800 turns in standard hacking conditions. Therefore,
$ \frac{800\text{ turns}/\text{ration}}{1.\overline{6}\text{ h}/\text{ration}} = 480\text{ turns}/\text{h} $
$ \frac{3600\text{ s}/\text{h}}{480\text{ turns}/\text{h}} = 7.5\text{ s}/\text{turn} $
Extrapolations
With this core value of 7.5 seconds/turn in mind, one can calculate the order of magnitude of other values in the NetHack world.
An average ascension is 50000 turns; this means that:
$ (50000\text{ turns}/\text{ascension})(7.5\text{ s}/\text{turn})\left (\frac{1\text{ day}}{86400\text{ s}}\right ) \approx 4.34\text{ days}/\text{ascension} $
4 days and 8 hours of NetHack time seems reasonable for an ascension.
A human walks at approximately 5 km/hour. This means that one NetHack tile is:
$ (5\text{ km}/\text{h}) \left( \frac{5\text{ h}\cdot \text{m}}{18\text{ km}\cdot \text{s}} \right) (7.5\text{ s}/\text{tile}) \approx 10.4\text{ m}/\text{tile} $
This seems reasonable, as a dragon has to fit into one.
A dungeon level's dimension limits are roughly 78×20 varying a little between levels (80×25 minus borders and status lines). This means a dungeon is approximately 810 m by 210 m (roughly, 1/2 mi by 1/8 mi).
Assuming the stairs have an inclination of 45 degrees, the ceiling can be guessed to be about 7 meters high. This result can be found using the Pythagorean theorem $a^2 + b^2 = c^2$, setting $c = 10.4\text{ m}$ and $a = b$, and solving for $a$:
$ \lfloor \sqrt{\frac{(10.4\text{ m})^2}{2}} \rfloor = 7\text{ m} $
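The arithmetic above is compact enough to reproduce in a few lines (a sketch; all constants are the assumptions stated in this article):

```python
import math

CAL_PER_H = 400            # hero's assumed burn rate, Cal/h
CAL_PER_RATION = 2000 / 3  # ~667 Cal per ration for a normal human
TURNS_PER_RATION = 800     # NetHack nutrition points per food ration

hours_per_ration = CAL_PER_RATION / CAL_PER_H         # 1.666... h/ration
turns_per_hour = TURNS_PER_RATION / hours_per_ration  # 480 turns/h
sec_per_turn = 3600 / turns_per_hour                  # 7.5 s/turn

days_per_ascension = 50_000 * sec_per_turn / 86_400   # ~4.34 days
m_per_tile = 5_000 / 3_600 * sec_per_turn             # ~10.4 m (5 km/h walk)
ceiling = math.floor(math.sqrt(m_per_tile**2 / 2))    # ~7 m (45-degree stairs)

print(sec_per_turn, days_per_ascension, m_per_tile, ceiling)
```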
Results
The Hero:
... consumes 400 Cal/hour continuously (no sleep).
... takes 7.5 seconds to experience one NetHack turn.
... ascends in 4 days and 8 hours of Correlated NetHack Time (CNT) on average. (rast's record fast ascension took 13.5 hours CNT.)
... eats a full meal every 1 hour 40 minutes CNT.
... forgets spells about every 42 hours CNT. (no sleep, remember?)
Also:
A NetHack tile is approximately 10.4 m × 10.4 m × 7 m (34.1 ft. × 34.1 ft. × 23.0 ft.), or 760 m³ (26,740 ft.³).
Dungeon dimensions are approximately 810 m × 210 m (1/2 mi. × 1/8 mi.).
The Plane of Water contains about 1.18 billion liters (312 million gallons) of water, weighing 1.18 million metric tons.
You can throw a zorkmid up to 52 m away.
Moloch's Sanctum can be as deep as 360 m below ground. |
Smaller DFTs from bigger DFTs
Introduction
Let's consider the following hypothetical situation: You have a sequence $x$ with $N/2$ points and a black box which can compute the DFT (Discrete Fourier Transform) of an $N$ point sequence. How will you use the black box to compute the $N/2$ point DFT of $x$? While the problem may appear to be a bit contrived, the answer(s) shed light on some basic yet insightful and useful properties of the DFT.
On a related note, the reverse problem of computing an $N$ point DFT using a black box that can only compute an $N/2$ point DFT is elegantly addressed in this post (and in several textbooks, of course) and forms the backbone of the FFT algorithm.
We will consider three different solutions to this problem, each of which will highlight a distinct property of the DFT. The first step in each of these solutions is to create an $N$ point sequence which serves as input to our black box.
1. Zero-padding $x$ with $N/2$ zeros
2. Interlacing $x$ with $N/2$ zeros
3. Appending a replica of $x$ to itself
The final step in each solution will be to select an appropriate $N/2$ point subsequence from the $N$ point output sequence produced by the black box. Each approach is discussed in a separate section below.
Before we proceed, let's explicitly write down the $N/2$ point DFT of $x$ we desire to compute.
$$X(k) = \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N/2}}, \; k=0,1, ..., {N \over 2}-1$$
Zero padding
Let's consider an approach where $N/2$ zeros are padded to the end of $x$ to construct the $N$ point sequence $\tilde{x} = [\underbrace{x}_{N/2} \; \underbrace{0 \; 0 \; ... \; 0}_{N/2}] = [x \; 0_{N/2}]$. The $N$ point DFT of $\tilde{x}$ is computed as
$$\tilde{X}(k) = \sum_{n=0}^{N-1} \tilde{x}(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} \underbrace{\tilde{x}(n)}_{=x(n)} e^{{-j2 \pi kn \over N}} + \sum_{n=N/2}^{N-1} \underbrace{\tilde{x}(n)}_{=0} e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
This has started to resemble the result we want, but one more step is required. Let's split the $N$ point sequence $\tilde{X}$ into even ($k=2m$) and odd terms ($k=2m+1$) and inspect the even terms
$$\tilde{X}(2m) = \sum_{n=0}^{N/2-1} x(n) e^{{-j4 \pi mn \over N}}, \; m=0,1, ..., {N \over 2}-1$$
$$ \implies \tilde{X}(2m) = \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi mn \over N/2}} = X(m), \; m=0,1, ..., {N \over 2}-1$$
Thus, we can extract the $N/2$ point DFT of $x$ from the DFT of its zero-padded version simply by sampling the even points.
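A quick numerical check of this result (a sketch; a random test vector stands in for $x$, and NumPy's FFT plays the role of the black box):

```python
import numpy as np

N = 16
x = np.random.randn(N // 2) + 1j * np.random.randn(N // 2)

X = np.fft.fft(x)                              # the desired N/2-point DFT
x_pad = np.concatenate([x, np.zeros(N // 2)])  # [x, 0_{N/2}]
X_pad = np.fft.fft(x_pad)                      # the N-point "black box" DFT

print(np.allclose(X_pad[::2], X))              # True: even bins match X
```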
Notes
The zero padding operation increases the size of the DFT and results in a higher resolution of frequency bins. While only the even points in the DFT of the zero-padded sequence are relevant to us, the odd points provide an interpolation between the even points and hence result in an overall smoother spectral output. This "trick" can be used to get a smoother visualization of the spectrum or to get more refined estimates of the amplitudes of spectral bins of interest. Note, however, that zero-padding does not create any new information, i.e. frequency components which are otherwise unresolvable in the original input will not magically get resolved by padding the input with zeros. The only way to improve frequency resolution is to consider a longer segment of the input for analysis. This is a common point of confusion and has been addressed in many a blog (e.g. here, here, and here).
Extensions
What if we had padded the zeros at the start of the signal $x$, or had placed $x$ in the middle with $N/4$ zeros on either side? Those well-versed in the properties of the DFT will immediately recognize that these two configurations are merely circular shifts (mod $N$) of $\tilde{x} = [x \; 0_{N/2}]$, which can be related to a phase component in the spectral domain. Nevertheless, it is instructive to walk through the analysis for one of them.
Let's say we defined $\tilde{x} = [0_{N/2} \; x]$. The $N$ point DFT of $\tilde{x}$ is computed as
$$\tilde{X}(k) = \sum_{n=0}^{N-1} \tilde{x}(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} \underbrace{\tilde{x}(n)}_{=0} e^{{-j2 \pi kn \over N}} + \sum_{n=N/2}^{N-1} \underbrace{\tilde{x}(n)}_{=x(n-N/2)} e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=N/2}^{N-1} x(n-N/2) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi k(n-N/2) \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}} \cdot \underbrace{e^{j \pi k}}_{(-1)^k}, \; k=0,1, ..., N-1$$
$$= (-1)^k \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
Similar to the zero-padding case we originally evaluated, the even terms of $\tilde{X}(k)$ give us precisely the result we need. The odd terms have the same magnitude as the case where the zeros were padded at the end (i.e. we still get the same smoothing effect in the magnitude spectrum), though their phase is shifted by $\pi$. A nearly identical analysis shows that when $N/4$ zeros each are padded on both sides of $x$, the desired $N/2$ point DFT can again be recovered by sampling the even points of the $N$ point DFT of the padded sequence.
Interlacing with zeros
This time let's insert a zero after each element of $x$ to construct $\tilde{x} = [x(0) \; 0 \; x(1) \; 0 \; ... \; x(N/2-1) \; 0]$. The $N$ point DFT of $\tilde{x}$ is computed as
$$\tilde{X}(k) = \sum_{n=0}^{N-1} \tilde{x}(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$=\sum_{n \text{ even}} \underbrace{\tilde{x}(n)}_{x(n)} e^{{-j2 \pi kn \over N}} + \sum_{n \text{ odd}} \underbrace{\tilde{x}(n)}_{0} e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{m=0}^{N/2-1} \underbrace{\tilde{x}(2m)}_{x(m)} e^{{-j4 \pi km \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{m=0}^{N/2-1} x(m) e^{{-j2 \pi km \over N/2}}, \; k=0,1, ..., N-1$$
We are very close. We already have $\tilde{X}(k) = X(k)$ for $0 \leq k \leq N/2-1$, but what about the remaining $N/2$ points? Let's evaluate $\tilde{X}(k)$ for $k \geq N/2$, or equivalently, $\tilde{X}(N/2+p)$ for $0 \leq p \leq N/2-1$.
$$\tilde{X}(N/2+p) = \sum_{m=0}^{N/2-1} x(m) e^{{-j2 \pi (N/2+p)m \over N/2}}, \; p=0,1, ..., N/2-1$$
$$= \sum_{m=0}^{N/2-1} x(m) e^{{-j2 \pi pm \over N/2}} \cdot \underbrace{e^{-j 2 \pi m}}_{=1}, \; p=0,1, ..., N/2-1$$
$$ = \tilde{X}(p) = X(p), \; p=0,1, ..., N/2-1$$
This implies that the first $N/2$ points of the DFT of the zero-interlaced sequence exactly match our desired $N/2$ point DFT and the remaining $N/2$ points are merely a replica of the first $N/2$ points (and can therefore be discarded).
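Again, a quick numerical check (a sketch, under the same conventions as before):

```python
import numpy as np

N = 16
x = np.random.randn(N // 2) + 1j * np.random.randn(N // 2)

X = np.fft.fft(x)
x_int = np.zeros(N, dtype=complex)
x_int[::2] = x                          # [x(0) 0 x(1) 0 ... x(N/2-1) 0]
X_int = np.fft.fft(x_int)

print(np.allclose(X_int[:N // 2], X))   # first half is the desired DFT
print(np.allclose(X_int[N // 2:], X))   # second half is a replica
```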
Notes
Inserting a zero after each sample results in a duplication of the spectrum. This result can be generalized, i.e. inserting $P$ zeros after each sample results in a $P$-fold replication of the spectrum. Further, by duality, interlacing the spectrum with zeros results in replication of the input sequence (at the output of the inverse DFT). This process of inserting zeros is referred to as upsampling in digital signal processing. I will not attempt to explain all the nuances of upsampling, but will refer the reader to the following post.
Repeating the sequence
Our final method is to construct an $N$ point sequence as follows: $\tilde{x} = [x \; x]$. The $N$ point DFT of $\tilde{x}$ is computed as
$$\tilde{X}(k) = \sum_{n=0}^{N-1} \tilde{x}(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} \underbrace{\tilde{x}(n)}_{=x(n)} e^{{-j2 \pi kn \over N}} + \sum_{n=N/2}^{N-1} \underbrace{\tilde{x}(n)}_{=x(n-N/2)} e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}} + \sum_{n=N/2}^{N-1} x(n-N/2) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}} + \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi k(n-N/2) \over N}}, \; k=0,1, ..., N-1$$
$$= \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}} + \sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}} \cdot \underbrace{e^{j k \pi}}_{(-1)^k}, \; k=0,1, ..., N-1$$
$$ = [1 + (-1)^k]\sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi kn \over N}}, \; k=0,1, ..., N-1$$
Like we did in the zero padding case, let's split the $N$ point sequence $\tilde{X}$ into even ($k=2m$) and odd terms ($k=2m+1$). The odd terms go to zero and the even terms can be written as
$$\tilde{X}(2m) = 2\sum_{n=0}^{N/2-1} x(n) e^{{-j2 \pi mn \over N/2}} = 2X(m), \; m=0,1, ..., {N \over 2}-1$$
Thus, similar to the zero padding case, we can get the desired result by sampling the even terms from the $N$ point DFT of $\tilde{x} = [x \; x]$.
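And the corresponding numerical check (a sketch):

```python
import numpy as np

N = 16
x = np.random.randn(N // 2) + 1j * np.random.randn(N // 2)

X = np.fft.fft(x)
X_rep = np.fft.fft(np.concatenate([x, x]))   # N-point DFT of [x, x]

print(np.allclose(X_rep[::2], 2 * X))        # even bins carry 2 X(m)
print(np.allclose(X_rep[1::2], 0))           # odd bins vanish
```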
Notes
Not surprisingly, we have derived the dual of the upsampling result. When we interlaced the input sequence with zeros, it ended up creating a replica in spectral domain. When we created a replica of the input sequence, it resulted in an upsampling effect in spectral domain. In both the zero padding and replication approaches, the desired result was contained in the even terms of the $N$ point DFT output. However, in the former case we got a smoothed version of the spectrum, whereas in the latter case we got an upsampled version of the spectrum.
Summary
In this post, we addressed the hypothetical problem of computing the $N/2$ point DFT of a sequence given a black box that can only compute an $N$ point FFT. We examined three different approaches, namely zero padding, upsampling, and replication of the signal, each of which allowed us to explore interesting and useful properties of the DFT, including the relations between the three approaches. The results derived here are well known (and can also be generalized), but I hope working through the detailed derivations in a simplified problem setup will be instructive for those getting their hands dirty with DFT/FFT based frequency analysis for the first time (or even those just revisiting familiar concepts).
Thanks Aditya for the well structured article.
An application area is LTE: using only a 2k FFT for 20, 10 or 5 MHz.
FFT: pad the 5 MHz or 10 MHz LTE symbols with zeros, decimate the output to jump bins.
IFFT: pad the centre with zeros, decimate the output.
Zero insertion doesn't work with a 2k FFT for 15 MHz, as the bins get shifted.
I think one way to do a 2k FFT on the 15 MHz case is upsampling to the 20 MHz size and then decimating by 2/3,
and for the IFFT, downsampling the 2k output to the 15 MHz case.
But one thing I do is proof of concept numerically in Matlab rather than through equations.
Kaz
Thanks Kaz. Upsampling by an integer factor, followed by downsampling by an integer factor is indeed the way to realize resampling by a rational factor. One can theoretically relate the various operations via z-transforms, though it is always good to do some numerical validation. Also, in practice, one would do a polyphase implementation for efficiency rather than independent upsampling and downsampling steps.
Practically (in FPGAs at least) zero padding is the only viable option. Any other method implies storage. Interleaving requires the least storage, but saving the entire signal just to repeat it is not recommended.
Agreed that repeating the sequence is hardly a practical solution!
I also just realized that repeating the sequence is essentially adding two sequences: one with zero-padding at the end and one with zero-padding in the front. So the resultant DFT is simply a sum of the DFTs of those two zero-padded sequences. Could have gotten to the answer easily by using the results from the "zero padding" section rather than deriving it from first principles. Oh well :)
Hi Aditya. Your blog is interesting! When I read up to your following lines I decided to stop reading:
1. Zero-padding x with N/2
2. Interlacing x with N/2
3. Appending a replica of x to itself
I want to see if I can figure out how to use your black box under those three situations before I continue reading your solutions.
Thank you for the interest and encouragement Rick! I have benefited immensely from your writings over the years and really appreciate your feedback.
Just for fun, another interesting situation to explore is x' = [x(0) x(0) x(1) x(1) ... ], i.e. each sample is repeated twice to create an N-point sequence
Hi Aditya. Thank you for your kind words.
I was able to figure out how to use your black box under the 1, 2, & 3 scenarios. That was a fun mental exercise! But now you've "thrown me a curve ball" with your last x' = [x(0) x(0) x(1) x(1) ... ] scenario. That last scenario deserves contemplation. Ha ha.
Constructing x' = [x(0) x(0) x(1) x(1) ... ] is certainly not the best way to solve the problem at hand, but it does create interesting spectra. The left and right halves end up being mirror images of each other (at least in magnitude) and the middle term is 0. The first N/2 points have the N/2 point DFT one is looking for, but each term has a complex phase which must be undone. The phase terms have a cosine shaped amplitude.
DFTs can indeed create remarkable patterns!
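For the curious, this structure is easy to verify numerically (a sketch, my own addition rather than part of the exchange above): with $x'(2m) = x'(2m+1) = x(m)$, a short calculation gives $X'(k) = (1 + e^{-j2\pi k/N})\, X(k \bmod N/2)$, whose magnitude envelope $2|\cos(\pi k/N)|$ is the cosine shape mentioned, and which vanishes at the middle bin $k = N/2$.

```python
import numpy as np

N = 16
x = np.random.randn(N // 2) + 1j * np.random.randn(N // 2)

X = np.fft.fft(x)                        # the N/2-point DFT
X_rep = np.fft.fft(np.repeat(x, 2))      # DFT of [x(0) x(0) x(1) x(1) ...]

k = np.arange(N)
predicted = (1 + np.exp(-2j * np.pi * k / N)) * X[k % (N // 2)]
print(np.allclose(X_rep, predicted))     # True
print(np.isclose(X_rep[N // 2], 0.0))    # the middle term vanishes
```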
Which one of the following is NOT a valid identity?
Let $\oplus$ and $\odot$ denote the Exclusive OR and Exclusive NOR operations, respectively.
Consider the minterm list form of a Boolean function F given below.
$F\left(P,Q,R,S\right)=\sum m\left(0,2,5,7,9,11\right)+d\left(3,8,10,12,14\right)$
If w, x, y, z are Boolean variables, then which one of the following is INCORRECT ?
Consider the Boolean operator # with the following properties:
$x\#0\;=\;x,\;x\#1\;=\;\overline x,\;x\#x\;=\;0\;\text{and}\;x\#\overline x\;=1.$ Then $x \# y$ is equivalent to
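As a quick sanity check (a sketch, not part of the exam listing), one can enumerate all 16 two-input Boolean operators and keep those satisfying the four stated properties; only the XOR truth table survives:

```python
from itertools import product

# op[(a, b)] gives a # b; iterate over all 16 possible truth tables
for table in product([0, 1], repeat=4):
    op = {(0, 0): table[0], (0, 1): table[1],
          (1, 0): table[2], (1, 1): table[3]}
    if all(op[(x, 0)] == x and op[(x, 1)] == 1 - x and
           op[(x, x)] == 0 and op[(x, 1 - x)] == 1 for x in (0, 1)):
        print(op)   # only {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 0} -- XOR
```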
Let X be the number of distinct 16-bit integers in 2's complement representation. Let Y be the number of distinct 16-bit integers in sign magnitude representation. Then X-Y is ________.
The binary operator ≠ is defined by the following truth table.
Which one of the following is true about the binary operator ≠?
Which one of the following expressions does NOT represent exclusive NOR of x and y?
Which one of the following circuits is NOT equivalent to a 2-input XNOR (exclusive NOR) gate?
The simplified SOP (Sum of Product) form of the Boolean expression
$(P+\bar{Q}+\bar{R})\cdot(P+\bar{Q}+R)\cdot(P+Q+\bar{R})$ is
What is the minimum number of gates required to implement the Boolean function (AB+C) if we have to use only 2-input NOR gates?
Given $f_1$, $f_3$ and $f$ in canonical sum of products form (in decimal) for the circuit
$f_1 = \sum m(4, 5, 6, 7, 8)$
$f_3 = \sum m(1, 6, 15)$
$f = \sum m(1, 6, 8, 15)$
then $f_2$ is
If P, Q, R are Boolean variables, then
$(P+\bar{Q})\,(P\cdot\bar{Q}+P\cdot R)\,(\bar{P}\cdot\bar{R}+\bar{Q})$
simplifies to
What is the maximum number of different Boolean functions involving n Boolean variables?
Define the connective * for the Boolean variables X and Y as: X * Y = XY + X'Y'. Let Z = X * Y. Consider the following expressions P, Q and R.
P: X = Y * Z, Q: Y = X * Z, R: X * Y * Z = 1
Which of the following is TRUE?
In a look-ahead carry generator, the carry generate function Gi and the carry propagate function Pi for inputs Ai and Bi are given by:
$P_i = A_i \oplus B_i$ and $G_i = A_iB_i$
The expressions for the sum bit Si and the carry bit Ci+1 of the look-ahead carry adder are given by:
$S_i = P_i \oplus C_i$ and $C_{i+1} = G_i + P_iC_i$, where $C_0$ is the input carry
Consider a two-level logic implementation of the look-ahead carry generator. Assume that all Pi and Gi are available for the carry generator circuit and that the AND and OR gates can have any number of inputs. The number of AND gates and OR gates needed to implement the look-ahead carry generator for a 4-bit adder with S3, S2, S1, S0, and C4 as its outputs are respectively:
According to the Wikipedia entry on elementary functions, the trigonometric functions and their inverses are elementary functions.
It doesn't seem to me that the floor and ceiling functions should be elementary, since they don't seem too natural. However, here is an expression of the floor function using only elementary functions
$$ \text{floor}(x) = (x - 0.5)-\frac{\arctan(\tan(\pi(x-0.5)))}{\pi} $$
As pointed out below, this function is undefined for the integers.
Here is a plot of this function using Desmos. I am aware that this is not proof of this identity, and that this identity depends on the choice of range for $\arctan$. However, given that $\arctan$ was listed as an elementary function on the Wikipedia entry, I assumed that this was independent of the choice of range.
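A numerical spot check of the identity away from the integers (a minimal sketch; the tolerance and sampling range are arbitrary choices):

```python
import math
import random

def floor_elem(x: float) -> float:
    # the proposed elementary expression; undefined when x is an integer
    return (x - 0.5) - math.atan(math.tan(math.pi * (x - 0.5))) / math.pi

for _ in range(10_000):
    x = random.uniform(-100, 100)
    if abs(x - round(x)) > 1e-6:          # stay away from the integers
        assert abs(floor_elem(x) - math.floor(x)) < 1e-6
print("identity holds at all sampled non-integer points")
```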
Is there anything I am missing here? Or is floor, and therefore ceiling and modulo all elementary functions? |
1) Show that the solutions of $y''+e^ty=0$ admit infinitely many zeros.
Suppose there exists a solution $y$ with finitely many zeros. Then $y$ is positive or negative on $[A,+ \infty)$ for $A$ large enough; if $y<0$, consider $-y$, so that you can suppose $y>0$.
Because $y''(t)=-e^ty(t)<0$ for $t \geq A$, $y'$ is decreasing on $[A,+ \infty)$. Moreover, if $y'$ is not bounded below, then there exist $C<0$ and $t_0>A$ such that $y'(t)<C$ for $t \geq t_0$, hence (by integration) $y(t) < y(t_0)+C(t-t_0) \underset{t\to + \infty}{\longrightarrow} - \infty$: a contradiction with $y>0$. Therefore, $y'$ is bounded below and the limit $\lim\limits_{t \to + \infty} y'(t)=\ell$ exists.
For $\epsilon>0$, there exists $t_1>0$ such that $t \geq t_1$ implies $\ell-\epsilon<y'(t)<\ell+\epsilon$; by integrating, $y(t_1)+ (\ell-\epsilon)(t-t_1)<y(t)<y(t_1)+(\ell+\epsilon)(t-t_1)$. You deduce that $\lim\limits_{t \to + \infty} \frac{y(t)}{t}=\ell$ and $\ell \geq 0$.
For $n \geq 1$, let $c_n \in (n,n+1)$ such that $y'(n+1)-y'(n)=y''(c_n)$. So $c_n \underset{n \to + \infty}{\longrightarrow} + \infty$, and using the above limit, $y''(c_n) \underset{n \to + \infty}{\longrightarrow} 0$.
Because $y'$ is decreasing and $\lim\limits_{t \to + \infty} y'(t)=\ell \geq 0$, you deduce that $y' \geq 0$, so $y$ is nondecreasing. Consequently, $y(t) \geq y(A)>0$ so $y''(t) \leq -e^ty(A) \underset{t \to + \infty}{\longrightarrow} - \infty$: contradiction with $y''(c_n) \underset{n \to + \infty}{\longrightarrow} 0$.
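A numerical sanity check of this oscillation (a sketch using SciPy; the initial condition and time span are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + e^t y = 0 as a first-order system: (y, y')' = (y', -e^t y)
sol = solve_ivp(lambda t, z: [z[1], -np.exp(t) * z[0]],
                (0.0, 8.0), [1.0, 0.0], max_step=1e-3)

y = sol.y[0]
zeros = np.count_nonzero(np.diff(np.sign(y)))  # sign changes ~ zeros of y
print(zeros)   # the solution keeps crossing zero, faster and faster
```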
2) Show that the (non zero) solutions of $y''-e^ty=0$ admit at most one zero in $\mathbb{R}_+$.
Let $y$ be a solution of $y''-e^ty=0$ with at least two zeros.
First, suppose there exists an interval $[a,b]$ such that $y(a)=y(b)=0$ and $y(x) \neq 0$ for $x \in (a,b)$. Without loss of generality, suppose $y>0$ on $(a,b)$ (otherwise, consider $-y$). Then $y''=e^ty>0$ on $(a,b)$ and $y'$ is increasing on $(a,b)$. According to Rolle's theorem, there exists $c \in (a,b)$ such that $y'(c)=0$, so $y'(a)<0$.
Because $y'$ is continuous, $y' \leq 0$ on $[a,a+ \epsilon]$ for some $\epsilon >0$ hence $\displaystyle y(t)= \int_a^t y'(s)ds \leq 0$ for $t \in [a,a+\epsilon]$: contradiction with $y>0$ on $(a,b)$.
So there is no such interval $[a,b]$. You deduce that there exists a decreasing sequence $(x_n)$ of zeros. For $n \geq 1$, according to Rolle's theorem, there exists $u_n \in (x_{n+1},x_n)$ such that $y'(u_n)=0$.
We have $0<u_{n+1}<x_{n+1}<u_n<x_n$ for any $n \geq 1$, so $(u_n)$ and $(x_n)$ converge to the same limit $\ell$. We deduce by continuity that $y(\ell)= \lim\limits_{n \to + \infty} y(x_n)=0$ and $y'(\ell)= \lim\limits_{n \to + \infty} y'(u_n)=0$.
Using Cauchy-Lipschitz theorem, you find that the only possibility is $y=0$. |
$$\det(A^T) = \det(A)$$
Using the geometric definition of the determinant as the area spanned by the columns, could someone give a geometric interpretation of the property? Thanks!
This is more-or-less a reformulation of Matt's answer. He relies on the existence of the SVD decomposition; I show that $\det(A)=\det(A^T)$ can be stated in a slightly different way.
Every square matrix can be represented as the product of an orthogonal matrix (representing an isometry) and an upper triangular matrix (the QR decomposition), where the determinant of an upper (or lower) triangular matrix is just the product of the elements along the diagonal (which stays in place under transposition), so, by the Binet formula, $A=QR$ gives: $$\det(A^T)=\det(R^T Q^T)=\det(R)\det(Q^T)=\det(R)\det(Q^{-1}),$$ $$\det(A^T)=\frac{\det{R}}{\det{Q}}=\det(Q)\det(R)=\det(QR)=\det(A),$$ where we used that the transpose of an orthogonal matrix is its inverse, and that the determinant of an orthogonal matrix belongs to $\{-1,1\}$ - since an orthogonal matrix represents an isometry.
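Numerically, the QR argument is easy to replay (a small sketch using NumPy):

```python
import numpy as np

A = np.random.randn(5, 5)
Q, R = np.linalg.qr(A)

det_via_qr = np.linalg.det(Q) * np.linalg.det(R)          # det(Q) is +-1
print(np.isclose(det_via_qr, np.linalg.det(A)))           # True
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # the identity itself
```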
You can also consider that $(*)$ the determinant of a matrix is preserved under Gauss-row-moves (replacing a row with the sum of that row with a linear combination of the others) and Gauss-column-moves, too, since the volume spanned by $(v_1,\ldots,v_n)$ is the same as the volume spanned by $(v_1+\alpha_2 v_2+\ldots,v_2,\ldots,v_n)$. By Gauss-row-moves you can put $A$ in upper triangular form $R$, then have $\det A=\prod R_{ii}.$ If you apply the same moves as column moves on $A^T$, you end with $R^T$, which is lower triangular and has the same determinant as $R$, obviously. So, in order to provide a "really geometric" proof that $\det(A)=\det(A^T)$, we only need to provide a "really geometric" interpretation of $(*)$. An intuition is that the volume of the parallelepiped originally spanned by the columns of $A$ is the same if we change, for instance, the base of our vector space by sending $(e_1,\ldots,e_n)$ into $(e_1,\ldots,e_{i-1},e_i+\alpha\, e_j,e_{i+1},\ldots,e_n)\,$ with $i\neq j$, since the geometric object is the same, and we are only changing its "description".
A geometric interpretation in four intuitive steps....
The Determinant is the Volume Change Factor
Think of the matrix as a geometric transformation, mapping points (column vectors) to points: $x \mapsto Mx$. The determinant $\mbox{det}(M)$ gives the factor by which volumes change under this mapping.
For example, in the question you define the determinant as the volume of the parallelepiped whose edges are given by the matrix columns. This is exactly what the unit cube maps to, so again, the determinant is the factor by which the volume changes.
A Matrix Maps a Sphere to an Ellipsoid
Being a linear transformation, a matrix maps a sphere to an ellipsoid. The singular value decomposition makes this especially clear.
If you consider the principal axes of the ellipsoid (and their preimage in the sphere), the singular value decomposition expresses the matrix as a product of (1) a rotation that aligns the principal axes with the coordinate axes, (2) scalings in the coordinate axis directions to obtain the ellipsoidal shape, and (3) another rotation into the final position.
The Transpose Inverts the Rotation but Keeps the Scaling
The transpose of the matrix is very closely related, since the transpose of a product is the reversed product of the transposes, and the transpose of a rotation is its inverse. In this case, we see that the transpose is given by the inverse of rotation (3), the same scaling (2), and finally the inverse of rotation (1).
(This is almost the same as the inverse of the matrix, except the inverse naturally uses the inverse of the original scaling (2).)
The Transpose has the Same Determinant
Anyway, the rotations don't change the volume -- only the scaling step (2) changes the volume. Since this step is exactly the same for $M$ and $M^\top$, the determinants are the same.
Since $\text{sign}(\sigma^{-1})=\text{sign}(\sigma)$ and $\phi:S_n\to S_n,\ \sigma\mapsto\sigma^{-1}$ is a bijection, we have $\det(A)=\sum_{\sigma\in S_n}\text{sign}(\sigma)\prod_{i=1}^na_{i\sigma(i)}=\sum_{\sigma\in S_n}\text{sign}(\sigma^{-1})\prod_{i=1}^na_{\sigma^{-1}(i)i}=\sum_{\sigma\in S_n}\text{sign}(\sigma)\prod_{i=1}^na_{\sigma(i)i}=\det(A^t)$
Note: I'm not using the geometric definition, but I could only post this here since the question without the geometric requirement was flagged incorrectly as a duplicate of this problem.
I came up with the same question when I was watching a linear algebra tutorial by Professor Wildberger. And I think there's another interesting explanation in it. My personal math background is not very good, so I suggest watching the first ten lectures of the series. Here's the link.
The series first talked about the wedge product: if you have two vectors $v_1$ and $v_2$, the wedge product $v_1 \wedge v_2$ represents a signed area or volume.
Another idea is to consider a matrix transformation to be a change of coordinates. Take a 2×2 matrix as an example:
$$ \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} * \begin{pmatrix} x_1 \\ x_2 \\ \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \end{pmatrix} $$
Let $e_1$ and $e_2$ be the base of the plane. If we substitute $(1, 0)$ and $(0, 1)$ into $(x_1, x_2)$ separately, we will have $(a, c)$ and $(b, d)$ as the new base after the change of coordinates. If we assume the area of each grid in the old coordinates to be 1, namely $e_1 \wedge e_2 = 1$, we will have $(a, c) \wedge (b, d)$ be the area of each grid in the new coordinates. The result of $(a, c) \wedge (b, d)$ happens to be $ad - bc$, which is the determinant. From a geometric point of view, it represents the ratio of area change.
Now let's take $x_1$ and $x_2$ themselves as bases, namely, let $x_1 = e_1$ and $x_2 = e_2$. We will then have $y_1 = a e_1 + b e_2$ and $y_2 = c e_1 + d e_2$. The wedge product of $y_1$ and $y_2$ also represents the same area change ratio, and the value is $(a, b) \wedge (c, d) = ad - bc$ as well.
In the above calculation, we're using matrix columns in the first case, and matrix rows in the second case, and they are transpose of each other, thus the determinant of a matrix is the same as that of its transpose.
This is probably not a rigorous way to think, but it's quite interesting.
Here's an algebraic proof, followed by a geometric interpretation of it, which could be summarized as follows.
Volumes can be computed with scalar products by cut-and-pasting, and because of the definition of the transpose, the volume spanned by some $Av_i$ and the volume spanned by the corresponding $A^tv_i$ can be decomposed using the same pieces.
Let $V$ be a vector space. For all vectors $x_1,\ldots, x_n$ of $V$ and $y_1,\ldots y_n$ linear functionals on $V$ define \begin{equation*} V(x,y)= \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\prod_{i} y_i(x_{\sigma(i)}). \end{equation*}
It is easy to check that for fixed $y$'s this is a multilinear alternating form on the $x$'s. Conversely for fixed $x$'s it is an alternating multilinear form on the $y$'s. It follows that
\begin{align*} \det(A^t)V(x,y) &=V(x,A^ty)\\ &=\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\prod_{i} A^ty_i(x_{\sigma(i)})\\ &=\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\prod_{i} y_i(Ax_{\sigma(i)})\\ &=V(Ax,y) \\ &=\det(A)V(x,y) \end{align*} whence $\det(A)=\det(A^t)$ for properly chosen $x$ and $y$'s.
For the geometric interpretation we now replace the linear functionals $y$ by vectors and evaluation $y(x)$ by the scalar product $(y,x)$. Furthermore we choose $x$ and $y$ to be equal to the canonical basis.
The above formula for $V(x,y)$ gives a decomposition of the volume spanned by the $Ax$ into a sum of volumes of 'hyperrectangles' whose sides are given by scalar products $(Ax_i,x_j)$. The exact same factors $(x_i,A^tx_j)$ and the same 'hyperrectangles' intervene when decomposing the volume spanned by the $A^t x$.
Hence the $Ax$ and the $A^tx$ span the same volume so that $A$ and $A^t$ have same determinant.
One can argue that it is difficult to see in a concrete example why the same factors appear. But then one should also say that it is difficult to see in a concrete example what the transpose of a transformation looks like, because of the indirect nature of its definition.
For any square matrix $A$, we have $PA=LDU$, and therefore $A=P^{T}LDU$ (LU decomposition), where $P$ is a permutation matrix, $D$ is a diagonal matrix, $L$ is a lower triangular matrix whose diagonal entries are 1 or 0, and $U$ is an upper triangular matrix.
$|A^{T}|=|U^{T} D^{T} L^{T} P|=|U^{T}| |D^{T}| |L^{T}| |P|=|P||U^{T}||D^{T}||L^{T}|$
Because $P^{T}P=I$, we get $|P^{T}||P|=1$; since a permutation matrix has determinant $\pm 1$, this gives $|P|=|P^{T}|$. Moreover, the determinant of a triangular (or diagonal) matrix is the product of its diagonal entries, so transposing $U$, $D$, $L$ does not change their determinants. Therefore, we have $|P||U^{T}||D^{T}||L^{T}|=|P||U||D||L|=|P^{T}||U||D||L|=|A|$.
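A hedged numeric sketch of this argument in Python (note that SciPy's scipy.linalg.lu uses the convention $A = PLU$ rather than the $PA = LDU$ form above, folding $D$ into $U$):

import numpy as np
from scipy.linalg import lu

A = np.random.default_rng(3).standard_normal((4, 4))
P, L, U = lu(A)   # A = P @ L @ U
# Triangular determinants are products of diagonal entries, so they are
# unchanged by transposition, and det(P) = det(P.T) = +-1.
print(np.linalg.det(A), np.linalg.det(A.T),
      np.linalg.det(P) * np.prod(np.diag(L)) * np.prod(np.diag(U)))

All three printed values agree, which is the content of the proof. |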
I had a question regarding a question on Shannon entropy I came across. It has to do with representing entropy in the form of their probability distributions, but let me elaborate. Here's the specific problem I'm referring to:
Consider three independent random variables $u$, $v$, and $w$ with entropies $H_u$, $H_v$, $H_w$. Let,
$$X \equiv (U,\ V)$$ $$Y \equiv (V,\ W)$$
What is $H(X, Y)$, $H(X | Y)$, and $I(X; Y)$?
Here's what I've come up with so far:
Since the random variables $u$, $v$, and $w$ are independent,
$$P(X) = P(U)P(V)$$ $$P(Y) = P(V)P(W)$$
And since $$P(X, Y) = P(X|Y)P(Y)$$ $$P(X|Y) = \frac{P(Y|X)P(U)}{P(W)}$$
But I'm not sure how to progress further from here... The solution my instructor provided for this particular problem in the textbook I'm using (Information Theory, Inference, and Learning Algorithms) is that:
$$P(X|Y) = \left\{ \begin{array}{c} P(U)\ (x_2 = y_1) \\ 0\ (else) \end{array} \right.$$
$$P(X, Y) = \left\{\begin{array}{c} P(U)P(V)P(W)\ (x_2 = y_1) \\ 0\ (else) \end{array}\right.$$
And with this result the solution is fairly easy to derive.
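As a sanity check of that solution, here is a small Python sketch of my own (the alphabet sizes are arbitrary choices) confirming that the given joint distribution yields $H(X,Y) = H_u + H_v + H_w$:

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
pu, pv, pw = rng.dirichlet(np.ones(2)), rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(2))

def H(p):
    p = np.asarray(p)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# P(X=(u,v), Y=(v',w)) is nonzero only when v == v' (the x_2 = y_1 condition),
# in which case it equals P(u) P(v) P(w).
joint = [a * b * c for (a, b, c) in product(pu, pv, pw)]
print(H(joint), H(pu) + H(pv) + H(pw))   # the two values agree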
What I'm wondering is: where did the $x_2 = y_1$ come from, and how were those probability distributions arrived at? The approach I was taking was causing me to go in circles without any real results.
Thank you. |
The steady state situation provides several ways to reduce the complexity. The time derivative term can be eliminated since the time derivative is zero. The acceleration term must be eliminated for the obvious reason. Hence the energy equation is reduced to
Steady State Equation
\[
\label{ene:eq:govSTSF} \dot{Q} - \dot{W}_{shear} - \dot{W}_{shaft} = \int_S \left( h + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho \,dA + \int_S P U_{bn} dA \tag{72} \]
If the flow is uniform or can be estimated as uniform, equation (72) is reduced to
Steady State Equation & uniform
\[
\label{ene:eq:govSTSFU} \begin{array}{c} \dot{Q} - \dot{W}_{shear} - \dot{W}_{shaft} = \left( h + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho A_{out} - \\ \left( h + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho A_{in} + \displaystyle P\, U_{bn} A_{out} - \displaystyle P U_{bn} A_{in} \end{array} \tag{73} \]
It can be noticed that the last term in equation (73) does not vanish for a non-deformable control volume. The reason is that while the velocity is constant, the pressure is different. For a stationary fixed control volume the energy equation, under this simplification, is transformed to
\[ \label{ene:eq:govSTSFUfix} \dot{Q} - \dot{W}_{shear} - \dot{W}_{shaft} = \left( h + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho A_{out} - \\ \left( h + \dfrac{U^2}{2} + g\,z \right) U_{rn}\, \rho A_{in} \tag{74} \] Dividing equation (74) by the mass flow rate provides
Steady State Equation, Fix \(\dot{m}\) & uniform
\[
\label{ene:eq:govSTSFUfixMass} \dot{q} - \dot{w}_{shear} - \dot{w}_{shaft} = \left.\left( h + \dfrac{U^2}{2} + g\,z \right)\right|_{out} - \left.\left( h + \dfrac{U^2}{2} + g\,z \right)\right|_{in} \tag{75} \]
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
is there any concept of state transition matrix for this discrete system?
\begin{equation}N(k)x(k+1)=x(k)+B(k)u(k)\dots(1)\end{equation}
$N(k)$ are nilpotent matrices for $k=0,1,2\dots$
I know state transition matrix for the system $x(k+1)=A(k)x(k)+B(k)u(k)$
is given by $\phi(k,k_0)=A(k-1)\times \dots \times A(k_0)$
Thanks for helping.
$ ax(k+1)=x(k)+bu(k)$ where $a$ is a nilpotent matrix. I am given that its solution is $x(k)=-\sum_{i=1}^{q-1} a^ibu(k+i)$ where $q$ is the degree of nilpotency of $a$.
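A minimal numeric sketch (a hypothetical 3x3 nilpotent example of my own; note that iterating $x(k) = a\,x(k+1) - b\,u(k)$ until $a^q = 0$ yields a sum starting at $i=0$, so the $i=1$ in the quoted formula may just be a different indexing convention):

import numpy as np

a = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # a^3 = 0, so q = 3
b = np.array([1.0, 2.0, 3.0])
q = 3
u = np.random.default_rng(0).standard_normal(12)

def x(k):
    # backward solution obtained by iterating x(k) = a x(k+1) - b u(k)
    return -sum(np.linalg.matrix_power(a, i) @ b * u[k + i] for i in range(q))

for k in range(5):
    assert np.allclose(a @ x(k + 1), x(k) + b * u[k])
print("recursion satisfied")

So the solution depends on future inputs $u(k),\dots,u(k+q-1)$, unlike the forward state transition form quoted above. |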
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
  IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck, because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?... |
No, there is no such requirement. It is pretty easy to find counterexamples where you have translation-invariant hamiltonians which have localized energy eigenstates with no such translation invariance. In particular, the statement you make,
translational invariance leads to a energy eigenstate that is delocalised in space,
is false in general, given the reasonable understanding of the above into the more precise statement
if $H$ is translation-invariant and $|\psi\rangle$ is an eigenfunction of $H$, then $|\psi\rangle$ also needs to be translation invariant
which does not hold.
To make a simple counterexample to the statement above, consider the hamiltonian for a free particle in two dimensions, $H=\frac12(p_x^2+p_y^2)$, which obviously has translation invariance and translationally invariant eigenfunctions of the form$$\langle x,y|p_x,p_y\rangle = \frac{1}{2\pi} e^{i(xp_x+yp_y)}.$$However, there is no requirement that the eigenfunctions be like that, and indeed you can form rotationally invariant wavefunctions that have a clear localization at the origin by taking phased superpositions of plane waves in the form$$|p,l\rangle = \frac{1}{2\pi} \int_0^{2\pi} e^{il\theta}|p\cos(\theta),p\sin(\theta)\rangle \mathrm d\theta.$$These are somewhat easier to understand in polar coordinates, where you have \begin{align}\langle r,\theta|p,l\rangle & = \frac{1}{2\pi} \int_0^{2\pi} \langle r,\theta|p\cos(\theta'),p\sin(\theta')\rangle e^{il\theta'}\mathrm d\theta'\\ & = \frac{1}{(2\pi)^2} \int_0^{2\pi} e^{ipr(\cos(\theta)\cos(\theta')+\sin(\theta)\sin(\theta'))} e^{il\theta'}\mathrm d\theta'\\ & = \frac{1}{(2\pi)^2} e^{il\theta}\int_0^{2\pi} e^{ipr\cos(\theta'-\theta)} e^{il(\theta'-\theta)}\mathrm d(\theta'-\theta)\\ & = \frac{i^{l}}{2\pi} e^{il\theta} J_{l}(pr),\end{align}which are obviously the separable cylindrical-harmonics solutions of the Schrödinger equation in two dimensions. This means that they are legitimate eigenfunctions of $H$, but they have absolutely nothing to do with translation symmetry. Instead, they are eigenfunctions of the rotational symmetry of $H$ - and, in fact, the plane-wave states you started with are excellent examples of how a rotationally-invariant hamiltonian can have eigenfunctions that do not respect that symmetry.
That said, if you're really looking for an analogue of the initial result you stated,
if the Hamiltonian is parity invariant, then non-degenerate energy eigenstates are either even or odd
then yes, it's possible - but it is absolutely crucial to have non-degenerate eigenvalues. (This is of course also true in the parity case, and if you have even and odd eigenstates at the same eigenvalue then it's trivial to construct mixed-parity eigenstates that do not have any definite symmetry.)
If you do manage to find a translationally invariant hamiltonian $H$ such that $[H,T_a]=0$ and some eigenvalue $p$ is non-degenerate (like e.g. $p=0$ for a free particle as the unique physically relevant case), then yes, the eigenstate $|\psi_p\rangle$ must be translationally invariant, since $T_a|\psi_p\rangle$ must be an eigenstate of the same eigenvalue, and by non-degeneracy it must be proportional to $|\psi_p\rangle$ , i.e. $T_a|\psi_p\rangle = e^{i f(a)}|\psi_p\rangle$, so $|\psi_p\rangle$ is translationally invariant.
However, you're highly unlikely to find any nontrivial, physically meaningful hamiltonians that are translationally invariant but not parity invariant, so you will always have at least a twofold energy degeneracy in all nonzero eigenvalues, making the argument above largely useless. |
Leap day, 29th February, in 2004 fell on a Sunday. On what day of the week will the leap day in 2012 fall? a. Monday b. Friday c. Wednesday d. Thursday
29th February, in 2004 ( Sunday ) \(\rightarrow\) 29th February, in 2012 ( ? )
Difference 8 Years in Days:
\(= 6\cdot 365\ \text{days} + 2 \cdot 366\ \text{days}\\ = 2922\ \text{days}\)
In weeks:
\(\frac{2922\ \text{days} } {7} = 417\ \text{weeks} \text{ and }\mathbf{ 3\ \text{days} }\)
Sunday +
3 days = Wednesday
29th February, in 2004 ( Sunday ) \(\rightarrow\) 29th February, in 2012 ( Wednesday )
c. Wednesday
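A one-line check with Python's standard datetime module (a sketch, not part of the original solution):

import datetime

for year in (2004, 2012):
    print(year, datetime.date(year, 2, 29).strftime("%A"))
# 2004 Sunday
# 2012 Wednesday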
That's the easiest question of today! February 29, 2012 was on a "Wednesday".
We can also answer this question by looking at the next day, March 1. It falls on Monday in 2004. This date advances forward one day for each 365-day year and two days forward in every leap year. Thus, three 365-day years pass between 2004 and 2008 and one leap day occurs in 2008, so March 1 advances 5 days and occurs on Saturday in 2008.
The same pattern occurs between 2008 and 2012: the date advances 5 more days and occurs on Thursday in 2012, and Feb 29th is the day before, i.e., Wednesday! |
Answer
$\frac{5}{6}$
Work Step by Step
$\frac{5}{8}\div\frac{3}{4}=\frac{5}{8}\times\frac{4}{3}=\frac{5\times4}{8\times3}=\frac{20}{24}=\frac{20\div4}{24\div4}=\frac{5}{6}$
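A quick check with Python's exact-arithmetic fractions module (a sketch of my own):

from fractions import Fraction

print(Fraction(5, 8) / Fraction(3, 4))   # 5/6

Exact rational arithmetic confirms the simplified result. |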
Definitions
Here is a collection of terms that I've come to use while developing MapGen.
Terrain Type
Procedural Terrain: water (seas, rivers, etc…) and mountains, so called because they are defined by a height map, which is a two-dimensional function
Grown Terrain: plain, farm, forest, swamp, waste, so called because they are generated with methods similar to single-cell automata
Province Size (1)
\begin{align} \varepsilon=\frac{sn}{xy} \end{align}
Small(2)
\begin{align} 0 < \varepsilon \leq \frac{1}{2} \end{align}
Medium(3)
\begin{align} \frac{1}{2} < \varepsilon \leq 2 \end{align}
Large(4)
\begin{align} 2 < \varepsilon \end{align}
|
There is a huge variety of feasible approaches. Which is best suited depends on
what you are trying to show, how much detail you want or need.
If the algorithm is a widely known one which you use as a subroutine, you often remain at a higher level. If the algorithm is the main object under investigation, you probably want to be more detailed. The same can be said for analyses: if you need a rough upper runtime bound you proceed differently from when you want precise counts of statements.
I will give you three examples for the well-known algorithm Mergesort which hopefully illustrate this.
High Level
The algorithm Mergesort takes a list, splits it in two (about) equally long parts, recurses on those partial lists and merges the (sorted) results so that the end result is sorted. On singleton or empty lists, it returns the input.
This algorithm is obviously a correct sorting algorithm. Splitting the list and merging it can each be implemented in time $\Theta(n)$, which gives us a recurrence for worst case runtime $T(n) = 2T\left(\frac{n}{2}\right) + \Theta(n)$. By the Master theorem, this evaluates to $T(n) \in \Theta(n\log n)$.
Medium Level
The algorithm Mergesort is given by the following pseudo-code:
procedure mergesort(l : List) {
if ( l.length < 2 ) {
return l
}
left = mergesort(l.take(l.length / 2))
right = mergesort(l.drop(l.length / 2))
result = []
while ( left.length > 0 || right.length > 0 ) {
if ( right.length == 0 || (left.length > 0 && left.head <= right.head) ) {
result = left.head :: result
left = left.tail
}
else {
result = right.head :: result
right = right.tail
}
}
return result.reverse
}
We prove correctness by induction. For lists of length zero or one, the algorithm is trivially correct. As induction hypothesis, assume mergesort performs correctly on lists of length at most $n$ for some arbitrary, but fixed natural $n>1$. Now let $L$ be a list of length $n+1$. By induction hypothesis, left and right hold (non-decreasingly) sorted versions of the first resp. second half of $L$ after the recursive calls. Therefore, the while loop selects in every iteration the smallest element not yet investigated and appends it to result; thus result is a non-increasingly sorted list containing all elements from left and right. The reverse is a non-decreasingly sorted version of $L$, which is the returned -- and desired -- result.
As for runtime, let us count element comparisons and list operations (which dominate the runtime asymptotically). Lists of length less than two cause neither. For lists of length $n>1$, we have those operations caused by preparing the inputs for the recursive calls, those from the recursive calls themselves plus the while loop and one reverse. Both recursive parameters can be computed with at most $n$ list operations each. The while loop is executed exactly $n$ times and every iteration causes at most one element comparison and exactly two list operations. The final reverse can be implemented to use $2n$ list operations -- every element is removed from the input and put into the output list. Therefore, the operation count fulfills the following recurrence:
$\qquad \begin{align}T(0) = T(1) &= 0 \\T(n) &\leq T\left(\left\lceil\frac{n}{2}\right\rceil\right) + T\left(\left\lfloor\frac{n}{2}\right\rfloor\right) + 7n\end{align}$
As $T$ is clearly non-decreasing, it is sufficient to consider $n=2^k$ for asymptotic growth. In this case, the recurrence simplifies to
$\qquad \begin{align}T(0) = T(1) &= 0 \\T(n) &\leq 2T\left(\frac{n}{2}\right) + 7n\end{align}$
By the Master theorem, we get $T \in \Theta(n \log n)$ which extends to the runtime of mergesort.
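For readers who want to experiment, here is a runnable Python transcription of the pseudo-code above with a comparison counter (a sketch of my own; it appends in order rather than reversing, which does not affect the comparison count):

import math, random

def mergesort(lst, counter):
    # sort lst, tallying element comparisons in counter[0]
    if len(lst) < 2:
        return lst
    mid = len(lst) // 2
    left = mergesort(lst[:mid], counter)
    right = mergesort(lst[mid:], counter)
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1          # one element comparison per iteration
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

for n in (2**10, 2**14):
    counter = [0]
    data = random.sample(range(10 * n), n)
    assert mergesort(data, counter) == sorted(data)
    print(n, counter[0], n * math.log2(n))   # comparisons grow like n log2 n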
Ultra-low level
Consider this (generalised) implementation of Mergesort in Isabelle/HOL:
types dataset = "nat * string"
fun leq :: "dataset \<Rightarrow> dataset \<Rightarrow> bool" where
"leq (kx::nat, dx) (ky, dy) = (kx \<le> ky)"
fun merge :: "dataset list \<Rightarrow> dataset list \<Rightarrow> dataset list" where
"merge [] b = b" |
"merge a [] = a" |
"merge (a # as) (b # bs) = (if leq a b then a # merge as (b # bs) else b # merge (a # as) bs)"
function (sequential) msort :: "dataset list \<Rightarrow> dataset list" where
"msort [] = []" |
"msort [x] = [x]" |
"msort l = (let mid = length l div 2 in merge (msort (take mid l)) (msort (drop mid l)))"
by pat_completeness auto
termination
apply (relation "measure length")
by simp+
This already includes proofs of well-definedness and termination. Find an (almost) complete correctness proof here.
For the "runtime", that is number of comparisons, a recurrence similar to the one in the prior section can be set up. Instead of using the Master theorem and forgetting the constants, you can also analyse it to get an approximation that is asymptotically equal the true quantity. You can find the full analysis in [1]; here is a rough outline (it does not necessarily fit the Isabelle/HOL code):
As above, the recurrence for the number of comparisons is
$\qquad \begin{align}f_0 = f_1 &= 0 \\f_n &= f_{\left\lceil\frac{n}{2}\right\rceil} + f_{\left\lfloor\frac{n}{2}\right\rfloor} + e_n\end{align}$
where $e_n$ is the number of comparisons needed for merging the partial results². In order to get rid of the floors and ceils, we perform a case distinction over whether $n$ is even:
$\qquad \displaystyle \begin{cases}f_{2m} &= 2f_m + e_{2m} \\f_{2m+1} &= f_m + f_{m+1} + e_{2m+1} \end{cases}$
Using nested forward/backward differences of $f_n$ and $e_n$ we get that
$\qquad \displaystyle \sum\limits_{k=1}^{n-1} (n-k) \cdot \Delta\kern-.2em\nabla f_k = f_n - nf_1$.
The sum matches the right-hand side of Perron's formula. We define the Dirichlet generating series of $\Delta\kern-.2em\nabla f_k$ as
$\qquad \displaystyle W(s) = \sum\limits_{k\geq 1} \Delta\kern-.2em\nabla f_k k^{-s} = \frac{1}{1-2^{-s}} \cdot \underbrace{\sum\limits_{k \geq 1} \frac{\Delta\kern-.2em\nabla e_k}{k^s}}_{=:\ \boxminus(s)}$
which together with Perron's formula leads us to
$\qquad \displaystyle f_n = nf_1 + \frac{n}{2\pi i} \int\limits_{3-i\infty}^{3+i\infty} \frac{\boxminus(s)n^s}{(1-2^{-s})s(s+1)}ds$.
Evaluation of $\boxminus(s)$ depends on which case is analysed. Other than that, we can -- after some trickery -- apply the residue theorem to get
$\qquad \displaystyle f_n \sim n \cdot \log_2(n) + n \cdot A(\log_2(n)) + 1$
where $A$ is a periodic function with values in $[-1,-0.9]$.
[1] Mellin transforms and asymptotics: the mergesort recurrence by Flajolet and Golin (1992).
² Best case: $e_n = \left\lfloor\frac{n}{2}\right\rfloor$; worst case: $e_n = n-1$; average case: $e_n = n - \frac{\left\lfloor\frac{n}{2}\right\rfloor}{\left\lceil\frac{n}{2}\right\rceil + 1} - \frac{\left\lceil\frac{n}{2}\right\rceil}{\left\lfloor\frac{n}{2}\right\rfloor + 1}$ |
$\newcommand{\Beta}{\operatorname{Beta}}$I'm sampling a bunch of probabilities, $\theta_i \sim \Beta(a,b)$, from a common beta distribution, and then using each $\theta_i$ to sample a value $x_i$ out of $N_i$ possibilities. However, I am only able to see $x_i$ if $1 \leq x_i \leq N_i-1$, so there is 2-sided truncation of the data as well. Also, note that the family of beta distributions I'm interested in tend to have both $a$ and $b$ less than 1.
Given the situation, I would like to be able to estimate the beta parameters, $a$ and $b$ given only the observed $x_i$ and $N_i$ values. I have tried to incorporate the correct (I think) truncated binomial likelihood: $\left(\text{i.e. }\frac{\theta_i^{x_i}(1-\theta_i)^{N-x_i}}{1-\theta_i^N-(1-\theta_i)^N}\right) \, ,$ within a Laplace approximation to the marginal likelihood $\left(\text{marginal with respect to }\log\left(\frac{\theta_i}{1-\theta_i}\right)\right)$ to no avail (I always get overestimates of $a$ and $b$).
Does anyone have a good idea of how to do this, or is this simply a fool's errand without extremely large $N_i$ values?
Edit: I'm trying this out simulating $\theta_i$ from a $\Beta(0.08,0.72)$ distribution, with all $N_i=1999$ for simplicity (though in general, they could be different).
Some example $(x_i, N_i)$ pairs are: $$[(442,1999),(1,1999),(22,1999),...,(5,1999),(601,1999),(737,1999)].$$ Note that any $x_i=0$ or $x_i=1999$ are excluded for this problem.
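For anyone who wants to reproduce the setup, here is a minimal sketch of the data-generating process in Python (the sample size m is my own choice):

import numpy as np

rng = np.random.default_rng(0)
a, b, N, m = 0.08, 0.72, 1999, 5000

theta = rng.beta(a, b, size=m)            # theta_i ~ Beta(a, b)
x = rng.binomial(N, theta)                # x_i | theta_i ~ Binomial(N, theta_i)
observed = x[(x >= 1) & (x <= N - 1)]     # two-sided truncation: 0 and N unseen
print(len(observed), observed[:10])

Any candidate estimator for $(a, b)$ can then be tested against the known generating values. |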
Possible Duplicate: \subseteq + \circ as a single symbol (“open subset”)
Is it possible to superimpose a variable size \circ on \sum to compose a symbol just like \oint?
The symbol ⨊ is Unicode character U+2A0A Modulo two sum. It can be used directly with LuaTeX or XeTeX if the glyph is supported by the fonts used. Example:
\documentclass{article}
\usepackage{unicode-math}
\setmathfont{XITS Math}
\begin{document}
\[ ⨊ \] % or \modtwosum
\end{document}
fdsymbol
Another font that directly provides the symbol is fdsymbol:
\documentclass{article}
\usepackage{fdsymbol}
\begin{document}
\[ \modtwosum \]
\end{document}
It is also possible to superimpose the symbols, for example:
\documentclass{article}
\makeatletter
\newcommand*{\modtwosum}[1][.4]{%
  \mathop{%
    \mathpalette\@modtwosum{#1}%
  }%
}
\newcommand*{\@modtwosum}[2]{%
  \sbox0{$\m@th#1\sum$}%
  \rlap{%
    \hbox to \wd0{%
      \hspace{0pt plus #2fil}%
      $\m@th#1\circ$%
      \hspace{0pt plus 1fil}%
      \hspace{0pt plus -#2fil}%
    }%
  }%
  {#1\sum}%
}
\makeatother
\begin{document}
\[ \modtwosum \]
\[ \def\test#1{\modtwosum[#1]_{#1}} \test{0}\test{0.2}\test{0.4}\test{0.6}\test{0.8}\test{1} \]
\end{document} |
As far as I know, I don't think it is possible to drive a Color Ramp or Mapping node from another socket (but I am not super experienced with drivers). However, I have managed to re-create the color ramp and mapping node with math nodes, which you can plug inputs into directly.
Color Ramp
Unfortunately there is no way to create a group node exactly like the Color Ramp, with addable and removable color swatches. To get around this I have created a node with two movable swatches; you can then combine multiple of these nodes to get the functionality of multiple swatches.
The theory:
The two input colors are plugged directly into a Mix RGB node. The two Pos inputs need to be sent through a function and plugged into the mix factor. The position of the first swatch needs to be mapped to $0$ to get just the first color out of the mix node. The position of the second swatch needs to be mapped to $1$ to get just the second color out of the mix node.
The math:
Here's a graph to visualize what we are trying to do, on the x-axis is the input factor of the color ramp, on the y-axis is the desired output, $a$ and $b$ are the positions of the two swatches.
With some simple algebra we can find the equation of the line to be:
$$y = \frac{1}{b-a}x + \frac{a}{a-b}$$
The math nodes below are simply replicating this equation. The final Add node also has Clamp checked to clamp the output to the interval $[0,1]$, which is what the mix node accepts.
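For reference, the remap is equivalent to the clamped function $(x-a)/(b-a)$; here is a tiny Python sketch (the function name is my own):

def ramp_factor(x, a, b):
    # map swatch positions a -> 0 and b -> 1, clamped to [0, 1];
    # same line as y = x/(b-a) + a/(a-b)
    t = (x - a) / (b - a)
    return min(1.0, max(0.0, t))

print(ramp_factor(0.25, 0.25, 0.75), ramp_factor(0.75, 0.25, 0.75))   # 0.0 1.0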
Mapping node
The mapping node has three functions: it can Translate (move), Rotate, and Scale the texture. Since often not all of these are needed, I will create separate group nodes for each of these functions.
If you don't have a decent understanding of how texture coordinates work you may want to read my answer here. The basic theory of manipulating texture coordinates is to separate the components of the vector with a Separate XYZ node, manipulate the components individually, then combine them back into a vector with a Combine XYZ node.
Mapping Node - Translation
Translation is easy, to translate $V$ to $V^\prime$ just add the desired amounts to each of the components of $V$.
In the above node setup I actually used Subtract nodes instead of Add so positive values will move the texture to the right instead of left.
Mapping Node - Scaling
Scaling is the same as translating, just multiply the individual coordinates by the scale factor instead of adding.
Mapping Node - Rotation
Rotation is a little more complicated and I won't take the time to derive the equations here, but they are pretty standard pre-calc formulas. Here are the formulas for rotation.
Around the x-axis
$$\begin{aligned}x^\prime &= x\\y^\prime &= y\cos{\theta} - z\sin{\theta}\\z^\prime &= y\sin{\theta} + z\cos{\theta}\end{aligned}$$
Around the y-axis
$$\begin{aligned}x^\prime &= x\cos{\theta} - z\sin{\theta}\\y^\prime &= y\\z^\prime &= x\sin{\theta} + z\cos{\theta}\end{aligned}$$
Around the z-axis$$\begin{aligned}x^\prime &= x\cos{\theta} - y\sin{\theta}\\y^\prime &= x\sin{\theta} + y\cos{\theta}\\z^\prime &= z\end{aligned}$$
Note: the first two nodes in each of the rotation setups convert the degree input to radians, which is what the trig math nodes want.
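As a cross-check of the z-axis formulas, here is a Python sketch of my own, converting degrees to radians just like the node setups:

import math

def rotate_z(x, y, z, degrees):
    # rotate a texture coordinate about the z-axis (same formulas as above)
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

print(rotate_z(1.0, 0.0, 0.0, 90))   # approximately (0, 1, 0)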
And finally, here's a .blend file with all the above node groups. |
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
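That reads like the Euclidean algorithm, which feeds $(b, r)$ back into the division box; a minimal sketch, assuming that reading:

def gcd(a, b):
    # repeatedly divide: a = b*q + r, then recurse on (b, r) until r = 0
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21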
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Topic: System Properties Question
The input x(t) and the output y(t) of a system are related by the equation
$ y(t)=\int_{-\infty}^t x(\tau) d\tau . \ $
Is the system linear (yes/no)? Justify your answer.
You will receive feedback from your instructor and TA directly on this page. Other students are welcome to comment/discuss/point out mistakes/ask questions too!
Answer 1
Yes, this system is linear.
If
$ x_1(t) \to \Bigg[ system \Bigg] \to y_1(t)= \int_{-\infty}^{t} x_1(\tau) d\tau $
and
$ x_2(t) \to \Bigg[ system \Bigg] \to y_2(t)= \int_{-\infty}^{t} x_2(\tau) d\tau $
Then
$ ax_1(t)+bx_2(t) \to \Bigg[ system \Bigg] \to y(t)= \int_{-\infty}^{t} \left[ ax_1(\tau)+bx_2(\tau) \right] d\tau = a\int_{-\infty}^{t} x_1(\tau) d\tau\ +\ b\int_{-\infty}^{t} x_2(\tau) d\tau = ay_1(t)+by_2(t) $
--Cmcmican 19:20, 26 January 2011 (UTC)
TA's comment: Excellent!
--Ahmadi 17:27, 27 January 2011 (UTC)
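To add a quick numeric illustration for other students (a sketch of my own; a discrete running sum stands in for the integral):

import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 5000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
a, b = 2.0, -3.0

system = lambda x: np.cumsum(x) * dt   # discrete analogue of the running integral
assert np.allclose(system(a * x1 + b * x2), a * system(x1) + b * system(x2))
print("superposition holds numerically")

The assertion passes, matching the proof above. |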
We initiate a study of ``universal locally testable codes" (Universal-LTCs). These codes admit local tests for membership in numerous possible subcodes, allowing for testing properties of the encoded message. More precisely, a Universal-LTC $C:\{0,1\}^k \to \{0,1\}^n$ for a family of functions $\mathcal{F} = \left\{ f_i : \{0,1\}^k \to \{0,1\} \right\}_{i \in [M]}$ is a code such that for every $i \in [M]$ the subcode $\{ C(x) \: : \: f_i(x) = 1 \}$ is locally testable.
We show a ``canonical" $O(1)$-local Universal-LTC of length $\tilde O(M\cdot s)$ for any family $\mathcal{F}$ of $M$ functions such that every $f\in\mathcal{F}$ can be computed by a circuit of size $s$, and establish a lower bound of the form $n=M^{1/O(k)}$, which can be strengthened to $n=M^{\Omega(1)}$ for any $\mathcal{F}$ such that every $f,f' \in\mathcal{F}$ disagree on a constant fraction of their domain.
This work appeared previously as a part of the first version of this technical report, which contained results regarding Universal-LTCs as well as results regarding a related notion called ``universal locally verifiable codes''. Since this combination caused the latter notion and results to be missed, we chose to split the original version into two parts. The current part contains the material regarding Universal-LTCs. The part regarding universal locally verifiable codes appears in a companion paper (ECCC TR16-192).
We initiate a study of ``universal locally testable codes" (universal-LTCs). These codes admit local tests for membership in numerous possible subcodes, allowing for testing properties of the encoded message. More precisely, a universal-LTC $C:\{0,1\}^k \to \{0,1\}^n$ for a family of functions $\mathcal{F} = \{ f_i : \{0,1\}^k \to \{0,1\} \}_{i \in [M]}$ is a code such that for every $i \in [M]$ the subcode $\{ C(x) \: : \: f_i(x) = 1 \}$ is locally testable. We show a ``canonical" $O(1)$-local universal-LTC of length $\tilde{O}(M\cdot s)$ for any family $\mathcal{F}$ of $M$ functions such that every $f\in\mathcal{F}$ can be computed by a circuit of size $s$, and establish a lower bound of the form $n=M^{1/O(k)}$, which can be strengthened to $n=M^{\Omega(1)}$ for any $\mathcal{F}$ such that every $f,f' \in\mathcal{F}$ disagree on a constant fraction of their domain.
We also consider a variant of universal-LTCs wherein the testing procedures are also given free access to a short proof, akin to the MAPs of Gur and Rothblum (ITCS 2015). We call such codes ``universal locally verifiable codes" (universal-LVCs). We show universal-LVCs of length $\tilde{O}(n^2)$ for $t$-ary constraint satisfaction problems ($t$-CSP) over $k$ variables, with proof length and query complexity $\tilde{O}(n^{2/3})$, where $t=O(1)$ and $n\ge k$ is the number of constraints in the CSP instance. In addition, we prove a lower bound of $p \cdot q = \tilde\Omega(k)$ for every polynomial length universal-LVC for CSPs (over $k$ variables) having proof complexity $p$ and query complexity $q$.
Lastly, we give an application for interactive proofs of proximity (IPP), introduced by Rothblum et al. (STOC 2013), which are interactive proof systems wherein the verifier queries only a sublinear number of input bits and soundness only means that, with high probability, the input is close to an accepting input. We show that using a small amount of interaction, our universal-LVC for CSP can be, in a sense, ``emulated" by an IPP, yielding a $3$-round IPP for CSP with sublinear communication and query complexity. |
Big Bang singularity cannot be treated as “point type explosion”. The problem arises when we think singularity as a “point in space-time”.
According to Einstein's general theory of relativity, spacetime at every event has definite curvature. If that curvature is everywhere infinite, we define no spacetime at all. If we try to imagine the time of the big bang itself as being covered by the spacetime of the cosmology, we are saying that there is a time at which spacetime is not properly defined. So, there can be no time in the cosmology corresponding to the big bang. We describe the big bang as a "singularity", a breakdown in the laws that govern space and time. The term singularity, roughly speaking, designates a point in a mathematical structure where a quantity fails to be well defined, even though the quantity is well defined at all neighboring points. The simplest and best-known example arises with the inverse function, $1/x$. As long as $x$ is non-zero, $1/x$ is well defined.
For the one-sided limits we have
And
$$\lim_{x \rightarrow 0^+} {\frac {1} {x}}=\infty$$
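A two-line check with sympy, for readers who want to see a computer algebra system agree (a sketch of my own):

import sympy as sp

x = sp.symbols('x')
print(sp.limit(1/x, x, 0, dir='-'))   # -oo
print(sp.limit(1/x, x, 0, dir='+'))   # oo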
So in general we say $1/0$ goes to infinity, but which infinity? What can we say about it?
The safer course is just to say that we have a singularity at $x=0$ and not try to give it any value. |
Zeroth Order Approximation
In the simplest approximation, cold implies that $T_{e} = T_{i} = 0$, where $T_{s}$ is the average temperature of species $s$. There is an entire branch of plasma theory based upon this assumption. It is another way of saying that you assume the plasma is initially at rest with no thermal fluctuations. It also implies the plasma has no pressure, thus no pressure waves can exist if $T_{e} = T_{i} = 0$.
I wrote an answer describing potential wave modes in such a system at: https://physics.stackexchange.com/a/138460/59023.
Finite Temperature Approximation
In a slightly less extreme approximation, one can argue the plasma is cold when the plasma beta, $\beta$, is very small. That is:$$\beta = \frac{2 \mu_{o} \ n_{o} \ k_{B} \left( T_{e} + T_{i} \right) }{B_{o}^{2}} \ll 1$$where $n_{o}$ is the charged particle number density, $B_{o}$ is the quasi-static magnetic field, $\mu_{o}$ is the permeability of free space, and $k_{B}$ is the Boltzmann constant. I wrote an answer describing how to define the particle temperatures at: https://physics.stackexchange.com/a/218643/59023.
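As a worked example (illustrative numbers of my own, roughly solar-wind-like, in SI units):

import numpy as np

mu0, kB = 4e-7 * np.pi, 1.380649e-23
n0 = 5e6          # number density [m^-3]
Te = Ti = 1e5     # temperatures [K]
B0 = 5e-9         # magnetic field [T]

beta = 2 * mu0 * n0 * kB * (Te + Ti) / B0**2
print(beta)       # ~1.4, so this plasma would not count as cold

Phenomenological Answer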
The answer to your question really depends upon the application or circumstances in which you are interested. For instance, we have found through observations that the whistler mode wave, or the R mode when $\Omega_{ci} < \omega < \Omega_{ce}$ (where $\Omega_{cs}$ is the cyclotron frequency of species $s$), is well characterized by the cold plasma dispersion relation in the high density limit (i.e., $\omega^{2} \ll \omega_{pe}^{2}$ and $\Omega_{ce}^{2} \ll \omega_{pe}^{2}$, where $\omega_{ps}$ is the plasma frequency of species $s$) even though we know the plasma is not cold.
It is another way of saying that there are circumstances where the temperature corrections do not have a noticeable impact on the system/phenomena in which you are interested. |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ... |
10.2.3: The Connection Between the Stream Function and the Potential Function
For this discussion, the situation of two dimensional incompressible flow is assumed. It was shown that
\[
\label{if:eq:UxpotentianlSteam} \pmb{U}_x = \dfrac{\partial \phi}{\partial x} = \dfrac{\partial \psi}{\partial y} \tag{70} \] and \[ \label{if:eq:UypotentianlSteam} \pmb{U}_y = \dfrac{\partial \phi}{\partial y} = - \dfrac{\partial \psi}{\partial x} \tag{71} \]
These equations (70) and (71) are the Cauchy–Riemann relations between \(\phi\) and \(\psi\). The definition of the potential function is based on the gradient operator as \(\pmb{U} = \boldsymbol{\nabla}\phi\), thus the derivative in an arbitrary direction can be written as
\[ \label{if:eq:arbitraryPotential} \dfrac{d\phi}{ds} = \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \widehat{s} = \pmb{U} \boldsymbol{\cdot} \widehat{s} \tag{72} \] where \(ds\) is an arbitrary direction and \( \widehat{s}\) is the unit vector in that direction. If \(s\) is selected in the streamline direction, the change in the potential function represents the change along the streamline direction. Choosing an element in the direction normal to the streamline, denoting it as \(dn\), and choosing the sign to be in the same direction as the stream function, it follows that \[ \label{if:eq:velocitystreamPotential} {U} = \dfrac{d\phi}{ds} \tag{73} \] If the derivative of the stream function is chosen in the direction of the flow, then, as was shown in equation (58), it is summarized as \[ \label{if:eq:streamFpotentialFDerivative} \dfrac{d\phi}{ds} = \dfrac{d\psi}{dn} \tag{74} \]
Fig. 10.3 Constant Stream lines and Constant Potential lines.
There are several conclusions that can be drawn from the derivations above. The conclusion from equation (74) is that the stream lines are orthogonal to the potential lines. Since the streamlines are lines of constant stream function, it follows that there are corresponding lines of constant potential as well. The lines of constant value of the potential are referred to as potential lines.
Fig. 10.4 Stream lines and potential lines drawn for two dimensional flow. The green to green–turquoise colors are the potential lines. Note that opposing quadrants (first and third quadrants) have the same colors. The constant is larger as the color approaches the turquoise color. Note there is no constant equal to zero, while for the stream lines the constant can be zero. The stream lines are described by the orange to blue lines. The orange lines describe positive constants while the purple to blue lines describe negative constants. The crimson lines are for zero constants. This figure was part of a project by Eliezer Bar-Meir to learn the GLE graphic programming language.
Figure 10.4 describes almost a standard case of stream lines and potential lines.
Example 10.3
A two dimensional stream function is given as \(\psi= x^4 - y^2\). Calculate the expression for the potential function \(\phi\) (constant value) and sketch the streamlines (lines of constant value).
Solution 10.3
Utilizing the differential relations (70) and (71):
\[ \label{streamTOpotential:derivativeY} \dfrac{\partial \phi}{\partial x} = \dfrac{\partial \psi}{\partial y} = - 2\, y \tag{75} \] Integrating with respect to \(x\) to obtain \[ \label{streamTOpotential:integralX} \phi = - 2\,x\,y + f(y) \tag{76} \] where \(f(y)\) is an arbitrary function of \(y\). Utilizing the other relationship (71) leads to \[ \label{streamTOpotential:eq:derivativeX} \dfrac{\partial \phi}{\partial y} = - 2\, x + \dfrac{d\,f(y) }{dy} = - \dfrac{\partial \psi}{\partial x} = - 4\,x^3 \tag{77} \] Therefore \[ \label{streamTOpotential:eq:potentialODE} \dfrac{d\,f(y) }{dy} = 2\,x - 4\, x^3 \tag{78} \] Note that \(f\) is a function of \(y\) alone, so its derivative should not contain \(x\); the \(x\)-dependence in equation (78) signals that this \(\psi\) is not harmonic, and the potential obtained below is only formal. Carrying the integration through nonetheless, the function \(\phi\) is \[ \label{streamTOpotential:phiIntegration} \phi = \left( 2\,x - 4\, x^3 \right)\, y + c \tag{79} \] The results are shown in Figure 10.5.
Fig. 10.5 Stream lines and potential lines for Example 10.3.
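A short symbolic check of Example 10.3 (a minimal sketch in Python/sympy; the harmonic example \(\psi = 2xy\) at the end is an assumed illustration, not part of the example):

import sympy as sp

# A quick symbolic check of the procedure above.  The compatibility test is
# Laplace's equation: a potential exists only when psi is harmonic.
x, y = sp.symbols('x y')
psi = x**4 - y**2
print(sp.simplify(sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))   # 12*x**2 - 2, not 0
# The nonzero Laplacian is why x appeared in equation (78): no single-valued
# phi exists for this psi.  For a harmonic example such as psi = 2*x*y the
# relations (70)-(71) integrate cleanly:
psi2 = 2 * x * y
phi = sp.integrate(sp.diff(psi2, y), x)                       # d(phi)/dx = d(psi)/dy
phi += sp.integrate(-sp.diff(psi2, x) - sp.diff(phi, y), y)   # recover f(y)
print(sp.simplify(phi))                                        # x**2 - y**2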
[edited to explain a few steps and connect with the Hahn polynomials]
The answer is $$\frac{(n+1)(2n+3)\, n!^4}{(2n)!}$$ assuming that I did the algebra right, which seems likely because this formula agrees with the previously computed values $3,5,14,324/5$ for $n=0,1,2,3$.
Consider first the minimum of $\sum_{i=1}^{n+1} p(i)^2$ over monic $p$ of degree $n$. Any vector $v = (a_1,a_2,\ldots,a_{n+1})$ is the list of values at $1,2,\ldots,n+1$ of some polynomial of degree at most $n$; the leading coefficient of this polynomial is $(u,v) / n!$ where $u$ is the vector whose $(n+1-i)$-th coordinate is $(-1)^i {n \choose i}$ (each $i$ in $0 \leq i \leq n$).$\color{red}{\bf[1]}$ We thus seek the minimum of $(v,v)$ subject to $(u,v) = n!$, and by Cauchy-Schwarz the answer is $n!^2 / (u,u)$, attained iff $v = n!\, u / (u,u)$. The denominator $(u,u)$ is $\sum_{i=0}^n {n \choose i}^2$, which is well-known to equal ${2n \choose n}$.$\color{red}{\bf[2]}$ Hence the answer is $n!^2 / {2n \choose n} = n!^4 / (2n)!$.
With a bit more work we can find for each $k$ the minimum of $\sum_{i=1}^{n+k+1} p(i)^2$ over monic $p$ of degree $n$. Here's how it goes for $k=2$. There are now three linear conditions on $v = (a_1,a_2,\ldots,a_{n+3})$ to be the list of values at $1,2,\ldots,n+3$ of a monic polynomial of degree at most $n$. We can write them as $(u_0,v)=n!$, $(u_1,v)=0$, $(u_2,v)=0$, where $u_j$ is the vector whose $(n+3-i)$-th coordinate is $(-1)^i {n+j \choose i}$ for each $i$ in $0 \leq i \leq n+2$.$\color{red}{\bf[1a]}$ As was the case for $k=0$, the minimum of $(v,v)$ over all such $v$ is attained by a linear combination of $u_0,u_1,u_2$. So we need only calculate the $3 \times 3$ Gram matrix of inner products $(u_j,u_{j'})$ $(j,j'=0,1,2)$, and invert it to find the linear combination $v$ such that $(u_j,v) = n!\, \delta_j$. Each $(u_j,u_{j'})$ is $\sum_{i \geq 0} {n+j \choose i} {n+j' \choose i} = {2n+j+j' \choose n+j}$.$\color{red}{\bf[2a]}$ So write each of these entries of the Gram matrix as ${2n \choose n}$ times some rational function of $n$, solve the resulting linear equations for the coefficients of $v$ in $u_0,u_1,u_2$, and recover $(v,v)$. This calculation yields the formula $(n+1)(2n+3)\, n!^2 / {2n \choose n}$ displayed (in equivalent form) at the start of this answer.
For general $k$ the minimum seems to be $$\frac{n!^4}{(2n)!} {2n+1+k \choose k},$$ which presumably can be proved from the above analysis and known identities.$\color{red}{\bf[3]}$
$\color{red}{\bf[1],[1a]}$ Taking the inner product with $u$ amounts to evaluating an $n$-th finite difference. Likewise, taking the inner product with $u_j$ amounts to evaluating an $(n+j)$-th finite difference.
$\color{red}{\bf[2],[2a]}$ The formula $\sum_{i \geq 0} {m \choose i} {m' \choose i} = {m+m' \choose m}$ has at least two well-known proofs, one bijective and one generatingfunctionological. For the former, write ${m+m' \choose m}$ as the number of $(m+m')$-tuples of $m$ 0's and $m'$ 1's, and let $i$ be the number of 1's among the first $m$ coordinates. For the latter, compute the $X^m$ coefficient of $(1+X)^m \, (1+X)^{m'} = (1+X)^{m+m'}$ in two ways.
$\color{red}{\bf[3]}$ Further corroboration is that this is also consistent with the extreme cases $n=0$ and (with a bit more work) $n=1$. I later obtained a proof by transforming the relevant determinants into Vandermonde determinants. The existence of such a formula for all $n,k$ suggested that the $p$'s (which are orthogonal polynomials for a discrete measure) must be known already, and after some Googling I found that indeed they are the special case $\alpha=\beta=0$ of the Hahn polynomials $Q_n$ evaluated at $x-1$ (with $N = n+k+1$). The orthogonality relation, together with the formula for the leading coefficient of $Q_n$, soon yields the evaluation for all $n,k$ of the minimum of $\sum_{i=1}^{n+k+1} p(i)^2$ over monic polynomials $p$ of degree $n$.
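A quick numerical confirmation of the general formula (a minimal sketch; the tested ranges of $n$ and $k$ are arbitrary):

import numpy as np
from math import comb, factorial

# Verify min over monic p of degree n of sum_{i=1}^{n+k+1} p(i)^2 against
# n!^4/(2n)! * C(2n+1+k, k) by least squares in the lower-order coefficients.
def min_sum_squares(n, k):
    pts = np.arange(1, n + k + 2, dtype=float)
    if n == 0:
        return float(len(pts))                     # p is identically 1
    A = np.vander(pts, n, increasing=True)         # columns 1, x, ..., x^(n-1)
    c = np.linalg.lstsq(A, -pts**n, rcond=None)[0]
    resid = A @ c + pts**n                         # the values p(i)
    return float(resid @ resid)

def formula(n, k):
    return factorial(n)**4 / factorial(2 * n) * comb(2 * n + 1 + k, k)

for n in range(6):
    for k in range(5):
        assert abs(min_sum_squares(n, k) - formula(n, k)) <= 1e-6 * formula(n, k)
print("agrees for all n < 6, k < 5")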
Little explorations with HP calculators (no Prime)
03-23-2017, 01:23 PM (This post was last modified: 03-23-2017 01:23 PM by pier4r.)
Post: #21
RE: Little explorations with the HP calculators
(03-23-2017 12:19 PM)Joe Horn Wrote: I see no bug here. Variables which are assigned values should never be used where formal variables are required. Managing them is up to the user.
Ok (otherwise a variable could not be fed into a function from a program), but then how come, when I set the flags that let the function return reals, the variable is purged? The behavior could be more consistent.
Nothing bad, just that quirks like this, when not easy to spot, may lead to other solutions (see the advice of John Keit that I followed).
Wikis are great, Contribute :)
03-24-2017, 01:45 PM (This post was last modified: 03-24-2017 03:23 PM by pier4r.)
Post: #22
RE: Little explorations with the HP calculators
Quote:Brilliant.org
Is there a way to solve this without using a wrapping program? (hp 50g)
I'm trying around some functions (e.g: MSLV) with no luck, so I post this while I dig more on the manual, and on search engines focused on "site:hpmuseum.org" or "comp.sys.hp48" (the official hp forums are too chaotic, so I won't search there, although they store great contributions as well).
edit: I don't mind inserting manually new starting values, I mean that there is a function to find at least one solution, then the user can find the others changing the starting values.
Edit. It seems that the numeric solver for one equation can do it: one has to set values for the variables and then press solve, even if one variable already has a value (it was not so obvious from the manual; I thought that variables with given values could not change). The point is that one variable will change while the others stay constant. In this way one can find all the solutions.
Wikis are great, Contribute :)
03-24-2017, 03:23 PM (This post was last modified: 03-24-2017 03:24 PM by pier4r.)
Post: #23
RE: Little explorations with the HP calculators
Quote:Same site as before.
This can be solved with multiple applications of SOLVEVX (hp 50g) on parts of the equation with proper observations. So it is not that difficult, just I found it nice and I wanted to share.
Wikis are great, Contribute :)
03-24-2017, 10:04 PM (This post was last modified: 03-24-2017 10:05 PM by pier4r.)
Post: #24
RE: Little explorations with the HP calculators
How do you solve this using a calculator to compute the number?
I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as a helper.
I also needed to prove to myself that the center of the circle is at a particular location before building the final equation.
Wikis are great, Contribute :)
03-24-2017, 10:54 PM (This post was last modified: 03-24-2017 11:07 PM by Dieter.)
Post: #25
RE: Little explorations with the HP calculators
I think a calculator is the very last thing required here.
(03-24-2017 10:04 PM)pier4r Wrote: I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as a helper.
Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
So 6√2 = 2√2 + r + r√2. Which directly leads to r = 4 / (1 + 1/√2) = 2,343...
Dieter
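A minimal numeric check of that closed form (nothing assumed beyond the relation 6√2 = 2√2 + r + r√2 above):

Code:
from math import sqrt

# Check: 6*sqrt(2) = 2*sqrt(2) + r + r*sqrt(2)  =>  r = 4*sqrt(2)/(1+sqrt(2))
r = 4 * sqrt(2) / (1 + sqrt(2))
print(r, 4 / (1 + 1 / sqrt(2)))       # both print 2.3431..., matching 2,343...
print(2 * sqrt(2) + r + r * sqrt(2))  # equals 6*sqrt(2) = 8.4852...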
03-25-2017, 08:17 AM (This post was last modified: 03-25-2017 08:19 AM by pier4r.)
Post: #26
RE: Little explorations with the HP calculators
As always, first comes the mental work to create the formula or the model, but to compute the final number one needs a calculator most of the time.
Quote:Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
I used the numeric solver because, instead of grouping r on the left, I just used the formula from one step before (the one without grouping) to find the value. Anyway, one cannot just use the diagonal merely because the picture is well drawn; one has to prove that O is on the diagonal (nothing difficult, but required), otherwise that is a step taken for granted.
Wikis are great, Contribute :)
03-27-2017, 12:14 PM
Post: #27
RE: Little explorations with the HP calculators
Quote:Brilliant.org
This one defeated me for the moment; my rusty memory of mathematical relations did not help. In the end, having the hp50g, I tried to use some visual observations to write down the cartesian coordinates of the points defining the inner square, or to observe the lengths of the sides: if the side of the inner square is 2r, then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This, plus the knowledge that 4 times the triangle plus the inner square add up to 1 as area, was still not enough for a solution (with or without the hp50g); I ended up with too ugly/tedious formulae.
Wikis are great, Contribute :)
03-27-2017, 12:54 PM (This post was last modified: 03-27-2017 03:42 PM by Thomas Okken.)
Post: #28
RE: Little explorations with the HP calculators
Consider a half-unit circle jammed into the corner of the first quadrant (so its center is at (0.5, 0.5)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The tangent on the circle where it meets that radius will intersect the X axis at 1 + tan(phi), and the Y axis at 1 + cot(phi) or 1 + 1 / tan(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 1 (or X = Y - 1). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the diameter of the circle is 1 / sqrt(X^2 + Y^2).
EDIT: No, I screwed up. The intersections at the axes are not at 1 + tan(phi), etc., that relationship is not quite that simple. Back to the drawing board!
Second attempt:
Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). The tangent on the circle at that point will have a slope of -1 / tan(phi), and so it will intersect the X axis at Px + Py * tan(phi), or (1 + sin(phi)) * tan(phi) + 1 + cos(phi), and it will intersect the Y axis at (1 + cos(phi)) / tan(phi) + 1 + sin(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 2 (or X = Y - 2). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the radius of the circle is 1 / sqrt(X^2 + Y^2).
Because of symmetry, sweeping the angles from 0 to pi/2 is actually not necessary; you can restrict yourself to 0 through pi/4 and the case that X = Y - 2.
03-27-2017, 02:27 PM (This post was last modified: 03-27-2017 02:28 PM by pier4r.)
Post: #29
RE: Little explorations with the HP calculators
(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
I'm a bit blocked.
http://i.imgur.com/IW4QIeU.jpg
Would be possible to add a quick sketch?
Wikis are great, Contribute :)
03-27-2017, 03:44 PM (This post was last modified: 03-27-2017 03:44 PM by Thomas Okken.)
Post: #30
RE: Little explorations with the HP calculators
(03-27-2017 02:27 PM)pier4r Wrote:(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
OK; I attached a sketch to my previous post.
03-27-2017, 03:55 PM
Post: #31
RE: Little explorations with the HP calculators
Thanks, and an interesting approach. On brilliant.org there were dubious solutions (that did not prove their assumptions) and just one really cool one, making use of a known relationship for circles inscribed in right triangles.
Wikis are great, Contribute :)
03-27-2017, 05:29 PM (This post was last modified: 03-27-2017 05:31 PM by pier4r.)
Post: #32
RE: Little explorations with the HP calculators
Quote:Brilliant.org
For this I wrote a quick program, relying on the property that the sample mean stabilizes after enough iterations (the law of large numbers; I should find the right statement though).
Code:
But I'm not sure about the correctness of the approach. I'm pretty sure there is a way to compute this with an integral and then a closed form too.
Anyway, this is the result at the moment:
Wikis are great, Contribute :)
03-27-2017, 06:01 PM (This post was last modified: 03-27-2017 07:33 PM by Dieter.)
Post: #33
RE: Little explorations with the HP calculators
(03-27-2017 12:14 PM)pier4r Wrote: ...so if the side of the inner square is 2r so the sides of the triangle are "s+2r" and "s" from which one can say that "s^2+(s+2r)^2=1" . This plus the knowledge that 4 times the triangles plus the inner square add up to 1 as area. Still, those were not enough for a solution
Right, in the end you realize that both formulas are the same. ;-)
The second constraint for s and r could be the formula of a circle inscribed in a triangle. This leads to two equations in two variables s and r. Or with d = 2r you'll end up with something like this:
(d² + d)/2 + (sqrt((d² + d)/2) + d)² = 1
I did not try an analytic solution, but using a numeric solver returns d = 2r = (√3–1)/2 = 0,36603 and s = 1/2 = 0,5.
Edit: finally this seems to be the correct solution. ;-)
Dieter
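A quick check of the equation and the proposed solution (a minimal sketch; only the equation in the post above is used):

Code:
from math import sqrt

# Check of (d^2+d)/2 + (sqrt((d^2+d)/2) + d)^2 = 1 at d = 2r = (sqrt(3)-1)/2.
d = (sqrt(3) - 1) / 2                 # 0.36603...
s = sqrt((d**2 + d) / 2)              # side of the triangle, comes out to 0.5
print((d**2 + d) / 2 + (s + d)**2)    # 1.0, so the proposed d satisfies it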
03-27-2017, 06:14 PM (This post was last modified: 03-27-2017 06:15 PM by pier4r.)
Post: #34
RE: Little explorations with the HP calculators
(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
How? One should be from Pythagoras' theorem, a^2+b^2=c^2 (where I use the two sides of the triangle to get the hypotenuse); the other is the composition of the area of the square, made up from 4 triangles and one inner square. To me they sound like different models for different measurements. Could you explain to me why those are the same?
Anyway, for me even the numerical solution is great (actually it's the one that I searched for with the hp50g), but I cannot tell you whether it is right or not because I did not solve it myself; other reviewers are needed.
Edit: anyway, I remember that a discussed solution mentioned the relationship of a circle inscribed in a triangle, so I guess your direction is right.
Wikis are great, Contribute :)
03-27-2017, 06:30 PM
Post: #35
RE: Little explorations with the HP calculators
(03-27-2017 06:14 PM)pier4r Wrote:(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
Just do the math. On the one hand, \( s^2 + (s + 2r)^2 = 1\) from Pythagoras' Theorem, as you observed. And your other observation is that
\[ 4 \cdot \underbrace{\frac{1}{2} \cdot s \cdot (s+2r)}_{\text{area of }\Delta}
+ \underbrace{(2r)^2}_{\text{area of } \Box} = 1 \]
Simplify the left hand side:
\[
\begin{align}
4 \cdot \frac{1}{2} \cdot s \cdot (s+2r) + (2r)^2 & =
2s^2+4rs + 4r^2 \\
& = s^2 + s^2 + 4rs + 4r^2 \\
& = s^2 + (s+2r)^2
\end{align} \]
Hence, both formulas are the same.
Graph 3D | QPI | SolveSys
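For anyone who wants the simplification verified symbolically, a minimal sketch:

Code:
import sympy as sp

# Symbolic version of the algebra above: four triangles plus the inner
# square equal the square on the hypotenuse.
s, r = sp.symbols('s r', positive=True)
lhs = 4 * sp.Rational(1, 2) * s * (s + 2 * r) + (2 * r) ** 2
rhs = s ** 2 + (s + 2 * r) ** 2
print(sp.simplify(lhs - rhs))   # 0, so both formulas are the same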
03-27-2017, 06:57 PM (This post was last modified: 03-27-2017 06:57 PM by pier4r.)
Post: #36
RE: Little explorations with the HP calculators
Thanks. I did not work through the formula; I was more stuck (and somewhat still stuck) on the fact that they should represent different objects/results.
But then again, the square built on the side is the square itself. So now I see it. I wanted to see it in terms of "represented objects", not only formulae.
Wikis are great, Contribute :)
03-27-2017, 07:07 PM
Post: #37
RE: Little explorations with the HP calculators
(03-27-2017 06:57 PM)pier4r Wrote:
Are you familiar with the geometric proofs of Pythagoras' Theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
A few geometric proofs: http://www.cut-the-knot.org/pythagoras/
Graph 3D | QPI | SolveSys
03-27-2017, 07:22 PM
Post: #38
RE: Little explorations with the HP calculators
(03-27-2017 07:07 PM)Han Wrote: Are you familiar with the geometric proofs of Pythagoras' Theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
Maybe my choice of words was not the best. I wanted to convey that when I model two different events (or objects, in this case) and I get the same formula, it is not immediate for me to say "oh, OK, then they are the same object"; I have to, how can I say, "see it". So in the case of the problem, I saw it when I realized that the 1^2 is not only equal to the area, it is exactly the area, because it models the area of the square itself. (I was visually building 1^2 outside the square, like a duplicate.)
Anyway the link you shared is great. I looked briefly and I can say:
- long ago I saw the proof #1
- in school I saw the proof #9
- oh look, the proof #34 would have helped, as someone mentioned
- how many!
Great!
Wikis are great, Contribute :)
03-27-2017, 07:24 PM (This post was last modified: 03-27-2017 07:28 PM by Joe Horn.)
Post: #39
RE: Little explorations with the HP calculators
(03-27-2017 05:29 PM)pier4r Wrote:Quote:Brilliant.org
After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result:
10 randomize
20 T=0:C=0
30 repeat
40 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
50 until C=99999994
60 repeat
70 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
80 print C;T/C
90 until C=99999999
run
99999995 0.5214158234249566646569152059
99999996 0.5214158242970253667174680247
99999997 0.5214158240318481570747604814
99999998 0.5214158247892039896051570164
99999999 0.5214158253601312510245695897
OK
run
99999995 0.5213642776110289008920452545
99999996 0.5213642752079475043717958065
99999997 0.52136427197858201293861314
99999998 0.5213642744828552963477424429
99999999 0.5213642759132547792130043215
OK
run
99999995 0.5213770659191193073147616413
99999996 0.5213770610000764506616015052
99999997 0.5213770617149058467216528505
99999998 0.5213770589414874167694264508
99999999 0.5213770570854305903944611055
OK
So it SEEMS to be zeroing in on something close to -LOG(LOG(2)), but I give up.
<0|ɸ|0>
-Joe-
03-27-2017, 07:35 PM (This post was last modified: 03-27-2017 07:37 PM by pier4r.)
Post: #40
RE: Little explorations with the HP calculators
(03-27-2017 07:24 PM)Joe Horn Wrote: 99999999 0.5213770570854305903944611055
Interestingly, your number is quite different from mine. OK, you ran a couple more iterations, but my average was pretty stable in the first 3 digits. I wonder why the discrepancy.
Moreover, if you round to the 4th decimal place you always get 0.5214 (rounding the last kept digit up when the first excluded digit is higher than 5).
Wikis are great, Contribute :)
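For reference, the quantity being estimated here, the mean distance between two uniform random points in the unit square, has a known closed form, (2 + √2 + 5·asinh(1))/15 = 0.5214054331... A minimal Monte Carlo sketch against it:

Code:
import numpy as np

# Monte Carlo estimate of the mean distance between two uniform random
# points in the unit square, against the closed form above.
rng = np.random.default_rng(0)
n = 2_000_000
p = rng.random((n, 2))
q = rng.random((n, 2))
estimate = np.linalg.norm(p - q, axis=1).mean()
exact = (2 + np.sqrt(2) + 5 * np.arcsinh(1)) / 15
print(estimate, exact)   # agreement to roughly 4 decimals; the closeness of
                         # -log10(log10(2)) = 0.52139... is a coincidence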
Any reversible process can be described as a sum of many infinitesimally small Carnot cycles, so $\oint dS = \oint \frac{dQ}{T} = 0$ holds. It means the integral is independent of the path it takes, so the entropy S is a state variable. Strictly speaking, such path-independence is only true for reversible processes. Then... is the entropy S not a state variable for irreversible processes?
Entropy is a property of the system, and it does not depend on the process that the system experiences. Also, it does not depend on how you measure it. For an irreversible process $\oint \frac{dQ}{T}$ is not equal to zero, but the system still has a property called entropy at any instant, whose change depends only on the initial and final states of the system.
In an irreversible process, the system itself generates entropy, which has to be added to the terms $\frac{\delta Q}{T}$ representing heat transfer from/to the outside. For example, if a current $I$ passes through a resistor $R$ at temperature $T$, the rate of entropy generation is $\frac{I^2R}{T}$, measured in units of $\mathrm{J/(K \cdot s)}$.
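As a toy numerical illustration of that last formula (a minimal sketch; the current, resistance, and temperature values are assumed):

# Entropy generation rate of a resistor, dS/dt = I**2 * R / T; values assumed.
I, R, T = 2.0, 10.0, 300.0        # amperes, ohms, kelvin
print(I**2 * R / T, "J/(K*s)")    # 0.1333... J/(K*s)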
I need to estimate the probability that an outcome R comes from either the sum (SUM) or from the difference (DIF) of two independent random variables. To do that, I need to compute the conditional probabilities P(SUM|R) and P(DIF|R).
Suppose that the elements of the sum of two independent random variables are given by SUM = { X + Y } and that the elements of the difference are given by DIF = { X - Y }. The pdfs for the events SUM and DIF are then fsum(r) and fdif(r).
Since an outcome from the final sample space, S = SUM U DIF, may come from either of the sets SUM and DIF, how can I derive the pdf for the final sample space, fs(r)?
Intuitively, I would use
$$ f_S(r) = Pr(SUM)f_{sum}(r) + Pr(DIF)f_{dif}(r) $$
where Pr(SUM) and Pr(DIF) are the probabilities of the events, but it is not clear to me that this handles the joint density contribution.
Supposing that R is an event from the final sample space S, we would have
$$ R = R \cap S = R \cap (S_{sum} \cup S_{dif}) = (R \cap S_{sum}) \cup (R \cap S_{dif}) $$
then
$$ F_R(r) = Pr( R \leq r ) = Pr\left [ (R \cap S_{sum}) \cup (R \cap S_{dif}) \leq r \right ] \\ \\ = Pr \left [ (R \cap S_{sum}) \leq r \right ] + Pr\left [ (R \cap S_{dif}) \leq r \right ] - Pr\left [ (R \cap S_{sum} \cap S_{dif}) \leq r \right ] \\ \\ = Pr \left [ (R | S_{sum}) \leq r \right ]Pr(S_{sum}) + Pr \left [ (R | S_{dif}) \leq r \right ]Pr(S_{dif}) - Pr \left [ (R | (S_{sum} \cap S_{dif})) \leq r \right ]Pr(S_{sum} \cap S_{dif}). $$
If I assume that the events SUM and DIF cannot happen at the same time and that they have the same probability of occurrence, i.e. P(SUM) = P(DIF) = 1/2, how can I derive fs(r) from the equations above?
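Under exactly those assumptions (mutually exclusive, equally likely events), the law of total probability validates the intuitive mixture formula: f_S(r) = (1/2) f_sum(r) + (1/2) f_dif(r), since the intersection term vanishes. A minimal simulation sketch (the normal distributions for X and Y are assumed purely for illustration):

import numpy as np
from scipy.stats import norm

# Mixture f_S(r) = 0.5*f_sum(r) + 0.5*f_dif(r) for mutually exclusive,
# equally likely events.  Assumed X ~ N(0,1), Y ~ N(3,1), so X+Y ~ N(3,2),
# X-Y ~ N(-3,2), and the mixture is bimodal.
rng = np.random.default_rng(1)
n = 400_000
x, y = rng.standard_normal(n), 3 + rng.standard_normal(n)
from_sum = rng.random(n) < 0.5          # a fair coin decides which event occurred
r = np.where(from_sum, x + y, x - y)

hist, edges = np.histogram(r, bins=100, range=(-8, 8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f_mix = 0.5 * norm.pdf(centers, 3, np.sqrt(2)) + 0.5 * norm.pdf(centers, -3, np.sqrt(2))
print(np.max(np.abs(hist - f_mix)))     # small (about 1e-2): the formula matches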
SolidsWW Flash Applet Sample Problem 1
Flash Applets embedded in WeBWorK questions: solidsWW Example. Sample Problem with solidsWW.swf embedded.
A standard WeBWorK PG file with an embedded applet has six sections:
1. A tagging and description section, that describes the problem for future users and authors,
2. An initialization section, that loads required macros for the problem,
3. A problem set-up section that sets variables specific to the problem,
4. An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
5. A text section, that gives the text that is shown to the student, and
6. An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:
There are other example problems using this applet:
solidsWW Flash Applet Sample Problem 2
solidsWW Flash Applet Sample Problem 3
And other problems using applets:
Derivative Graph Matching Flash Applet Sample Problem
USub Applet Sample Problem
trigwidget Applet Sample Problem
solidsWW Flash Applet Sample Problem 1
GraphLimit Flash Applet Sample Problem 2
Other useful links:
Flash Applets Tutorial
Things to consider in developing WeBWorK problems with embedded Flash applets
PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ##########################################
This is the tagging and description section.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", );
This is the initialization section.
The loadMacros call loads the macro files the problem uses; in particular, AppletObjects.pl provides the routines needed to embed the Flash applet.
TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = random(2,10,1); $b = random(2,10,1); $xy = 'y'; $func1 = "$a*sin(pi*y/8)+2"; $func2 = "$b*sin(pi*y/2)+2"; $xmax = max(Compute("$a+2"), Compute("$b+2"),9); $shapeType = 'circle'; $correctAnswer = Compute("64*$a+4*pi*$a^2+32*pi");
This is the problem set-up section.
The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set $xy = 'y' so that the profile functions $func1 and $func2 are given in terms of y.
######################################### # How to use the solidWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value. # ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do? # The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle # the radius of the solid which # varies with the height #########################################
This is the Applet link section.
Those portions of the code that begin the line with # are comments; they document the applet for other authors and are ignored when the problem is processed.
################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, #answerBoxAlias => 'answerBox', height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' );
You must include the section that follows to create the link to the applet.
################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0000ff</theColor> <profile> <piece func='$func1' cut='8'/> <piece func='$func2' cut='10'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0000ff</theColor> <profile> <piece func='$func1' cut='8'/> <piece func='$func2' cut='10'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, )));
The lines
$applet->initialState(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'</captiontxt>
<shape shapeType='$shapeType' sides='3' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0x0000ff</theColor>
<profile>
<piece func='$func1' cut='8'/>
<piece func='$func2' cut='10'/>
</profile>
</plot></xml>}); configure the applet.
The configuration of the applet is done in xml. The argument of the function is set to the value held in the variable $xy.
The code TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll(...))) inserts the applet into the body of the problem.
Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
The text between the HTML=><<'END_TEXT' marker and END_TEXT is shown only in HTML mode; here it warns students using Internet Explorer to view the page with a different browser.
BEGIN_TEXT $BR $BR Find the volume of the solid of revolution formed by rotating the curve \[x=\begin{cases} $a\sin\left(\frac{\pi y}{8}\right)+2 &y\le 8\\ $b\sin\left(\frac{\pi y}{2}\right)+2 &8<y\le 10\end{cases}\] about the \(y\)-axis. \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings;
This is the text section, which gives the text shown to the student.
################################ # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT();
This is the answer and solution section.
The ANS( $correctAnswer->cmp() ) call specifies how the student's answer is marked for correctness.
License
The Flash applets are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License.
A discrete-time sinusoid can have frequency up to just shy of half the sample frequency. But if you try to plot the sinusoid, the result is not always recognizable. For example, if you plot a 9 Hz sinusoid sampled at 100 Hz, you get the result shown in the top of Figure 1, which looks like a sine. But if you plot a 35 Hz sinusoid sampled at 100 Hz, you get the bottom graph, which does not look like a sine when you connect the dots. We typically want the plot of a...
This article covers interpolation basics, and provides a numerical example of interpolation of a time signal. Figure 1 illustrates what we mean by interpolation. The top plot shows a continuous time signal, and the middle plot shows a sampled version with sample time Ts. The goal of interpolation is to increase the sample rate such that the new (interpolated) sample values are close to the values of the continuous signal at the sample times [1]. For example, if...
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT) by showing an implementation of how the parameters of a real pure tone can be calculated from just two DFT bin values. The equations from previous articles are used in tandem to first calculate the frequency, and then calculate the amplitude and phase of the tone. The approach works best when the tone is between the two DFT bins in terms of frequency. The Coding...
In an earlier post [1], we implemented lowpass IIR filters using a cascade of second-order IIR filters, or biquads. This post provides a Matlab function to do the same for Butterworth bandpass IIR filters. Compared to conventional implementations, bandpass filters based on biquads are less sensitive to coefficient quantization [2]. This becomes important when designing narrowband filters.
A biquad section block diagram using the Direct Form II structure [3,4] is shown in...
There are many applications in which this technique is useful. I discovered a version of this method while analysing radar systems, but the same approach can be used in a very wide range of...
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT), but only indirectly. The main intent is to get someone who is uncomfortable with complex numbers a little more used to them and relate them back to already known trigonometric relationships done in real values. It is essentially a followup to my first blog article "The Exponential Nature of the Complex Unit Circle".
Polar Coordinates
The more common way of...
One of the basic DSP principles states that a sampled time signal has a periodic spectrum with period equal to the sample rate. The derivation can be found in textbooks [1,2]. You can also demonstrate this principle numerically using the Discrete Fourier Transform (DFT).
The DFT of the sampled signal x(n) is defined as:
$$X(k)=\sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N} \qquad (1)$$
Where
X(k) = discrete frequency spectrum of time sequence x(n)
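A short numerical check of Eq. (1) and of the periodicity X(k+N) = X(k) (a minimal sketch; the test signal is an arbitrary choice):

import numpy as np

# Direct evaluation of Eq. (1), plus a check that X(k) is periodic in k with
# period N, the discrete counterpart of the spectrum repeating at the sample
# rate.  The 5-cycle test tone is an arbitrary choice.
N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 5 * n / N)
X = lambda k: np.sum(x * np.exp(-2j * np.pi * k * n / N))
print(np.allclose(X(3), X(3 + N)))                            # True
print(np.allclose([X(k) for k in range(N)], np.fft.fft(x)))   # True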
Figure 1a shows the block diagram of a decimation-by-8 filter, consisting of a low-pass finite impulse response (FIR) filter followed by downsampling by 8 [1]. A more efficient version is shown in Figure 1b, which uses three cascaded decimate-by-two filters. This implementation has the advantages that only FIR 1 is sampled at the highest sample rate, and the total number of filter taps is lower.
The frequency response of the single-stage decimator before downsampling is just...
In my last post, we saw that finding the spectrum of a signal requires several steps beyond computing the discrete Fourier transform (DFT)[1]. These include windowing the signal, taking the magnitude-squared of the DFT, and computing the vector of frequencies. The Matlab function pwelch [2] performs all these steps, and it also has the option to use DFT averaging to compute the so-called Welch power spectral density estimate [3,4].
In this article, I’ll present some...
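As a rough counterpart of those steps (a minimal sketch using scipy.signal.welch in place of Matlab's pwelch; the signal parameters are assumed):

import numpy as np
from scipy.signal import welch

# Window, average segments, and return the frequency vector, as described
# above.  A 100 Hz tone in noise; nperseg=500 gives 2 Hz bins at fs=1000.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
f, pxx = welch(x, fs=fs, window='hann', nperseg=500)
print(f[np.argmax(pxx)])   # 100.0 Hz, the tone frequency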
The Discrete Fourier Transform (DFT) operates on a finite length time sequence to compute its spectrum. For a continuous signal like a sinewave, you need to capture a segment of the signal in order to perform the DFT. Usually, you also need to apply a window function to the captured signal before taking the DFT [1 - 3]. There are many different window functions and each produces a different approximation of the spectrum. In this post, we’ll present Matlab code that...
Introduction Quadrature signals are based on the notion of complex numbers and perhaps no other topic causes more heartache for newcomers to DSP than these numbers and their strange terminology of j operator, complex, imaginary, real, and orthogonal. If you're a little unsure of the physical meaning of complex numbers and the j = √-1 operator, don't feel bad because you're in good company. Why even Karl Gauss, one the world's greatest mathematicians, called the j-operator the "shadow of...
While there are plenty of canned functions to design Butterworth IIR filters [1], it’s instructive and not that complicated to design them from scratch. You can do it in 12 lines of Matlab code. In this article, we’ll create a Matlab function butter_synth.m to design lowpass Butterworth filters of any order. Here is an example function call for a 5th order filter:
The finite-word representation of fractional numbers is known as fixed-point. Fixed-point is an interpretation of a 2's complement number, usually signed but not limited to signed representation. It extends our finite word length from a finite set of integers to a finite set of rational real numbers [1]. A fixed-point representation of a number consists of integer and fractional components. The bit length is defined...
Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.
Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
This is an article to hopefully give an understanding of Euler's magnificent equation:
$$ e^{i\theta} = \cos( \theta ) + i \cdot \sin( \theta ) $$
This equation is usually proved using the Taylor series expansion for the given functions, but this approach fails to give an understanding of the equation and its ramifications for the behavior of complex numbers. Instead an intuitive approach is taken that culminates in a graphical understanding of the equation. Complex...
Minimum Shift Keying (MSK) is one of the most spectrally efficient modulation schemes available. Due to its constant envelope, it is resilient to non-linear distortion and was therefore chosen as the modulation technique for the GSM cell phone standard.
MSK is a special case of Continuous-Phase Frequency Shift Keying (CPFSK) which is a special case of a general class of modulation schemes known as Continuous-Phase Modulation (CPM). It is worth noting that CPM (and hence CPFSK) is a...
$$ atan(z) \approx \dfrac{z}{1.0 +...
Figure 1.1 is a block diagram of a digital PLL (DPLL). The purpose of the DPLL is to lock the phase of a numerically controlled oscillator (NCO) to a reference signal. The loop includes a phase detector to compute phase error and a loop filter to set loop dynamic performance. The output of the loop filter controls the frequency and phase of the NCO, driving the phase error to zero.
One application of the DPLL is to recover the timing in a digital...
In this post, I present a method to design Butterworth IIR bandpass filters. My previous post [1] covered lowpass IIR filter design, and provided a Matlab function to design them. Here, we'll do the same thing for IIR bandpass filters, with a Matlab function bp_synth.m. Here is an example function call for a bandpass filter based on a 3rd order lowpass prototype:
N= 3;          % order of prototype LPF
fcenter= 22.5; % center frequency, Hz
bw= 5;         ...
The topic of estimating a noise-free real or complex sinusoid's frequency, based on fast Fourier transform (FFT) samples, has been presented in recent blogs here on dsprelated.com. For completeness, it's worth knowing that simple frequency estimation algorithms exist that do not require FFTs to be performed. Below I present three frequency estimation algorithms that use time-domain samples, and illustrate a very important principle regarding so-called "exact"...
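One classic DFT-free estimator of this kind averages the phase advance between consecutive samples of a complex tone. A minimal sketch (the sample rate and tone frequency are assumed values):

import numpy as np

# Time-domain frequency estimate for a noise-free complex sinusoid: average
# the phase advance between consecutive samples.
fs, f0, N = 1000.0, 123.4, 256
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n / fs)
f_est = np.angle(np.sum(x[1:] * np.conj(x[:-1]))) * fs / (2 * np.pi)
print(f_est)   # 123.4 to within numerical precision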
First of all, dark matter and dark energy, despite their naming, are two very different concepts. We don't really have any good reason to group them together, other than the fact that both represent things we don't understand. Thus they are not necessarily backed by the same sets of evidence.
Why we believe these things exist
As it happens though, some of the strongest evidence for both comes from the cosmic microwave background. Basically, whatever "stuff" there is in the universe will have an effect on the temperature and polarization fluctuations in this radiation, which was emitted some few hundred thousand years after the Big Bang.
The best all-sky map of this radiation is made by the WMAP satellite, and every couple years they release several papers with the analysis of the data. For instance, here is the 2011 paper focusing on the cosmological parameters. They basically just feed all the data into an enormous statistical program to find the most likely values for a large set of parameters, including the amount of "normal" baryonic matter $\Omega_\mathrm{b}$, the amount of cold dark matter $\Omega_\mathrm{c}$, the amount of dark energy $\Omega_\Lambda$, and the dark energy equation of state parameter $w$. There is a lot that can be said as to
what the effects of these parameters are, but ultimately you just cannot explain the CMB without having dark energy and dark matter.
Alternate theories for dark matter
Now "cold dark matter" (the CDM of the $\Lambda\text{CDM}$ model) means massive particles interacting via gravity and the weak force but not via electromagnetism and that were non-relativistic even at the time of recombination when the CMB was released. Just plain old "dark matter" refers to any gravitating mass that doesn't have
much of an electromagnetic signal. The galaxy rotation curves you refer to were some of the first dark matter evidence, and indeed they could be explained by assuming a large number of quiescent black holes or star-less planets or dust that we just missed for one reason or another. These alternate theories could also, with enough manipulation, explain the bullet cluster, where the gravitational mass found with lensing maps is clearly not collocated with the baryonic mass in hot, X-ray emitting intracluster gas. However, microlensing surveys tend to rule out the first two, and we think we have a good handle on dust dynamics. Something more exotic is called for. There is a diminishing community in support of MOND - modified Newtonian dynamics - which postulates long-range deviations from the inverse-square law in gravity. However, the bullet cluster, together with very precise Solar system data, makes this theory difficult to get working.
Add to this the very nice "WIMP miracle" (no good wiki page there - sorry), which is
suggestive of dark matter being new types of particles. "WIMP" stands for "weakly interacting massive particle," and the "miracle" is this: If you assume there is a species of particle $\text{X}$ whose only appreciable interaction is annihilation with its antiparticle $\overline{\text{X}}$, with a cross section typical of weak interactions and a mass typical of, well, particles, you can easily calculate the abundance of these things in the present universe. They are in equilibrium with other species when the universe is energetic enough for them to be pair-created, and they "freeze out" when the universe cools enough. The end result is right around what we infer from other means.
Alternate theories for dark energy
Dark energy is a little trickier. There really is not a good explanation of "what" it even is - the name is more a catch-all for describing the observed accelerated expansion of the universe. You might believe it is a cosmological constant. In this case it is a nonzero scalar $\Lambda$ in Einstein's equation$$ G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi T_{\mu\nu}, $$where $g$ is the metric containing all the information about the curvature of spacetime, $G$ is a known function of $g$ and its first two derivatives, and $T$ is the stress-energy tensor containing all the information about matter, energy, and momentum in the universe. This is
equivalent to saying there is some substance in the universe with equation of state $P = -\rho c^2$ ($P$ is pressure, $\rho$ is mass density). [For comparison, non-relativistic diffuse matter is well approximated by $P = 0$, and relativistic matter has $P = \rho c^2/3$.]
Others are open to the idea that $w = P/(\rho c^2)$ is not
exactly $-1$ for this new "stuff," and so that can be a free parameter in your modeling. All the evidence points to $w$ being consistent with $-1$, but the uncertainties are still somewhat large.
You can get more exotic and note that the accelerated expansion phase the universe is currently undergoing is not entirely unlike the inflation many believe happened in the very early universe. Inflation has all sorts of theories proposed for it, many of which are variations on "there is an abstract quantum field $\phi$ with these certain properties..." Others take place in the realm of strings and branes. Many of these theories can be boiled down to a $w$ that changes over time.
Future research
These fields of study are very much alive and well. Dark matter calls for both astrophysicists to study its role in large-scale dynamics and particle physicists to try to nail down its properties. The astrophysics side involves observers placing better constraints on the evidence so far, which has a lot to do with galaxy clusters and large-scale structure, as well as theorists to predict how dark matter influences e.g. galaxy formation, usually by using massive simulations. (And those simulations, such as the Millennium Simulation, often lead to cool movies.) The physics side involves experiments to try to detect the stuff directly (there are literally dozens of these - far too many to list - and though there are some claimed detections, none are really accepted by the community at large). It also involves finding a place for such particles in an extended version of the Standard Model.
Dark energy is begging for physicists to come up with testable models that mesh with the rest of physics without seeming fine-tuned. It is still very much new, given that it was only last year's Nobel Prize that recognized the teams that provided undeniable proof of its existence around the turn of the millennium.
So by all means feel free to be inspired by these problems to work in some field. There is plenty to be done. I would even say the prospects are better than e.g. string theory alone, since we have
solid, incontrovertible evidence saying there are gaping holes in our knowledge when it comes to dark matter and energy, and these holes are right where we can easily probe them via astronomy.
Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper resilient cryptography. These are \emph{seeded non-malleable extractors}, introduced by Dodis and Wichs \cite{DW09}; \emph{seedless non-malleable extractors}, introduced by Cheraghchi and Guruswami ... more >>>
A non-malleable extractor is a seeded extractor with a very strong guarantee - the output of a non-malleable extractor obtained using a typical seed is close to uniform even conditioned on the output obtained using any other seed. The first contribution of this paper consists of two new and improved ... more >>>
We construct non-malleable extractors with seed length $d = O(\log{n}+\log^{3}(1/\epsilon))$ for $n$-bit sources with min-entropy $k = \Omega(d)$, where $\epsilon$ is the error guarantee. In particular, the seed length is logarithmic in $n$ for $\epsilon> 2^{-(\log{n})^{1/3}}$. This improves upon existing constructions that either require super-logarithmic seed length even for constant ... more >>>
A typical obstacle one faces when constructing pseudorandom objects is undesired correlations between random variables. Identifying this obstacle and constructing certain types of "correlation breakers" was central for recent exciting advances in the construction of multi-source and non-malleable extractors. One instantiation of correlation breakers is correlation breakers with advice. These ... more >>>
In this paper we give improved constructions of several central objects in the literature of randomness extraction and tamper-resilient cryptography. Our main results are:
(1) An explicit seeded non-malleable extractor with error $\epsilon$ and seed length $d=O(\log n)+O(\log(1/\epsilon)\log \log (1/\epsilon))$, that supports min-entropy $k=\Omega(d)$ and outputs $\Omega(k)$ bits. Combined with ... more >>>
We construct pseudorandom generators of seed length $\tilde{O}(\log(n)\cdot \log(1/\epsilon))$ that $\epsilon$-fool ordered read-once branching programs (ROBPs) of width $3$ and length $n$. For unordered ROBPs, we construct pseudorandom generators with seed length $\tilde{O}(\log(n) \cdot \mathrm{poly}(1/\epsilon))$. This is the first improvement for pseudorandom generators fooling width $3$ ROBPs since the work of Nisan [Combinatorica, 1992].
Our constructions are based on the ``iterated milder restrictions'' approach of Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson framework [FOCS, 1985]), combined with the INW-generator [STOC, 1994] at the last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered case, we combine iterated milder restrictions with the generator of Chattopadhyay et al. [CCC, 2018].
Two conceptual ideas that play an important role in our analysis are:
(1) A relabeling technique allowing us to analyze a relabeled version of the given branching program, which turns out to be much easier. (2) Treating the number of colliding layers in a branching program as a progress measure and showing that it reduces significantly under pseudorandom restrictions.
In addition, we achieve nearly optimal seed-length $\tilde{O}(\log(n/\epsilon))$ for the classes of: (1) read-once polynomials on $n$ variables, (2) locally-monotone ROBPs of length $n$ and width $3$ (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs of length $n$ having a layer of width $2$ in every consecutive $\mathrm{poly}\log(n)$ layers.
Oscar’s demand for movies is given by Q = 10−2P.
(a) What is the price elasticity of demand at a price of 2? Is Oscar’s demand elastic or inelastic at a price of 2?
(b) Assume that the price is 3. What is Oscar’s total expenditure on movies? What is the consumer surplus?
(c) What is the price that maximizes Oscar’s total expenditure on movies? What is the price elasticity of demand at this price?
(d) If the price increases from 1 to 2, does Oscar’s total expenditure on movies rise or fall? If the price rises from 3 to 4, does Oscar’s total expenditure on movies rise or fall?
(a) Price elasticity is $\eta_d=\frac{\partial Q}{\partial P} \cdot \frac{P}{Q}$. Therefore you get $\eta_d=-2 \cdot \frac{2}{10-2\cdot2}= -\frac{2}{3}$. Since $|\eta_d| = \frac{2}{3} < 1$, Oscar's demand is inelastic at a price of 2.
(b) $Q=10-2 \cdot 3 = 4$. Total expenditure is $3 \cdot 4 = 12$. At a price of 5, there is no demand. The consumer surplus for linear demand functions follows $\frac{1}{2}\left(p_{max}-p\right)\cdot x$ with $p_{max}$ as the price where there is no demand. Consumer surplus therefore is $\frac{1}{2}\left(5-3\right)\cdot 4=4$.
(c) Total expenditure $TR$ follows $TR=p \cdot x$. To maximize $TR$, differentiate it and set the derivative to zero: $\frac{\partial TR}{\partial p}=\frac{\partial (10p-2p^2)}{\partial p}=10-4\cdot p = 0$. This gives $p=2.5$. At this price, $\eta_d=-2 \cdot \frac{2.5}{5}=-1$, i.e., demand is unit elastic (on how to calculate elasticity, see (a)).
(d) Total expenditure for $p=1$ is $8 \cdot 1 = 8$; for $p=2$ it is $6 \cdot 2 = 12$. So if the price increases from 1 to 2, $TR$ increases. Recall from (c) that $p=2.5$ maximizes $TR$; therefore $TR$ has to increase from $p=1$ to $p=2$ and, equivalently, $TR$ has to decrease when the price rises from $p=3$ to $p=4$. |
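A quick numerical check of (a)-(d) (a sketch in Python; the function names are my own):

```python
def q(p):            # demand: Q = 10 - 2P
    return 10 - 2 * p

def elasticity(p):   # (dQ/dP) * P/Q, with dQ/dP = -2
    return -2 * p / q(p)

def tr(p):           # total expenditure / revenue
    return p * q(p)

print(elasticity(2))                     # -0.667: inelastic at p = 2
print(tr(3), 0.5 * (5 - 3) * q(3))       # expenditure 12, consumer surplus 4
print(max((tr(p / 10), p / 10) for p in range(51)))  # (12.5, 2.5): TR peaks at p = 2.5
print(tr(1), tr(2), tr(3), tr(4))        # 8 12 12 8: rises from 1 to 2, falls from 3 to 4
```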
Edit. As the OP points out, for his purpose it suffices to take the zero locus of a single (nonzero) partial derivative. So the OP produces a proper closed subset of $V$ containing the singular locus and having degree bounded by $(\deg(V)-1)\deg(V)$. Although this is not what the OP asks, there are cases where we need an upper bound on the degree of the singular locus (or at least the union of all components of the singular locus that have the maximum dimension). This often occurs when bounding the set of "bad characteristics" for some property of schemes over a field of (possibly) positive characteristic. The answer below gives an upper bound on the degree of the singular locus.
I am just rewriting the proof of Lemma 4.2.5 of the following as one answer. I learned of this from Fedor Bogomolov.
Jan Gutt
Hwang–Mok rigidity of cominuscule homogeneous varieties in positive characteristic, Ph.D. thesis, 2013, https://arxiv.org/pdf/1305.5296.pdf
The original statement is for projective varieties, but the result for affine varieties follows by intersecting with affine space (a Zariski open subset of projective space).
Lemma [Jan Gutt, 2013 thesis, Lemma 4.2.5] For a purely $r$-dimensional closed subscheme $V$ of projective space $\mathbb{P}^n_k$ with degree $D>1$, if the zero scheme $S$ of the $r^\text{th}$ Fitting ideal of $\Omega_{V/k}$ has dimension $m$, then the corresponding $m$-cycle of $S$ has degree no greater than $D(D-1)^{r-m}$.
Proof. When $m$ equals $r$, then this just says that the $m$-dimensional cycle of $S$ has degree no greater than the degree of the $m$-dimensional cycle of $V$. Thus, without loss of generality, assume that $r>m$. Also, it suffices to prove the result when $k$ is algebraically closed. The proof uses Theorem 1.1 of the following.
MR0282975 (44 #209)
Mumford, David Varieties defined by quadratic equations. 1970 Questions on Algebraic Varieties (C.I.M.E., III Ciclo, Varenna, 1969) pp. 29–100 Edizioni Cremonese, Rome http://www.dam.brown.edu/people/mumford/alg_geom/papers/1970a--CIME-QuadEqns-DAM.pdf
Mumford proves that the ideal sheaf $I$ of $V$ is generated in degree $D$. More precisely, the linear system $H^0(\mathbb{P}^n_k, I(D))$ of sections $g$ of $\mathcal{O}(D)$ on $\mathbb{P}^n_k$ that vanish on $V$ has base locus that equals $V$ set-theoretically, and that equals $V$ scheme-theoretically at least on the dense open subset $V\setminus S$ of $V$. Thus, the common zero scheme in $V$ of the set of partial derivatives $\partial g/\partial t$ (for varying homogeneous coordinates $t$) is contained in $S$ set-theoretically, and contains $S$ scheme-theoretically (since the Fitting ideal contains these partial derivatives, locally).
By Bertini’s theorem, for $r-m$ general polynomials $g = (g_1 , \dots , g_{r-m})$ in this linear system, for a general choice of homogeneous coordinates on $\mathbb{P}^n_k$ and for a choice $t = (t_1 , \dots , t_{r-m} )$ of $r-m$ of these coordinates, the common zero scheme in $V$ of the $r-m$ partial derivative polynomials $\partial g_i /\partial t_i$ is $m$-dimensional and contains $S$. Since these partial derivatives are global sections of $\mathcal{O}(D - 1)$, the degree bound follows.
QED.
Will Sawin's Examples. Let $V$ be a subvariety that spans a linear subspace $\mathbb{P}^{r+1}_k \subset \mathbb{P}^n_k$ and that equals a degree-$D$ hypersurface in this linear space with defining polynomial $g=t_{m+1}^D + \dots + t_{r+1}^D$. Assume that the integer $D$ is nonzero in $k$. The Fitting ideal is precisely defined by $t_{m+1}^{D-1},\dots,t_{r+1}^{D-1}$ and the linear polynomials $t_{r+2},\dots,t_n$. Thus, the degree equals $(D-1)^{r+1-m}$, which is close to $(D-1)^{r-m}D$. |
The solution for the upstream Mach number, \(M_1\), and the shock angle, \(\theta\), is far simpler, and a unique solution exists. The deflection angle can be expressed as a function of these variables as
Deflection Angle \(\delta\) for \(\theta\) and \(M_1\)
\[
\label {2Dgd:eq:Odelta-theta} \cot \delta = \tan \left(\theta\right) \left[ \dfrac{(k + 1)\, {M_1}^2 }{ 2\, ( {M_1}^2\, \sin^2 \theta - 1)} - 1 \right] \tag{51} \]
or
\[ \tan \delta = {2\cot\theta ({M_1}^2 \sin^2 \theta -1 ) \over 2 + {M_1}^2 (k + 1 - 2 \sin^2 \theta )} \label{2Dgd:eq:Odelta-thetaA} \tag{52} \] The pressure ratio can be expressed as
Pressure Ratio
\[
\label {2Dgd:eq:OPR} \dfrac{P_ 2 }{ P_1} = \dfrac{ 2 \,k\, {M_1}^2 \sin ^2 \theta - (k -1) }{ k+1} \tag{53} \]
The density ratio can be expressed as
Density Ratio
\[
\label {2Dgd:eq:ORR} \dfrac{\rho_2 }{ \rho_1 } = \dfrac{ {U_1}_n }{ {U_2}_n} = \dfrac{ (k +1)\, {M_1}^2\, \sin ^2 \theta } { (k -1) \, {M_1}^2\, \sin ^2 \theta + 2} \tag{54} \]
The temperature ratio can be expressed as
Temperature Ratio
\[
\label {2Dgd:eq:OTR} \dfrac{ T_2 }{ T_1} = \dfrac{ {c_2}^2 }{ {c_1}^2} = \dfrac{ \left( 2\,k\, {M_1}^2 \sin ^2 \theta - ( k-1) \right) \left( (k-1) {M_1}^2 \sin ^2 \theta + 2 \right) } { (k+1)\, {M_1}^2\, \sin ^2 \theta } \tag{55} \]
The Mach number after the shock is
Exit Mach Number
\[
\label{2Dgd:eq:OM2_0} {M_2}^2 \sin (\theta -\delta) = { (k -1) {M_1}^2 \sin ^2 \theta +2 \over 2 \,k\, {M_1}^2 \sin ^2 \theta - (k-1) } \tag{56} \]
or explicitly
\[ {M_2}^2 = {(k+1)^2 {M_1}^4 \sin ^2 \theta - 4\,({M_1}^2 \sin ^2 \theta -1) (k {M_1}^2 \sin ^2 \theta +1) \over \left( 2\,k\, {M_1}^2 \sin ^2 \theta - (k-1) \right) \left( (k-1)\, {M_1}^2 \sin ^2 \theta +2 \right) } \label{2Dgd:eq:OM2} \tag{57} \] The ratio of the total pressure can be expressed as
Stagnation Pressure Ratio
\[
\label {2Dgd:eq:OP0R} {P_{0_2} \over P_{0_1}} = \left[ (k+1) {M_1}^2 \sin ^2 \theta \over (k-1) {M_1}^2 \sin ^2 \theta +2 \right]^{k \over k -1} \left[ k+1 \over 2 k {M_1}^2 \sin ^2 \theta - (k-1) \right] ^{1 \over k-1} \tag{58} \]
Even though the solution for these variables, \(M_1\) and \(\theta\), is unique, the possible range of the deflection angle, \(\delta\), is limited. Examining equation (51) shows that the shock angle, \(\theta\), has to be in the range \(\sin^{-1} (1/M_1) \leq \theta \leq \pi/2\) (see Fig. 12.8). For a given \(\theta\), the upstream Mach number \(M_1\) is limited between \(\sqrt{1 / \sin^{2}\theta}\) and \(\infty\).
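As a numerical illustration of equations (52) and (53), a short Python sketch (the function names and the sample values \(M_1=3\), \(\theta=40^\circ\) are mine; \(k=1.4\) for air):

```python
import math

def deflection_angle(M1, theta, k=1.4):
    """delta from Eq. (52): tan(delta) = 2 cot(theta) (M1^2 sin^2(theta) - 1)
       / (2 + M1^2 (k + 1 - 2 sin^2(theta)))."""
    s2 = math.sin(theta) ** 2
    num = 2.0 * (M1 ** 2 * s2 - 1.0) / math.tan(theta)
    den = 2.0 + M1 ** 2 * (k + 1.0 - 2.0 * s2)
    return math.atan2(num, den)

def pressure_ratio(M1, theta, k=1.4):
    """P2/P1 from Eq. (53)."""
    return (2.0 * k * M1 ** 2 * math.sin(theta) ** 2 - (k - 1.0)) / (k + 1.0)

M1, theta = 3.0, math.radians(40.0)
print(math.degrees(deflection_angle(M1, theta)))   # ~21.8 deg
print(pressure_ratio(M1, theta))                   # ~4.17
```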
Fig. 12.8 The possible range of solutions for different parameters for given upstream Mach numbers.

Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license. |
I have a problem in which a rod of length $d$ has mass $m$ and a point mass of $2m$ is on the left end of it. I want to calculate the moments of inertia for several axes all perpendicular to the rod. I've already correctly calculated the moment of inertia directly for the axis through the rod's mid-point, finding it to be $\frac{7md^2}{12}$. But then it occurred to me that this would be more efficient if I found the moment of inertia through the center of mass and use the Parallel Axis Theorem repeatedly.
To do that I calculated the center of mass of the system and found it to be $d/6$ from the left end (where the point mass sits). I then calculated the moment of inertia through the center of mass in two parts, one for the mass of the rod itself,
$$\int_{-d/6}^{5d/6}r^2dm = \lambda \frac{r^3}{3}\Bigg|_{-d/6}^{5d/6} = \frac{m}{3d}\left[\left(\frac{5d}{6}\right)^3-\left(\frac{-d}{6}\right)^3\right]$$
$$=\frac{md^2}{3}\left(\frac{126}{6^3}\right)=\frac{7md^2}{36}$$
But then I have to add the mass of the ball at the end which contributes $2m(d/6)^2$ to the moment of inertia and therefore the moment at the center is
$$I_{cm}=\frac{md^2}{4}$$
Now, however, when I use the parallel axis theorem to get the moment of inertia at the half-way point, I get
$$\frac{md^2}{4}+m(2d/6)^2 = \frac{13md^2}{36}$$
I clearly don't get the right answer. |
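A quick numerical check (in units where $m=d=1$) locates the slip: the parallel axis theorem must be applied with the total mass $3m$ of the system, not $m$:

```python
m, d = 1.0, 1.0
I_direct = m * d**2 / 12 + 2 * m * (d / 2)**2     # rod about midpoint + 2m at the end: 7/12
I_cm     = 7 * m * d**2 / 36 + 2 * m * (d / 6)**2 # = 1/4, as computed above
I_mid    = I_cm + 3 * m * (d / 3)**2              # parallel axis with the TOTAL mass 3m
print(I_direct, I_mid)                            # both 0.58333... = 7/12
```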
I am trying to give a fast sketch of what the BCFW reduction does and embed within it some questions at the steps which I don't seem to understand clearly. The first bullet point is sort of a very basic question about the formalism which I can't get!
Let $\{p_i\}_{i=1}^{n}$ be the momenta of the $n$ gluons whose scattering amplitude $A(1,2,\dots,n)$ one is interested in. Let the $(n-1)^{\text{th}}$ have negative helicity and the rest be positive, so it's an MHV scenario.
For denoting the gluonic states, why is it okay to use the spinor helicity formalism where, for a massless Dirac particle of wave function $u(p)$, one uses the notation $|p\rangle = \frac{1+\gamma^5}{2}u(p)$, $|p] = \frac{1-\gamma^5}{2}u(p)$, $\langle p| = \bar{u}(p)\frac{1+\gamma^5}{2}$, $[p| = \bar{u}(p)\frac{1-\gamma^5}{2}$? (Gluons are after all not massless Dirac particles!) What is going on? Why is this a valid description?
Then one defines analytic continuations for the $(n-1)^{\text{th}}$ and the $n^{\text{th}}$ gluonic states as $|p_n\rangle \rightarrow |p_n(z)\rangle = |p_n\rangle + z |p_{n-1}\rangle$ and $|p_{n-1}] \rightarrow |p_{n-1}(z)] = |p_{n-1}] - z |p_n]$.
Then the key idea is that if the amplitude as a function of $z$ tends to $0$ as $|z| \rightarrow \infty$ then one can write the analytically continued amplitude as $A(1,2,..,n,z) = \sum _{i} \frac{R_i}{(z-z_i)}$ where $z_i$ and $R_i$ are the poles and residues of $A(1,2,..,n,z)$
Is there a quick way to see the above? (Though I have read much of the original paper.) |
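For the shift itself (not the large-$z$ behaviour), one can at least check numerically that it preserves total momentum, writing each null momentum as the rank-1 matrix $p_{a\dot a} = \lambda_a \tilde\lambda_{\dot a}$ (the numeric spinors below are arbitrary stand-ins):

```python
import numpy as np
rng = np.random.default_rng(1)

# arbitrary 2-spinors for gluons n and n-1; p = outer(lam, lamt) is null (rank 1)
lam_n, lamt_n = rng.normal(size=2), rng.normal(size=2)
lam_m, lamt_m = rng.normal(size=2), rng.normal(size=2)   # m stands for n-1

def total_momentum(z):
    lam_n_z  = lam_n + z * lam_m      # |p_n(z)>    = |p_n> + z |p_{n-1}>
    lamt_m_z = lamt_m - z * lamt_n    # |p_{n-1}(z)] = |p_{n-1}] - z |p_n]
    return np.outer(lam_n_z, lamt_n) + np.outer(lam_m, lamt_m_z)

print(np.allclose(total_momentum(0.0), total_momentum(3.7)))   # True for any z
```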
$2\cos^2(x) - \sqrt{3}\cos(x) = 0,\quad 0^{\circ} < x < 360^{\circ}$
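The equation factors as $\cos x\,(2\cos x - \sqrt{3}) = 0$, so $\cos x = 0$ or $\cos x = \sqrt{3}/2$. A quick SymPy check (a sketch; SymPy works in radians):

```python
import sympy as sp

x = sp.symbols('x')
sols = sp.solveset(2 * sp.cos(x)**2 - sp.sqrt(3) * sp.cos(x), x,
                   domain=sp.Interval.open(0, 2 * sp.pi))
print(sols)  # {pi/6, pi/2, 3*pi/2, 11*pi/6}, i.e. 30, 90, 270, 330 degrees
```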
|
In projective geometry, a plane at infinity refers to the hyperplane at infinity of a three-dimensional projective space, or to any plane contained in the hyperplane at infinity of any projective space of higher dimension. This article will be concerned solely with the three-dimensional case.
Definition
There are two approaches to defining the plane at infinity, which depend on whether one starts with a projective 3-space or an affine 3-space.
If a projective 3-space is given, the plane at infinity is any distinguished projective plane of the space.[1] This point of view emphasizes the fact that this plane is not geometrically different from any other plane. On the other hand, given an affine 3-space, the plane at infinity is a projective plane which is added to the affine 3-space in order to give it closure of incidence properties, meaning that the points of the plane at infinity are the points where parallel lines of the affine 3-space meet, and the lines are the lines where parallel planes of the affine 3-space meet. The result of the addition is the projective 3-space, $P^3$. This point of view emphasizes the internal structure of the plane at infinity, but does make it look "special" in comparison to the other planes of the space.
If the affine 3-space is real, $\mathbb{R}^3$, then the addition of a real projective plane $\mathbb{R}P^2$ at infinity produces the real projective 3-space $\mathbb{R}P^3$.
Analytic representation
Since any two projective planes in a projective 3-space are equivalent, we can choose a homogeneous coordinate system so that any point on the plane at infinity is represented as $(X : Y : Z : 0)$.[2] Any point in the affine 3-space will then be represented as $(X : Y : Z : 1)$. The points on the plane at infinity seem to have three degrees of freedom, but homogeneous coordinates are equivalent up to any rescaling: $(X : Y : Z : 0) \equiv (aX : aY : aZ : 0)$, so that the coordinates $(X : Y : Z : 0)$ can be normalized, thus reducing the degrees of freedom to two (thus, a surface, namely a projective plane).
Proposition: Any line which passes through the origin $(0:0:0:1)$ and through a point $(X:Y:Z:1)$ will intersect the plane at infinity at the point $(X:Y:Z:0)$.
Proof: A line which passes through the points $(0:0:0:1)$ and $(X:Y:Z:1)$ will consist of points which are linear combinations of the two given points: $a(0:0:0:1) + b(X:Y:Z:1) = (bX : bY : bZ : a+b)$. For such a point to lie on the plane at infinity we must have $a+b=0$. So, by choosing $a=-b$, we obtain the point $(bX:bY:bZ:0) = (X:Y:Z:0)$, as required. Q.E.D.
Any pair of parallel lines in 3-space will intersect each other at a point on the plane at infinity. Also, every line in 3-space intersects the plane at infinity at a unique point. This point is determined by the direction—and only by the direction—of the line. To determine this point, consider a line parallel to the given line, but passing through the origin, if the line does not already pass through the origin. Then choose any point, other than the origin, on this second line. If the homogeneous coordinates of this point are $(X:Y:Z:1)$, then the homogeneous coordinates of the point at infinity through which the first and second line both pass is $(X:Y:Z:0)$.
Example: Consider a line passing through the points $(0:0:1:1)$ and $(3:0:1:1)$. A parallel line passes through the points $(0:0:0:1)$ and $(3:0:0:1)$. This second line intersects the plane at infinity at the point $(3:0:0:0)$. But the first line also passes through this point: $\lambda(3:0:1:1) + \mu(0:0:1:1) = (3\lambda : 0 : \lambda+\mu : \lambda+\mu) = (3:0:0:0)$ when $\lambda + \mu = 0$. ■
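In coordinates, the point at infinity of a line is just its direction vector with last coordinate $0$; a minimal sketch (the function name is mine):

```python
import numpy as np

def point_at_infinity(p, q):
    """Where the line through affine points p, q (homogeneous, last coord 1)
    meets the plane at infinity: the direction q - p, whose last coord is 0."""
    return np.asarray(q, float) - np.asarray(p, float)

p, q = np.array([0, 0, 1, 1]), np.array([3, 0, 1, 1])
print(point_at_infinity(p, q))   # [3. 0. 0. 0.], matching the example above
```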
Any pair of parallel planes in affine 3-space will intersect each other in a projective line (a line at infinity) in the plane at infinity. Also, every plane in the affine 3-space intersects the plane at infinity in a unique line.[3] This line is determined by the direction—and only by the direction—of the plane.

Properties

Since the plane at infinity is a projective plane, it is homeomorphic to the surface of a "sphere modulo antipodes", i.e. a sphere in which antipodal points are equivalent: $S^2/\{1,-1\}$, where the quotient is understood as a quotient by a group action (see quotient space).

Notes

^ Samuel 1988, p. 11
^ Meserve 1983, p. 150
^ Woods 1961, p. 187

References
Bumcrot, Robert J. (1969), Modern Projective Geometry, Holt, Rinehart and Winston. Meserve, Bruce E. (1983) [1955], Fundamental Concepts of Geometry, Dover. Pedoe, Dan (1988) [1970], Geometry / A Comprehensive Course, Dover. Samuel, Pierre (1988), Projective Geometry, UTM Readings in Mathematics, Springer-Verlag. Woods, Frederick S. (1961) [1922], Higher Geometry / An Introduction to Advanced Methods in Analytic Geometry, Dover. Yale, Paul B. (1968), Geometry and Symmetry, Holden-Day.
|
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' [roughly: 'The "trajectory" only comes into being through our observing it.'] Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of billions of characters to start getting a good chance that COVFEFE appears.
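More precisely, for a pattern with no self-overlap (and COVFEFE has none) the expected wait is $26^L$, about $8\times 10^9$ characters for $L=7$. A toy simulation on the shorter prefix COV, where $26^3 = 17576$ (a sketch, uniform uppercase letters):

```python
import random, string

def avg_wait(pattern, trials=100):
    """Average number of uniform random letters typed until `pattern` appears."""
    total = 0
    for _ in range(trials):
        buf, n = "", 0
        while buf != pattern:
            buf = (buf + random.choice(string.ascii_uppercase))[-len(pattern):]
            n += 1
        total += n
    return total / trials

print(avg_wait("COV"), 26 ** 3)   # both around 17576
```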
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? The sense is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
Local and global existence of solutions to a strongly damped wave equation of the $ p $-Laplacian type
Department of Mathematics, University of Nebraska-Lincoln, 203 Avery Hall, Lincoln, NE 68588-0130, USA
This paper concerns a strongly damped wave equation of the $p$-Laplacian type, $u_{tt} - \Delta_p u - \Delta u_t = 0$, in a bounded domain $\Omega \subset \mathbb{R}^3$ with boundary $\Gamma = \partial \Omega$, where $\Delta_p$ denotes the $p$-Laplacian and $2 < p < 3$. The source $f(u)$ may have a supercritical exponent, in the sense that its associated Nemytskii operator is not locally Lipschitz from $W^{1,p}(\Omega)$ into $L^2(\Gamma)$.

Keywords: Wave equation, $p$-Laplacian, supercritical source, local existence, generalized Robin condition.

Mathematics Subject Classification: Primary: 35L05, 35L20, 35L72; Secondary: 58J45.

Citation: Nicholas J. Kass, Mohammad A. Rammaha. Local and global existence of solutions to a strongly damped wave equation of the $p$-Laplacian type. Communications on Pure & Applied Analysis, 2018, 17 (4): 1449-1478. doi: 10.3934/cpaa.2018070
|
"Suppose $D$ is a string...".
No it is not.
Because if this were a string (a discrete sequence), then its content would take values from an alphabet of
symbols, say $A \in \left[0 \dots (b-a) \right]$ where $b,a \in \mathbb{N}, b>a$ with some uniform discrete probability $\mathcal{U}(0,|A|)$, where $|\cdot|$ denotes the length of the sequence. Then, the operation of correlation would not make sense and we would have to use an appropriate string "similarity" function. For example, the hamming distance. Then, we would accept an error rate, which would represent the number of symbols set in "error" by some noise process. In ideal conditions, the normalised hamming distance (the sum of all symbols in error in a sequence divided by the length of the sequence) would be $0$ and as the error rate increased, so would the hamming distance all the way to $1$ in which case we would miss the entire sequence.
But that is not what is implied here. What is implied here is that $D \in \mathbb{R}^k$ (where $k$ denotes the length of $D$); it takes values from a continuous $\mathcal{U}(a,b)$, and these values represent the "message". This message is then contaminated by additive white Gaussian noise to produce $X$. Then, we try to correlate a rolling window (of finite width) of $D$ over $X$ and observe the amplitude of the correlation at each "window shift" ($C_n$).
So, given the former framework where $D \in \mathbb{R}^k$:
The first step in modelling this is to look at what is the output exactly. The output is something like "What is the probability that the amplitude of correlation makes (or does not make) the threshold?". Without knowing anything else about the amplitude of each correlation ($C_n$), its value can swing anywhere $\in \left[-1 \dots 1 \right]$. So the probability there is some $p \in \mathcal{U}(-1,1)$. Not very useful yet.
Under ideal conditions (no noise), we would expect the correlation to correctly recognise the subsequence every time. So, if $P$ was repeating in $X$, $C_n=1$.
In the presence of additive white Gaussian noise, the correlation amplitude is reduced (+). And not only is $C_n$ reduced, but any estimate of $n$ (the position where the sequence was detected) becomes uncertain too.
Additive white Gaussian noise is kind of special because its correlation with anything is $0$. So, if you had something like $Z_{\alpha} = \alpha \cdot X + (1-\alpha) \cdot \mathcal{N}(0,\sigma)$, then the correlation of $Z_1$ with $Z_0$ is $0$. Another way to look at $\alpha$ is as (a variable proportional to) the signal-to-noise ratio (SNR). The lower the $\alpha$, the more noise you get in the signal; the more noise you get in the signal, the lower the correlation amplitude gets; the lower the correlation amplitude gets, the harder it is for it to reach the threshold.
What is the lowest value the correlation can attain?
It is zero. (If it gradually went towards $-1$, ....that would be spooky).
When will the correlation be zero?
When the noise will be at its maximum.
When is the noise at its maximum?
...it depends on the relative power between the noise and the signal. In other words, all things being equal, $\alpha$ depends on the length of $D$ needed to achieve the same SNR (but scaled up, to account for the fact that a sequence is now longer). For more details about this, you might want to have a look at this.
Long story short, this now brings us to the bit error rate curves. The theoretical bit error rate curves tell you how many symbols in error you are expected to get given the power SNR. These are also relevant here because at the end of the day, you are also making a binary decision. You either detect $P$ or you don't.
But all that you have to do here is to adjust your $E_b/N_0$ for the lengths of the sequences you are looking for, because in your case, your "bit" is the whole sequence $P$.
What is the theoretical expression for bit error rate?
$$BER = \frac{1}{2} \text{erfc}\left(\sqrt{\frac{E_b}{N_0}}\right)$$
Where $\text{erfc}$ is the complementary error function. For more details on BER, see here.
This $BER$ will tell you what you are after, that is, the probability that you miss a $P$ when it is there, given a particular SNR.
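A minimal numeric sketch of that expression (the SNR values are arbitrary examples):

```python
import numpy as np
from scipy.special import erfc

def ber(eb_n0_db):
    """BER = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    return 0.5 * erfc(np.sqrt(10 ** (eb_n0_db / 10)))

for snr_db in (0, 4, 8, 12):
    print(snr_db, ber(snr_db))   # BER falls steeply as Eb/N0 grows
```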
Hope this helps.
Notes:
+: The correlation has a finite range $\in \left[-1 \dots 1\right]$. Therefore, saying "...the standard deviation of the correlation signal..." is a misconception. This would be valid for something like $\mathcal{N}(0,0.2)$. But as the correlation tends to the edges of its range, this "...plus or minus..." of the standard deviation becomes invalid. When the conditions are such that the correlation hits $1$, there is no way its deviation will be anything other than $0$. So, perhaps this definition of the threshold as $K$ times the standard deviation of the correlation signal needs a bit more thinking, depending on the problem at hand(?). I am assuming that it was defined in this way to then be able to determine a suitable value that guarantees detection. |
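To make the detection setup above concrete, a toy end-to-end simulation (all lengths, positions, and noise levels are made up for illustration): embed $D$ in Gaussian noise and slide a normalized correlation over it; the peak of $C_n$ recovers the position.

```python
import numpy as np
rng = np.random.default_rng(0)

L = 64
D = rng.uniform(-1, 1, L)          # the continuous "message" D ~ U(-1, 1)
X = rng.normal(0, 1.0, 2048)       # additive white Gaussian background
X[500:500 + L] += D                # embed D starting at n = 500

# normalized rolling-window correlation C_n of D against X
C = np.array([np.corrcoef(D, X[n:n + L])[0, 1] for n in range(len(X) - L)])
print(C.argmax(), round(C.max(), 2))   # peak at (or very near) n = 500
```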
You are right that it seems strange why a cash-rich company is borrowing. In the case of Apple, the money that they are borrowing is being used to pay dividends to shareholders. The reason why they aren't using their \$200 billion is because doing so would cost them tens of billions of dollars in taxes. The current US tax code taxes corporations at 35% when ...
People, particularly business leaders, seem to remain confused about this issue even today. At the core of it is the question: Is equity finance expensive? We certainly observe in the data that the realized returns on firm debt are much lower than the realized returns on firm equity. Does this mean that firms have too much equity? If equity capital always ...
The first equation can be written as:$$ r_E(Levered) = \frac{E+D}{E}r_E(Unlevered) - \frac{D}{E}r_D $$Then, isolating the unlevered return gives:$$ r_E(Unlevered) = \frac{E}{E+D}r_E(Levered) + \frac{D}{E+D}r_D$$And this is the WACC.
If you are asking "Is the WACC the amount that the company expects to earn on the stocks and bonds that it holds.." then the answer is no.The WACC, in very simple terms, is the amount of money a company pays to obtain financing for projects. These types of financing are clearly listed in the wikipedia article and clearly extend beyond stocks and bonds ...
I don't know if you refer to the extensive margin (some borrowers not being able to get credit) or to the intensive margin (one borrower not being able to get as much credit as (s)he wants). If you are referring to the former, one of the theoretical papers for borrowing constraints on markets with asymmetric information is the following one: Stiglitz and ...
All assets which have a finite useful life are depreciated. For example, your patents or copyright might hold for 5 or 10 years but no more. Thus, it is quite coherent to reflect the loss of value through depreciation and amortization. Same goes for a software for example: in 5 years time, a software might be obsolete, so we need to reflect this in the ...
Debt is cheap. Flexibility is valuable.They hold debt + cash up to the point where the value of flexibility is still greater than the net cost of servicing the debt minus any interest earnt on the cash.It saves them the transaction costs of re-raising debt when they need it, had they paid it down early.It's cashflow that typically kills businesses, ...
Another key feature of those shell companies is that they hide the ultimate beneficiary of the transactions. Banks, insurance companies and most financial services firm must make enquiries as part of the "Know Your Customer" (KYC) regulations: they should be able to find out who will benefit ultimately from the transactions, or in the name of whom they are ...
Considering this is an Economics Stack Exchange site, I'm going to answer in the spirit of Financial Economics. These are the most foundational equations and ideas of Financial Economics for understanding more complex applied or academic research. 1. Gross yield: The gross yield is the yield on an investment before the deduction of taxes and expenses. $$1+R_{t+...
A December fiscal year end, which gives a first quarter of three months ending March 31, aligns the fiscal and the tax year. This can be very convenient and in the United States is sometimes required. In addition, some regulated firms like banks are required to prepare documents on calendar quarters regardless of the month of their fiscal year end, and it is ...
To illustrate what Tirole has done, let's consider a simpler environment.Consider a utility maximisation problem over two goods, $x$ and $y$. The consumer has utility function $u(x,y) = f(x) + y$, where $f$ is strictly increasing and strictly concave. The consumer's problem is thus$$ \begin{align}\max_{x,y} &\quad f(x) + y \\\text{s.t.} &\quad ...
It appears to me that it is the other way round: The RBS was running out of cash which is why the stock price was dropping.Stocks usually don't affect the immediate operation of a company, since they are traded on secondary markets (stock exchanges) among stock owners, not bought from / sold to the actual company which issued the stock.
The point @EnergyNumbers raises is correct, and it's easy to understand from an intuitive standpoint: One of the key roles of financial intermediaries is to match the demand for liabilities of a given tenor to the demand for assets of a given tenor.Financial intermediation allows maturity mismatches to exist in non-finance sectors of the economy by taking ...
The first equation is dollars times interest over total dollars. For example, if a company wants to finance a project and issues \$1M in equity with an expected ROI to the investors of 6% and \$4M in bonds at 4%, its WACC is:$$\frac{(4\%*4,000,000 + 6\% * 1,000,000)}{4,000,000+1,000,000} $$which for simplicity we can say is$$\frac{(4\%*4 + 6\% * 1)}{4+1}...
The concept of $\text{WACC}$ seems pretty straightforward... it is a weighted average percentage, calculated in principle as equation $(2)$ in the question shows. If we have two sources of financing each demanding a different interest rate and with given percentage contribution each to the total funds we want to borrow, then what would be the single ...
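A one-line sketch of that computation (the optional tax-shield parameter is my addition, not part of the examples above):

```python
def wacc(E, D, r_e, r_d, tax=0.0):
    """Weighted average cost of capital; tax=0 reproduces the simple example."""
    return (E * r_e + D * r_d * (1 - tax)) / (E + D)

print(wacc(1_000_000, 4_000_000, 0.06, 0.04))   # 0.044, i.e. 4.4%
```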
The assertion of the book is based on the phenomenon of commercial credit - the fact that business-to-business sales almost always are on credit, and the differences between terms-of-credit that a company gives to its customers, compared to the terms of credit that enjoys from its suppliers. It describes the (short-term) phenomenon, peculiar to some, that "...
Using the Federal Reserve's definition for M1 (warning, M definitions can vary between countries, so always check the local definition):"M1 is defined as the sum of currency held by the public and transaction deposits at depository institutions (which are financial institutions that obtain their funds mainly through deposits from the public, such as ...
If I'm reading it correctly, Table X (page 2633) of Schwert (2000) (Journal of Finance, Hostility in Takeovers: In the Eyes of the Beholder?) says that about 78 percent of deals in 1975-1996 were successful.However, this measure is constructed based on the acquisition of the firm, not the bid of the acquirer, so that if there are multiple bidders this is ...
A shell is simply an inactive company - there is a market for shell companies because it allows ordinary persons to buy a ready to go business - for example public traded shells with a stockmarket ticker allow you to skip all the paperwork.Back to topic. A Limited company is a legal person, thus it can buy / sell and hold other companies and assets - sue ...
What happens is completely dependent on the owners. They're the ones whose income has been taken anyway. They may be the only ones with the legal power to do anything (depending on the jurisdiction: there may be some countries where the State can intervene in such matters).In some jurisdictions, the directors have a legal obligation to maximise returns to ...
They are not the same. Basic accounting equation: Assets = Liabilities + Shareholder Equity. Assets refers to what the company actually owns: cash, property, inventory, etc. Assets are paid for in two major ways: debt (liability) and stock (equity). Essentially, everything a company owns is paid for by a combination of (1) getting loans from other entities ...
$\beta$ is the measure of the sensitivity of stock returns to market returns. This has nothing to do with the value of $R^2$. Your results appear to be fine; you can get significant beta estimates but low $R^2$. Why? As measured by $R^2$, 24.56% of the variation in Apple returns is accounted for by the variation in the market index, the S&P 500. Clearly, ...
In the link one can very clearly see that the company has no contractually short term debt, and in the short-term (i.e. in the next 12 months) has to pay part of its long term debt. Also, that the debt amounts are not included in the line "Accounts payable"Also, one can see its long term debtsAnd no, debt is not only bonds.
The original paper (of Altman) is: Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. The Journal of Finance, 23(4), 589-609. We read (p. 593), III. DEVELOPMENT OF THE MODEL: "Sample Selection. The initial sample is composed of sixty-six corporations with thirty-three firms in each of the two ...
In economics, a firm wants to maximize profit. Your ice cream shop has some costs that vary depending on how many half-gallon ice cream packages are made, and some benefits, the revenue from selling ice cream. The additional cost of making one more package of ice cream is called the marginal cost, and it can change based on how much ice cream you are already ...
As Kitsune Cavalry noted, organisations like Khan Academy are non-profits that are supported by donations and run for the public good.To address your second question (why would you start a project that requires so much time and not charge anything for it?): the fact that the organisation is a non-profit doesn't mean that the people working there don't make ...
Khan Academy is a 501c non-profit. They do some fundraising, they have some regular backers, and some big time contributors like these.I imagine it's a similar case for Anatomy Zone. They are affiliated with a bunch of other non-profits that you can see on the bottom of their front page.
What I don't understand is why we discount the profits in year one but don't consider inflation in year one. This seems inconsistent to me. Why do we discount the real profit in the first year and not the nominal profit, or if we assume nominal profits in year one are the same as real profit, why do we discount in year one?

As you said, the firm... |
Plot Description The invariant mass distribution of $\mu^+\mu^-$ reconstructed by the Hlt1 DiMuon trigger in both PbPb and PbNe SMOG collisions, with the number of Velo hits below 6000. The data correspond to about 110 $\mu b^{-1}$ of integrated luminosity and 4.5E+20 Pb on Ne target. There are around 29K $J/\psi$ signals; most of the signals come from PbPb collisions, and there are around 500 signal decays from PbNe data.
Plot Description Invariant mass distribution of $\mu^+\mu^-$ in 2018 PbNe collisions. To remove ghost and upstream pollution, events with more than 10 PUHits are removed, and only events with PVz between -200 and 200 mm are kept (following previous SMOG studies). The standard SMOG $J/\psi$ selection is applied: hlt1 and hlt2 SMOG Dimuon trigger, $J/\psi$ stripping line (S35r1), and standard $J/\psi$ selection (see LHCb-ANA-2018-008). The full dataset is used, $\sim$ 5.7E+20 Pb on Ne target. There are around 700 $J/\psi$ signal candidates.
Invariant mass distribution of $K^\mp \pi^\pm$ in 2018 PbNe collisions. To remove ghost and upstream pollution, events with more than 10 PUHits are removed, and only events with PVz between -200 and 200 mm are kept (following previous SMOG studies). The standard SMOG $D^0$ selection is applied: hlt1 and hlt2 SMOG Kpi trigger, D0kpi stripping line (S35r1), and the $D^0$ standard selection (see LHCb-ANA-2018-008) is adapted to reduce the combinatorial background: $p_T(\mathrm{daughter}) > 650$ MeV, DIRA $> 0.9998$, and no cut on $\tau_{D^0}$. The full dataset is used, $\sim$ 5.7E+20 Pb on Ne target. There are around 1500 $D^0$ signal candidates.
Invariant mass distribution of $\mu^+\mu^-$ in 2018 PbPb collisions in the event-activity class $60-90\%$. The following selection is used: JpsiToMuMu stripping line (S35) with the standard selection based on the J/$\psi$ analysis note (see LHCb-ANA-2016-067) and a luminous region cut as well to reduce SMOG contamination (see details in the next plot's description). A very small preliminary sample is used, with around 1800 J/$\psi$ signal candidates.
Invariant mass distribution of $K^\mp \pi^\pm$ in 2018 PbPb collisions in the event-activity class $70-90\%$. The following $D^0$ selection is applied: D0kpi stripping line (S35), and the $D^0$ standard selection in pPb (see CERN-LHCb-ANA-2016-012) is applied, together with a cut on the luminous region in order to reduce SMOG contamination: $|z_{PV}| < 200$ mm, $0.35 < \rho < 1.1$ mm with $\rho = \sqrt{x_{PV}^2 + y_{PV}^2}$, based on the J/$\psi$ analysis note in 2015 PbPb collisions (see LHCb-ANA-2016-067). A small preliminary sample is used. There are around 10500 $D^0$ signal candidates.
|
The instability inherent in the medium-length axis, $\Pi_2$, as shown above is discussed in detail in Marsden and Ratiu, which is where the image is from.
The unstable homoclinic orbits that connect the two unstable points have interesting features. Not only are they interesting because of the chaotic solutions, obtainable via the Poincare-Melnikov method in various perturbed systems (ref), but the orbit itself is interesting, since a rigid body tossed about its middle axis will undergo an interesting (and unexpected) half twist when the opposite saddle point is reached, even though the rotation axis has returned to where it was.
The interesting half-twist is best shown in the "Dzhanibekov effect" and may also be seen in the "tennis racket theorem."
For those who don't understand why the saddle point along the medium-length axis $\Pi_2$ in the above picture is unstable, consider the following image:
Image Source
The three axes you describe are comparable, respectively, to:
~ a rod/axle
~ a flywheel
~ a propeller
What is Stability, and why are two axes stable while the third is unstable?
Stability refers to a "stable" oscillation, which must be harmonic like a mass on a spring. There is a restoring force proportional to displacement.
$$F=-k*x=m*a=m*\frac{d^2x}{dt^2}~~~~~~~~~~~~~~~~~~~~\text{(1)}$$For angular situations the situation becomes much more complicated since the torque is normal to the plane of rotation. $$\tau =-\kappa \theta=I*\dot{\omega}=I*\frac{d^2\theta}{dt^2} ~~~~~~~~~~~~~~~~~~~~\text{(2)}$$
When there is off-axis force about a stable axis there are two components of torque: one along the primary axis which will always cause linear rotation and a second perpendicular to the axis (about 1 or both of the other axes), which in the absence of the primary rotation (or with a uniform mass distribution) would also cause linear rotation. So there are always two torques; the primary one large, the second one small. With stable harmonic oscillations as with stable rotational axes there is a restoring force proportional to displacement. Richard Feynman has done some fascinating work to describe a wobbling plate, which will wobble twice as often as it rotates.
Let $\hat{x}_1$, $\hat{x}_2$, and $\hat{x}_3$ be axes along the rectangular prism's longest, medium, and shortest axes respectively. During stable rotation (which occurs when the primary rotation is about $\hat{x}_1$ or $\hat{x}_3$) the secondary axes trace out circles as described by Feynman.
Conducting an analysis of a rectangular prism according to the method described by Feynman will certainly show that rotation about $\hat{x}_2$ creates a spiral instead of a circle.
Why does the spiral occur?
Imagine how a disk spinning on its axis is very stable: the difference between the moments of inertia about the other two axes is zero: it is very stable. Now replace the disk "O" with an "X" shaped structure spinning along an axis normal to the plane of the X. Rotation is stable again for the same reason. Cut off two arms of the X on opposite sides and the straight rod continues to rotate in stable oscillations. Now add a wire along the axis of rotation but sticking out of only one side of the rod. Suddenly you have the Dzhanibekov effect, which is unstable just like adding width to the rod along the axis of rotation to form a shape comparable to a rectangular prism. In the case of the wire it is still baffling but I think it provides some insight into the nature of the problem. Especially considering that a Top (spinning disk with a wire through it assymetrically) is very stable, as is an X shaped top.... while a propeller shaped top is not even a top really. So take an O shaped top spinning in zero gravity and randomly have almost-half-circle shaped chunks of the disk fly off so that it turns into a propeller. Now the moment of inertia about the axis of the rotating propeller "blade" (the longest length axis) is greatly reduced while at the same time the gyroscopic force is drastically reduced.
It makes sense that this (longest) axis becomes "an axis of free rotation" to some degree or other... with the gyroscopic or centrifugal forces of the spinning "blade" adding to and then subtracting from the shaft as it flips back and forth in the Dzhanibekov effect. The difference between the length of the medium-length axis $\Pi_2$ and the shortest axis $\Pi_1$ serves the same function as the shaft of the propeller-like object in the Dzhanibekov effect: specifically, it gives and takes centripetal energy from the primary axis of rotation $\Pi_2$, as represented by the saddle point.
Also, notice how a top when it slows down begins to precess in ever larger circles until it falls over. Is that merely gyroscopic precession? Or is it the first sign of unstable oscillation comparable to the spiral trace of the axis in the Dzhanibekov effect? I would speculate that it is a little of both: the top is probably not a perfect disk and once the wobble starts gyroscopic precession likely adds to it.
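The flip is easy to reproduce numerically from Euler's equations for a free rigid body, $I_1\dot\omega_1 = (I_2-I_3)\omega_2\omega_3$ and cyclic permutations (a sketch with made-up inertia values $I_1 < I_2 < I_3$):

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])   # hypothetical principal moments, I1 < I2 < I3

def euler(t, w):
    """Free rigid body: I1 w1' = (I2 - I3) w2 w3, and cyclic permutations."""
    return [(I[1] - I[2]) / I[0] * w[1] * w[2],
            (I[2] - I[0]) / I[1] * w[2] * w[0],
            (I[0] - I[1]) / I[2] * w[0] * w[1]]

# spin about the intermediate axis, with a tiny perturbation on the others
sol = solve_ivp(euler, (0, 100), [1e-3, 1.0, 1e-3],
                t_eval=np.linspace(0, 100, 201), rtol=1e-9)
print(np.sign(sol.y[1]))   # w2 flips sign repeatedly: the Dzhanibekov effect
```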
I might add that a Y-shaped top (60 degrees apart) has some particularly fascinating properties, since it has similarities to both a box and a propeller, but remains a top because radial symmetry allows gyroscopic forces to stabilize the medium-length axis. As pointed out by Ben Crowell in the comments, this effect is explained in beautiful intuitive detail here, in section 4.3.3 (the direct link to the PDF is here). I copied the explanation there as follows:
For a typical, asymmetric object, the angular momentum vector and the angular velocity vector need not be parallel. That is,
only for a body that possesses symmetry about the rotation axis is it true that $L=I\omega$ (the rotational equivalent of $p=mv$) for some scalar I.... (fancy derivation of:)
$$ K=\frac 12 L\cdot \omega$$.... Let's analyze the problem of the spinning shoe that I posed at the beginning of the chapter. The three rotation axes (comparable to a rectangular prism) referred to there are approximately the principal axes of the shoe. While the shoe is in the air, no external torques are acting on it, so its angular momentum vector must be constant. That's in the room's frame of reference, however. The principal axis frame is attached to the shoe, and tumbles madly along with it. In the principal axis frame, the kinetic energy and the magnitude of the angular momentum stay constant, but the actual direction of the angular momentum need not stay fixed (as you saw in the case of rotation that was initially about the intermediate-length axis). Constant $|L|$ gives
$$ {L_x}^2 + {L_y}^2 + {L_z}^2 = \text{constant} $$
In the principal axis frame, it's easy to solve for the components of $\omega$ in terms of the components of $L$, so we eliminate $\omega$ from the expression $2K = L\cdot\omega$, giving

$$ \frac{{L_x}^2}{I_{xx}} + \frac{{L_y}^2}{I_{yy}} + \frac{{L_z}^2}{I_{zz}} = \text{constant \#2} $$
The first equation is the equation of a sphere in the three dimensional space occupied by the angular momentum vector, while the second one is the equation of an ellipsoid:
The top figure corresponds to the case of rotation about the shortest axis, which has the greatest moment of inertia element. The intersection of the two surfaces consists only of the two points at the front and back of the sphere. The angular momentum is confined to one of these points, and can't change its direction, i.e., its orientation with respect to the principal axis system, which is another way of saying that the shoe can't change its orientation with respect to the angular momentum vector. In the bottom figure, the shoe is rotating about the longest axis. Now the angular momentum vector is trapped at one of the two points on the right or the left. In the case of rotation about the axis with the intermediate moment of inertia element, however, the intersection of the sphere and the ellipsoid is not just a pair of isolated points, but the curve shown with the dashed line. The relative orientation of the shoe and the angular momentum vector can and will change.
One application of the moment of inertia tensor is to video games that simulate car racing or flying air-planes....
One more exotic example has to do with nuclear physics. Although you have probably visualized atomic nuclei as nothing more than featureless points, or perhaps tiny spheres, they are often ellipsoids with one long axis and two shorter, equal ones. Although a spinning nucleus normally gets rid of its angular momentum via gamma ray emission within a period of time on the order of picoseconds, it may happen that a deformed nucleus gets into a state in which a large angular momentum is along its long axis, which is a very stable mode of rotation. Such states can live for seconds or even years! (There is more to the the story--this is the topic on which I wrote my Ph.D. thesis--but the basic insight applies even though the full treatment requires fancy quantum mechanics.)
Our analysis has so far assumed that the kinetic energy of rotation can't be converted into other forms of energy such as heat, sound, or vibration. When this assumption fails, then rotation about the axis of least moment of inertia becomes unstable, and will eventually convert itself into rotation about the axis whose moment of inertia is greatest. This happened to the U.S.'s first artificial satellite, Explorer I, launched in 1958. Note the long floppy antennas, which tended to dissipate the kinetic energy into vibration. It had been designed to spin about its minimum-moment-of-inertia axis, but almost immediately, as soon as it was in space, it began spinning end over end. It was nevertheless able to carry out its science mission, which didn't depend on being able to maintain a stable orientation, and it discovered the Van Allen radiation belts.
A Related Question Here On Physics.SE
A Related Question On MathOverflow |
Help:Formatting
You can format your text by using wiki markup. This consists of normal characters like asterisks, apostrophes or equal signs which have a special function in the wiki, sometimes depending on their position. For example, to format a word in italic, you include it in two pairs of apostrophes like ''this''.
Text formatting markup
Character (inline) formatting applies anywhere:
Italic text: ''italic''
Bold text: '''bold'''
Bold and italic: '''''bold & italic'''''
Strike text: <strike>strike text</strike>
Escape wiki markup: <nowiki>no ''markup''</nowiki>
Escape wiki markup once: [[Laboratory]]<nowiki/> equipment (gives: Laboratory equipment)
Section formatting applies only at the beginning of the line:
Headings of different levels: ==Level 2== ===Level 3=== ====Level 4==== =====Level 5===== ======Level 6======
Horizontal rule: Text before ---- Text after
Bullet list * Start each line * with an asterisk (*). ** More asterisks give deeper *** and deeper levels. * Line breaks <br />don't break levels. *** But jumping levels creates empty space. Any other start ends the list.
Numbered list # Start each line # with a number sign (#). ## More number signs give deeper ### and deeper ### levels. # Line breaks <br />don't break levels. ### But jumping levels creates empty space. # Blank lines # end the list and start another. Any other start also ends the list.
Definition list ;item 1 : definition 1 ;item 2 : definition 2-1 : definition 2-2
Begin with a semicolon. One item per line; a new line can appear before the colon, but using a space before the colon improves parsing.
Indent text : Single indent :: Double indent ::::: Multiple indent
This workaround may harm accessibility.
Mixture of different types of list # one # two #* two point one #* two point two # three #; three item one #: three def one # four #: four def one #: this looks like a continuation #: and is often used #: instead <br />of <nowiki><br /></nowiki> # five ## five sub 1 ### five sub 1 sub 1 ## five sub 2
The usage of #: and *: for breaking a line within an item may also harm accessibility.
Preformatted text Start each line with a space. Text is '''preformatted''' and ''markups'' '''''can''''' be done.
This way of preformatting only applies to section formatting. Character formatting markups are still effective.
Preformatted text blocks: <nowiki>Start with a space in the first column, (before the <nowiki>). Then your block format will be maintained. This is good for copying in code blocks: def function(): """documentation string""" if True: print True else: print False</nowiki>

Paragraphs and line breaks
MediaWiki ignores single line breaks. To start a new paragraph, leave an empty line:
You type You get A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function (used internally to compare different versions of a page). But an empty line starts a new paragraph. When used in a list, a newline ''does'' affect the layout (see above).
If necessary, you can force a line break within a paragraph with the HTML tag
<br />:
You type You get You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
Some HTML tags are allowed in MediaWiki, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them.
Inserted (displays as underline in most browsers): <ins>Inserted</ins> or <u>Underline</u>
Deleted (displays as strikethrough in most browsers): <s>Struck out</s> or <del>Deleted</del>
Fixed width text: <code>Source code</code> or <tt>Fixed width text</tt>
Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O
Line breaks: You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
Blockquotes: Text before <blockquote>Blockquote</blockquote> Text after
Completely preformatted text: <pre> Text is '''preformatted''' and ''markups'' '''''cannot''''' be done</pre>
For marking up of preformatted text, check the "Preformatted text" entry at the end of the previous table.
Customized preformatted text: <pre style="color: red"> Text is '''preformatted''' with a style and ''markups'' '''''cannot''''' be done </pre>
A CSS style can be named within the style property.
continued:
Customized preformatted text with text wrap according to available width: <pre style="white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;"> This long sentence is used to demonstrate text wrapping. This additional sentence makes the text even longer. </pre>
Preformatted text with text wrap according to available width: <code> This long sentence is used to demonstrate text wrapping. This additional sentence makes the text even longer. </code>
Leading spaces to preserve formatting: Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Help:MediaWiki basics/Introduction to MediaWiki and wikis|wiki]] ''markup'' and special characters.

Links
Internal links: Here's a link to a page named [[Cell counter]]. You can even say [[cell counter]]s and the link will show up correctly. You can put formatting around a link. Example: ''[[Laboratory informatics]]''. The ''first letter'' of articles is automatically capitalized, so [[laboratory informatics]] goes to the same place as [[Laboratory informatics]]. Capitalization matters after the first letter. You can link to a page section by its title: [[Laboratory information management system#Technology]] You can make the text appearing on an internal link different from the article title: [[Laboratory information management system#Technology|technology of LIMS]] If you wish to link to a category, add a colon in front: [[:Category:LIMSwiki help documentation]]
External links You can make an external link just by typing a URL: http://clinfowiki.org You can give it a title: [http://clinfowiki.org ClinfoWiki.org] Or leave the title blank: [http://clinfowiki.org] Linking to an e-mail address works the same way: mailto:someone@example.com or [mailto:someone@example.com someone]
Other formatting and tools
Mathematical formulas: <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
Comment: <!-- This is a comment --> Comments are only visible in the edit zone.
Signing talk page comments: You should "sign" your comments on talk pages: <br> - Three tildes gives your signature: ~~~ <br> - Four tildes give your signature plus date/time: ~~~~ <br> - Five tildes gives the date/time alone: ~~~~~
Page redirects: #REDIRECT [[Laboratory informatics]]
You use redirects most often on pages with incorrect or outdated page titles. You simply copy or remove the existing content, paste this code in, and change the internal link text to the title of the article you wish to automatically redirect users to.

Inserting media and tables
For more on these topics: Help:MediaWiki basics/Intermediate training
For more on these topics: Help:MediaWiki basics/Advanced training
Inserting symbols
Symbols and other special characters not available on your keyboard can be inserted through a special sequence of characters. Those sequences are called HTML entities. For example, the following sequence (entity)
&rarr; when inserted will be shown as the HTML symbol → and &mdash; when inserted will be shown as the HTML symbol —.
HTML symbol entities (a large table of rendered entity characters appeared here; the source entity codes were lost in extraction — see the full list linked below)
Copyright symbol: &copy; gives ©
Greek delta letter symbol: &delta; gives δ
Euro currency symbol: &euro; gives €
See the list of all HTML entities on the Wikipedia article List of XML and HTML character entity references. Additionally, MediaWiki supports two non-standard entity reference sequences: &רלמ; and &رلم;, which are both considered equivalent to &rlm;, which is a right-to-left mark. (Used when combining right-to-left languages with left-to-right languages in the same page.)
&euro; → €
<span style="color: red; text-decoration: line-through;">Typo to be corrected</span> → Typo to be corrected
<nowiki><span style="color: red; text-decoration: line-through;">Typo to be corrected</span></nowiki> → <span style="color: red; text-decoration: line-through;">Typo to be corrected</span>

Nowiki for HTML
<nowiki /> can prohibit (HTML) tags:
<<nowiki />pre> → <pre>
But
not & symbol escapes: &<nowiki />amp; → &
To print & symbol escapes as text, use "&amp;" to replace the "&" character (e.g. type "&amp;nbsp;", which results in "&nbsp;").
Formatting help
Beyond the text formatting markup shown on this page, here are some other formatting references:
You can find more help documentation at Category:LIMSwiki help documentation.
I need to implement the following in python: For a given discrete time series Zt (t=0 to T), find smallest t such that:
$$c\sum_{s=0}^t e^{k(Z_t-Z_s)+m(t-s)} \ge \frac{p^*}{1-p^*}$$
where $c,k,m$ are constants and $p^*$ is given by
$$\int_0^{0.5} \frac{(1-2y)e^{-d/y}}{(1-y)^{1+d}y^{1-d}}\,dy=\int_{0.5}^{p^*} \frac{(2y-1)e^{-d/y}}{(1-y)^{1+d}y^{1-d}}\,dy$$
where $d$ is another constant (something to be optimized).
I need to implement this for each row of an array Z (a 2D input array of shape (N, w)). I implemented a loop version:
I implemented the sum in equation 1 above utilizing the fact that
$$e^{k(Z_t-Z_s)+m(t-s)} = \frac{e^{kZ_t+mt}}{e^{kZ_s+ms}}$$
hence
$$\sum_{s=0}^t e^{k(Z_t-Z_s)+m(t-s)} = e^{kZ_t+mt}\sum_{s=0}^t\frac{1}{e^{kZ_s+ms}}$$ which is reflected in the use of cumsum in the code below.
import numpy as np
from scipy import integrate
from scipy.optimize import fsolve

def f(z):
    return ((1 - 2*z) * np.exp(-d/z)) / (((1 - z)**(1 + d)) * (z**(1 - d)))

lhs = integrate.quad(f, 0, 0.5)[0]          # quad returns (value, abserr)

def rhs(p):
    return integrate.quad(lambda y: -f(y), 0.5, p)[0]

# will depend on the time series only indirectly once d is optimized
p_star = fsolve(lambda p: rhs(float(p)) - lhs, 0.75)[0]

for i in np.arange(N):
    z = Z[i, :]
    main = np.exp(k*z + m*np.arange(w))
    cumsum_t = np.cumsum(1/main)
    final_sum = main * cumsum_t
    # t_solution = first index where final_sum > p_star/(1-p_star)  -- not implemented yet
Is there any way I can vectorize this for Z? In this case N is ~400,000, so vectorization would really help. The function f(z) will be fine with vector inputs, but I don't think the function rhs will be, as it uses integration.
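For what it's worth, the inner loop vectorizes directly, since cumsum works along an axis; here is a hedged sketch (the helper name and the handling of the constant c are my assumptions, and p_star is computed once beforehand as in the snippet above):

import numpy as np

def first_crossing(Z, k, m, c, p_star):
    # Z has shape (N, w); all rows are processed at once
    t = np.arange(Z.shape[1])
    main = np.exp(k * Z + m * t)                 # e^{k Z_t + m t}, broadcast over rows
    final_sum = c * main * np.cumsum(1.0 / main, axis=1)
    hit = final_sum >= p_star / (1.0 - p_star)
    # argmax picks the first True in each row; rows that never cross get -1
    return np.where(hit.any(axis=1), hit.argmax(axis=1), -1)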
An enlightening example is its use in Stochastic Neighbor Embedding, devised by Hinton and Roweis.
Essentially the authors are trying to represent data on a two or three dimensional manifold so that the data can be visually represented (similar in aim to PCA, for instance). The difference is that rather than preserve total variance in the data (as in PCA), SNE attempts to preserve the local structure of the data -- if that is unclear, the KL divergence may help to illuminate what it means. To do this, they use a Gaussian kernel to estimate the probability that points $i$ and $j$ would be neighbours:
$$P_i = \sum_j p_{i,j}\qquad \text{ where }\qquad p_{i,j} = \frac{\exp(-\|x_i-x_j\|^2\;/\; 2\sigma_i^2)}{\sum_{k\neq i} \exp(-\| x_i - x_k \|^2\;/\;2\sigma_i^2)}$$
They then use a Gaussian kernel to find a probability density for the new points in the low dimensional space.
$$Q_i = \sum_j q_{i,j}\qquad \text{ where }\qquad q_{i,j} = \frac{\exp(-\|y_i-y_j\|^2)}{\sum_{k\neq i} \exp(-\| y_i - y_k \|^2)}$$
and they use a cost function $C=\sum_i D_{KL}(P_i||Q_i)$ to measure how well the low dimensional data represents the original data.
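As a concrete illustration, here is a minimal numpy sketch of this cost (my own simplification, not the authors' code: a single global $\sigma$ instead of a per-point $\sigma_i$, and row-wise normalisation of the kernels):

import numpy as np

def sne_kl_cost(X, Y, sigma=1.0):
    def affinities(Z, s2):
        D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
        W = np.exp(-D / (2.0 * s2))
        np.fill_diagonal(W, 0.0)             # a point is not its own neighbour
        return W / W.sum(axis=1, keepdims=True)
    P = affinities(X, sigma ** 2)            # high-dimensional p_{i,j}
    Q = affinities(Y, 1.0)                   # low-dimensional q_{i,j}
    eps = 1e-12                              # guard against log(0)
    return np.sum(P * np.log((P + eps) / (Q + eps)))  # sum_i D_KL(P_i || Q_i)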
If we fix the index $i$ for a moment and just look at a single point $i$, expanding out the notation we get:
$$D_{KL}(P_i\,\|\,Q_i) = \sum_j p_{i,j} \log\left(\frac{p_{i,j}}{q_{i,j}}\right)$$
Which is brilliant! For each point $i$, the KL divergence will be high if points which are close in high-dimensional space (large $p_{i,j}$) are far apart in low-dimensional space (small $q_{i,j}$). But it puts a much smaller penalty on points that are far apart in high-dimensional space which are close together in low-dimensions. In this way the asymmetry of the KL-divergence is actually beneficial!
If we were to find a minimum Cost, we would have a method that preserves well the local structure of the data, as the authors set out to do, and the KL divergence played a pivotal role.
I know that bond angle decreases in the order $\ce{H2O}$, $\ce{H2S}$ and $\ce{H2Se}$. I wish to know the reason for this. I think this is because of the lone pair repulsion but how?
Here are the $\ce{H-X-H}$ bond angles and the $\ce{H-X}$ bond lengths: \begin{array}{lcc} \text{molecule} & \text{bond angle}/^\circ & \text{bond length}/\pu{pm}\\ \hline \ce{H2O} & 104.5 & 96 \\ \ce{H2S} & 92.3 & 134 \\ \ce{H2Se}& 91.0 & 146 \\ \hline \end{array}
The traditional textbook explanation would argue that the orbitals in the water molecule are close to being $\ce{sp^3}$ hybridized, but due to lone pair - lone pair electron repulsions, the lone pair-X-lone pair angle opens up slightly in order to reduce these repulsions, thereby forcing the $\ce{H-X-H}$ angle to contract slightly. So instead of the $\ce{H-O-H}$ angle being the perfect tetrahedral angle ($109.5^\circ$) it is slightly reduced to $104.5^\circ$. On the other hand, both $\ce{H2S}$ and $\ce{H2Se}$ have no orbital hybridization. That is, the $\ce{S-H}$ and $\ce{Se-H}$ bonds use pure $\ce{p}$-orbitals from sulfur and selenium respectively. Two $\ce{p}$-orbitals are used, one for each of the two $\ce{X-H}$ bonds; this leaves another $\ce{p}$-orbital and an $\ce{s}$-orbital to hold the two lone pairs of electrons. If the $\ce{S-H}$ and $\ce{Se-H}$ bonds used pure $\ce{p}$-orbitals we would expect an $\ce{H-X-H}$ interorbital angle of $90^\circ$. We see from the above table that we are very close to the measured values. We could fine-tune our answer by saying that in order to reduce repulsion between the bonding electrons in the two $\ce{X-H}$ bonds the angle opens up a bit wider. This explanation would be consistent with the $\ce{H-S-H}$ angle being slightly larger than the corresponding $\ce{H-Se-H}$ angle. Since the $\ce{H-Se}$ bond is longer than the $\ce{H-S}$ bond, the interorbital electron repulsions will be less in the $\ce{H2Se}$ case, alleviating the need for the bond angle to open up as much as it did in the $\ce{H2S}$ case.
The only new twist on all of this that some universities are now teaching is that water is not really $\ce{sp^3}$ hybridized, the $\ce{sp^3}$ explanation does not fit with all of the experimentally observed data, most notably the photoelectron spectrum. The basic concept introduced is that "orbitals only hybridize in response to bonding." So in water, the orbitals in the two $\ce{O-H}$ bonds are roughly $\ce{sp^3}$ hybridized, but one lone pair resides in a nearly pure p-orbital and the other lone pair is in a roughly $\ce{sp}$ hybridized orbital.
The question asks why water has a larger angle than other hydrides of the form $\ce{XH2}$ in particular $\ce{H2S}$ and $\ce{H2Se}$. There have been other similar questions, so an attempt at a general answer is given below.
There are, of course, many other triatomic hydrides: $\ce{LiH2}$, $\ce{BeH2}$, $\ce{BH2}$, $\ce{NH2}$, etc. It turns out that some are linear and some are V shaped, but with different bond angles, and that the same general explanation can be used for each of these cases.
It is clear that, as the bond angle for water is neither $109.5^\circ$, $120^\circ$, nor $180^\circ$, $\ce{sp^3}$, $\ce{sp^2}$ or $\ce{sp}$ hybridisation alone will not explain the bond angles. Furthermore, the UV photoelectron spectrum of water, which measures orbital energies, has to be explained, as does the UV absorption spectrum.
The way out of this problem is to appeal to molecular orbital theory and to construct orbitals based upon $\ce{s}$ and $\ce{p}$ orbitals and their overlap as the bond angle changes. The orbital diagram was worked out a long time ago and is now called a Walsh diagram (A. D. Walsh, J. Chem. Soc. 1953, 2262; DOI: 10.1039/JR9530002260). The figure below sketches such a diagram, and the next few paragraphs explain the figure.
The shading indicates the sign (phase) of the orbital, 'like to like' being bonding otherwise not bonding. The energies are relative as are the shape of the curves. On the left are the orbitals arranged in order of increasing energy for a linear molecule; on the right those for a bent molecule. The orbitals labelled $\Pi_\mathrm{u}$ are degenerate in the linear molecule but not so in the bent ones. The labels $\sigma_\mathrm{u}$, $\sigma_\mathrm{g}$ refer to sigma bonds, the $\mathrm{g}$ and $\mathrm{u}$ subscripts refer to whether the combined MO has a centre of inversion $\mathrm{g}$ (gerade) or not $\mathrm{u}$ (ungerade) and derive from the irreducible representations in the $D_\mathrm{\infty h}$ point group. The labels on the right-hand side refer to representations in the $C_\mathrm{2v}$ point group.
Of the three $\Pi_\mathrm{u}$ orbitals one forms the $\sigma_\mathrm{u}$, the other two are degenerate and
non-bonding. One of the $\ce{p}$ orbitals lies in the plane of the diagram, the other out of the plane, towards the reader.
When the molecule is bent this orbital remains non-bonding, the other becomes the $\ce{3a_1}$ orbital (red line) whose energy is significantly lowered as overlap with the H atom's s orbital increases.
To work out whether a molecule is linear or bent all that is necessary is to put electrons into the orbitals. Thus, the next thing is to make a list of the number of possible electrons and see what the diagram predicts. \begin{array}{rcll} \text{Nr.} & \text{Shape} & \text{molecule(s)} & \text{(angle, configuration)} \\ \hline 2 & \text{bent} & \ce{LiH2+} & (72,~\text{calculated})\\ 3 & \text{linear} & \ce{LiH2}, \ce{BeH2+} &\\ 4 & \text{linear} & \ce{BeH2}, \ce{BH2+} &\\ 5 & \text{bent} & \ce{BH2} & (131, \ce{[2a_1^2 1b_2^2 3a_1^1]})\\ 6 & \text{bent} & \ce{^1CH2} & (110, \ce{[1b_2^2 3a_1^2]})\\ & & \ce{^3CH2} & (136, \ce{[1b_2^2 3a_1^1 1b_1^1]})\\ & & \ce{BH2^-} & (102)\\ & & \ce{NH2+} & (115, \ce{[3a_1^2]})\\ 7 & \text{bent} & \ce{NH2} & (103.4, \ce{[3a_1^2 1b_1^1]})\\ 8 & \text{bent} & \ce{OH2} & (104.31, \ce{[3a_1^2 1b_1^2]})\\ & & \ce{NH2^-} & (104)\\ & & \ce{FH2^+} &\\ \hline \end{array}
Other hydrides show similar effects depending on the number of electrons in $\ce{b2}$, $\ce{a1}$ and $\ce{b1}$ orbitals; for example: \begin{array}{ll} \ce{AlH2} & (119, \ce{[b_2^2 a_1^1]}) \\ \ce{PH2} & (91.5, \ce{[b_2^2 a_1^2 b_1^1]}) \\ \ce{SH2} & (92)\\ \ce{SeH2} & (91)\\ \ce{TeH2} & (90.2)\\ \ce{SiH2} & (93)\\ \end{array}
The agreement with experiment is qualitatively good, but, of course the bond angles cannot be accurately determined with such a basic model only general trends.
The photoelectron spectrum (PES) of water shows signals from the $\ce{2a1}$, $\ce{1b2}$, $\ce{3a1}$, $\ce{1b1}$ orbitals ($21.2$, $18.7$, $14.23$, and $\pu{12.6 eV}$ respectively), the last being non-bonding as shown by the lack of structure. The signals from the $\ce{1b2}$ and $\ce{3a1}$ orbitals show vibrational structure, indicating that these are bonding orbitals.
The ranges of UV and visible absorption by $\ce{BH2}$, $\ce{NH2}$, $\ce{OH2}$ are $600 - 900$, $450 - 740$, and $150 - \pu{200 nm}$ respectively. $\ce{BH2}$ has a small HOMO-LUMO energy gap between $\ce{3a1}$ and $\ce{1b1}$ as the ground state is slightly bent. The first excited state is predicted to be linear, as its configuration is $\ce{1b_2^2 1b_1^1}$, and this is observed experimentally.
$\ce{NH2}$ has a HOMO-LUMO energy gap from $\ce{3a_1^2 1b_1^1}$ to $\ce{3a_1^1 1b_1^2 }$, so both ground and excited states should be bent, the excited state angle is approx $144^\circ$. Compared to $\ce{BH2}$, $\ce{NH2}$ is more bent so the HOMO-LUMO energy gap should be larger as observed.
$\ce{OH2}$ has a HOMO-LUMO energy gap from $\ce{3a_1^2 1b_1^2}$ to $\ce{3a_1^2 1b_1^1 4a_1^1 }$, i.e. an electron promoted from the non-bonding orbital to the first anti-bonding orbital. The excited molecule remains bent largely due to the strong effect of two electrons in $\ce{3a1}$ counteracting the single electron in $\ce{4a1}$. The bond angle is almost unchanged at $107^\circ$, but the energy gap will be larger than in $\ce{BH2}$ or $\ce{NH2}$, again as observed.
The bond angles of $\ce{NH2}$, $\ce{NH2-}$ and $\ce{NH2+}$ are all very similar, $103^\circ$, $104^\circ$, and $115^\circ$ respectively. $\ce{NH2}$ has the configuration $\ce{3a_1^2 1b_1^1}$ where the $\ce{b1}$ is a non bonding orbital, thus adding one electron makes little difference, removing one means that the $\ce{3a_1}$ orbital is not stabilised as much and so the bond angle is opened a little.
The singlet and triplet state $\ce{CH2}$ molecules show that the singlet has two electrons in the $\ce{3a1}$ orbital and has a smaller angle than the triplet state with just one electron here and one in the non-bonding $\ce{b1}$, thus the triplet ground state bond angle is expected to be larger than the singlet.
As the size of the central atom increases, its nucleus becomes more shielded by core electrons and it becomes less electronegative. Thus going down the periodic table the $\ce{X-H}$ bond becomes less ionic, more electron density is around the $\ce{H}$ atom thus the $\ce{H}$ nucleus is better shielded, and thus the $\ce{X-H}$ bond is longer and weaker. Thus, as usual with trends within the same family in the periodic table, the effect is, basically, one of atomic size.
Molecules with heavier central atom, $\ce{SH2}$, $\ce{PH2}$, etc. all have bond angles around $90^\circ$. The decrease in electronegativity destabilises the $\Pi_\mathrm{u}$ orbital raising its energy. The $\ce{s}$ orbitals of the heavier central atoms are larger and lower in energy than those of oxygen, hence these orbitals overlap with the $\ce{H}$ atom's $\ce{s}$ orbital more weakly. Both these factors help to stabilise the linear $3\sigma_\mathrm{g}$ orbital and hence the $\ce{4a1}$ in the bent configuration. This orbital belongs to the same symmetry species as $\ce{3a1}$ and thus they can interact by a second order Jahn-Teller interaction. This is proportional to $1/\Delta E$ where $\Delta E$ is the energy gap between the two orbitals mentioned. The effect of this interaction is to raise the $\ce{4a1}$ and decrease the $\ce{3a1}$ in energy. Thus in going down the series $\ce{OH2}$, $\ce{SH2}$, $\ce{SeH2}$, etc. the bond angle should decrease which is what is observed.
Examples have been given for $\ce{XH2}$ molecules, but this method has also been used to understand triatomic and tetra-atomic molecules in general, such as $\ce{NO2}$, $\ce{SO2}$, $\ce{NH3}$, etc.
Adding a bit to the answers above, one factor that isn't shown in the Walsh diagram is that as the angle decreases, there is increased mixing between the central atom valence s and p orbitals, such that the 2a$_1$ orbital has increased p contribution and the 3a$_1$ has increased s. This is where one gets the result that Ron mentioned at the end of his answer that the lone pairs on water reside in a pure p (1b$_1$) and an sp (3a$_1$) orbital. That means the bonding orbitals shift from one pure s (2a$_1$) and one pure p (1b$_2$) to one sp (2a$_1$) and one p (1b$_2$) (ignoring the extreme case where 3a$_1$ actually gets lower in energy than 1b$_2$, which isn't really relevant). Mixing occurs to a greater extent in $\ce{SH2}$ relative to $\ce{OH2}$ because the 3s and 3p orbitals of S are closer in energy to each other than 2s and 2p on O.
If we hybridize the two bonding orbitals so that they are equivalent and do the same for the two nonbonding orbitals, we find that they start as bonding = 50% s/50% p (ie $sp$ hybrid) and nonbonding = 100% p and shift towards an endpoint of bonding and nonbonding both being 25% s/75% p (ie $sp^3$ hybrid).
Thus, the common introductory chemistry explanation that "bonding in $\ce{SH2}$ is pure p" is not supported by the MO analysis. Instead, $\ce{SH2}$ is closer to $sp^3$ than $\ce{H2O}$ is. The bonding orbitals in $\ce{H2O}$ are somewhere between $sp^2$ and $sp^3$. So it is correct to say that "the bonds in $\ce{SH2}$ have less s character than those in $\ce{OH2}$", but not to say that they are "pure p".
The fact that the $\ce{SH2}$ bond angle is around 90 degrees is not because its bonds are made from p orbitals only. That coincidence is a red herring. Instead, the fact that the bond angle is smaller than the canonical $sp^3$ is because the bonding and nonbonding orbitals are not equivalent. That means that the particular p orbitals involved in each $sp^3$ group do not have to have the same symmetry as in, for example, a tetrahedral molecule like CH4.
I will try to give a short, direct answer. $\ce{H2O}$ has a bond angle of $104.5^\circ$, $\ce{H2S}$ $92^\circ$, $\ce{H2Se}$ $91^\circ$ and $\ce{H2Te}$ $90^\circ$. Draw the structures: all of them have a tetrahedral arrangement with two lone pairs. Assume that no hybridization occurs and the central atom uses pure p orbitals for bonding; then, because of the lone-pair repulsions, the bond angle between the two surrounding atoms should be $90^\circ$. According to Drago's rule, when the central atom belongs to the 3rd period or higher and the electronegativity of the surrounding atoms is 2.5 or less, the central atom uses almost pure p orbitals. So the final answer is that the extent of hybridization decreases down the group, which leads to the decrease in bond angle; note that only in $\ce{H2Te}$ is essentially no hybridization observed.
We know that as the electronegativity of the central atom increases, the bond angle also increases. The relevant electronegativity order is $$\ce{O > S > Se}\,,$$ hence the bond angle order of $$\ce{H2O>H2S>H2Se}\,.$$
We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are:
(1) We extract $k (k/nd)^{O(1)}$ bits with exponentially small error from $n$-bit sources of min-entropy $k$ that are generated by functions $f : \{0,1\}^\ell \to \{0,1\}^n$ where each output ...
We reduce non-deterministic time $T \ge 2^n$ to a 3SAT
instance $\phi$ of size $|\phi| = T \cdot \log^{O(1)} T$ such that there is an explicit circuit $C$ that on input an index $i$ of $\log |\phi|$ bits outputs the $i$th clause, and each output bit of $C$ depends on ...
A map $f:[n]^{\ell}\to[n]^{n}$ has locality $d$ if each output symbol
in $[n]=\{1,2,\ldots,n\}$ depends only on $d$ of the $\ell$ input symbols in $[n]$. We show that the output distribution of a $d$-local map has statistical distance at least $1-2\cdot\exp(-n/\log^{c^{d}}n)$ from a uniform permutation of $[n]$. This seems to be the ...
How honest are you?
Running here
Summary: breaking a weak PRNG
On the main page we see the text:
You are coming back home from a hard day sieving numbers at the river.
Unfortunately, you trip and all your numbers fall in a nearby lake.
Continue.
We click Continue and then:
From the lake a god emerged carrying a number on each hand.
He looked at you and asked the following question…
Continue.
Again, click Continue:
Did you drop 4452678531 or 754311689?
If we enter one of the numbers, we either get another question or start from the beginning. So it seems that we need to guess the correct numbers many times.
The numbers look random and the challenge looks like pure guessing. But after looking around, we can find a snippet of code hidden in the html source of the second page. It is hidden in a html comment and padded down with 500 empty lines. Here’s the snippet:
import random

class SecurePrng(object):
    def __init__(self):
        # generate seed with 64 bits of entropy
        self.p = 4646704883L
        self.x = random.randint(0, self.p)
        self.y = random.randint(0, self.p)

    def next(self):
        self.x = (2 * self.x + 3) % self.p
        self.y = (3 * self.y + 9) % self.p
        return (self.x ^ self.y)
It is a quite simple PRNG: it consists of two LCGs combined with xor.
We can guess the first few values and then attack the PRNG to recover the seed and predict next outputs.
The simplest solution is to bruteforce all candidates for $x$, deduce $y$ as xor of PRNG output with $x$ and check if the numbers match. But I will describe another solution, which exploits the fact that the multipliers are very small (2 and 3). This solution would work for much larger $p$.
The idea is to reconstruct $x$ bit-by-bit from least significant to most significant bits. Since we also know $x \oplus y$, we immediately obtain the value of the same bits of $y$. Then, to account for the modulus $p$, we simply guess how many multiples of $p$ we subtract on overflow. This number is not greater than the multiplier constants, and since they are small, there are quite few possible values. So we compute least significant bits of $x' = 2x + 3 - k_x P$, then we obtain least significant bits of $y'$ from the known $x' \oplus y'$, and we check whether for some $k_y$ the congruence $y' \equiv 3y + 9 - k_y P \pmod{2^t}$ holds.
Note that when we guess a bit of $x$, it is possible that both bit values pass the test, leading to exponential explosion. One option is to compute next values (the same LSBs) and check if they match the third generated value (and for this we need to guess the modulus reductions again). But it seems that when one of the multipliers is even, there is at most one candidate per $k_x,k_y$ guess. I haven't proved this, just observed experimentally. So for the multipliers 2,3 it works perfectly.
Here’s POC:
import random
from itertools import product

P = 2**256 + 7
NBITS = P.bit_length()

Ax, Cx = 2, 5
Ay, Cy = 3, 7

def next(x, y):
    x = (Ax*x + Cx) % P
    y = (Ay*y + Cy) % P
    return x, y

# generate two values
X0, Y0 = random.randint(0, P-1), random.randint(0, P-1)
print "X0", hex(X0)
print "Y0", hex(Y0)

realkx = (Ax*X0 + Cx) / P
realky = (Ay*Y0 + Cy) / P
print "REAL KX KY", realkx, realky

X1, Y1 = next(X0, Y0)
X2, Y2 = next(X1, Y1)
prng = [X0 ^ Y0, X1 ^ Y1, X2 ^ Y2]

# guess modulo reductions
for kx, ky in product(range(Ax), range(Ay)):
    xs = {0}
    # go from LSB to MSB
    for b in xrange(NBITS):
        if not xs:
            break
        xs2 = set()
        mask = 2**(b+1) - 1
        mod = 2**(b+1)
        for x, bx in product(xs, range(2)):
            x |= bx << b
            y = (prng[0] ^ x) & mask
            if x >= P or y >= P:
                continue
            x1 = (Ax*x + Cx - kx * P) % mod
            y1 = (Ay*y + Cy - ky * P) % mod
            if (x1 ^ y1) & mask == prng[1] & mask:
                xs2.add(x)
        xs = xs2
    else:
        print kx, ky, ":", len(xs), "candidates"
        for x0 in xs:
            y0 = prng[0] ^ x0
            assert x0 < P
            if y0 >= P:
                continue
            x1, y1 = next(x0, y0)
            if x0 ^ y0 == prng[0] and x1 ^ y1 == prng[1]:
                print "GOOD", hex(x0), hex(y0)
The flag:
CTF{_!_aRe_y0U_tH3_NSA_:-?_!_}
This chapter deals with some pretty big questions and ideas. Some belief systems teach us that there are questions to which “we were not meant to know” the answers. Other people feel that if our minds and instruments are capable of exploring a question, then it becomes part of our birthright as thinking human beings. Have your group discuss your personal reactions to discussing questions like the beginning of time and space, and the ultimate fate of the universe. Does it make you nervous to hear about scientists discussing these issues? Or is it exciting to know that we can now gather scientific evidence about the origin and fate of the cosmos? (In discussing this, you may find that members of your group strongly disagree; try to be respectful of others’ points of view.)
A popular model of the universe in the 1950s and 1960s was the so-called steady-state cosmology. In this model, the universe was not only the same everywhere and in all directions (homogeneous and isotropic), but also the same at all times. We know the universe is expanding and the galaxies are thinning out, and so this model hypothesized that new matter was continually coming into existence to fill in the space between galaxies as they moved farther apart. If so, the infinite universe did not have to have a sudden beginning, but could simply exist forever in a steady state. Have your group discuss your reaction to this model. Do you find it more appealing philosophically than the Big Bang model? Can you cite some evidence that indicates that the universe was not the same billions of years ago as it is now—that it is not in a steady state?
One of the lucky accidents that characterizes our universe is the fact that the time scale for the development of intelligent life on Earth and the lifetime of the Sun are comparable. Have your group discuss what would happen if the two time scales were very different. Suppose, for example, that the time for intelligent life to evolve was 10 times greater than the main-sequence lifetime of the Sun. Would our civilization have ever developed? Now suppose the time for intelligent life to evolve is ten times shorter than the main-sequence lifetime of the Sun. Would we be around? (This latter discussion requires considerable thought, including such ideas as what the early stages in the Sun’s life were like and how much the early Earth was bombarded by asteroids and comets.)
The grand ideas discussed in this chapter have a powerful effect on the human imagination, not just for scientists, but also for artists, composers, dramatists, and writers. Here we list just a few of these responses to cosmology. Each member of your group can select one of these, learn more about it, and then report back, either to the group or to the whole class.
The California poet Robinson Jeffers was the brother of an astronomer who worked at the Lick Observatory. His poem “Margrave” is a meditation on cosmology and on the kidnap and murder of a child: http://www.poemhunter.com/best-poems/robinson-jeffers/margrave/.
In the science fiction story “The Gravity Mine” by Stephen Baxter, the energy of evaporating supermassive black holes is the last hope of living beings in the far future in an ever-expanding universe. The story has poetic description of the ultimate fate of matter and life and is available online at: http://www.infinityplus.co.uk/stories/gravitymine.htm.
The musical piece YLEM by Karlheinz Stockhausen takes its title from the ancient Greek term for primeval material revived by George Gamow. It tries to portray the oscillating universe in musical terms. Players actually expand through the concert hall, just as the universe does, and then return and expand again. See: http://www.karlheinzstockhausen.org/ylem_english.htm.
The musical piece Supernova Sonata http://www.astro.uvic.ca/~alexhp/new/supernova_sonata.html by Alex Parker and Melissa Graham is based on the characteristics of 241 type Ia supernova explosions, the ones that have helped astronomers discover the acceleration of the expanding universe.
Gregory Benford’s short story “The Final Now” envisions the end of an accelerating open universe, and blends religious and scientific imagery in a very poetic way. Available free online at: http://www.tor.com/stories/2010/03/the-final-now.
When Einstein learned about Hubble’s work showing that the universe of galaxies is expanding, he called his introduction of the cosmological constant into his general theory of relativity his “biggest blunder.” Can your group think of other “big blunders” from the history of astronomy, where the thinking of astronomers was too conservative and the universe turned out to be more complicated or required more “outside-the-box” thinking?
Review Questions
What are the basic observations about the universe that any theory of cosmology must explain?
Describe some possible futures for the universe that scientists have come up with. What property of the universe determines which of these possibilities is the correct one?
What does the term Hubble time mean in cosmology, and what is the current best calculation for the Hubble time?
Which formed first: hydrogen nuclei or hydrogen atoms? Explain the sequence of events that led to each.
Describe at least two characteristics of the universe that are explained by the standard Big Bang model.
Describe two properties of the universe that are not explained by the standard Big Bang model (without inflation). How does inflation explain these two properties?
Why do astronomers believe there must be dark matter that is not in the form of atoms with protons and neutrons?
What is dark energy and what evidence do astronomers have that it is an important component of the universe?
Thinking about the ideas of space and time in Einstein’s general theory of relativity, how do we explain the fact that all galaxies outside our Local Group show a redshift?
Astronomers have found that there is more helium in the universe than stars could have made in the 13.8 billion years that the universe has been in existence. How does the Big Bang scenario solve this problem?
Describe the anthropic principle. What are some properties of the universe that make it “ready” to have life forms like you in it?
Describe the evidence that the expansion of the universe is accelerating.
Thought Questions
What is the most useful probe of the early evolution of the universe: a giant elliptical galaxy or an irregular galaxy such as the Large Magellanic Cloud? Why?
What are the advantages and disadvantages of using quasars to probe the early history of the universe?
Would acceleration of the universe occur if it were composed entirely of matter (that is, if there were no dark energy)?
Suppose the universe expands forever. Describe what will become of the radiation from the primeval fireball. What will the future evolution of galaxies be like? Could life as we know it survive forever in such a universe? Why?
Some theorists expected that observations would show that the density of matter in the universe is just equal to the critical density. Do the current observations support this hypothesis?
There are a variety of ways of estimating the ages of various objects in the universe. Describe two of these ways, and indicate how well they agree with one another and with the age of the universe itself as estimated by its expansion.
Since the time of Copernicus, each revolution in astronomy has moved humans farther from the center of the universe. Now it appears that we may not even be made of the most common form of matter. Trace the changes in scientific thought about the central nature of Earth, the Sun, and our Galaxy on a cosmic scale. Explain how the notion that most of the universe is made of dark matter continues this “Copernican tradition.”
The anthropic principle suggests that in some sense we are observing a special kind of universe; if the universe were different, we could never have come to exist. Comment on how this fits with the Copernican tradition described in the previous exercises.
Penzias and Wilson’s discovery of the Cosmic Microwave Background (CMB) is a nice example of scientific serendipity—something that is found by chance but turns out to have a positive outcome. What were they looking for and what did they discover?
Construct a timeline for the universe and indicate when various significant events occurred, from the beginning of the expansion to the formation of the Sun to the appearance of humans on Earth.
Figuring for Yourself
Suppose the Hubble constant were not 22 but 33 km/s per million light-years. Then what would the critical density be?
Assume that the average galaxy contains [latex]10^{11} M_{\text{Sun}}[/latex] and that the average distance between galaxies is 10 million light-years. Calculate the average density of matter (mass per unit volume) in galaxies. What fraction is this of the critical density we calculated in the chapter?
The CMB contains roughly 400 million photons per m^3. The energy of each photon depends on its wavelength. Calculate the typical wavelength of a CMB photon. Hint: The CMB is blackbody radiation at a temperature of 2.73 K. According to Wien's law, the peak wavelength in nanometers is given by [latex]{{\lambda}}_{\text{max}}=\frac{3\times {10}^{6}}{T}[/latex]. Calculate the wavelength at which the CMB is a maximum and, to make the units consistent, convert this wavelength from nanometers to meters.
Calculate the energy of a typical photon. Assume for this approximate calculation that each photon has the wavelength calculated in the exercises. The energy of a photon is given by [latex]E=\frac{hc}{{\lambda}}[/latex], where h is Planck’s constant and is equal to 6.626 × 10–34 J × s, c is the speed of light in m/s, and λ is the wavelength in m.
Continuing the thinking in Question 3 and Question 4, calculate the energy in a cubic meter of space: multiply the energy per photon calculated in Question 4 by the number of photons per cubic meter given above.
Continuing the thinking in the last three exercises, convert this energy to an equivalent in mass: use Einstein's equation E = mc2. Hint: Divide the energy per m^3 calculated in Question 5 by the speed of light squared. Check your units; you should have an answer in kg/m^3. Now compare this answer with the critical density. Your answer should be several powers of 10 smaller than the critical density. In other words, you have found for yourself that the contribution of the CMB photons to the overall density of the universe is much, much smaller than the contribution made by stars and galaxies.
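A short Python sketch of this chain of calculations (constants rounded; the photon count of 400 million per cubic meter is the one given in Question 3):

h = 6.626e-34       # Planck's constant, J s
c = 2.998e8         # speed of light, m/s
T = 2.73            # CMB temperature, K
n = 4.0e8           # CMB photons per cubic meter

lam = (3.0e6 / T) * 1e-9     # Wien's law, nm converted to m (about 1.1 mm)
E = h * c / lam              # energy per photon, J (about 1.8e-22 J)
u = n * E                    # energy density, J/m^3
rho = u / c**2               # equivalent mass density, kg/m^3
print(lam, E, u, rho)        # rho is of order 1e-30 kg/m^3, far below critical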
There is still some uncertainty in the Hubble constant. (a) Current estimates range from about 19.9 km/s per million light-years to 23 km/s per million light-years. Assume that the Hubble constant has been constant since the Big Bang. What is the possible range in the ages of the universe? Use the equation in the text, [latex]{T}_{0}=\frac{1}{H}[/latex], and make sure you use consistent units. (b) Twenty years ago, estimates for the Hubble constant ranged from 50 to 100 km/s per Mpc. What are the possible ages for the universe from those values? Can you rule out some of these possibilities on the basis of other evidence?
It is possible to derive the age of the universe given the value of the Hubble constant and the distance to a galaxy, again with the assumption that the value of the Hubble constant has not changed since the Big Bang. Consider a galaxy at a distance of 400 million light-years receding from us at a velocity, v. If the Hubble constant is 20 km/s per million light-years, what is its velocity? How long ago was that galaxy right next door to our own Galaxy if it has always been receding at its present rate? Express your answer in years. Since the universe began when all galaxies were very close together, this number is a rough estimate for the age of the universe.
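A sketch of the corresponding arithmetic for the Hubble time (unit conversion is the only subtlety; one million light-years is about 9.46e18 km):

SEC_PER_YEAR = 3.156e7
KM_PER_MLY = 9.46e18            # kilometres in one million light-years

for H in (19.9, 22.0, 23.0):    # km/s per million light-years
    T0 = KM_PER_MLY / H         # T0 = 1/H, in seconds once units cancel
    print(H, "->", T0 / SEC_PER_YEAR / 1e9, "Gyr")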
There are many ways to type a pipe. You could use \$|\$ ($|$), \$\vert\$ ($\vert$), \$\mid\$ ($\mid$), or just a plain | (not surrounded by dollar signs). You could also use vmatrix to indicate matrix determinants.
I wanted to know when it is appropriate to use each type of pipe on Mathematics Stack Exchange. For example, pipes can be used in the following cases:
To indicate that one integer is a factor (or divisor) of another (e.g. $2|4$)
To indicate conditions in set notation (e.g. $Dom(\sqrt{x}) = \{x \in \mathbb{R} \mid x \ge 0\}$)
To indicate absolute value (e.g. $|-2019| = |2019| = 2019$)
To indicate the cardinality of a set (e.g. $|\emptyset|=0$)
To indicate the order of an element of a group (e.g. $\forall x \in K_4 ((x=e) \lor (|x|=2))$, where $K_4$ is the Klein four-group)
To indicate the determinant of a square matrix (e.g. $\begin{vmatrix} 2 & 3\\5 & 7 \end{vmatrix}=-1$)
There is also of course the double pipe symbol ($||$), which is used for logical or in programming, concatenation, and parallel lines; and should not be confused with the number eleven.
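For reference, a small LaTeX snippet contrasting the usual choices (spacing is the main difference between them; this is a conventional summary, not an official site policy):

% divisibility: \mid gets relation spacing, a bare | does not
$a \mid b$  vs.  $a | b$
% set-builder notation
$\{\, x \in \mathbb{R} \mid x \ge 0 \,\}$
% absolute value and cardinality: \lvert...\rvert space correctly (amsmath)
$\lvert -2019 \rvert$, $\lvert \emptyset \rvert$
% determinant of a matrix (amsmath vmatrix environment)
$\begin{vmatrix} 2 & 3 \\ 5 & 7 \end{vmatrix}$
% parallel lines and norms
$\ell_1 \parallel \ell_2$, $\lVert v \rVert$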
I'm trying to solve the following differential equation:
$$J^2\frac{1}{2^{q-1}}\operatorname{sgn}(\tau)e^{g(\tau)}=\partial_\tau^2\Big(\frac{1}{2q}\operatorname{sgn}(\tau)g(\tau)\Big).$$
Here $J^2$ and $q$ are constants and I want to solve for $e^{g(\tau)}$. I encountered this equation in a paper by Gábor Sárosi et al. called "AdS$_2$ holography and the SYK model" (p. 38), and in "Comments on the Sachdev-Ye-Kitaev model" (p. 12) by Juan Maldacena and Douglas Stanford. According to them, the general solution to this differential equation is given by:
$$e^{g(\tau)} =\frac{c_1^2}{\mathcal{J}^2}\frac{1}{\sin^2(c_1(|\tau|+c_2))}$$ where $$\mathcal{J}=J\sqrt{\frac{q}{2^{q-1}}}$$
I am unable to reach the same result unfortunately. What I tried to do is solve the equation for $$|\tau|>0$$ so that I lose the sgn function (this could potentially be the problem), since I'm not sure how to deal with it otherwise. I get:
$$\frac{d^2g(\tau)}{d\tau^2}=J^2\frac{q}{2^{q-2}}e^{g(\tau)}$$
It would be nice if we could make this a first order differential equation so multiplying both sides by $$\frac{dg(\tau)}{d\tau}$$ and integrating w.r.t. $\tau$ gives:
$$\int\frac{dg(\tau)}{d\tau}\frac{d^2g(\tau)}{d\tau^2}d\tau=J^2\frac{q}{2^{q-2}}\int e^{g(\tau)}\frac{dg(\tau)}{d\tau}d\tau\\$$
$$\frac{1}{2}\Big(\frac{dg(\tau)}{d\tau}\Big)^2=J^2\frac{q}{2^{q-2}}(e^{g(\tau)}+c_1)$$
Here I used integration by parts to rewrite the lhs. Rearranging a bit gives:
$$\frac{dg(\tau)}{d\tau}=J\sqrt{\frac{q}{2^{q-3}}}\sqrt{(e^{g(\tau)}+c_1)}$$
Now that the equation is a separable first order differential equation we can integrate to find the general solution:
$$\int\frac{dg(\tau)}{\sqrt{(e^{g(\tau)}+c_1)}} =J\sqrt{\frac{q}{2^{q-3}}}\int d\tau$$
$$-\frac{2}{\sqrt{c_1}} \operatorname{artanh}\Big(\frac{\sqrt{e^{g(\tau)}+c_1}}{\sqrt{c_1}}\Big) =J\sqrt{\frac{q}{2^{q-3}}}(\tau+c_2)$$
So that:
$$e^{g(\tau)} =c_1\bigg(\tanh^2\Big(\mathcal{J}\sqrt{c_1}(\tau+c_2)\Big)-1\bigg)$$ where again $$\mathcal{J}=J\sqrt{\frac{q}{2^{q-1}}}$$
Obviously this is not the same as the general solution they state in their paper. Hopefully someone can help me see where I go wrong.
If I check the solution they give in the paper $\Big(e^{g(\tau)} =\frac{c_1^2}{\mathcal{J}^2}\frac{1}{\sin^2(c_1(|\tau|+c_2))}\Big)$ it does work out; note that $J^2\frac{q}{2^{q-2}} = 2\mathcal{J}^2$, so the equation to check is
$$\frac{d^2g(\tau)}{d\tau^2}=2\mathcal{J}^2e^{g(\tau)}$$
$$\frac{2c_1^2}{\sin^2(c_1(|\tau|+c_2))}=\frac{2c_1^2}{\sin^2(c_1(|\tau|+c_2))}$$
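A quick symbolic check of this (my addition, restricted to $\tau>0$ so that $|\tau|=\tau$; it should print 0):

import sympy as sp

tau, c1, c2, Jcal = sp.symbols('tau c1 c2 Jcal', positive=True)

# claimed solution, written for tau > 0
g = sp.log(c1**2 / (Jcal**2 * sp.sin(c1 * (tau + c2))**2))

# residual of g'' = 2 * Jcal^2 * exp(g)
residual = sp.diff(g, tau, 2) - 2 * Jcal**2 * sp.exp(g)
print(sp.simplify(residual))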
But I can think of no way of finding the constants of integration ($c_1$ and $c_2$) using boundary conditions $$g(0)=g(\beta)=0$$
Equivalence of Definitions of Compact Topological Subspace

Theorem
Let $T = \left({S, \tau}\right)$ be a topological space.
Let $T_H = \left({H, \tau_H}\right)$ be a topological subspace of $T$, where $H \subseteq S$.
Proof 1 implies 2
Suppose $T_H$ is compact in the sense of Definition 1.
Thus let $\mathcal F' = \left\{{U_1 \cap H, U_2 \cap H, \ldots, U_n \cap H}\right\}$ where $U_1, U_2, \ldots, U_n \in \mathcal C$.
It follows that $\mathcal F = \left\{{U_1, U_2, \ldots, U_n}\right\}$ is a finite subcover of $H$.
That is, $T_H$ is compact in the sense of Definition 2.
$\Box$
2 implies 1
Suppose $T_H$ is compact in the sense of Definition 2.
By definition of the subspace topology, each $V \in \mathcal C$ is of the form $U \cap H$ for some $U \in \tau$.
Thus $\mathcal F' = \left\{{U_1 \cap H, U_2 \cap H, \ldots, U_n \cap H}\right\}$ is also a finite subcover of $H$.
That is, $T_H$ is compact in the sense of Definition 1.
$\blacksquare$
just implement the Modified Bessel function. it's easy.
i always like my window definitions centered about zero, since pretty much all of them are even symmetry.
i'll do this in discrete-time, but it's essentially the same thing in continuous-time.
Kaiser window:
$$ w[n] \triangleq \begin{cases} \frac{1}{I_0(\beta)} I_0\left(\beta \sqrt{1 - \left(\frac{n}{M/2}\right)^2} \right) \quad \quad & |n| \le M/2 \\0 & |n|>M/2 \\\end{cases} $$
$I_0(x)$ is the 0th-order modified Bessel function of the 1st kind. $M+1$ is the number of non-zero samples or FIR taps (the FIR filter order is $M$ and, in my centered and symmetrical case, must be even). $\beta$ is a
"shape parameter" and O&S recommend this heuristic:
$$ \beta = \begin{cases} 0.1102 \cdot (A-8.7) & A>50 \\0.5842 \cdot (A-21)^{2/5} + 0.07886 \cdot (A-21) \quad & 21 \le A \le 50 \\0.0 & A<21 \\\end{cases}$$
$$ M = 2 \left\lceil \frac{A-8}{4.57 \cdot \Delta\omega} \right\rceil $$
$A$ is the desired stopband attenuation in dB and $\Delta\omega$ is the desired width of the transition band in normalized angular frequency.
finally, the Bessel is evaluated as:
$$ I_0(x) = 1 \ + \ \lim_{K \to \infty} \ \sum\limits_{k=1}^{K} \left(\frac{x^2}{4}\right)^{k} (k!)^{-2} $$
when you evaluate this with a computer, pick a $K$ decently large (my guess is that $K=32$ is good enough) and evaluate the summation starting with $k=K$ and work it backwards to $k=1$ to keep numerical accuracy. you might want to use Horner's method.
$$ I_0(x) \approx 1 + x^2\left( \tfrac{1}{(1!)^2 \, 4^1} + x^2\left(\tfrac{1}{(2!)^2 \, 4^2} + x^2\left(... + \, x^2\left(\tfrac{1}{((K-1)!)^2 \, 4^{K-1}} + x^2 \tfrac{1}{(K!)^2 \, 4^K} \right) \right) \right) \right) $$
you can evaluate all of the $(k!)^{-2}$ in advance with a short table.
Alright, someone made me do some work. This took about 45 minutes to code up and debug. So here is my MATLAB code for implementing the 0th-order Modified Bessel function of the first kind (which is $I_0(x)$ above):
function y = mybessel(x)
%
% Computes the 0th-order Modified Bessel function of the first kind
%
K = 32;
bessel_coef = zeros(1,K);
kfac = 1;
two_to_the_k = 1;
for k = 1:K
kfac = kfac * k;
two_to_the_k = two_to_the_k * 2;
bessel_coef(k) = 1/(kfac*two_to_the_k)^2; % compute power series coefficients in advance
end
x = x.^2;
y = x .* bessel_coef(K);
for k = K-1:-1:1
y = x .* (bessel_coef(k) + y); % Horner's method
end
y = 1 + y;
end
and here is the test code:
x = linspace(-16, 16, 32*4096+1);
I_0 = besseli(0, x); % MATLAB's modified bessel
y = mybessel(x); % my bessel
figure(1)
plot(x, I_0)
hold on
plot(x, y)
hold off
figure(2)
plot(x, y - I_0) % plot error
with results and error plots (figures omitted here):
for $|x| \le 16$, the relative error is less than $10^{-15}$. the error increases with increasing $|x|$.
Are two signals the same if their auto-convolution functions are the same?
Almost. Look at the autoconvolution in the frequency domain where the autoconvolution of $x$ (with itself) gives us $(X(f))^2$ in the frequency domain while the autoconvolution of $-x$ (with itself) gives us $(-X(f))^2 = (X(f))^2$. So, given an autoconvolution function, there are two (very related) signals $x$ and $-x$ that have the same autoconvolution function.
Are two signals the same if their auto-correlation functions are the same?
Not quite. Now we are given $|X(f)|^2$ in the frequency domain and there are many different factorizations possible. For example, if $y(t)$ is a signal such that values taken in by its Fourier transform always lie on the unit circle in the complex plane (for every $f$, $|Y(f)|=1$) then $|X(f)Y(f)|^2 = |X(f)|^2$ and so $x\star y$ has the same auto-correlation function as $x$. Note that $x(t)$ and $x(t-\tau)$ (which is a
delayed version of $x(t)$) have the same autocorrelation function ($Y(f)$ happens to be $\exp(-j2\pi f \tau)$ here). Another factorization replaces $X(f)$ by $X^*(f)$ which tells us that $x(t)$ and $x(-t)$ (which is just $x(t)$ running backwards in time) have the same autocorrelation function.
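A quick numerical illustration of both answers (numpy; the signal, the delay of 17 samples, and the FFT length are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# autoconvolution of x and of -x coincide: (X(f))^2 == (-X(f))^2
print(np.allclose(np.convolve(x, x), np.convolve(-x, -x)))   # True

# |X(f)|^2 is blind to delay: compare a signal with a circularly
# delayed copy (circular shift keeps the comparison exact)
X  = np.fft.fft(x, 256)
Xd = np.fft.fft(np.roll(np.pad(x, (0, 192)), 17))
print(np.allclose(np.abs(X)**2, np.abs(Xd)**2))              # True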
Let us for simplicity discuss RHF formalism. For $2n$-electron system we have $n$ Hartree-Fock equations written for $n$ spatial orbitals $\{ \phi_{k} \}_{k=1}^{n}$ $$ \newcommand{\mat}[1]{\boldsymbol{\mathbf{#1}}} $$ \begin{equation} \hat{F}(1) \phi_{k}(1) = \varepsilon_{k} \phi_{k}(1) \, , \quad k = 1, 2, \dotsc, n \, . \end{equation} Once we introduce finite basis $\{ \chi_{q} \}_{q=1}^{m}$ and express spatial orbitals as a linear combination of basis functions $\chi_{q}$ \begin{equation} \phi_{k}(1) = \sum\limits_{q=1}^{m} c_{qk} \chi_{q}(1) \, , \quad k = 1, 2, \dotsc, n \, . \end{equation} we end up with $n$ Roothaan–Hall equations \begin{equation} \sum\limits_{q=1}^{m} F_{pq} c_{qk} = \varepsilon_{k} \sum\limits_{q=1}^{m} S_{pq} c_{qk} \, , \quad k = 1, 2, \dotsc, n \, , \end{equation} which can be rewritten in the following matrix form \begin{equation} \mat{F} \mat{c}_{k} = \varepsilon_{k} \mat{S} \mat{c}_{k} \quad k = 1, 2, \dotsc, n \, . \end{equation} The Fock matrix $\mat{F}$ and the overlap matrix $\mat{S}$ are both $m \times m$ square matrices, $\mat{c}_{k}$ is a column $m \times 1$ matrix, $\varepsilon_{k}$ is just a scalar value.
We can then collect all $n$ $\mat{c}_{k}$ column $m \times 1$ matrices into one $m \times n$ matrix $\mat{C}$ and all $n$ values $\varepsilon_{k}$ into an $n \times n$ square matrix $\mat{\varepsilon}$ \begin{equation} \mat{F} \mat{C} = \mat{S} \mat{C} \mat{\varepsilon} \, . \end{equation}
In practice, however, we extend both $\mat{C}$ and $\mat{\varepsilon}$ to $m \times m$ matrices from $m \times n$ and $n \times n$ respectively, which results in having $m-n$ virtual (unoccupied) orbitals.
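One practical reason is that the standard generalized-eigenproblem solvers hand you all $m$ solutions at once; here is a minimal sketch (random stand-ins for $\mat{F}$ and $\mat{S}$, not a real Fock build):

import numpy as np
from scipy.linalg import eigh

m, n = 6, 2                      # basis size, number of occupied orbitals
rng = np.random.default_rng(1)
A = rng.standard_normal((m, m)); F = (A + A.T) / 2            # symmetric "Fock"
B = rng.standard_normal((m, m)); S = B @ B.T + m * np.eye(m)  # positive-definite "overlap"

eps, C = eigh(F, S)              # solves F C = S C diag(eps)
print(C.shape)                   # (m, m): all m orbitals come out
D = 2 * C[:, :n] @ C[:, :n].T    # only the n occupied columns enter the density

Only the first $n$ columns are needed for the density, but the remaining $m-n$ columns come for free from the diagonalization and are kept as the virtual orbitals.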
Taking into account that virtual orbitals are even more unphysical than their occupied counterparts, the question is: what is the point of such an extension of $\mat{C}$ and $\mat{\varepsilon}$? Why do we not just leave them of $m \times n$ and $n \times n$ sizes respectively?
Adaptive nonparametric estimation for compound Poisson processes robust to the discrete-observation scheme
Friday, 1 June 2018,
9:30 - 10:30
Compound Poisson processes (CPPs) are the textbook example of pure jump Lévy processes (LPs). They have two defining parameters: the jump distribution $\mu$ and the intensity $\lambda$. A sample path is simply a staircase where the step size is $\mu$-distributed and the time between jumps follows an exponential distribution with parameter $\lambda$. Therefore, CPPs provide a simple, yet fundamental, model for random shocks in a system; the model and its generalisations are applied in a myriad of settings within the natural sciences, engineering and economics. In most of these, the underlying CPP is not perfectly observed: only $n$ discrete observations, one every $\Delta>0$ units of time, are available. Hence, the process may jump several times between two observations and we are effectively observing a random variable corrupted by a sum of a random number of copies of itself. Consequently, estimating the Lévy distribution $\lambda\mu$, or its density $\nu$ if it exists, is a non-linear statistical inverse problem.
In the last decade, this problem and its generalisation within LPs have attracted much attention (cf. [2]). Existing literature can be roughly split into high-frequency observations, $\Delta=\Delta_n\to 0$, and low-frequency, $\Delta$ fixed, and both regimes use different techniques to build estimators. We will present our recent results in [1], where we show that an estimator of $\nu$ constructed using the spectral approach (generally used in the second setting) is robust to both observation regimes and, under minimal tail assumptions, is minimax-optimal without knowledge of the Nikolskii-regularity of $\nu$ for the losses $L^p(\mathbb{R})$, $p\in[1,\infty]$. This is particularly novel because all existing results are shown either for $p=2$ or in settings where some $L^2$ structure can be exploited. Adaptive results are sparse and use model selection techniques that are especially well-suited for the $L^2$ setting. Instead, we use Lepskii's method and, to do so, show new exponential-concentration inequalities. This includes one for the supremum of the fluctuations of the empirical characteristic function from which it follows that, up to logarithms, it concentrates at the parametric rate with sample size $T_{\lambda}:=\lambda \Delta n$; note that $T_{\lambda}$ is the expected number of jumps the CPP gives in $[0,\Delta n]$ and, thus, it is the natural sample size in this context.
References:
[1] Coca, A. J. (2018) Adaptive nonparametric estimation for compound Poisson processes robust to the discrete-observation scheme, ArXiv preprint, https://arxiv.org/abs/1803.09849
[2] Belomestny, D., Comte, F., Genon-Catalot, V., Masuda, H., Reiß, M. (2015) Lévy Matters IV, Springer. |
In Jackson's Classical Electrodynamics he re-expresses a volume integral of a vector in terms of a moment-like divergence:
$$\int \mathbf{J} d^3 x = - \int \mathbf{x} ( \boldsymbol{\nabla} \cdot \mathbf{J} ) d^3 x$$
He calls this change "integration by parts". If this is integration by parts, there must be some form of product rule (where one of the terms is zero on the boundary), but I can't figure out what that rule would be. I initially thought that the expansion of
$$\boldsymbol{\nabla} (\mathbf{x} \cdot \mathbf{J})$$
might have the structure I was looking for (i.e. something like $\mathbf{x} \boldsymbol{\nabla} \cdot \mathbf{J}+\mathbf{J} \boldsymbol{\nabla} \cdot \mathbf{x}$), however
$$\boldsymbol{\nabla} (\mathbf{x} \cdot \mathbf{J}) = \mathbf{x} \cdot \boldsymbol{\nabla} \mathbf{J} +\mathbf{J} \cdot \boldsymbol{\nabla} \mathbf{x} + \mathbf{x} \times ( \boldsymbol{\nabla} \times \mathbf{J} ) = \mathbf{J} + \sum_a x_a \boldsymbol{\nabla} J_a. $$
I tried a few other gradients of various vector products (including $\boldsymbol{\nabla} \times ( \mathbf{x} \times \mathbf{J} )$), but wasn't able to figure out one that justifies what the author did with this integral. |
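For what it's worth, one identity that does the job is the product rule for a scalar field times a vector field, applied componentwise with the scalar $x_i$: $$\boldsymbol{\nabla} \cdot (x_i \mathbf{J}) = (\boldsymbol{\nabla} x_i) \cdot \mathbf{J} + x_i (\boldsymbol{\nabla} \cdot \mathbf{J}) = J_i + x_i (\boldsymbol{\nabla} \cdot \mathbf{J}) \, .$$ Integrating over all space and dropping the surface term $\oint x_i \, \mathbf{J} \cdot d\mathbf{a}$ (which vanishes for a localized current distribution) gives $\int J_i \, d^3 x = -\int x_i (\boldsymbol{\nabla} \cdot \mathbf{J}) \, d^3 x$ for each component $i$, which is exactly Jackson's identity.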
If you want to introduce a single universal quantifier at the end, then begin by eliminating both universals in each premise to the same arbitrary witness.
$\begin{array}{|l}\forall x~\forall y ~Q(x,y)\\\forall x~\forall y ~R(x,y)\\\hline \begin{array}{|l}\boxed c\\\hline \forall y~Q(c,y)\\Q(c,c)\\ \forall y~ R(c,y)\\ R(c,c)\\~~\vdots\\ P(c,c)\end{array}\\\forall x~P(x,x)\end{array}$
Now which premises you choose and what argument you make with them is left up to you, but as a hint: you only need two of the four.
@GrahamKemp Not finished yet. I tried introducing a single variable, and it did work (picture above). But I was stuck on how to end the proof, because Fitch states the last sentence is of the wrong form unless I delete the premise in the subproof. But if I do so, I have no idea how to derive Indiff(a,a).
The witness needs to be arbitrary for the required universal introduction; no assumption must be made with it. You then use universal elimination, and so derive $\text{WeakPref}(c,c)\land\text{WeakPref}(c,c)$ from $\text{WeakPref}(c,c)\lor\text{WeakPref}(c,c)$ to prepare for the biconditional elimination.
$\def\oto{\leftrightarrow}\begin{array}{|l}\forall x~\forall y ~(\text{WeakPref}(x,y)\lor\text{WeakPref}(y,x))\\\forall x~\forall y~(\text{StrongPref}(x,y)\to \lnot\text{WeakPref}(y,x))\\\forall x~\forall y ~(\text{Indiff}(x,y)\oto(\text{WeakPref}(x,y)\land\text{WeakPref}(y,x)))\\\hline \begin{array}{|l}\boxed c\\\hline \forall y~(\text{WeakPref}(c,y)\lor\text{WeakPref}(y,c))\\\text{WeakPref}(c,c)\lor\text{WeakPref}(c,c)\\\forall y~(\text{StrongPref}(c,y)\to \lnot\text{WeakPref}(y,c))\\\text{StrongPref}(c,c)\to\lnot\text{WeakPref}(c,c)\\ \forall y~ (\text{Indiff}(c,y)\oto(\text{WeakPref}(c,y)\land \text{WeakPref}(y,c)))\\ \text{Indiff}(c,c)\oto(\text{WeakPref}(c,c)\land\text{WeakPref}(c,c))\\~~\vdots\\ \text{WeakPref}(c,c)\land\text{WeakPref}(c,c)\\\text{Indiff}(c,c)\end{array}\\\forall x~\text{Indiff}(x,x)\end{array}$ |
Working Group 6 Summary: Spin and 3D Structure / Eyser, Oleg (Brookhaven) ; Parsamyan, Bakur (CERN ; INFN, Turin ; Turin U.) ; Rogers, Ted (Old Dominion U.) The spin and 3D structure session of the DIS2019 conference focused on recent efforts to understand nucleon structure using collinear factorization theorems, transverse momentum-dependent correlation functions (TMDs), generalized parton distributions (GPDs) and similar objects. A large amount of progress in both theoretical and experimental directions was reported. We summarize some of the highlights here. SISSA, 2019 - 15 p. - Published in : PoS DIS2019 (2019) 284 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.284
Future SIDIS measurements with a transversely polarized deuteron target at COMPASS / Martin, Anna (Trieste U. ; INFN, Trieste)/Compass Since 2005, measurements of Collins and Sivers asymmetries from the HERMES and COMPASS experiments have shown that both the transversity and the Sivers PDFs are different from zero and measurable in semi-inclusive DIS on transversely polarised targets. Most of the data were collected on proton targets; only small event samples were collected in the early phase of the COMPASS experiment on a deuteron (6LiD) target and, more recently, at JLab on 3He. [...] SISSA, 2019 - 6 p. - Published in : PoS DIS2019 (2019) 267 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.267
Search for new physics in CP violation with beauty and charm decays at LHCb / Bartolini, Matteo (Genoa U. ; INFN, Genoa)/LHCb LHCb is one of the four big experiments operating at the LHC and is mainly dedicated to measurements of $C\!P$ violation and to the search for new physics in the decays of rare hadrons containing heavy quarks. The LHCb collaboration has recently published a result which shows for the first time a compelling 5.3 $\sigma$ evidence of $C\!P$ violation in the two-body meson decays $D^{0}\rightarrow K^{+}K^{-}$ and $D^{0}\rightarrow \pi^{+}\pi^{-}$. $C\!P$ violation in the Cabibbo-suppressed decays $D^{+}_{s} \rightarrow K^{0}_{S}\pi^{+}$, $D^{+} \rightarrow K^{0}_{S}K^{+}$, $D^{+} \rightarrow \phi \pi^{+}$ is expected to be small ($\sim10^{-3}$) due to interference between tree and penguin diagrams and is thus sensitive to contributions from beyond the Standard Model (BSM). [...] SISSA, 2019 - 6 p. - Published in : PoS DIS2019 (2019) 250 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.250
Heavy flavour spectroscopy and exotic states at the LHC / Cardinale, Roberta (Genoa U. ; INFN, Genoa)/LHCb The LHC, producing huge amounts of $b\bar{b}$ and $c\bar c$ pairs, is the ideal place for spectroscopy studies, which are fundamental as tests and inputs for QCD models. Many of the recently observed states, which do not fit the standard picture, still lack an interpretation. [...] SISSA, 2019 - 6 p. - Published in : PoS DIS2019 (2019) 146 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.146
Z boson production in proton-lead collisions accounting for transverse momenta of initial partons / Kutak, Krzysztof (Cracow, INP) ; Blanco, Etienne (Cracow, INP) ; Jung, Hannes (CERN ; DESY) ; Kusina, Aleksander (Cracow, INP) ; van Hameren, Andreas (Cracow, INP) We report on a recent calculation of inclusive Z boson production in proton-lead collisions at the LHC taking into account the transverse momenta of the initial partons [1]. In the calculation the framework of $k_T$-factorization has been used. [...] SISSA, 2019 - 5 p. - Published in : PoS DIS2019 (2019) 126 Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019, pp.126 |
I am stuck with the following task: Show that the Decision Problem "Vertex Cover" is polynomial-time reducible to the Decision Problem "Binary Integer programming".
I have the feeling that there must be a very easy way.
My approach until now:
We can convert any decision problem to an optimization problem in polynomial time. We can reduce the optimization version of vertex cover to an integer linear program in polynomial time (https://en.wikipedia.org/wiki/Integer_programming#Proof_of_NP-hardness). We can convert any optimization problem to a decision problem in polynomial time.
Is this approach ok? And/Or is there a better/easier way?
kind regards James
EDIT: I think the following solution should work
Let $a_{i,j}$ be $1$ if vertex $j$ is incident to edge $i$, and $0$ otherwise. Let $r=|E|$ and $n=|V|$.
$$A = \begin{bmatrix} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{r,1} & a_{r,2} & \dots & a_{r,n} \\ -1 & -1 & \dots & -1 \\ \end{bmatrix}$$
Let $b_i = 1$ for all $i \in \{1,\dots,r\}$.
$$b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_r \\ -k \\ \end{bmatrix}$$ |
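For concreteness, here is a small Java sketch that builds this $A$ and $b$ from an edge list (the graph and $k$ below are a made-up example): row $i$ encodes $x_u + x_v \ge 1$ for edge $i$, and the last row encodes $-\sum_j x_j \ge -k$, i.e. a cover of size at most $k$.

```java
import java.util.Arrays;

public class VertexCoverToBip {
    // Builds A and b for "exists binary x with A x >= b" from the edges of G and bound k.
    public static void main(String[] args) {
        int n = 4;                                   // |V|, hypothetical instance
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}};    // |E| = r = 3
        int k = 2;                                   // cover size bound
        int r = edges.length;

        int[][] a = new int[r + 1][n];
        int[] b = new int[r + 1];
        for (int i = 0; i < r; i++) {
            a[i][edges[i][0]] = 1;   // x_u + x_v >= 1 for every edge {u, v}
            a[i][edges[i][1]] = 1;
            b[i] = 1;
        }
        Arrays.fill(a[r], -1);       // -sum_j x_j >= -k, i.e. at most k vertices chosen
        b[r] = -k;

        for (int[] row : a) System.out.println(Arrays.toString(row));
        System.out.println("b = " + Arrays.toString(b));
    }
}
```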
Let $(G, +)$ be a commutative group. The endomorphism set $\text{End}(G)$ of all group endomorphisms $f:G\to G$ is a ring, where $+$ is taken pointwise and the multiplication is the composition of endomorphisms.
Suppose $f\in \text{End}(G)$ is not bijective (i.e. not a unit in the ring) and not a zero-divisor. Can it be written as a product of irreducible elements?
(We call $g\in\text{End}(G)$
reducible if there are non-units $h_1,h_2\in\text{End}(G)$ such that $g = h_1\circ h_2$; and we call $g$ irreducible otherwise.) |
No, not at all.
Consider the following example which is superficially closely related to Simon's algorithm and Shor's algorithm. Suppose you have two functions f and g, from ${\bf Z}_{2^N}$ to $\{0,1\}^m$ with $m\geqslant N$. Suppose they are random oracles with the property that $f(x+s \text{ mod } 2^N) = g(x)$ for some random s picked uniformly from ${\bf Z}_{2^N}$, and that if $x\neq y$, $f(x)\neq f(y)$. Otherwise, the functions are picked uniformly at random.
Can a quantum computer figure out s in polynomial time? Based on the superficial similarity with Simon's and Shor's algorithms, you might think so. Have three registers: the first a qubit, the second N qubits specifying an element of ${\bf Z}_{2^N}$, and the third giving the function value. It's easy to prepare the state $$\frac{1}{2^{(N+1)/2}} \sum_{x\in {\bf Z}_{2^N}} \left[ |0\rangle |x\rangle|f(x)\rangle +|1\rangle |x\rangle |g(x)\rangle \right].$$
Then, take the quantum Fourier transform of the second register, and the Hadamard transform of the first register. Then, measure the values of the first qubit and the second register in the computational basis. Repeat this $cN$ times where c is around 4 or so.
On each measurement, the measured value for the second register, $k$, is uniformly distributed over ${\bf Z}_{2^N}$. With probability $p_0(k,s)\equiv \cos^2(\pi ks/2^N)$, the first register will measure 0, and with probability $p_1(k,s)\equiv \sin^2(\pi ks/2^N)$, it will measure 1.
So, we have the pairs $(r_i,k_i)$. With probability nearly one, this sequence of pairs already contains enough information to figure out the value of $s$. Just evaluate the function $h(x)\equiv \prod_i p_{r_i}(k_i,x)$. $h(s)$ will have the largest value by far, typically exceeding a certain bound with probability nearly 1, while the other values will typically be less than another bound with probability nearly 1.
If $s\neq 0$, this can be found out the moment an $r$ value of 1 shows up. Similarly, if $s$ isn't an integer multiple of $2^{N-a}$, this can be found out after on the order of $2^a$ measurements or so. However, this can also be verified by $2^a$ direct evaluations.
The only problem is there is no known polynomial time quantum algorithm which can extract the value of s given these pairs. So, this algorithm can output a sequence of pairs which can be used to test if a number x is s, or not, but it can't find s by itself.
If there is a quantum algorithm which could, then the closely related problem of finding the seed of the following pseudorandom generator can also likely be solved: $p$ is a prime number $N$ bits long, the seed is $y_0\in {\bf Z}_p$, $y_{i+1}=ay_i +b \bmod p$, and $x_i$ is the last bit of $y_i$. Given the sequence $x_i$ for $i$ up to $cN$, figure out the seed. (Note that $y_i=[y_0+b/(a-1)]a^i-b/(a-1) \bmod p$.) |
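To see the classical post-processing side concretely, here is a small Java sketch (sizes and constants are illustrative) that simulates the $(r_i, k_i)$ pairs for a hidden $s$ and then brute-forces $h(x) \equiv \prod_i p_{r_i}(k_i, x)$ over all of ${\bf Z}_{2^N}$. Note that $x$ and $2^N - x$ score identically under $\cos^2$, so the scan pins down $s$ only up to that reflection; and of course the exhaustive scan is exponential in $N$, which is exactly the missing step the answer points at.

```java
import java.util.Random;

public class HiddenShiftStats {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int nBits = 8, modulus = 1 << nBits;        // Z_{2^N} with N = 8
        int s = rng.nextInt(modulus);               // hidden shift
        int samples = 4 * nBits;                    // c*N pairs with c = 4

        int[] ks = new int[samples], rs = new int[samples];
        for (int i = 0; i < samples; i++) {
            ks[i] = rng.nextInt(modulus);           // k uniform on Z_{2^N}
            double p1 = Math.pow(Math.sin(Math.PI * ks[i] * s / modulus), 2);
            rs[i] = rng.nextDouble() < p1 ? 1 : 0;  // r = 1 with prob sin^2(pi k s / 2^N)
        }

        // h(x) = prod_i p_{r_i}(k_i, x); work in logs to avoid underflow.
        // Because cos^2 is symmetric, x and 2^N - x score identically: accept either.
        int best = -1;
        double bestLog = Double.NEGATIVE_INFINITY;
        for (int x = 0; x < modulus; x++) {
            double logH = 0;
            for (int i = 0; i < samples; i++) {
                double c = Math.cos(Math.PI * ks[i] * x / (double) modulus);
                double p = (rs[i] == 0) ? c * c : 1 - c * c;
                logH += Math.log(Math.max(p, 1e-300));
            }
            if (logH > bestLog) { bestLog = logH; best = x; }
        }
        System.out.println("true s = " + s + ", argmax h = " + best
                + " (or equivalently " + (modulus - best) % modulus + ")");
    }
}
```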
Ok, I need to write a Java algorithm which simulates the SMOOTH function from IDL, where the SMOOTH function is given by $$ R_i = \begin{cases} \displaystyle \frac{1}{w} \sum_{j = 0}^{w-1} A_{i+j-w/2} \quad &\text{if}\;\; \frac{w-1}{2} \leq i \leq N - \frac{w+1}{2} \\[1ex] \displaystyle A_i &\text{otherwise} \end{cases} $$
The problem is I don't understand how that algorithm works. I know there is already a similar post regarding boxcar averaging, but the algorithm seems to be different. What I understand in this equation is that there are two cases (the if statement): the first one computes the windowed average, the second one leaves the boundary values unchanged. In the first case, I think I got the summation notation; it runs from $0$ to $w - 1$. What I don't get is the term inside the summation, $A_{i+j-w/2}$.
The following is the sample data (just a corner of a larger data set) that was calculated using IDL. I used width 5 to calculate this.
0 0.3271947 0.6183698 0.841471 0.9719379 0.3271947 0.4541381 0.6782335 0.8694523 0.98077 0.6183698 0.6782335 0.932708 0.9967949 0.841471 0.8694523 0.932708 0.987766 0.9954079 0.9719379 0.98077 0.9967949 0.9954079 0.9508516 0.8092117

0 0.3271947 0.6183698 0.841471 0.9719379 0.3271947 0.4541381 0.6782335 0.8694523 0.98077 0.6183698 0.6782335 0.8642555 0.8968759 0.841471 0.8694523 0.8642555 0.8920734 0.8822442 0.9719379 0.98077 0.8968759 0.8822441 0.8311055 0.7850659
Please explain to me how this algorithm works.
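For reference, here is a minimal Java sketch of the 1-D case of that formula (for 2-D data like the sample above, the same window average runs over both indices; the test vector below is illustrative): interior points get the plain mean of the $w$ values centred on $i$, and the $w/2$ points at each edge are copied through unchanged, which is why the border of the smoothed block matches the input.

```java
public class Smooth {
    // 1-D boxcar average mimicking IDL's SMOOTH for odd width w (edges copied through).
    static double[] smooth(double[] a, int w) {
        int n = a.length;
        double[] r = a.clone();                 // "otherwise" branch: R_i = A_i at the edges
        int half = w / 2;                       // integer division, w assumed odd
        for (int i = half; i <= n - half - 1; i++) {
            double sum = 0;
            for (int j = 0; j < w; j++) {       // window A[i - w/2] ... A[i + w/2]
                sum += a[i + j - half];
            }
            r[i] = sum / w;
        }
        return r;
    }

    public static void main(String[] args) {
        double[] a = {0, 0.3271947, 0.6183698, 0.841471, 0.9719379};
        for (double v : smooth(a, 5)) System.out.printf("%.7f ", v);
    }
}
```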
Thanks |
I am quite new to this copula idea. In particular, I am confused about the definition of a Gaussian copula. For a copula to be a Gaussian copula, do the marginals have to be Gaussian as well, or can they be of any distribution? From the Wikipedia page it looks like they have to be (http://en.wikipedia.org/wiki/Copula_(probability_theory)#Gaussian_copula), but I thought they needn't be.
No, that's the point of the copula.
Consider a random variable $X$ and its CDF $F$. Since $F$ is just a function, you can apply it to $X$ to obtain a new random variable $W \equiv F{(X)}$. It is always true that $W\sim \operatorname{Uniform}{[0,1]}$ when defined this way, provided $F$ is continuous. (This was actually the content of my very first question here.)
Now think of a vector of random variables $(X_1,\dots,X_d)$ with their respective marginal CDFs $F_1,\dots,F_d$. $\left(F_1{(X_1)},\dots,F_d{(X_d)}\right)$ is just a vector of those $W$'s. It's a vector of random variables that are uniformly distributed on $[0,1]$. Keep in mind that these are marginal distributions; we haven't said anything yet about if and how they might depend on each other.
The Gaussian copula is just a multivariate probability distribution defined on the unit square/cube/hypercube $[0,1]^d$. Using what we demonstrated above, it should be apparent that $\left(F_1{(X_1)},\dots,F_d{(X_d)}\right)$ is a function that maps onto $[0,1]^d$. Therefore any such vector could have a Gaussian copula as its distribution.
So when Wikipedia says the Gaussian copula is "$\Phi_R(\Phi^{-1}{(u_1)},\dots,\Phi^{-1}{(u_d)})$", it doesn't mean that the $U$'s are your data. They can be, but they don't have to be. $U_i$ can be freely defined as $F_i{(X_i)}$ where $X_i$ is one of your data variables and $F_i$ is its CDF. So that distribution is equivalent to $\Phi_R(\Phi^{-1}{(F_1{(X_1)})},\dots,\Phi^{-1}{(F_d{(X_d)})})$. If you wanted standard Gaussian marginals, your distribution would be $\Phi_R(\Phi^{-1}{(\Phi{(X_1)})},\dots,\Phi^{-1}{(\Phi{(X_d)})})$ -- i.e. the copula would reduce to a multivariate normal distribution.
This is all laid out in that same article, in the Mathematical Definition section, but it's pretty terse. |
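Here is a minimal Java sketch of the sampling direction of this construction (using Apache Commons Math for $\Phi$; the correlation $\rho$ and the marginals below are arbitrary illustrative choices): draw correlated standard normals, map them through $\Phi$ to get a draw $(u_1, u_2)$ from the Gaussian copula on $[0,1]^2$, then push each coordinate through whatever inverse marginal CDF you like.

```java
import org.apache.commons.math3.distribution.NormalDistribution;
import java.util.Random;

public class GaussianCopulaDemo {
    public static void main(String[] args) {
        Random rng = new Random(1);
        NormalDistribution phi = new NormalDistribution(); // standard normal: Phi
        double rho = 0.8;                                  // copula correlation (assumed)

        for (int i = 0; i < 5; i++) {
            // Correlated standard normals via a 2-D Cholesky factor.
            double z1 = rng.nextGaussian();
            double z2 = rho * z1 + Math.sqrt(1 - rho * rho) * rng.nextGaussian();

            // (u1, u2) = (Phi(z1), Phi(z2)) is a draw from the Gaussian copula.
            double u1 = phi.cumulativeProbability(z1);
            double u2 = phi.cumulativeProbability(z2);

            // Push through arbitrary inverse marginal CDFs, e.g. Exponential(1)
            // and Uniform[0,1]: dependence is Gaussian, marginals are anything.
            double x1 = -Math.log(1 - u1);
            double x2 = u2;
            System.out.printf("u = (%.3f, %.3f)  ->  x = (%.3f, %.3f)%n", u1, u2, x1, x2);
        }
    }
}
```

The copula fixes the dependence; the marginals are whatever you apply at the last step, Gaussian or not.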
I stumbled upon the equation
$$\sum_{k=1}^\infty \frac{k^2}{k!} = 2\mathrm{e}$$
and was just curious how to deduce the right-hand side of the equation. Which identities could be of use here? Trying to simplify the partial sums to deduce the value of the series itself didn't help much thus far.
Edit:
The only obvious transformation is $$\sum_{k=1}^\infty \frac{k^2}{k!} = \sum_{k=0}^\infty \frac{k+1}{k!}$$ but there was nothing more I came up with. |
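For reference, one standard way to finish from exactly that transformation: $$\sum_{k=0}^\infty \frac{k+1}{k!} = \sum_{k=1}^\infty \frac{k}{k!} + \sum_{k=0}^\infty \frac{1}{k!} = \sum_{k=1}^\infty \frac{1}{(k-1)!} + \mathrm{e} = \mathrm{e} + \mathrm{e} = 2\mathrm{e} \, .$$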
The trick here is to notice that the Witten index for a finite temperature $\beta$ is given by
$$\text{Tr}\left\{(-1)^Fe^{-\beta H}\right\}=\int_{\text{PBC}}\mathcal{D}\phi\mathcal{D}\overline\psi\mathcal{D}\psi\,e^{-S},$$
where the boundary conditions are on a circle of circumference $\beta$. Next, we know that the Witten index is independent of the temperature (it computes the Euler characteristic of the Riemannian manifold), and so we can take the $\beta\to 0$ limit of this expression. In this case, all non-constant modes of the fields $\phi$ and $\psi$ have energy proportional to $1/\beta$, and thus will be exponentially suppressed in the $\beta\to 0$ limit. Thus, the path integral will localize only to those modes which are constant in time, namely
$$\text{Tr}\,(-1)^F\propto\int_{\mathcal{M}}\mathrm{d}\phi\,\sqrt{g}\int\mathrm{d}\overline{\psi}\,\mathrm{d}\psi\,\exp\left(-\frac{\beta}{2}R_{IJKL}\psi^I\overline{\psi}^J\psi^K\overline{\psi}^L\right),$$
where we have traded out our path integral for a standard integral over constant modes only, and the $\sqrt{g}$ term comes from the integral over non-constant modes in the Gaussian limit (a factor of $1/\sqrt{g}$ from the bosonic fields and a factor of $g$ from the fermionic ones). The constant of proportionality can be worked out by being careful with the suppression of non-constant modes and working explicitly with the path integral measure over Fourier components. However, this is quite technical.
Now, as a warm-up, if the manifold is $2$-dimensional, we have
$$\text{Tr}\,(-1)^F\propto\int\mathrm{d}^2\phi\,\sqrt{g}\int\mathrm{d}^2\overline{\psi}\mathrm{d}^2\psi\exp\left(-\frac{\beta}{2}R_{IJKL}\psi^I\overline{\psi}^J\psi^K\overline{\psi}^L\right).$$
I will leave it to you to show that, when you bring the Grassmann coordinates down from the exponential and integrate over them, the result is
$$\text{Tr}\,(-1)^F\propto\int\mathrm{d}^2\phi\,\sqrt{g}\,R,$$
where $R$ is the Ricci scalar. Since $\text{Tr}\,(-1)^F=\chi(M)$ is the Euler characteristic, this is exactly the statement of the Gauss-Bonnet theorem, up to a constant of proportionality ($1/4\pi$). The technique for higher dimensions can be worked out in a similar fashion. |
I'm interested in finite-difference approaches to the incompressible Navier-Stokes equations that can handle complex geometry without the use of an unstructured mesh or a non-Cartesian grid. To be clear, I'm aware of standard approaches, e.g. Chorin's projection method, to solving the Navier-Stokes equations on a rectangular domain, but I'd like to know more about what methodologies exist to extend these techniques to more sophisticated geometries.
To clarify my intent, one particularly notable example of what I'm looking for would be Peskin's Immersed Boundary Method.
See below for a more precise statement of the particular problem I'm interested in.
Consider solving the incompressible Navier-Stokes equations \begin{align*} \rho\left(\mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) &= - \nabla p + \mu\Delta\mathbf{u} + \mathbf{f}\\ \nabla\cdot\mathbf{u} &= 0 \end{align*} with $$\mathbf{f} = (f_0,0,0)$$ on the domain $\Omega = [-1,1]^d \setminus C$ where $$C = \left\{\mathbf{x} \in [-1,1]^d : |\mathbf{x}| < \frac{1}{2}\right\}.$$ The boundary conditions are no-slip (i.e., $\mathbf{u} = 0$) except at $\{x=-1\}$ and $\{x=1\}$, where we enforce a periodic boundary condition. In other words, this is periodic Poiseuille flow around a cylinder.
The challenge here lies entirely in enforcing the no-slip condition on $\partial C$, the boundary of the cylinder. A naive -- and inaccurate -- approach is to simply set $\mathbf{u} = 0$ at grid points inside the cylinder every time step. The Immersed Boundary Method is another option. Simply put, what other techniques are out there? |
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/ψ and ψ(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows selecting events with the same centrality ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
π0 and η meson production in proton-proton collisions at √s=8 TeV
(Springer, 2018-03-26)
An invariant differential cross section measurement of inclusive π^0 and η meson production at mid-rapidity in pp collisions at √s=8 TeV was carried out by the ALICE experiment at the LHC. The spectra of π^0 and η mesons ...
Longitudinal asymmetry and its effect on pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV
(Elsevier, 2018-03-22)
First results on the longitudinal asymmetry and its effect on the pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV at the Large Hadron Collider are obtained with the ALICE detector. The longitudinal ...
Production of deuterons, tritons, 3He nuclei, and their antinuclei in pp collisions at √s=0.9, 2.76, and 7 TeV
(American Physical Society, 2018-02)
Invariant differential yields of deuterons and antideuterons in pp collisions at √s = 0.9, 2.76 and 7 TeV and the yields of tritons, 3He nuclei, and their antinuclei at √s = 7 TeV have been measured with the ALICE ... |
The norm on any normed space is convex and continuous, and it is weakly lower semicontinuous; see also Aliprantis-Border, Infinite Dimensional Analysis: A Hitchhiker's Guide, Lemma 6.22, p. 235.
If $X$ is infinite-dimensional, the weak topology and the norm topology are distinct. Therefore the norm is not weakly upper semicontinuous since this would imply that it is weakly continuous, and consequently the weak topology and the norm topology would have to coincide.
For example, the norm is not weakly sequentially upper semicontinuous in a Hilbert space. An orthonormal sequence $\{e_n\}_{n \in \mathbb{N}}$ converges weakly to zero by Bessel's inequality, but $$0 \lt 1 = \liminf\limits_{n \to \infty} \lVert e_n\rVert = \limsup\limits_{n\to\infty} \lVert e_n\rVert.$$ |
Let $\mathcal{G}$ denote a (stable) tangential structure such as $O$, $SO$, $Spin$, or $Pin^\pm$. Which bordism classes $[M,f]\in\Omega_*^\mathcal{G}(X)$ are represented by an $f:M\rightarrow X$ where the $\mathcal{G}$-manifold $M$ fibers over $S^1$?
A manifold $M$ fibres over $S^1$ with fibre $F$ if and only if it is isomorphic to the mapping torus $$T(h)=F \times [0,1]/\{(x,0) \sim (h(x),1)\vert x \in F\}$$ of an automorphism $h:F \to F$. Mapping tori are particular examples of open books. Walter Neumann's result for $G=SO$ extends to arbitrary $X$ with the signature replaced by the invariant of
Quinn, Frank, Open book decompositions, and the bordism of automorphisms. Topology 18 (1979), no. 1, 55-73
For odd-dimensional $M$ the invariant is 0. For even-dimensional $M$ the invariant is the asymmetric Witt class of the $Z[\pi_1(X)]$-module chain complex $C(\tilde{M})$ with Poincaré duality, where $\tilde{M}$ is the pullback to $M$ of the universal cover $\tilde{X}$ of $X$ - see Chapters 29, 30 of
Ranicki, Andrew, High-dimensional knot theory, Springer Monographs in Mathematics, 1998.
The case $G=SO$ and $X=*$ is considered in
Neumann, Walter D., Fibering over the circle within a bordism class. Math. Ann. 192 (1971), 191–192.
where it is shown that a bordism class fibres over the circle if and only if it has signature zero. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV
(American Physical Society, 2015-06)
We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ... |
My assignment says:
"Determine if the following statement is correct: If $A$ and $A \cup B$ are decidable, then $B$ is decidable."
The solution says:
"Incorrect. If $B = H_0 \subseteq \{0,1\}^*$ is the halting problem with input $\epsilon$ and $A = \{0,1\}^*$, then $A$ and $A \cup B$ are decidable and $B$ is undecidable.
My question is:
How can $A \cup B$ be decidable if $B$ is undecidable? If, for example, $b \in B$, then $b \in A \cup B$, but a Turing machine may not halt on $b$ because $B$ is undecidable; so how can $A \cup B$ still be decidable with $b \in A \cup B$? |
If we convolve 2 signals we get a third signal. What does this third signal represent in relation to the input signals?
There's not particularly any "physical" meaning to the convolution operation. The main use of convolution in engineering is in describing the output of a linear, time-invariant (LTI) system. The input-output behavior of an LTI system can be characterized via its impulse response, and the output of an LTI system for any input signal $x(t)$ can be expressed as the convolution of the input signal with the system's impulse response.
Namely, if the signal $x(t)$ is applied to an LTI system with impulse response $h(t)$, then the output signal is:
$$ y(t) = x(t) * h(t) = \int_{-\infty}^{\infty}x(\tau)h(t - \tau)d\tau $$
Like I said, there's not much of a physical interpretation, but you can think of a convolution qualitatively as "smearing" the energy present in $x(t)$ out in time in some way, dependent upon the shape of the impulse response $h(t)$. At an engineering level (rigorous mathematicians wouldn't approve), you can get some insight by looking more closely at the structure of the integrand itself. You can think of the output $y(t)$ as the sum of an infinite number of copies of the impulse response, each shifted by a slightly different time delay ($\tau$) and scaled according to the value of the input signal at the time corresponding to that delay, $x(\tau)$.
This sort of interpretation is similar to taking discrete-time convolution (discussed in Atul Ingle's answer) to a limit of an infinitesimally-short sample period, which again isn't fully mathematically sound, but makes for a decently intuitive way to visualize the action for a continuous-time system.
A particularly useful intuitive explanation that works well for discrete signals is to think of convolution as a "weighted sum of echoes" or "weighted sum of memories."
For a moment, suppose the input signal to a discrete LTI system with impulse response $h(n)$ is a delta impulse $\delta(n-k)$. The convolution is \begin{eqnarray} y(n) &=& \sum_{m=-\infty}^{\infty} \delta(m-k) h(n-m) \\ &=& h(n-k). \end{eqnarray} This is just an echo (or memory) of the impulse response with a delay of $k$ units.
Now think of an arbitrary input signal $x(n)$ as a sum of weighted $\delta$ functions. Then the output is a weighted sum of delayed versions of h(n).
For example, if $x(n) = \{1, 2, 3\}$, then write $x(n) = \delta(n) + 2 \delta(n-1) + 3 \delta(n-2)$.
The system output is a sum of the echoes $h(n)$, $h(n-1)$ and $h(n-2)$ with appropriate weights 1, 2, and 3, respectively.
So $y(n) = h(n) + 2h(n-1)+3h(n-2)$.
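A minimal Java sketch of this "weighted sum of echoes" picture (the impulse response values below are made up): each input sample $x(m)$ contributes a copy of $h$ delayed by $m$ and scaled by $x(m)$.

```java
import java.util.Arrays;

public class EchoSum {
    // Full discrete convolution: y[n] = sum_m x[m] * h[n - m].
    static double[] conv(double[] x, double[] h) {
        double[] y = new double[x.length + h.length - 1];
        for (int m = 0; m < x.length; m++) {
            for (int j = 0; j < h.length; j++) {
                y[m + j] += x[m] * h[j];   // x[m] scales a copy of h delayed by m
            }
        }
        return y;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3};            // x(n) = delta(n) + 2 delta(n-1) + 3 delta(n-2)
        double[] h = {1, 0.5, 0.25};       // hypothetical impulse response
        System.out.println(Arrays.toString(conv(x, h)));
    }
}
```

With $x = \{1, 2, 3\}$ and $h = \{1, 0.5, 0.25\}$ this prints [1.0, 2.5, 4.25, 2.0, 0.75], i.e. exactly $h(n) + 2h(n-1) + 3h(n-2)$.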
A good intuitive way of understanding convolution is to look at the result of convolution with a point source.
As an example, the 2D convolution of a point with the flawed optics of the Hubble Space Telescope creates the telescope's characteristic point-spread pattern (an image of it appeared here in the original post).
Now imagine what happens if there are two (or more) stars in a picture: you get this pattern twice (or more), centered on each star. The luminosity of the pattern is related to the luminosity of a star. (Note that a star is practically always a point source.)
These patterns are basically the multiplication of the point source with the convoluted pattern, with the result stored at the pixel such that it reproduces the pattern when the resulting picture is viewed in its entirety.
My personal way of visualizing a convolution algorithm is that of a loop over every pixel of the source image. At each pixel, you multiply by the values of the convolution pattern, and you store the results at the pixels whose relative positions correspond to the pattern. Do that for every pixel (and sum the results at every pixel), and you get the result.
Think of this... Imagine a drum you are beating repeatedly to hear the music, right? Your drum stick lands on the membrane the first time and, due to the impact, the membrane vibrates; when you strike it the second time, the vibration due to the first impact has already decayed to some extent. So whatever sound you hear is the current beat plus the sum of the decayed responses of the previous impacts. So if $x(k)$ is the impact force at the $k$-th moment, then the impulse will be force $\times$ impact time,
which is $x(k)\,dk$, where $dk$ is the infinitesimally small time of impact;
and if you are hearing the sound at $t$, then the elapsed time will be $t-k$. Suppose the membrane of the drum has a decay effect, described by a function $h(u)$, where $u$ is the elapsed time, in our case $t-k$; then the response to the impact at $k$ will be $h(t-k)$. So the effect of $x(k)dk$ at time $t$ will be the product of the two, i.e. $x(k)h(t-k)dk$.
So the overall effect of the music we hear will be the integrated effect of all the impacts, from negative infinity to plus infinity. That is what is known as convolution.
You can also think of convolution as smearing/smoothing of one signal by another. If you have a signal with pulses and another of, say, a single square pulse, the result will be the smeared or smoothed-out pulses.
Another example: two square pulses convolved together come out as a flattened trapezoid.
If you take a picture with a camera with the lens defocused, the result is a convolution of the focused image with the point spread function of the defocus.
The probability distribution of the sum of a pair of dice is the convolution of the probability distributions of the individual dice.
Long multiplication is convolution, if you don't carry from one digit to the next. And if you flip one of the numbers. {2, 3, 7} convolved with {9, 4} is {8, 30, 55, 63}
           2   3   7
     ×         4   9
     ---------------
          18  27  63
       8  12  28
     ---------------
       8  30  55  63
(You could finish out the multiplication by carrying the "6" from 63 into the 55, and so on.)
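A tiny Java check of this digit-convolution view: convolve the digit sequences (with one of them flipped), then do the deferred carrying; the carried result collapses back to the ordinary product $237 \times 49 = 11613$.

```java
public class ConvMultiply {
    public static void main(String[] args) {
        int[] a = {2, 3, 7};               // digits of 237, most significant first
        int[] b = {4, 9};                  // digits of 49, flipped from {9, 4}
        int[] c = new int[a.length + b.length - 1];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b.length; j++)
                c[i + j] += a[i] * b[j];   // column sums before carrying: {8, 30, 55, 63}

        // Carrying: interpret c[i] as the coefficient of 10^(len-1-i) and sum up.
        int value = 0;
        for (int digit : c) value = 10 * value + digit;
        System.out.println(java.util.Arrays.toString(c) + " -> " + value);  // 11613
    }
}
```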
In signals and systems, convolution is usually used with input signal and impulse response to get an output signal(third signal). It's easier to see convolution as "weighted sum of past inputs" because past signals also influence current output.
I'm not sure if this is the answer you were looking for, but I made a short video on it recently because it bothered me for a long time: https://www.youtube.com/watch?v=1Y8wHa3fCKs&t=14s Please excuse my English lol.
Another way to look at convolution is to consider that you have two things:
DATA - quantities certainly corrupted by some noise, at random positions (in time, space, you name it); PATTERN - some knowledge of what the information should look like.
The convolution of DATA with (the mirror image of the) PATTERN is another quantity that evaluates, knowing the PATTERN, how likely it is that the pattern is present at each of the positions within the DATA.
Technically, at every position, this quantity is the correlation (hence the mirroring of the PATTERN) and thus measures the log-likelihood under some general assumptions (independent Gaussian noise). The convolution allows one to compute it at each position (in space, time, ...) in parallel.
A convolution is an integral that expresses the amount of overlap of one function (say $g$) as it is shifted over another function (say $f$); this is $g*f$.
The physical meaning is a signal passes through an LTI system! Convolution is defined as flip (one of the signals), shift, multiply and sum. I am going to explain my intuition about each.
1. Why do we flip one of the signals in convolution, and what does it mean?
Because the last point in the representation of the input signal is actually the first one to enter the system (notice the time axis). Convolution is defined for linear time-invariant systems; it is all related to time and how we represent it in math. There are two signals in a convolution: one represents the input signal and one represents the system response. So the first question here is: what is the system-response signal? The system response is the output of the system at a given time $t$ to an input with only one non-zero element at a given time $t$ (an impulse signal shifted by $t$).
2. Why are the signals multiplied point by point?
Again, let's refer to the definition of the system-response signal. As said, it is the signal formed by shifting an impulse function by $t$ and plotting the output for each of these $t$'s. We can also imagine the input signal as a sum of impulse functions with different amplitudes (scales) and phases. OK, so the system response to the input signal at any given time is the impulse response itself, multiplied by (or scaled by) the amplitude of the input at that given time.
3. What does shifting mean?
Having said those (1 & 2), shifting is performed to get the output of the system for any input signal point at a time $t$.
I hope it helps you folks!
[As the question keeps bumping, a short edit]
The output is the joint filtering of the two input signals or functions. In other words, how $x_1$ is smoothed by $x_2$ considered as a filter, and symmetrically how $x_2$ is smoothed by $x_1$ considered as a smoothing function. To some extent, this convolution is a kind of "least common multiple" of two signals (instead of numbers).
A longer "system view" follows: Think of an ideal (Platonist) vision of a point. The head of a pin, very thin, somewhere in the empty space. You can abstract it like a Dirac (discrete or continuous).
Look at it from afar, or like a short-sighted person (as I am), it gets blurred. Now imagine the point is looking at you, too. From the point "point of view", you can be a singularity, too. The point can be short-sighted as well, and the medium between you both (you as a singularity and the point) can be non-transparent.
So, convolution is like A bridge over troubled water. I never thought I could quote Simon and Garfunkel here. Two phenomena trying to seize each other. The result is the blur of one blurred by the other, symmetrically. The blurs don't have to be the same. Your short-sighted blurring combines evenly with the fuzziness of the object. The symmetry is such that if the fuzziness of the object becomes your eye-impairment, and vice-versa, the overall blur remains the same. If one of them is ideal, the other is untouched. If you can see perfectly, you see the exact blurriness of the object. If the object is a perfect point, one gets the exact measure of your short-sightedness.
All that under some linearity assumptions.
The convolution is a complicated operation. In the Fourier domain, you can interpret it as a product of blurs. Or in the $\log$-Fourier domain, it can be interpreted as a sum of blurs.
You can check But Why? Intuitive Mathematics: Convolution
The way you hear sound in a given environment (room, open space etc) is a convolution of audio signal with the impulse response of that environment.
In this case the impulse response represents the characteristics of the environment like audio reflections, delay and speed of audio which varies with temperature.
To rephrase the answers:
For signal processing it is the weighted sum of the past into the present. Typically one term is the voltage history at an input to a filter and the other term is the a filter or some such that has "memory". Of course in video processing all of the adjacent pixels take the place of "past".
For probability it is a cross probability for an event given other events; the chance of getting a 7 in craps is the chance of getting a 6 and 1, a 3 and 4, or a 2 and 5, i.e. the sum over $k$ of the probability $P(k)$ times the probability $P(7-k)$: $P(7-1)P(1)+P(7-2)P(2)+\cdots$
Convolution is a mathematical way of combining two signals to form a third signal. It is one of the most important techniques in DSP... why? Because using this mathematical operation you can extract the system impulse response. If you do not know why the system impulse response is important, read about it in http://www.dspguide.com/ch6.htm. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response.
Convolution is important because it relates the three signals of interest: the input signal, the output signal, and the impulse response. It is a formal mathematical operation, just as multiplication, addition, and integration. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal. In linear systems, convolution is used to describe the relationship between three signals of interest: the input signal, the impulse response, and the output signal (from Steven W. Smith). Again, this is highly bound to the concept of the impulse response, which you need to read about.
An impulse causes an output sequence which captures the dynamics of the system (the future). By flipping this impulse response over, we use it to calculate the output as the weighted combination of all previous input values. This is an amazing duality.
In simple terms it means transferring inputs from one domain to another domain where we find it easier to work. Convolution is tied to the Laplace transform, and sometimes it is easier to work in the $s$ domain, where we can do basic additions to the frequencies. Also, as the Laplace transform is a one-to-one function, we are most likely not to corrupt the input. Before trying to understand what the general convolution theorem means physically, we should instead start in the frequency domain. Addition and scalar multiplication follow the same rules, as the Laplace transform is a linear operator: $c_1 \mathcal{L}\{f(x)\} + c_2 \mathcal{L}\{g(x)\} = \mathcal{L}\{c_1 f(x) + c_2 g(x)\}$. But what $\mathcal{L}\{f(x)\} \cdot \mathcal{L}\{g(x)\}$ corresponds to is what the convolution theorem defines. |
Let $H$ be a Hilbert space, $\{a_n\}_{n=1}^{\infty}$ an orthonormal system (ONS), and $K$ a compact operator. Suppose $K_nx:=(Kx,a_n)a_n$; show that $\sum _{n=1}^{N} K_n$ is convergent as $N \to \infty$. So, $||K_nx||=|(Kx,a_n)|=|(x,K^* a_n)| \leq ||x||\, ||K^* a_n||$, and so $\lim ||K_n||\leq \lim ||K^* a_n||=0$, since $K^*$ is compact and $\{a_n\}$ is weakly convergent.
Let $x$ be an element of the closed unit ball. Then by orthogonality of $\left(a_n\right)$, $$ \left\lVert \sum_{n=M}^N K_nx\right\rVert^2=\sum_{n=M}^N \left(Kx,a_n\right)^2. $$Since the set $\left\{Kx,x\in H,\left\lVert x\right\rVert\leqslant 1\right\}$ has a compact closure, it suffices to prove that for each subset $C$ of $H$ having a compact closure,
$$ \lim_{M,N\to +\infty}\sup_{y\in C}\sum_{n=M}^N \left(y,a_n\right)^2=0. $$ This is clear when $C$ is finite; use precompactness to generalize.
Let $L : H\to H$ be given by
$$ L(x) = \sum_{n=1}^\infty \langle x, a_n\rangle b_n.$$
Then
$$K_N :=\sum_{n=1}^N K_n =L_N \circ K,$$
where $L_N (y)= \sum_{n=1}^N \langle y, a_n \rangle b_n.$ To show that $K_N$ converges, it suffices to show that $L^{-1}\circ K_N$ converges to $K$. But
$$L^{-1} \circ K_N (x)= \sum_{n=1}^N \langle Kx, a_n\rangle a_n$$
and it reduces to the usual argument where one proves that finite rank operators in Hilbert space are dense in the space of compact operators, that can be found here. |
Some understanding of the why can be gleaned from a simple but realistic model.
The curve shown in the question is consistent with a 46-question test in which each question contributes $100/46 \approx 2$ to the total score when answered correctly and otherwise contributes nothing. It is "consistent" in the sense that the distribution of scores is extremely close to what would obtain if each student were to be guessing each question independently, with a $54.5\%$ chance of being correct and $100-54.5 = 45.5\%$ chance of being incorrect.
Consider some circumstances near the end of the administration of the test. You have answered all questions; you do not know your score; but you are contemplating changing some answers.
Suppose your score (unbeknownst to you) is at the middle, equal to $54.5$. This corresponds to a raw score of $54.5\% \times 46 = 25$, indicating you got $25$ questions right and $46-25=21$ wrong. If you were to pick a question randomly and change it, there would be a $25/46 = 54.5\%$ chance it is correct--and you would turn your answer into a wrong one--and only a $45.5\%$ chance it is incorrect and you would turn it into a correct one. Therefore it's a little bit harder to increase your score than to decrease it.
Suppose your score actually is high, equal to $65$: that is, $30$ correct and $16$ incorrect answers. Now your chance of alighting randomly on one of the incorrect questions and changing it--thereby improving your score--is only about $1/3$.
It is about twice as hard to increase this high score as to decrease it.
Conversely, using a similar analysis, it is easier to improve a low score by randomly changing one of the answers.
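The arithmetic behind both cases takes only a few lines of Java (using the same simplification as above: changing a randomly chosen right answer hurts, changing a randomly chosen wrong answer helps):

```java
public class ScoreChangeOdds {
    public static void main(String[] args) {
        int questions = 46;
        // Raw scores from the text: 25/46 (the middle) and 30/46 (a high score).
        for (int right : new int[] {25, 30}) {
            double pHurt = (double) right / questions;  // random pick hits a correct answer
            double pHelp = 1.0 - pHurt;                 // random pick hits a wrong answer
            System.out.printf("right=%d: P(hurt)=%.3f, P(help)=%.3f, odds=%.2f%n",
                    right, pHurt, pHelp, pHurt / pHelp);
        }
    }
}
```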
More generally--and you might find this to be a more appealing model than one that seems based on luck alone--consider any test in which your score is expected to be $100p\%$ of the total based on your underlying knowledge. To improve your expected test score from $100p$ to $100(p+x)\%$ -- that is, an increase of $100x$ points -- you would have to retain your performance on the $100p\%$ of the answers you got right while learning enough to add $100x$ points out of the $100(1-p)$ points lost on the wrong answers. This relative improvement in your knowledge can be expressed in two ways:
You reduced the proportion $1-p$ of wrong answers to $1-p-x$, a change of $-x/(1-p)$; and
You increased the proportion $p$ of right answers to $p+x$, a change of $+x/p$.
The ratio of these (up to sign), namely $$\frac{xp}{x(1-p)} = \frac{p}{1-p},$$ is the odds of $p$. In a balanced way--by accounting for the need both to get fewer wrong answers and more right answers--it measures how difficult it is to make a small increase of $100x$ starting with a score of $100p$. As $100p$ grows towards $100$ points, the dwindling size of the denominator $1-p$ shows how it gets progressively much more difficult to improve an already high score. Roughly, increases from $90\%$ to $95\%$ to $97\%$ are equally difficult. (These are odds of approximately $9$, $19$, and $32$, respectively.)
Note, too, that it's far more likely for your score to drop due to small errors on questions than to rise when your score is above 50%, with the reverse being the case for lower scores: guessing and random mistakes benefit the poor student and hurt the good student.
As far as a study strategy goes, this analysis suggests you get the most benefit from studying the sections you are weakest at--assuming that each unit of study effort results in the same relative increase in performance in each section. |