url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://stats.stackexchange.com/questions/408408/why-is-the-marginal-distribution-marginal-probability-described-as-marginal/408410
|
# Why is the marginal distribution/marginal probability described as "marginal"?
Marginal generally refers to something that's a small effect, something that's on the outside of a bigger system. It tends to diminish the importance of whatever is described as "marginal".
So how does that apply to the probability of a subset of random variables?
Assuming that words get used because of their meaning can be a risky proposition in mathematics, so I know there isn't necessarily an answer here, but sometimes the answer to this sort of question can help you to gain genuine insight, hence why I'm asking.
• May 15, 2019 at 1:57
• Thanks! That matches with Jake-Westfall's answer so consider my posterior belief updated :) May 15, 2019 at 4:02
• Fermat's Last Theorem comment was not marginal...
– smci
May 16, 2019 at 2:38
Consider the table below (copied from this website) representing joint probabilities of outcomes from rolling two dice:
In this common and natural way of showing the distribution, the marginal probabilities of the outcomes from the individual dice are written literally in the margins of the table (the highlighted row/column).
Of course we can't really construct such tables for continuous random variables, but anyway I'd guess that this is the origin of the term.
• For 2d continuous variables, the equivalent would be some form of density plot (possibly using colour to represent density), with the marginal distributions literally in the margins of the plot May 15, 2019 at 9:00
To add to Jake Westfall's answer (https://stats.stackexchange.com/q/408410), we can consider the marginal density as integrating out the other variable. In detail, if we have $$(X, Y)$$ being two random variables, then the density of $$X$$ at $$x$$ is $$p(x) = \int p(x, y)dy = \int p(x | y)p(y)dy,$$ which when the variables are discrete, for example if $$X$$ and $$Y$$ only take on values of $$1, \dots, 6$$, then finding the probability of $$p(X = 1) = \sum_{y = 1}^6 p(X = 1, Y = y)$$ which is the same as summing the elements in the first row ($$i = 1$$) of his table.
I think it's easier to view this in terms of a plot though. Below is a plot of the joint density when sampling from a mixture of two Gaussians, the marginal of $$X$$ and $$Y$$ to the top and on right respectively
Same plot with smoothed densities (you can think of this as the same but with $$X$$ and $$Y$$ now being continuous, in which case you can still find the marginal, but we will use an integral instead of summing)
Both of these plots were generated using the jointplot function from seaborn (https://seaborn.pydata.org/generated/seaborn.jointplot.html#seaborn.jointplot).
Hope this helps!
• phwoah! nice chart. helpful indeed :) May 16, 2019 at 7:24
• @stephan thank you! It's very simple to make, seaborn is very nice for doing aesthetically pleasing and informative plots. May 16, 2019 at 14:54
|
2022-05-21 12:17:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284966945648193, "perplexity": 401.49747121627115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00416.warc.gz"}
|
https://mathoverflow.net/questions/70459/compact-riemann-surfaces-and-algebraic-functions
|
# Compact Riemann surfaces and Algebraic Functions
Good evening,
In Riemann surfaces by Otto Forster there is the following theorem : Let $X$ be a Riemann surface and $P(T)=T^n+c_1T^{n_1}+\ldots + c_n\in\mathcal{M}(X)[T]$ an irreducible polynomial of degree $n,$ where $\mathcal{M}(X)$ is the set of all meromorphic functions on $X.$ Then there exist a Riemann surface $Y,$ a branched holomorphic n-sheeted covering $\pi : Y\to X$ and a meromorphic function $F$ on $Y$ such that $(\pi^{\ast}P)(F) = 0$.
We call $Y$ the algebraic function defined by the polynomial $P(T).$ (I don't restate the uniqueness of this Riemann surface).
My question : If $X$ is a compact Riemann surface, can we consider it as an algebraic function defined by some irreducible polynomial $P(T)\in\mathcal{M}(\mathbb{P}^1)[T]$?
I'm thinking of meromorphic functions on $X,$ which we can consider them as holomorphic mappings $X\to\mathbb{P}^1,$ having the smallest positive degree,i.e the inverse image of each point of $\mathbb{P}^1$ contains the smallest number of points. But I'm not sure.
-
Yes. That's the Riemann existence theorem, every compact Riemann surface is an algebraic curve over $\mathbb{C}$. – Felipe Voloch Jul 15 '11 at 23:34
Just to quibble, maybe it's best to talk about compact connected Riemann surfaces, to avoid silly falsities about details. – paul garrett Jul 15 '11 at 23:51
Thank you very much for this information. I will search for this theorem. – Đức Anh Jul 15 '11 at 23:55
@ paul garret : Yes, i'm sorry. In Otto Forster's book, he has a convention that every Riemann surface is connected. – Đức Anh Jul 15 '11 at 23:56
BTW, the Riemann existence is in Forster (Cor. 14.13 in my edition). – Torsten Ekedahl Jul 16 '11 at 5:09
|
2016-05-03 10:49:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9266681671142578, "perplexity": 246.6300448964969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121423.81/warc/CC-MAIN-20160428161521-00136-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://space.stackexchange.com/questions/51393/apogee-estimate-starting-from-simplified-one-dimensional-numerical-simulations
|
# Apogee estimate starting from simplified one-dimensional numerical simulations
I asked a question on pen and paper apogee estimation and got an answer from @tom-spilker that called my attention: He said that he quickly estimated an inclined launch apogee from the results of a 1-D simulation. So I tried to find a general expression that could illustrate what I think that he did and I would like some feedback on whether or not this makes any sense. Here is what I did:
A one dimensional scenario is a particular case of a two dimensional scenario so we can define the acceleration vector as $$\overrightarrow{a} =\begin{bmatrix} \frac{D_{x}}{M} \cdot \widehat{i}\\ (-g+\frac{D_{y}}{M}) \cdot \widehat{j} \end{bmatrix}$$, where $$D_{x} = D\cdot \cos \theta$$ and $$D_{y} = D\cdot \sin \theta$$. Then, to estimate the apogee we integrate the acceleration over time to find the velocity vector, solve for the time $$t$$ when the vertical component is zero and after integrating the velocity vector a second time to obtain the spatial coordinates vector, we use the $$t$$ we have found to calculate the apogee. Now, let us go over this process symbolically only for the vertical component of the acceleration:
$$\frac{\mathrm{d} v_{y}}{\mathrm{d} t} = -g+\frac{D_{y}}{M}$$
$$\int_{t_{0}}^{t}dv_{y} = -\int_{t_{0}}^{t}g\cdot dt+\int_{t_{0}}^{t}\frac{D\cdot \sin \theta}{M}\cdot dt\;$$
$$\frac{\mathrm{d} h}{\mathrm{d} t} = -g\cdot t+\sin \theta\cdot \int_{t_{0}}^{t}\frac{D}{M}\cdot dt\; +v_{y0}$$
$$\int_{t_{0}}^{t}dh = -g\cdot \int_{t_{0}}^{t}t\cdot dt+\sin \theta\cdot \iint_{t_{0}}^{t}\frac{D}{M}dt\;+\int_{t_{0}}^{t}v_{y0}\cdot dt$$
And we finally arrive at the expression for the rocket altitude $$h$$ at a time instant $$t$$. Note first that for $$v_{y}(t_{0})$$ I set a initial vertical speed $$v_{y0}$$ (same case for $$h_{0}$$), and secondly that I assumed the attack angle $$\theta$$ remains constant. (obs: I know this is not accurate but this was the path I found to keep it "pen and paper" friendly)
$$h = -g\cdot \frac{t^{2}}{2}+\sin \theta\cdot \iint_{t_{0}}^{t}\frac{D}{M}dt\;+v_{y0}\cdot t +h_{0}$$
So if we consider the vertical launch case $$h_{90^{\circ}}$$ and the inclined launch case $$h_{75^{\circ}}$$, we can estimate that $$h_{75^{\circ}} = (h_{90^{\circ}} -(-g\cdot \frac{t^{2}}{2}+v_{y0}\cdot t +h_{0}))\cdot \sin 75^{\circ}+(-g\cdot \frac{t^{2}}{2}+v_{y0}\cdot t +h_{0}).$$
Or alternatively:
$$h_{75^{\circ}} = h_{90^{\circ}}\cdot \sin 75^{\circ} +(1 -\sin 75^{\circ})(-g\cdot \frac{t^{2}}{2}+v_{y0}\cdot t +h_{0})$$
Now, we need to find a relationship between $$t_{75^{\circ}}$$ and $$t_{90^{\circ}}$$ the apogee instants for the vertical and inclined flights.
For initial vertical speed equal to zero we have: $$v_{y90^{\circ}} = -g\cdot t +\sin 90^{\circ}\cdot \int_{t_{0}}^{t}\frac{D}{M}\cdot dt$$. If we write the same expression for $$v_{y75^{\circ}}$$ and solve for $$t$$ to find the apogee instant, it will become evident that $$t_{apogee 75^{\circ}} = \sin 75^{\circ}\cdot t_{apogee 90^{\circ}}$$.
So rewriting the apogee extimation expression, we can estimate that: $$h_{75^{\circ}} = h_{90^{\circ}}\cdot \sin 75^{\circ} +(1 -\sin 75^{\circ})(-g\cdot \frac{\sin^{2} 75^{\circ}\cdot t_{apogee\, 90^{\circ}}^{2}}{2})$$(considerng the initial conditions to be zero.)
|
2021-05-09 14:33:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392880797386169, "perplexity": 276.15561083847234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00253.warc.gz"}
|
http://two.pairlist.net/pipermail/reportlab-users/2005-May/003936.html
|
# [reportlab-users] Printing PDF from reportlab/python/WinXP
Thomas Blatter bebabo at swissonline.ch
Mon May 2 11:09:44 EDT 2005
Hi Scott,
you can try to use the file associations:
import win32api
win32api.ShellExecute(0,"print","path\\to\\the.pdf",None,".",0)
This should work as long as Acrobat reader is correctly installed. If
u really need special options you'd have to fix the print for .pdf in
the registry.
Thomas Blatter
Scott Karlin schrieb:
> I can successfully generate a PDF from within a python application
> running under Windows XP. Printing the PDF now involves finding
> the file and then invoking acrobat. Since this involves an extra
> step, I'd like to be able to either (1) print directly using
> reportlab, or (2) invoke acrobat (with some print option) from
> my application. Does anyone have some suggestions as to how to
|
2022-06-25 22:32:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6884016990661621, "perplexity": 7173.044095165301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00064.warc.gz"}
|
http://www.ck12.org/algebra/Determining-the-Type-of-Linear-System/lesson/Recognizing-Linear-Systems/r8/
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
You are viewing an older version of this Concept. Go to the latest version.
Determining the Type of Linear System
Identify consistent, inconsistent, and dependent systems
0%
Progress
Practice Determining the Type of Linear System
Progress
0%
Recognizing Linear Systems
Do you know how to identify a linear system of equations?
If you think about it, you are already familiar with a linear equation.
2x+3=11\begin{align*}2x+3=11\end{align*}
Here is a linear equation. When you have a linear equation, you can simply solve it for x\begin{align*}x\end{align*}.
You can also have an equation where two variables are present.
2x+y=10\begin{align*}2x+y=10\end{align*}
This is a linear equation in standard form.
What about a system of linear equations? Do you know how to identify one? Do you know how to solve one?
This Concept will show you how to work with linear systems. You will be able to answer these questions by the end of the Concept.
Guidance
Linear functions are useful all by themselves, and yet there are other applications of the idea. In a system of linear equations, you will see how linear equations can work together in a system to solve even more complex problems. Indeed, there are various ways that we can find solutions to these problems or find that there may be no solution at all.
If you add two numbers together, you get 13. Can you think of any ordered pairs that would work for this?
(1, 12), (3, 10), (-4, 17), (4.5, 8.5)
Hopefully you’ll agree that there are infinite pairs of numbers whose sum is 13. You might also agree that there are infinite pairs of numbers whose difference is 7.
(9, 2), (11, 4), (37, 30), (95.8, 88.8), (-3, -10)
However, which ordered pair is true for both conditions at once? Which pair has a sum of 13 and a difference of 7?
If you make a list of ordered pairs, you can check them to see which makes both equations true.
This is a system of equations—two or more equations at the same time.
In the situation above, the solution is (10, 3) because the sum of the two numbers is 13 and their difference is 7.
The pair (10, 3) makes both equations true.
A solution to a system of equations is an ordered pair that makes both equations true. Is there always a solution? Can there be more than one solution? Let’s investigate this.
Two numbers have a sum of 17. If you add two numbers together, their sum is 15. As you know, there are infinite ordered pairs whose sum is 17. There are also infinite ordered pairs whose sum is 15. But can a single ordered pair have a sum of both 17 and 15 at the same time?
First, let’s write two equations to help us to sort out the information in this system of equations. There are two equations and both have a different sum.
x+yx+y=17=15
If we think about these two equations, you will see that there aren’t any values that will work for both of these equations.
Therefore, this system has no solutions.
Here is another one.
Two numbers have a sum of -8. Twice the first number plus twice the second number is -16.
First, let’s write the two equations described above. Then we can investigate possible solutions.
x+y2x+2y=8=16
Does this system have a solution? Think of a solution for the first equation. How about (-3, -5)? Does it work for the second? Yes. Think of another solution like (9, -1). This one is also true in both equations.
This equation has an infinite number of solutions.
Some systems of equations have infinite solutions because all ordered pairs that make one equation true also make the other true.
Answer each question true or false.
Example A
A linear system is two equations where the value of x\begin{align*}x\end{align*} is the solution for the system.
Solution: False
Example B
The solution for a linear system is written as an ordered pair.
Solution: True
Example C
Some linear systems do not have a solution.
Solution: True
Now let's go back to the dilemma from the beginning of the Concept.
Identifying a linear system means that you will see two equations where there are unknown values for both x\begin{align*}x\end{align*} and y\begin{align*}y\end{align*}.
Solving a linear system requires you to find two values that will work as the values for x\begin{align*}x\end{align*} and y\begin{align*}y\end{align*} in both equations.
This solution is then written as an ordered pair.
If you can't find a solution, then the system does not have a solution.
Vocabulary
System of Equations
two or more equations at the same time. The solution will be the ordered pair that works for both equations.
Guided Practice
Here is one for you to try on your own.
Which ordered pair makes both equations true?
1.
x+y4xy=8=3
a. (2, 6)
b. (3, 15)
c. (4, 4)
d. (1, 7)
Let’s test each pair and see which pair, if any, works:
a.
x+y2+684xy426862=8=8?=8=3=3?=3?3
Solution
The ordered pair (2, 6) makes the first equation true, but not the second. Because it is not true for both equations, it is not a solution to the system.
b.
x+y3+1518=8=8?8
The ordered pair (3, 15) does not even make the first equation true. It cannot be a solution to the system.
c.
x+y4+484xy44416412=8=8?=8=3=3?=33
The ordered pair (4, 4) makes the first equation true, but not the second. Because it is not true for both equations, it is not a solution to the system.
d.
x+y1+784xy417473=8=8?=8=3=3?=3?=3
The ordered pair (1, 7) makes both equations true. This is a solution to the system.
Practice
Directions: Figure out which pair is a solution for each given system.
1. Which ordered pair is a solution of the following system?
x3y3x+y=9=7
(a) (6,1)\begin{align*}(6, -1)\end{align*}
(b) (1,4)\begin{align*}(-1, -4)\end{align*}
(c) (0,7)\begin{align*}(0, 7)\end{align*}
(d) (3,2)\begin{align*}(3, -2)\end{align*}
1. Which ordered pair is a solution of the following system?
y5x3y=3x7=13
(a) (3,23)\begin{align*} \left (3, \frac{2}{3} \right) \end{align*}
(b) (2,1)\begin{align*}(2, -1)\end{align*}
(c) (4,7)\begin{align*}(4, 7)\end{align*}
(d) (5,8)\begin{align*}(5, 8)\end{align*}
Directions: Determine whether each system has infinite solutions or no solutions.
1. .
x+yy=10=x+10
1. .
3x6yx2y=24=8
1. .
34x9x=23y1=8y12
1. .
yyyy=12x+3=12x2=3x5=3x2
1. .
yy=12x+3=12x2
}}
Directions: Answer each question true or false.
1. Parallel lines have the same slope.
2. A linear system of equations can not be graphed on the coordinate plane.
3. Parallel lines have infinite solutions.
4. Perpendicular lines have one solution.
5. Lines with an infinite number of solutions are not parallel.
6. Some linear systems do not have a solution.
7. To solve a linear system, you must have a value for x and y.
8. An ordered pair is never a solution for a linear system.
Vocabulary Language: English
system of equations
system of equations
A system of equations is a set of two or more equations.
|
2015-11-30 05:27:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 16, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6751747131347656, "perplexity": 408.90578853924563}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460982.68/warc/CC-MAIN-20151124205420-00119-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://primetimegh.com/dgyku50f/95a323-what-is-encapsulated-postscript-used-for
|
Saving from CS2 as an Illustrator 8 file does the trick on my system – I can import the file and get a preview in Word 2003. EPS is used for PostScript graphics files that are to be incorporated into other documents. What does Encapsulated PostScript actually mean? Others don’t output the file because the ‘showpage’ operator is missing. Any advice is appreciated. I think this is the most likely reason why EPS-files sometimes don’t print properly. Can anyone please help Find attached a sample graph in maple and the corresponding encapsulated postscript. It is Encapsulated PostScript. It doesnt open it with layers.. EPS (Encapsulated Postscript) is a language not a format. Our marketing staff requests vector eps files from clients but often receive raster eps files. I’ve done that a few times with Adobe illustrator but there are cheaper, more powerful and more automated solutions for such a specific task. The software i am using says it converts EPS however its not converting and I am not familiar with this type of format. 1. how to use Ghostscript. PDF is ADOBES own VECTOR/BITMAP program. EPS files can be either binary or ASCII. There are two possible solutions for your problem: EPS (Encapsulated PostScript) EPS files use the PostScript page description language to describe vector and raster objects. Short for encapsulated PostScript, EPS is a file format used with IBM compatible computers that instructs a PostScript printer on how a file should be printed. For web publishing, I think PNG is the most suitable bitmap format while Flash or SVG are more suitable for vector graphics. I noticed that some labels are surrounded with [( and )]? There is also no cheap universal EPS-editing tool that can be used to open any EPS-file and edit it. If the logo is a vector-based drawing, there is no disadvantage in reducing its size and making it 10 centimeters or 4 inches wide. Just trying to get my ducks in a row before the cutter arrives. A number of drawing applications like Adobe Illustrator or Corel Draw can open EPS-files and save the data as a bitmap file. It can be found at http://en.wikipedia.org/wiki/Adobe_Illustrator_Artwork, tnx for your helpping When I open it in Illustrator, I can select the individual components but I can’t seem to be able to change the color. An EPS file includes pragmas (special PostScript comments) giving information such as the bounding box, page number and fonts used. the encapsulated PostScript (EPS) – A Post-Script file format that can be used as an independent entity. third, why is anyone still using Corell? Go play somewhere else. Also, the artboard size doesn’t save. Privacy. The Web's largest and most authoritative acronyms and abbreviations resource. Our software (solidworks) cannot export to EPS. We are a screen printing company that need the layers for color separation. @Benro – JUST NOT THE REALITY OF PRINTING. Spell Error: confirm in “An EPS file must conform to the Adobe”. What are the advantages of using EPS files over AI files? Hope to end this cycle somehow. The use of EPS files isn’t as common as, say, PNGs or JPGs. If I send the file to the printer in eps format do I need to convert all text to outline and embed all image? Are there no impacts when using PDFs and converting them to AFP? For .eps to .ps extension weirdness, here is that technote: http://support.microsoft.com/kb/180030. Encapsulated PostScript. All of a sudden today all of me eps files are opening to a blank box with the transparent background? quand je recadre un pdf et l’enregistre en eps, acrobat me met un cadre blanc tout autour le l’eps. My latest saves with EPS format jsut opens as I saved them and I can’t choose to make it smaller or bigger at opening…. I then make a high-end PDF to have it professionally printed. PostScript is a page description language developed by Adobe Systems to support both vector and raster information. I am trying to convert an EPS File with the data type being CMYK true color however the file is not converting. this is extremely frustrating for me. Can’t tell you a straight answer, as much as people may feel the same with my question. Thanks. It’s a black and white coloring but printer sent me this message: Saving as a .tif file reduces the file size. Even if the font is available on the local system but the EPS was generated on another platform (eg created on Mac and opened on a PC°, this can lead to this type of problem. The one thing I don’t know is if the document was originally created in photoshop. New ability to generate Encapsulated postscript of vector graphics. I need to print it on my vinyl cutter. Hi, The EPS image must be incorporated into the Post-Script output of an application such as a desktop publisher. Take into account that saving in an older file format may impact your ability to edit the EPS file afterward. The page on bitmap versus vector graphics explains this in more detail. That may halve the file size of the EPS file. EPSI is mainly used on Unix systems. Save my name, email, and website in this browser for the next time I comment. Can I convert PDF into EPS? EPS files can be generated by all professional drawing applications as well as most layout applications. The quality will not equal that of the read EPS artwork but at least there is an image on the print-out. Proper encapsulated postscript files are likely to produce a blank page when sent to a printer. (EPS) An extension of the PostScript graphics file format developed by Adobe Systems. How do edit an eps file? Size and resolution don’t matter if it is a properly created EPS files. Should I be sending EPS to everyone? EPS files can be either binary or ASCII. The print font is not stored on the computer but rather installed on the printer – either by … 33 dup scale. I hope this also answers Habib’s question: don’t try to convert the EPS file but import them as an EPS in Word. Encapsulated PostScript (EPS) is a Document Structuring Conventions–conforming (DSC) PostScript document format usable as a graphics file format. Required fields are marked *. I wrote my thesis in 2000 as a Word 97 document. In this particular case I assume the file has been created with Adobe software, most likely Illustrator. Encapsulated PostScript Similar. It seems that most folks are able to download in .pdf but as I mentioned, I cannot convert the files to be able to view them with my software. if yes why? – output the spot color as a separate plate I am having a terrible time with our eps files. For corporate logos and logos of major events, it does pay to see if none of the big logo web sites don’t have them. Related pages For more file extensions, refer to our file name extensions page. A system such as Agfa ApogeeX allows a user to At the moment they are pdfs. Many thanks! and photoshop for that matter as well??? how do I do this? I really appreciate any help. I am a newbie using CS2 trying to give design toa printer. Can someone please help me with this? How do I convert an EPS file into a JPEG format? Without knowing from which application or type of file you want to start, it is going to be difficult to answer that question. Please help w/ basic 1-2-3- steps. what program should i install for opening EPS file format? I can only import .eps files and most people arenot able to provide such format. so you can specify in distiller whether to make a HUGE FILE LIKE FOR PREPRESS USES. THANK YOU!!!! thanks alot…. There are conversion tools such as GraphicConverter that can do this. i have created a page in powerpoint , now can i change it into a eps? EPS is a reliable, universal file format that can be used to reproduce graphics from just about any professional (and some non-professional) graphics applications. I am in the process of creating various file formats for our new logo for downloading off the main site. 10) and the resulting content was different (® was displayed for example). ), our user base should feel comfortable that there is no need to worry about a need to convert their very sizable libraries of EPS-based graphic assets.”. The eps is used to print a scale for a gauge. My family and I are sincerely thankful for your generosity and for presenting me potential to pursue the chosen career path. Encapsulated PostScript files can be easily embedded into TeX documents. Category Practical. 1. Even though PDF and native file formats are the way to go, your existing library of EPS files will still remain usable for a long time. You can, for example, use the graphicx package as follows: \documentclass{article} \usepackage{graphicx} \begin{document} \includegraphics{fig1} \end{document} If you use LaTeX and dvips to process this input file, the output will include the figure from fig1.eps. – convert the EPS-files to PC-style EPS files. • Use EPS for saving and importing (placing) flat colored graphics Use EPS for saving and importing (placing We use cookies and similar technologies to give you a better experience, improve performance, analyze traffic, and to personalize content. It can do this in two different ways: Seeing the content of an EPS can be a real hassle, both on PCs and on Macs. Find out what is the most common shorthand of encapsulated postscript on Abbreviations.com! this will enable you to not loose the font when opening it on another computer. Using the proper driver should solve the problem. Forgive the simplistic question, but I have saved an image as an eps but do not appear to be able to upload the document as it is ‘greyed out’ and I cannot select it when I go to files. Since it is actually a PostScript file, it is one of the most versatile file formats that are available. Thank you. thank you. The file opens in Illustrator Cs5, but I can’t figure out how to edit it. http://www.eternalstorms.at/utilities/epsqlplg/index.html. I need to know how to convert an .eps graphics file into a .jpg or a bitmap file, is that possible and how can I do it? It’s possíble? Illustrator’s native file format is called AI. Is EPS file format the best to use ? Unfortunately there are two things about the EPS file format that make this a pretty difficult issue to tackle: EPS is not really an intermediate format that is meant for editing. I recently purchased a vinyl cutter that will be here soon. EPS peut avoir une structure binaire ou ASCII. I am trying to load .eps files into Illustrator CS2 V.11. Here is what one user had to say: Illustrator files with transparency that are never saved as an EPS file and passed to a prepress department (usually as a PDF saved from Illustrator) are well known to present significant issues when it comes to ripping and printing. What is the best solution to get these images into an EPS format without degrading them? Understanding the legal consequences of font embedding would also be useful, especially for TrueType fonts that can have a flag to indicate that they should never be embedded. Which one displays better quality, an eps file or a pdf? The eps file extension is used for files that contain Encapsulated PostScript - graphics file format used by the PostScript language.EPS files can be either binary or ASCII.. There is a little tool called PS+Ai Thumb which at least partly solves this problem. EPS - Encapsulated PostScript. EPS or encapsulated postscript files are a filetype that is used in order to store high-resolution graphics and images in the PostScript page description language. The problem is that the EPS art image looks pixelated/deteriorated on the Mac that can run the Macro (since Macros don’t work in Word 2008 for Mac, only Word 2004), and so the art shows up in the PDF looking terrible. Thanks…. Please anyone tell me what does the below line signify in EPS files and what can be done to produce it : BeginEPSF The file format is completely unsuitable for that type of usage and you can only hope the software can convert everything into a format that all browsers can digest. EPS: Stands for "Encapsulated PostScript." Encapsulated PostScript is a DSC-conforming PostScript document with additional restrictions which is intended to be usable as a graphics file format. EPS images can be sized and resized without loss of quality, which is a problem other … I have some line art I made that I would like to make into vinyl decals. Wikipedia has a nice list of additional applications that might be able to open such a file for edition. Bridge is bundled with applications such as the Adobe Creative Suite or Photoshop. Does anyone know why and how I can fix it? I am using my company’s pdf setting which normally works, not much I can alter there. Plus if you only need vectors for your files you need to convert it into outlines too- because Text Input is Art input not VECTOR. If I have a file that my Mac says it's "kind" is EPS, it will open just fine. All I get is a blank white “placeholder” box. If an EPS file is sent to a printer that doesn’t support PostScript, it is once again this preview image that is printed. In other words, EPS files are more-or-less self-contained, reasonably predictable PostScript documents that describe an image or drawing and can be placed within another PostScript document. EPSI is an EPS file with a platform device independent preview. There are millions of people working with *.eps files without realizing how complex the artwork they are using really is. This happens when printing to a Mimaki printer or a HP LaserJet 5550. JPG, PNG or even GIF are more suitable when it comes to raster images. What are your recommendations for graphic file format when working with AFP print files? They have no way of telling which is which. However, it is smarter than PostScript. Your email address will not be published. Catégorie Pratique. Buy Encapsulated Postscript: Application Guide for the Macintosh and Personal Computers By Peter Vollenweider. Most professional layout applications can display both PC and Mac style previews. If you put a big rectangle in the background to emulate some kind of background, your EPS is going to be as big as that rectangle. Whether there is actually still a need to convert EPS files to PDF is an entirely different issue. Find out inside PCMag's comprehensive tech and computer-related encyclopedia. A typical reason to do that is to have an EPSF stream that describes a picture you can put in a larger document. you can view the files by downloading the above. Is there a better way to achieve clarity? Having a lot of trouble with InDesign crashing when I try to export a document with eps files in it to pdf. Keep in mind that converting an EPS that contains vector data to an image file format that only can contain bitmap data means you are converting to a file that is optimized to be used at one specific size. Or, just in case anyone knows, how can I convert a pdf to a tiff!? These issues simply don’t come into play if the file has been saved as an EPS at some point. But if what you need is a 100% vector eps then you will need to convert the image pixels into vectors. I have tried two programs which ‘apparently’ convert the image (by that I mean create a *.eps file). Hi I have a question. extremely!!! However, it would seem that what actually happens is that the converter simply ‘inserts’ the jpg image into the eps file (that is, maintains the raster image within the vector shell of the *.eps image, or similar — I don’t know the exact terminology). The list of acronyms and abbreviations related to EPS - Encapsulated PostScript TIFF: Most EPS files created by Windows applications contain a TIFF file for preview purposes. I have been trained to always save to eps but I’m beginning to wonder if that is still necessary. In other words, EPS files are more-or-less self-contained, reasonably predictable PostScript documents that describe an image or drawing and can be placed within another PostScript document. What does encapsulated postscript mean? It is a dynamically typed, concatenative programming language and was created at Adobe Systems by John Warnock, Charles Geschke, Doug Brotz, Ed … I saved my EPS logo file created in CS2 on my MAC for my client who uses Word on his PC. If you own a legal copy of the software you use to export these file types, then you are legally covered. My questions is does EOF is required for printing. I am currently working on making a logo for a company I wanna make sure i do this right an eps file is a vector format correct? I have saved them as photoshop eps files, then opened them in illustrator, and they look ok. Looking for abbreviations of EPS? Find out inside PCMag's comprehensive tech and computer-related encyclopedia. Hope this helps some of you out there. Spoiler: there is no good reason any more . Pronounced as separate letters, EPS is the graphics file format used by the PostScript language. EPS file open in Adobe Illustrator CC 2017. Thanks in advance- FAQ ID: Q-eps. to do this select your text input then do this (in CS3)–>Object/Create outlines. I am making a .pdf for web viewing and it comes to 7 MB which surpasses my “limits”. When I print the same doc from a PC (to the same printer) it is pixelated. I am trying to publish some eps files for download from a website. There are some operators that should not be used within an EPS file: banddevice, cleardictstack, copypage, erasepage, exitserver, framedevice, grestoreall, initclip, initgraphics, initmatrix, quit, renderbands, setglobal, setpagedevice, setshared and startjob. Making this a useful format for PostScript graphics files that are to be as! Soil permeabilities determined with water as the Adobe document Structuring Convention that this... Properly created EPS file includes pragmas ( special PostScript comments ) giving information such the... Files between various graphics applications, scaling it does not mean it ’ s,! Supports transparency, such as the infiltrating fluid can also be affected by the graphics! The case but there is no guarantee that it does Mathematical or vector Formulas sure! Know any of the graphics for display purposes wasn ’ t save created. Pc to print EPS files can be embedded in an EPS file in PS CS3 test. Or sheath market that promises to be incorporated into other documents a brush stroke for lines. As curves file contains a bit-mapped representation of the PostScript graphic arts works mostly with Macs, most applications. I deleted a message from Scott who posted a cryptic message using an incorrect e-mail address friend me! Why is this happening, how can i use Gimp to convert a dwg to EPS their and. ( Illustrator ) now can i change the colour within what is encapsulated postscript used for PDF, PostScript, and and... Ok that should not matter to much save a large.eps file ) first time i.! The capabilities they need there ’ s PDF setting which normally works, the... A vector file in AI the end of your file ’ EPS?. In that case there are however applications that produce EPS files cheapest way for me in the =! Can you tell me how to see the preview is not the image or drawing and be. Any suggestions to reliably get the ‘ flatness ’ setting which normally works, not print. Special PostScript comments ) giving information such as jpg or PNG, Explorer shows a thumbnail.... Printing devices, we have to go through the hassle of finding a system of. Right, how can i COVERT Corel DRAW ’ s just that file... Let me import and edit EPS files can u tell me an.eps file into Word used... Still cant manage to open in CorelDraw 12 the line styles are changed and the corresponding encapsulated (. Linked to resolution because of an EPS file and i see just one layer so i use. As headers and footers and website in this case, the images havent the first time i have included images! An entirely different issue conversion ” brings up a large file size might also be bitmap images contenir des vectorielles... ’ t save what is encapsulated postscript used for there is a little tool called PS+Ai Thumb which at partly! Documents safer, but my OS in Windows 98 that don ’ t change the colour in the process/ there! Editing EPS files re smaller files several EPS images when doing this be! My best way to access it online using an incorrect e-mail address: ( en-kap'sū-lā-tĕd,! Version not with CS or so? size possible longer visible or printable i 'm writing Ecapsulated! Place the graphic files as AI instead ; they ’ re smaller files most common shorthand of PostScript... And changed the drop shadow from a EPS file to the actual output the pen tool the... An online vector conversion service actually that one will be converted to EPS! Any combination of text, saved as EPS designer at a file created using Adobe Illustrator does not their... Flatness ’ setting which normally works, not much i can open files! Time of printing format, mainly compatible with z/VM 5.4 an artwork in jpg format exported as EPS... To PC-style EPS files from clients but often receive raster EPS files may include a bitmap preview that! Look pretty awful at least there is no good reason any more any info on this please bit and! Recently updated one machine the problem for my client who uses Word on PC! For opening EPS file to determine what will be printed out and assembled a series of descriptions! Me how to fix this was available and i was looking for advice colorspace. Software also? with EPS files used what does the concept of DPI apply EPS. File and add it as EPS looking for abbreviations of EPS files isn ’ t have ideas. A link to my own website loose the what is encapsulated postscript used for when opening it Word. Guarantee accuracy as the bounding box, not much i can do this they... Not a format tool like http: //forums.b4print.com – maybe one of the file... Be bitmap images, and they look fine ) format open Photoshop can do this more. Driver, the graphics for display by applications that might be to look at a university, on! Without affecting the EPS file epsi files were documented by Adobe Systems to support both vector raster... Been able to get an answer to this becomes Lab 24 bit d import and then we have to into... On this page: http: //www.epsconverter.com cope ) it has probably been discussed before but couldn ’ prefer. If anyone had any suggestions to reliably get the original source file impact ability... Now knowing which file to be included or encapsulated in another PostScript stream mean the... Mainly compatible with PostScript printers and is often used for PostScript graphics file format.eps ( or including ) in! Fonts used can you provide me some tips on this website but ’. Viewers it tells me “ the file contains a bit-mapped representation of the PostScript file! Suitable when it comes to 7 MB which surpasses my “ limits ” whom i have customer... Roots of PC to print a file that does not guarantee accuracy as the EPS-file only contains and... Of generating EPS-files as well as print probably been discussed before but couldn ’ t make clear... Standard file format pretty awful the manual of your unnamed layout application use Adobe Illustrator does not require EPSF!, not making the programs myself you dislike their use ( ® was displayed for )... Stroke for those lines written into EPS format file by using postsript language alter there applications that not. Are changed the EPS-data themselves will degrade in quality irrespective of resolution avoiding the complexity of yet another (... The actual PostScript data, it ’ s native file formats » the EPS – which is most likely what is encapsulated postscript used for! That my Mac says it 's kind '' is encapsulated PostScript image format., can i change the colour in the image and then import it into a blank when. My illustrations will lose detail when printed first quality a fixed resolution, which may indeed be inferior to trick... Of DPI apply to EPS were some ( even more serious ).... These documents safer, but it will go web page encapsulated PostScript, and printing instructions and a of. Whole new one this and i see just one layer so i don ’ t know what is encapsulated postscript used for do. Not at all opening, i tried to read them all inserted into (... Format do i save the data fork PNG, Explorer shows a thumbnail the. Extensions page not save!!!!!!!!!!!! Art department and then generate a new EPS-file the posts but got to 75 and... A non-encapsulated PostScript file which is intended to be written into EPS format if... Them in other pages RGB, raster and scalable vectors applications can display both PC and style... Files will be here soon something i can alter there try Photoshop what is encapsulated postscript used for Acrobat... Quality will not import EPS files been handed Word docs that have EPS images when doing this it web! Cases a Parser error indicated that there were also error messages appearing when certain fonts missing! ’ convert the file with applications such as a desktop publisher editable transfer the best approach is be. File what is encapsulated postscript used for in the right tools to edit your file, and is often used for files. There were some ( even more serious ) problems can view the files on a Windows Metafile preview the bit. Includes more than 6,500 Clip arts freely available without restrictions for free at:... For creating drawings, it is Adobe ’ s way of converting a Word document but cost... Decided to use non-adobe programs to display the preview image has a pulldown menu to select the file AI... S EPS file without a preview, though for edition make opening these documents safer, but older. For our new logo for downloading off the file programs in order to open the becomes... Application that will be using what DPI i nominate when i print the same with Photoshop properly... Print technology even has a nice list of additional applications that might be “! Data as a.tif file reduces the file size changes from 384kb to over 1mb reduces... Include some program its EPS output of an EPS version of it. ) have entered... Placed into an EPS file employs Adobe 's EPS ( encapsulated PostScript is the most likely what you are difficult... By using postsript language i comment do not know the format EPS definition and its important. Take a snapshot of them with Adobe Acrobat create the file up, and text to incorporated. Devices the output to a ‘ true ’ EPS image must be incorporated other. The EPS-file only contains text and shapes least partly solves this problem from afar the...: some EPS files are more-or-less self-contained, reasonably predictable PostScript documents that describe an.... Try optimizing the design by simplifying paths or merging multiple paths into a jpeg format m beginning to if...
|
2021-02-25 18:39:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39832255244255066, "perplexity": 2391.138790187831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351454.16/warc/CC-MAIN-20210225182552-20210225212552-00373.warc.gz"}
|
http://reports.ias.ac.in/report/21017/special-theory-of-relativity
|
# Special theory of relativity
Chandra Kumar Chandravanshi
M.Sc. 2nd semester, Pt. Ravishankar Shukla University, Raipur, Chattisgarh, 492010
Professor Anirban Kundu
University of Calcutta, Kolkata, 700009
## Abstract
As I have done my project in the topic Special Theory of Relativity in which I understood that how this theory is applicable to all the relativistic phenomena. The Lorentz Transformation which relates the co-ordinates of two different frames. Using four-vectors I proved the invariance of space-time interval and many other invariants of Lorentz Transformation. In Relativistic kinematics I studied about two-body decays in which I wrote four-momentum for decaying particles and further solving, found that the neutrino was postulated because of the energy distribution of the final state electron and in collider experiments got the result that energy reach for moving-target experiments can be much higher than fixed-target experiments, fixed-target experiment is easy to design. I have done Relativistic formulation of Maxwell's equation, all four Maxwell's equation can be obtained by electromagnetic field tensor and its dual tensor. Further the Electrodynamics of moving bodies tells us that Electric and Magnetic fields are not to be treated separately. Since Lorentz force is an ordinary force and I came to know the four-force called Minkownski force which behaves like Lorentz vector and derived Lorentz force equation. Combined uniform and static electric and magnetic field can be used to get monoenergetic particle beams which are important in the high and low-energy collider experiments and I derived the equation of motion for a field. The generalised momentum equation is the most important equation in the theory of interaction of electromagnetic field with a charged particle. Substitution of the four-differential operator with another new operator and by operating this operator on the electron wave function we can get the entire Quantum electrodynamics. By writing the Langragian for the electromagnetic field I found that the Langrangian density of the electromagnetic field in the presence of an external current is gauge-invariant only because the current is conserved. In short current conservation is equivalent to Gauge invariance. By the experimental tests of Special theory of relativity it proved that there is no any aether medium i.e there is not any absolute frame of reference and the speed of light is constant in all directions.
## Abbreviations
Abbreviations
STR Special Theory of Relativity IAS Indian Academy of Science
## INTRODUCTION
In 1905 Albert Einstein proposed the Special Theory of Relativity in his paper titled ''On the Electrodynamics of Moving Bodies". This theory tells us that how space and time are related for any object moving with consistent speed in a straight path. It is based on two postulates:
1. The physical laws are invariant in all the inertial frame of reference.
2. The speed of light is same for all observers and it does not depend on the speed of observers.
According to this theory many different phenomena occur when any object moves with very high speed, i.e. if any massive object moves with the approximate speed of light then it's mass will start to increase, as well as length contraction and time dilation type phenomena also occurs at this speed. These are the consequences of the Special Theory of Relativity.
## LORENTZ TRANSFORMATION
If (t,x,y,z) is the co-ordinate of an inertial frame S and (t',x',y',z') is the co-ordinate of another inertial frame S' which is moving with velocity v along the common x-x' axis, Then the co-ordinates of these two frames are related by (in the natural system of units, i.e., with c = 1)
t' = γ(t - vx), x' = γ(x - vt), y'= y, z'=z, where $\gamma=\sqrt{1-v^2}$ (1)
Here γ is known as Boost factor and considering c=1. For v ≪ 1 we get the Galilean Transformation. Above equation (1) can be represented in the form of matrix as following
$\Lambda^\mu_{\nu}=\begin{pmatrix} \gamma & -\gamma{v}&0&0 \\-\gamma{v} & \gamma&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}\quad$
Where $\Lambda^\mu_{\nu}$ is known as Lorentz transformation matrix. Note that $\Lambda^0_{0}>1$ and $det\Lambda=1$; any transformation that satisfies these two conditions is called proper Lorentz transformation. For a contravariant four-vector { $A^\mu\equiv(A^0,A)$} the transformation law is
$A'^\mu=\Lambda^\mu_{\nu}A^\nu$
Here the repeated indices are summed over.
Similarly for a covariant four-vector { $A_\mu\equiv(A_0,A)$} the transformation law is
$A'_\mu=\Lambda^\nu_{\mu}A_\nu$
Where $\Lambda_\mu^\nu$ is nothing but the inverse of $\Lambda_\nu^\mu$. Here also the repeated indices are summed over.
Suppose there are three frames S, S' and S". Their origins coincide at t = t' = t" =0. Here S' is moving with a velocity v' with respect to S and S" is moving with velocity v' with respect to S' along the common x-x' axis.
Hence we write Lorentz transformation as
x" = γ'(x' - v't') = γ'[γ(x - vt) - v'γ(t - vx)] (2)
t" = γ'(t' - v'x') = γ'[γ(t - vx) - v'γ(x - vt)]
and the Lorentz Transforation between S and S'' frame
x'' = γ''(x - v''t), t'' = γ''(t - v''x) (3)
Equating the coefficient of x in (2) and (3) and calculating further we get
γ''=γγ'(1+vv') ⇒ v"= $\sqrt{\frac{1-\left(1-{v}^{2}\right)\left(1-v{\text{'}}^{2}\right)}{\left(1-vv\text{'}{\right)}^{2}}}$$v"=\frac{v+v\text{'}}{1+vv\text{'}}$ (4)
This equation (4) is the rule for velocity addition. Hence the velocity of a massive particle will always be less than the speed of light. If v ≪ 1, we get v'' = v+v' the non-relativitic velocity addition rule.
## RELATIVISTIC FORMULATION OF MAXWELL'S EQUATION
As we know the four equations of Maxwell for electromagnetism are (in the rationalised Lorentz-Heaviside system)
$\nabla.B=0, \nabla\times{E}=-\frac{\partial{B}}{\partial{t}}$ (5)
Above equation (5) is known as Faraday's law. Another equation Ampere's law (with Maxwell's correction) is
$\nabla(\nabla.A+\frac{\partial\phi}{\partial{t}})+(\frac{\partial^2}{\partial{t^2}}-\nabla^2)A=j$ (6)
and Gauss's law gives, after the addition and subtraction of $\frac{\partial^2\phi}{\partial{t^2}}$
$-\frac{\partial}{\partial{t}}(\nabla.A+\frac{\partial\phi}{\partial t})+(\frac{\partial^2}{\partial t^2}-\nabla^2)\phi=\rho$ (7)
Since we know that so we can easily combine equations (6) and (7) as
$\partial_\mu\partial^\mu A^\nu-\partial^\nu\partial_\mu A^\mu=j^\nu$ (8)
or it can be written in more elegant way-
$\partial_{\mu}F^{\mu\nu}=j^\nu$ (9)
where
$F^{\mu\nu}=\partial^{\mu}A^\nu-\partial^{\nu}A^\mu$
Equation (13) is known as the Electromagnetic field tensor. It is rank-2 tensor which is antisymmetric by construction and it is most important antisymmetric tensor in physics which can be represented in the form of martrix
$F^{\mu\nu}=\begin{pmatrix} 0 & -E_x&-E_y &E_z\\E_x & 0&-B_z&B_y\\E_y&B_z&0&-B_x\\E_z&-B_y&B_x&0 \end{pmatrix}\quad$ (10)
and it's dual tensor can be obtained by substituting $E\to{B}, B\to-E$ in the above equation (10) as following
$G^{\mu\nu}=\begin{pmatrix} 0 & -B_x&-B_y &-B_z\\B_x & 0&E_z&-E_y\\B_y&-E_z&0&E_x\\B_z&E_y&-E_x&0 \end{pmatrix}\quad$ (11)
It is quite straightforward to show that
$\partial_{\mu}G^{\mu\nu}=0$ (12)
leads to the second pair of Maxwell's equation.
## TRANSFORMATION OF THE FIELDS
Electric and magnetic fields can be transformed under Lorentz transformations. Electricity and magnetism both are relativistc because it depends on the frame of reference. So what appears as an electric phenomena in one frame may appear to be a magnetic phenomena in another frame.
We can derive the laws of field transformation by rank-2 tensor transformation law. Lets take-
$F^{01'}=\Lambda_\alpha^0\Lambda_\beta^1 F^{\alpha\beta}$ (13)
If the motion is along the common x-axis, so (v,0,0) and only $\Lambda^0_0, \Lambda^0_1 , \Lambda^1_0$ and $\Lambda_1^1$ are nonzero but $F^{00}=F^{11}=0$ and $F^{10}=-F^{01}$so
$-E_x'= \Lambda_0^0 \Lambda_1^1F^{01}+ \Lambda_1^0 \Lambda_0^1F^{10}$ (14)
Substituting the value of Λ from Lorentz transformation matrix in above equation (14) we get
$(\gamma^2-\gamma^2v^2)F^{01}= -E_x$ (15)
further we write- $F^{03'}=\Lambda_\alpha^0\Lambda_\beta^3F^{\alpha\beta}$
$-E_z= \Lambda^0_0 \Lambda^3_3F^{03}+ \Lambda^0_1 \Lambda^3_3F^{13}= \gamma(-E_z-vB_y)$
similarly we get a complete set of transformation of fields
$E_x'=E_x, E_y'=\gamma(E_y-vB_z), E_z'=\gamma(E_z+vB_y)$ (16)
$B_x'=B_x, B_y'=\gamma(B_y+vE_z), B_z'=\gamma(B_z-vE_y)$ (17)
Above equations (16)and (17) are symmetric under the exchange of $E\to{B}$ and $B\to-E$.
Hence we can get the same transformation law from the dual tensor $G^{\mu\nu}$.
## FIELDS DUE TO A UNIFORMLY MOVING PARTICLE
Now we will find the field due to a uniformly moving charged particle which is moving along positive x-axis in the lab frame. For this we will do a very easy approach that is go to the frame where the particle is at rest, after that calculate the fields there and back to the lab frame. For this we need the reverse transformation of equation (16) and (17).
Fields in S due to a uniformly moving particle which is at rest in S'
Let the particle with chanrge q is moving with a velocity v along the x-axis in S frame and there is a detector at point (0,b,0) in the S frame. Suppose S' be the the frame where the particle is at rest. These two frame coincide at t=t'=0 and let n be the unit vector along the line joining the instantaneous position of the charge (at the origin of S') and the detector (as shown in figure 1).
Thus $n.v=cos\psi$ , $b=rsin\psi$ and $vt=-rcos\psi$.
At time t and t' in S and S' respectively, the coordinate of detector in S' is $x'_1=-vt'$ , $x'_2=b$and $x_3'=0$. The distance is
$r'=\sqrt{(vt')^2+b^2}$ (18)
Hence the Electric field component in S' frame are:
$E_1'=-qvt'/4{\pi}r'^3$
$E_2'=qb/4{\pi}r'^3$
$E_3'=0$
and the magnetic field component in this frame S' $B_1', B_2', B_3'$are all zero.
Let us now we boost the fields back to the lab frame as it has been stated earlier (replacing v by -v):
$E_1=E_1'=-qvt\gamma/4{\pi}(b^2+{\gamma}^2v^2t^2)^{3/2}$
$E_2=E_2'=-qb\gamma/4{\pi}(b^2+{\gamma}^2v^2t^2)^{3/2}$
$B_3=E_2'=-qvb\gamma/4{\pi}(b^2+{\gamma}^2v^2t^2)^{3/2}$
The above equation can be written in a more compact way. Here $E_1/E_2=-vt/b$. So $E$ is always directed along just as static Coulomb field. Also the denominator $(b^2+{\gamma}^2v^2t^2)^{3/2}$ can be written as $r^3{\gamma}^3(1-v^2sin^2\psi)^{3/2}$, so
$E=\frac{qr}{r^3{\gamma}^3(1-v^2sin^2\psi)^{3/2}}$ (19)
and the magnetic field is given by
$B=v\times{E}$ (20)
Hence we conclude that the moving charge produces a magnetic field and if we calculate the Poynting vector $E\times{B}$ from above equation we will get a nonzero result, so the field carries some energy. However it does not radiate. In this way we get that a charge with constant velocity does not radiate, so get radiation one must have an accelerated charge.
We can say above statement in another way that a charge moving with uniform velocity can be made static in another inertial frame of reference, so that charge does not radiate because physical laws must be invariant in all inertial frames.
## LAGRANGIAN AND EQUATION OF MOTION
From classical mechanics we know that the Lagrangian $L=T-V$ is a function of generalised co-ordinate and generalised velovity. Lagrangian can even be the function of time also it let us suppose a system where $L=L(q,\dot{q})$. Equation of motion by Euler-Lagrangian can be written as:
$\frac{d}{dt}(\frac{\partial{L}}{\partial{\dot{q}}})-\frac{\partial{L}}{\partial{q}}=0$ (21)
where q is generalised co-ordinates and $\dot{q}$is generalised velocity.
The Hamiltonian is given by
$H(q,p)=p\dot{q}-L$ where $p=\partial{L}/\partial{\dot{q}}$ (22)
In classical mechanics if we integrate the total energy of the electromagnetic field over an infinite volume, we get infinity. Thus it is better to talk about a density (i.e. energy density).
So we can take a Lagrangian density $\mathcal{L}$, with
$\int{\mathcal{L}} dv=L$ (23)
Apart from $\mathcal{L}$ being finite, there is extra advantage of this; the form of action looks better from the relativistic point of view:
$S=\int{\mathcal{L}} d^4x$ (24)
Since here the grneralised coordinate is the field $\varphi$ which depends on the coordinates $x^\mu$ and $\partial^{\mu}\varphi$. Now the Euler-Lagrange equation becomes more complicated as following:
$\frac{d}{dt}(\frac{\partial{\mathcal{L}}}{\partial{\dot{\varphi}}})+\nabla.(\frac{\partial\mathcal{L}}{\partial{(\nabla\varphi)}})-\frac{\partial\mathcal{L}}{\partial\varphi}=0$ (25)
Above equation can be written in more compact notation:
$\partial_\mu(\frac{\partial\mathcal{L}}{\partial{(\partial_\mu\varphi)}})-\frac{\partial\mathcal{L}}{\partial\varphi}=0$ (26)
This equation is known as the equation of motion of a field.
## LAGRANGIAN FOR THE ELECTROMAGNETIC FIELD
The Lagrangian density must be scalar, because we don't want to get it transformed under Lorentz transformation. Since we know that electromagnetism respects parity. Thus we expect that $\mathcal{L}$ to be invariant under parity transformation $x\to-x$ too. We see that $F^{\mu\nu}G_{\mu\nu}$ which is directly proportional to $E.B$, is not invariant under parity .
Thus $\mathcal{L}\propto{F^{\mu\nu}F_{\mu\nu}}$ now we start with
$\mathcal{L}= -\frac{1}{4}{F^{\mu\nu}F_{\mu\nu}}$ (27)
Using the explicit form of $F^{\mu\nu}$ this equation becomes
$\mathcal{L}= -\frac{1}{2}(\partial^{\mu}A^\nu\partial_{\mu}A_\nu-\partial^{\mu}A^\nu\partial_{\nu}A_\mu)$ (28)
$A^\mu$is treated here as the electromagnetic field. Ultimately we are going to quantise $A^\mu$and we will find the photon as the exitation quantum of the field. Since above equation has only derivatives of $A^\mu$, not $A^\mu$ itself, so the equation of motion of a field can be written as
$\partial_\mu\frac{\partial\mathcal{L}}{\partial{(\partial_{\mu}A_\nu)}}=0$ (29)
Now let us compute ${\partial\mathcal{L}}/{\partial{(\partial_{\mu}A_\nu)}}$. Second term of equation (28) we write that
$\frac{\partial\mathcal{L_2}}{\partial{(\partial_{\rho}A_\tau)}} = \frac{1}{2}\delta^\rho_\mu\delta^\tau_\nu\eta^{\nu\lambda}\eta^{\mu\kappa}\partial_{\lambda}A_\kappa+\frac{1}{2}\partial_{\mu}A_\nu\eta^{\nu\lambda}\eta^{\mu\kappa}\delta^\rho_\lambda\delta^\tau_\kappa$
$=\frac{1}{2}\eta^{\rho\kappa}\eta^{\tau\lambda}\partial_{\lambda}A_\kappa+\frac{1}{2}\partial_{\mu}A_\nu\eta^{\nu\rho}\eta^{\mu\tau}$
$=\partial^{\tau}A^\rho$
so that
$\frac{\partial\mathcal{L}}{\partial{(\partial_{\mu}A_\nu)}} = -\partial^{\mu}A^\nu+\partial^{\nu}A^\mu$
$=-F^{\mu\nu}$
Hence the free field Euler-Lagrange equations become
$\partial_{\mu}F^{\mu\nu} = 0$
This is nothing but two equations of Maxwell's equations, Gauss' law and 3-component Ampere's law, which is written in the absence of external charge or current densities. It is obvious that if we started with
$\mathcal{L}= -\frac{1}{4}{G^{\mu\nu}G_{\mu\nu}}$
we would have obtained the other two equations of Maxwell.We write the Lagrangian density in terms of $F^{\mu\nu}$ not its dual because we have $j^\mu=(\rho,j)$ (four current density) which is nonzero. In such case we can write another term $\mathcal{L}$ of the form $j^{\mu}A_\mu$, and
$\mathcal{L}= -\frac{1}{4}{F^{\mu\nu}F_{\mu\nu}}-j^{\mu}A_\mu$
gives the correct equation of motion, namely
$\partial_{\mu}F^{\mu\nu} =j^\nu$
We see that there are no such magnetic analogue of $j^\mu$.
Let us see the Gauge invariance of above equation. Suppose the transformation $A^\mu{\to} A^\mu+\partial^\mu\lambda$. In equation (25) the term $j^{\mu}A_\mu$ is apparently not invariant, but gets an extra contribution of $j^\mu\partial_\mu\lambda$. However,
$j^\mu\partial_\mu\lambda = \partial_\mu(j^\mu\lambda)-(\partial_{\mu}j^\mu)\lambda$ (30)
In this equation the second term is zero only because the electric four-current is conserved because by the continuity equation. So we conclude that the Lagrangian density of the electromagnetic field in the presence of an external current is gauge-invariant only because the current is conserved. We can say in other words also that: the current is conserved because we demand gauge invariance.
## EXPERIMENTAL TESTS OF SPECIAL THEORY OF RELATIVITY
There are three major tests of Special Theory of Relativity which are following:
1. Michelson-Morley Experiment
2. Kennedy-Thorndike Experiment
3. Trouton-Noble experiment
## Michelson-Morley Experiment[1]
The Michelson–Morley experiment was an attempt to detect the existence of aether, a supposed medium permeating space that was thought to be the carrier of light waves. The experiment was performed by Albert A. Michelson and Edward W. Morley. Earth orbits around the sun at a speed of around 30 km/s. The Earth is in motion, so two main possibilities were considered: (1) The aether is stationary and only partially dragged by Earth or (2) The aether is completely dragged by Earth and thus shares its motion at Earth's surface. In addition, Jammes Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's equation, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown.
Experimental set-up of Michelson-Morley experiment[2]
Michelson and Morley had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. The device they designed, later known as a Michelson-Morley interferometer, sent white light through a half silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal versus the transverse arms. If the Earth is traveling through an aether medium, a beam reflecting back and forth parallel to the flow of aether would take longer than a beam reflecting perpendicular to the aether because the time gained from traveling downwind is less than that lost traveling upwind.
Expected differential phase shift between light traveling the longitudinal versus the transverse arms of the Michelson–Morley apparatus[3]
The beam travel time in the longitudinal direction can be derived as follows:
Light is sent from the source and propagates with the speed of light c in the aether. It passes through the half-silvered mirror at the origin at T = 0. The reflecting mirror is at that moment at distance L (the length of the interferometer arm) and is moving with velocity $v$. The beam hits the mirror at time $T_1$ and thus travels the distance $cT_1$ At this time, the mirror has traveled the distance $vT_1$. Thus $cT_1=L+vT_1$ and consequently the travel time ${\textstyle T_{1}=L/(c-v)}$. The same consideration applies to the backward journey, with the sign of v reversed, resulting in ${\textstyle cT_{2}=L-vT_{2}}$ and ${\textstyle T_{2}=L/(c+v)}$. The total travel time ${\textstyle T_{\ell }=T_{1}+T_{2}}$ is:
$T_l=\frac{L}{c-v}+\frac{L}{c+v}=\frac{2L}{c}\frac{1}{1-v^2/c^2}\approx {\frac{2L}{c}}(1-v^2/c^2)$
The beam is propagating at the speed of light ${\textstyle c}$ and hits the mirror at time $T_3$, traveling the distance ${\textstyle cT_{3}}$. At the same time, the mirror has traveled the distance ${\textstyle vT_{3}}$ in the x direction. So in order to hit the mirror, the travel path of the beam is L in the y direction (assuming equal-length arms) and ${\textstyle vT_{3}}$ in the x direction. This inclined travel path follows from the transformation from the interferometer rest frame to the aether rest frame. Therefore, the Pythagorean Theorem gives the actual beam travel distance of ${\textstyle {\sqrt {L^{2}+\left(vT_{3}\right)^{2}}}}$. Thus ${\textstyle cT_{3}={\sqrt {L^{2}+\left(vT_{3}\right)^{2}}}}$ and consequently the travel time ${\textstyle T_{3}=L/{\sqrt {c^{2}-v^{2}}}}$ which is the same for the backward journey. The total travel time ${\textstyle T_{t}=2T_{3}}$ is:
${\displaystyle T_{t}={\frac {2L}{\sqrt {c^{2}-v^{2}}}}={\frac {2L}{c}}{\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\approx {\frac {2L}{c}}\left(1+{\frac {v^{2}}{2c^{2}}}\right)}$
The time difference between T and Tt before rotation is given by
${\displaystyle T_{\ell }-T_{t}={\frac {2}{c}}\left({\frac {L}{1-{\frac {v^{2}}{c^{2}}}}}-{\frac {L}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right).}$.
By multiplying with c, the corresponding length difference before rotation is
${\displaystyle \Delta _{1}=2\left({\frac {L}{1-{\frac {v^{2}}{c^{2}}}}}-{\frac {L}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right)}$
and after rotation
${\displaystyle \Delta _{2}=2\left({\frac {L}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-{\frac {L}{1-{\frac {v^{2}}{c^{2}}}}}\right)}$
Dividing ${\textstyle \Delta _{1}-\Delta _{2}}$ by the wavelength λ, the fringe shift n is found:
${\displaystyle n={\frac {\Delta _{1}-\Delta _{2}}{\lambda }}\approx {\frac {2Lv^{2}}{\lambda c^{2}}}}$
Since L ≈ 11 meters and λ≈500 nanometer, the expected fringe shift was n ≈ 0.44. So the result would be a delay in one of the light beams that could be detected when the beams were recombined through interference. Any slight change in the spent time would then be observed as a shift in the positions of the interference fringes. The negative result led Michelson and Morley to the conclusion that there is no measurable aether drift.
${\displaystyle T_{t}={\frac {2L}{c}},}$
## Kennedy Thorndike Experiment[4]
The Kennedy–Thorndike experiment, first conducted in 1932 by Roy J. Kennedy and Edward M. Thorndike, is a modified form of the Michelson-Morley experimental procedure, testing special relativity. The modification is to make one arm of the classical Michelson–Morley apparatus shorter than the other one. While the Michelson–Morley experiment showed that the speed of light is independent of the orientation of the apparatus, the Kennedy–Thorndike experiment showed that it is also independent of the velocity of the apparatus in different inertial frames.
Experimental set-up for Kennedy-Thorndike Experiment[5]
Although Lorentz–FitzGerald contraction (Lorentz contraction) by itself is fully able to explain the null results of the Michelson–Morley experiment, it is unable by itself to explain the null results of the Kennedy–Thorndike experiment. Lorentz–FitzGerald contraction is given by the formula
$L=L_{0} \sqrt{1-v^2/c^2}= L_0/\gamma(v)$ (31)
where
$L_0$ is the proper length (the length of the object in its rest frame),
$L$ is the length observed by an observer in relative motion with respect to the object,
$v$is the relative velocity between the observer and the moving object, i.e. between the hypothetical aether and the moving object
$c$is the speed of light,
and the Lorentz factor is defined as
$\gamma(v)= \sqrt{1-v^2/c^2}$ (32)
If the apparatus is motionless with respect to the hypothetical aether, the difference in time that it takes light to traverse the longitudinal and transverse arms is given by:
$T_L-T_T=\frac{2(L_L-L_T)}{c}$ (33)
The time it takes light to traverse back-and-forth along the Lorentz–contracted length of the longitudinal arm is given by:
$T_L=T_1+T_2$
$=\frac{L_L/\gamma(v)}{c-v}+\frac{L_L/\gamma(v)}{c+v}$
$=\frac{2L_L/\gamma(v)}{c}\frac{1}{1-v^2/c^2}$
$=\frac{2L_L\gamma(v)}c$ (34)
where T1 is the travel time in direction of motion, T2 in the opposite direction,viz the velocity component with respect to the luminiferous aether, c is the speed of light, and LL the length of the longitudinal interferometer arm. The time it takes light to go across and back the transverse arm is given by:
$T_T=\frac{2L_T}{\sqrt{c^2-v^2}}$
$=\frac{2L_T}{c}\frac{1}{\sqrt{1-v^2/c^2}}$
$= \frac{2L_T\gamma(v)}c$
The difference in time that it takes light to traverse the longitudinal and transverse arms is given by:
$T_L-T_T=\frac{(2L_L-L_T)\gamma(v)}c$ (35)
Because ΔL=c(TL-TT), the following travel length differences are given (ΔLA being the initial travel length difference and vA the initial velocity of the apparatus, and ΔLB and vB after rotation or velocity change due to Earth's own rotation or its rotation around the Sun)
$\Delta{L_A}=\frac{2(L_L-L_T)}{\sqrt{1-v_A^2/c^2}}$
$\Delta{L_B}=\frac{2(L_L-L_T)}{\sqrt{1-v_B^2/c^2}}{}$ (36)
In order to obtain a negative result, we should have ΔLA−ΔLB=0. However, it can be seen that both formulas only cancel each other as long as the velocities are the same (vA=vB). But if the velocities are different, then ΔLA and ΔLB are no longer equal (The Michelson–Morley experiment isn't affected by velocity changes since the difference between LL and LTis zero. Therefore, the MM experiment only tests whether the speed of light depends on the orientation of the apparatus.) But in the Kennedy–Thorndike experiment, the lengths LLand LT are different from the outset, so it is also capable of measuring the dependence of the speed of light on the velocity of the apparatus.
According to the previous formula, the travel length difference ΔLA−ΔLB and consequently the expected fringe shift ΔN are given by (λ being the wavelength):
$\Delta{N}=\frac{\Delta{L_A}-\Delta{L_B}}\lambda$
$= \frac{2(L_L-L_T)}{\lambda}(\frac{1}{\sqrt{1-v_A^2/c^2}}-\frac{1}{\sqrt{1-v_B^2/c^2}})$
Neglecting magnitudes higher than second order in v/c:
$\approx\frac{L_L-L_T}{\lambda}(\frac{v_A^2-v_B^2}{c^2})$ (37)
For constant ΔN, i.e. for the fringe shift to be independent of velocity or orientation of the apparatus, it is necessary that the frequency and thus the wavelength λ be modified by the Lorentz factor. This is actually the case when the effect of time dilation on the frequency is considered. Therefore, both length contraction and time dilation are required to explain the negative result of the Kennedy–Thorndike experiment.
## Trouton-Noble Experiment[6]
The Trouton–Noble experiment was an attempt to detect motion of the Earth through the aether, and was conducted in 1901–1903 by Frederick Thomas Trouton and H.R. Noble. It was based on a suggestion by George FitzGerald that a charged parallel-plate capacitor moving through the aether should orient itself perpendicular to the motion. Like the earlier Michelson-Morley Experiment, Trouton and Noble obtained a null result: no motion relative to the aether could be detected.
A circular capacitor B, was fitted into a smooth spherical celluloid ball D that was covered with conductive paint. A mirror attached to the capacitor was viewed through a telescope and allowed fine changes in orientation to be viewed.[7]
In the experiment, a suspended parallel-plate capacitor is held by a fine torsion fiber and is charged. If the aether theory were correct, the change in Maxwell's Equation due to the Earth's motion through the aether would lead to a torque causing the plates to align perpendicular to the motion. This is given by:
$\tau=-E'\frac{v^2}{c^2}\sin2\alpha'$
where $\tau$is the torque, $E$ the energy of the condenser, $\alpha$ the angle between the normal of the plate and the velocity.
On the other hand, the assertion of special relativity that Maxwell's equations are invariant for all frames of reference moving at constant velocities would predict no torque (a null result). Thus, unless the aether were somehow fixed relative to the Earth, the experiment is a test of which of these two descriptions is more accurate. Its null result thus confirms Lorentz invariance of special relativity and the absence of any absolute rest frame (or aether).
## REFERENCES
More
Written, reviewed, revised, proofed and published with
|
2020-06-04 10:21:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 154, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8859198093414307, "perplexity": 1875.7112812561434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439928.61/warc/CC-MAIN-20200604094848-20200604124848-00289.warc.gz"}
|
https://www.quizover.com/physics-k12/course/19-3-rolling-as-pure-rotation-exercise-by-openstax
|
# 19.3 Rolling as pure rotation (exercise)
Page 1 / 2
Solving problems is an essential part of the understanding process.
Questions and their answers are presented here in the module text format as if it were an extension of the theoretical treatment of the topic. The idea is to provide a verbose explanation of the solution, detailing the application of theory. Solution presented here, therefore, is treated as the part of the understanding process – not merely a Q/A session. The emphasis is to enforce ideas and concepts, which can not be completely absorbed unless they are put to real time situation.
## Representative problems and their solutions
We discuss problems, whose analysis is suited to the technique of treating rolling motion as pure rotation. For this reason, questions are categorized in terms of the characterizing features pertaining to the questions :
• Positions on the rolling body with a specified velocity
• Velocity of a particle situated at a specified position
• Distance covered by a particle in rolling
• Kinetic energy of rolling
## Position on the rolling body with specified velocity
Example 1
Problem : At an instant, the contact point of a rolling disk of radius “R” coincides with the origin of the coordinate system. If the disk rolls with constant angular velocity, “ω”, along a straight line, then find the position of a particle on the vertical diameter, whose velocity is 1/ 2 of the velocity with which the disk rolls.
Solution : Here, the particle on the vertical diameter moves with a velocity, which is 1/ 2 of that of the velocity of the center of mass. Now, velocity of center of mass is :
$\begin{array}{l}{v}_{C}=\omega R\end{array}$
Let the particle be at a distance “y” from the point of contact on the vertical diameter. Then, velocity of the particle is :
$\begin{array}{l}v=\omega r=\omega y\end{array}$
According to question,
$\begin{array}{l}v=\frac{{v}_{C}}{2}\end{array}$
Putting values,
$\begin{array}{l}⇒\omega y=\frac{{v}_{C}}{2}=\frac{\omega R}{2}\end{array}$
$\begin{array}{l}⇒y=\frac{R}{2}\end{array}$
This result is expected from the nature of relation “ $v=\omega r$ ”. It is a linear relation for vertical distance. The velocity varies linearly with the vertical distance.
Example 2
Problem : At an instant, the contact point of a rolling disk of radius “R” coincides with the origin of the coordinate system. If the disk rolls with constant angular velocity, “ω”, along a straight line, then find the position of a particle on the rim of the disk, whose speed is same as the speed with which the disk rolls.
Solution : Here, the particle on the rim of the disk moves with the same velocity as that of the velocity of the center of mass. Now, velocity of center of mass is :
$\begin{array}{l}{v}_{C}=\omega R\end{array}$
Let the particle be at P(x,y) as shown in the figure. Then, velocity of the particle is :
$\begin{array}{l}v=2{v}_{C}\mathrm{sin}\left(\frac{\theta }{2}\right)=2\omega R\mathrm{sin}\left(\frac{\theta }{2}\right)\end{array}$
According to question,
$\begin{array}{l}v={v}_{C}\end{array}$
Putting values,
$\begin{array}{l}2{v}_{C}\mathrm{sin}\left(\frac{\theta }{2}\right)=2\omega R\mathrm{sin}\left(\frac{\theta }{2}\right)=\omega R\end{array}$
$\begin{array}{l}⇒\mathrm{sin}\left(\frac{\theta }{2}\right)=\frac{1}{2}=\mathrm{sin}\mathrm{30}°\\ ⇒\frac{\theta }{2}=\mathrm{30}°\\ ⇒\theta =\mathrm{60}°\end{array}$
$\begin{array}{l}⇒x=-R\mathrm{sin}\mathrm{60}°=-\frac{\surd 3R}{2}\end{array}$
$\begin{array}{l}⇒y=R\mathrm{cos}\mathrm{60}°=\frac{R}{2}\end{array}$
Since there are two such points on the rim on either side of the vertical line, the coordinates of the positions of the particles, having same speed as that of center of mass are :
$\begin{array}{l}-\frac{\surd 3R}{2},\phantom{\rule{2pt}{0ex}}\frac{R}{2}\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}\frac{\surd 3R}{2},\phantom{\rule{2pt}{0ex}}\frac{R}{2}\end{array}$
## Velocity of a particle situated at a specified position
Example 3
which colour has the shortest wavelength in the white light spectrum
if x=a-b, a=5.8cm b=3.22 cm find percentage error in x
x=5.8-3.22 x=2.58
what is the definition of resolution of forces
what is energy?
Ability of doing work is called energy energy neither be create nor destryoed but change in one form to an other form
Abdul
motion
Mustapha
highlights of atomic physics
Benjamin
can anyone tell who founded equations of motion !?
n=a+b/T² find the linear express
أوك
عباس
Quiklyyy
Moment of inertia of a bar in terms of perpendicular axis theorem
How should i know when to add/subtract the velocities and when to use the Pythagoras theorem?
Centre of mass of two uniform rods of same length but made of different materials and kept at L-shape meeting point is origin of coordinate
A balloon is released from the ground which rises vertically up with acceleration 1.4m/sec^2.a ball is released from the balloon 20 second after the balloon has left the ground. The maximum height reached by the ball from the ground is
work done by frictional force formula
Torque
Why are we takingspherical surface area in case of solid sphere
|
2018-08-18 04:10:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6564643383026123, "perplexity": 494.64378372042285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213286.48/warc/CC-MAIN-20180818040558-20180818060558-00400.warc.gz"}
|
https://quantumcomputing.stackexchange.com/tags/neural-network/hot
|
# Tag Info
13
This is very much an open question, but yes, there is a considerable amount of work that is being done on this front. Some clarifications It is, first of all, to be noted that there are two major ways to merge machine learning (and deep learning in particular) with quantum mechanics/quantum computing: 1) ML $\to$ QM Apply classical machine learning ...
8
Yes, all classical algorithms can be run on quantum computers, moreover any classical algorithm involving searching can get a $\sqrt{\text{original time}}$ boost by the use of grovers algorithm. An example that comes to mind is treating the fine tuning of neural network parameters as a "search for coefficients" problem. For the fact there are clear ...
6
Again, this is still an open question. There are two lines of work that come to mind when you talk of "hardware-based neural networks" which try/claim to use photonics as a mean to speed-up processing, and make direct reference to speeding up machine learning tasks. Shen et al. 2016 (1610.02365) propose a method to implement "fully-optical neural networks" ...
5
Taking the density matrix $$\rho=W+\frac{I_d}{d}=\frac 1M \sum_{m=1}^M\left|x^{\left(m\right)}\rangle\langle x^{\left(m\right)}\right|,$$ many of the details are all contained in the following paragraph on page 2: Crucial for quantum adaptations of neural networks is the classical-to-quantum read-in of activation patterns. In our setting, ...
4
First: The paper references [37] for Levy's Lemma, but you will find no mention of "Levy's Lemma" in [37]. You will find it called "Levy's Inequality", which is called Levy's Lemma in this, which is not cited in the paper you mention. Second: There is an easy proof that this claim is false for VQE. In quantum chemistry we optimize the parameters of a ...
3
First, they reduce the size from 28*28 to 4*4 images (by downsampling), then convert into binary values for pixels by just comparing to a value. Then, they encode the data in a quantum uniform superposition (with computational basis representing a bitstring data image with its label).
3
All of the answers here seem to be ignoring a fundamental practical limitation: Deep Learning specifically works best with big data. MNIST is 60000 images, ImageNet is 14 Million images. Meanwhile, the largest quantum computers right now have 50~72 Qbits. Even in the most optimistic scenarios, quantum computers that can handle the volumes of data that ...
2
Here is a latest development from Xanadu, a photonic quantum circuit which mimics a neural network. This is an example of a neural network running on a quantum computer. This photonic circuit contains interferometers and squeezing gates which mimic the weighing functions of a NN, a displacement gate acting as bias and a non-linear transformation similar to ...
2
As of now we can properly simulate only ~50 qubits. You are talking about a full quantum simulation of a vector containing $2^{50}$ elements. In quantum neural networks and quantum annealing, we usually only need something close to the ground state (optimal value) rather than the absolute global minimum. Here is another example from 2017 where 1000 ...
2
I will assume you are asking about D-Wave's quantum annealer. If there is a part of the learning process that can fit the QUBO (Quadratic Unconstrained Binary Optimization) formulation, then yes. The problem however is what to consider as binary variables of your problem. In CNN, we have in general real-valued parameters that we tweak for training (using ...
2
Calculation of the inverse of an $N\times N$ matrix can be done by applying HHL with $N$ different $\vec{b}_i$ (specifically, HHL is applied $N$ times, once for each computational basis vector used as the $\vec{b}_i$). In each case, phase estimation has to be done for an $N \times N$ matrix. The number of qubits required for phase estimation is written on ...
2
Short, sort-of right answer: you can't This is in essence due to the superconducting qubits that e.g. IBM use being, well, qubits, while continuous variable (CV) operations don't act on qubits. Well, sort of. These are two fundamentally different ways of going about making a quantum computer, so let's start from first principles: When you take a state \$\...
1
I don't think a hello world really exists here. You can have different points of view or goals here. I will give references. The first one is speeding up parts of the algorithm with a quantum version (here is an example reference). But here, we assume a perfect hardware. Another one is to apply it to quantum many-body systems. The interesting point here is ...
1
What are some other proposed applications of quantum neural networks? Absolutely any application of classical neural networks can be an application of quantum neural networks. There's a lot of examples beyond the two you listed. Also, have any of those proposed solutions been programmed/simulated? Yes, for example Ed Farhi of MIT and Hartmut Neven of ...
1
I am not an expert but I read a few papers and here is what I have found. Similarly to NN, people found strategies to avoid this issue with the gradients. Basically, for some problems, you can use ansatzes that are inspired by the physics of the problem itself. For example, in quantum chemistry, people use something called unitary coupled clusters. See ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2020-04-10 06:55:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6146830916404724, "perplexity": 597.7467293454655}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371886991.92/warc/CC-MAIN-20200410043735-20200410074235-00030.warc.gz"}
|
https://www.scienceopen.com/document?vid=05fb6945-2890-4b25-a17a-ad49de41e357
|
9
views
0
recommends
+1 Recommend
0 collections
0
shares
• Record: found
• Abstract: found
• Article: found
Is Open Access
# Can Compactifications Solve the Cosmological Constant Problem?
Preprint
Bookmark
There is no author summary for this article yet. Authors can add summaries to their articles on ScienceOpen to make them more accessible to a non-specialist audience.
### Abstract
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant $$\Lambda$$ is much smaller than the Planck density and in fact accumulates at $$\Lambda=0$$. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain $$\Lambda$$ that is small in Planck units in a toy model, but to explain why $$\Lambda$$ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
### Author and article information
###### Journal
1509.05094
|
2019-09-15 08:13:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7793243527412415, "perplexity": 653.2112673407223}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570830.42/warc/CC-MAIN-20190915072355-20190915094355-00464.warc.gz"}
|
http://physics.stackexchange.com/questions/72447/why-this-perpetuum-mobile-cant-be-possible?answertab=oldest
|
Why this perpetuum mobile can't be possible? [duplicate]
I know that this won't work but I'm asking Why?
Becuase as far as the vehicle POV - there is a force which drags him to the right.
Isnt $F=ma$ applies here? What is that im missing?
-
marked as duplicate by Qmechanic♦Jul 27 '13 at 17:44
There is no reason to downvote. Even if you know the answer, many people out there actually believe it will work, or don't know the reason why it won't. This is NOT a bad question. – mikhailcazi Jul 27 '13 at 14:41
@mikhailcazi agree. but I dont tend to educate people about helping other people raise their knowledge. – Royi Namir Jul 27 '13 at 14:42
I just updated my answer again. Did it help you understand? :) – mikhailcazi Jul 27 '13 at 14:44
@mikhailcazi yes indeed thank you. – Royi Namir Jul 27 '13 at 14:44
duplicate of Why does the "Troll-Mobile" not work? – EnergyNumbers Jul 27 '13 at 16:54
|
2014-07-25 16:07:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6256051659584045, "perplexity": 1712.6181449648209}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894378.97/warc/CC-MAIN-20140722025814-00137-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://pkg.go.dev/gonum.org/v1/gonum/integrate
|
# integrate
package
Version: v0.9.3 Latest Latest
Go to latest
Published: Jun 30, 2021 License: BSD-3-Clause
### Gonum integrate
Package integrate provides numerical evaluation of definite integrals of single-variable functions for the Go programming language.
## Documentation ¶
### Overview ¶
Package integrate provides functions to compute an integral given a specific list of evaluations.
### Constants ¶
This section is empty.
### Variables ¶
This section is empty.
### Functions ¶
#### func Romberg ¶
func Romberg(f []float64, dx float64) float64
Romberg returns an approximate value of the integral
\int_a^b f(x)dx
computed using the Romberg's method. The function f is given as a slice of equally-spaced samples, that is,
f[i] = f(a + i*dx)
and dx is the spacing between the samples.
The length of f must be 2^k + 1, where k is a positive integer, and dx must be positive.
See https://en.wikipedia.org/wiki/Romberg%27s_method for a description of the algorithm.
#### func Simpsons ¶
func Simpsons(x, f []float64) float64
Simpsons returns an approximate value of the integral
\int_a^b f(x)dx
computed using the Simpsons's method. The function f is given as a slice of samples evaluated at locations in x, that is,
f[i] = f(x[i]), x[0] = a, x[len(x)-1] = b
The slice x must be sorted in strictly increasing order. x and f must be of equal length and the length must be at least 3.
#### func Trapezoidal ¶
func Trapezoidal(x, f []float64) float64
Trapezoidal returns an approximate value of the integral
\int_a^b f(x) dx
computed using the trapezoidal rule. The function f is given as a slice of samples evaluated at locations in x, that is,
f[i] = f(x[i]), x[0] = a, x[len(x)-1] = b
The slice x must be sorted in strictly increasing order. x and f must be of equal length and the length must be at least 2.
The trapezoidal rule approximates f by a piecewise linear function and estimates
\int_x[i]^x[i+1] f(x) dx
as
(x[i+1] - x[i]) * (f[i] + f[i+1])/2
More details on the trapezoidal rule can be found at: https://en.wikipedia.org/wiki/Trapezoidal_rule
### Types ¶
This section is empty.
## Directories ¶
Path Synopsis
Package quad provides numerical evaluation of definite integrals of single-variable functions.
Package quad provides numerical evaluation of definite integrals of single-variable functions.
|
2021-09-28 21:14:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48907536268234253, "perplexity": 3694.481403847883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00677.warc.gz"}
|
https://astarmathsandphysics.com/ib-maths-notes/sequences-and-series/4882-recurring-decimals-from-fractions.html
|
## Recurring Decimals From Fractions
All fractions with one whole number divided by another whole number forms a sequence which eventually terminates or repeats.
$\frac{5}{3}=1.6666666...$
$\frac{4}{7}=0.57142857142857...$
$\frac{7}{8}=0.875$
If the denominator is a poer of 2 or 5, or any multiple of a power of 2 by a power of 5, then the decimal expansion terminates.
If the denominator is an odd prime number
$p$
then there are
$p-1$
possible remainders at each stage of long division, and they must repeat after
$p-1$
iterations of long division, so the decimal expansion is at most
$p-1$
digits long. In fct the length of the recurring expansion mus t divide
$p-1$
.
For example
$\frac{3}{11}=0.27272727..$
The length of the recurring expansion is 2, which divides 11-1=10.
In fact a slight extension of the same remainder argument gives that if the denominator is a product of different odd primes
$p, \: q$
then the decimal expansion of
$\frac{a}{pq}$
must be of at length
$(p-1)(q-1)$
and may be
$\frac{(p-1)(q-1)}{n}$
where
$n$
divides
$(p-1)(q-1)$
.
For example
$\frac{1}{37}=0.027027027...$
whch repeats every three digits.
$\frac{1}{11}=0.09090909090...$
whch repeats every 2 digits.
$\frac{1}{37 \times 11}=0.00245700245700...$
whch repeats every three times two digits.
|
2023-01-28 19:12:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814561367034912, "perplexity": 1154.5477677267843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00456.warc.gz"}
|
https://math.meta.stackexchange.com/questions/17067/should-latex-be-discouraged-in-titles
|
# Should latex be discouraged in titles
Something I have been wondering about.
I have seen people remove latex from question titles, replacing it with a general phrase describing the kind of problem being solved.
For example, instead of "how to solve $ax^2 + bx + c = 0$", the title is replaced with "How can I solve this quadratic?".
It was mentioned by one user that this was done for more appropriate indexing or something of that affect. I observed this some time ago, so I have forgotten the exact reasoning.
Is this something common? Is it something I should look out for? Or is latex generally acceptable in titles and could even be an improvement?
The only negative I see is that it makes it appear that the site is used as a repository for specific homework questions.
• I guess one of the reasons is that LaTeX-snippets are too much for the local search engine. Try it yourself! Give it a piece of LaTeX to search for and behold the confusion. – Jyrki Lahtonen Oct 13 '14 at 20:06
• I wouldn't remove the LaTeX from the titles, I would work on making a better math searching algorithm (a hard task, unfortunately). – robjohn Oct 13 '14 at 20:09
• No. It should be encouraged when it helps to tell you what's inside the question. There's a reason we write $2+2=4$ and not "the sum of twice the unit with itself equals to four times the unit". – Asaf Karagila Oct 13 '14 at 22:22
• Yes. It should be discouraged except when it helps to tell you what's inside the question. (Thanks to @Asaf for the template :-) ) – quid Oct 14 '14 at 6:16
• But in a title, I would use 2+2=4 and not $2+2=4$. It loads faster, renders correctly everywhere, and serves to educate the SE users outside of Math.SE, should the question get on the hot list... The only downside is that the numerals in Georgia font are weird. :-/ @AsafKaragila – user147263 Oct 14 '14 at 6:25
## 1 Answer
http://meta.math.stackexchange.com/q/9687/ cover most of your question.
At times, formulas are necessary to describe the question. Other times they are not.
In the case you quoted, replacing $ax^2+bx+c=0$ with "quadratic equation" may be an improvement, since the same information is conveyed by words. Replacing $\Delta u = u^{3}+f$ with "nonlinear PDE" is probably not an improvement, since too much information is lost.
For all the struggle with searching math, Related column of this site can benefit from LaTeX in titles. E.g., the related column to this question has several examples with PDE of the same type.
So, neither of two blanket statements "latex is encouraged / discouraged in titles" would be true.
However, display style equations in titles are strongly discouraged and in some cases blocked by the software.
• I think there is a universal recommendation that display style is discouraged in titles. – Gerry Myerson Oct 13 '14 at 21:53
|
2019-07-23 05:28:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6530570983886719, "perplexity": 1044.9620535036313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528869.90/warc/CC-MAIN-20190723043719-20190723065719-00451.warc.gz"}
|
https://math.stackexchange.com/questions/3014029/definition-of-affine-space-as-a-quasiprojective-variety-shafarevich
|
# Definition of affine space as a quasiprojective variety, Shafarevich
I am quite confused the definition given by Shafaverich:
Definition: A regular map $$f:X \rightarrow \mathbb P^n$$ of an irreducible quasiproj. variety $$X$$ to projective space $$\Bbb P^n$$ is given by an $$(m+1)$$ truple of form $$(F_0 : \cdots : F_m)$$ of same degree in hom. coord. of $$x \in \Bbb P^n$$. We require that for every $$x \in X$$ there exists such an expression for $$f$$ such that $$F_i(x)\not=0$$ for at least one $$i$$.
The definition of regular maps between quasiprojective varieties allows us to define isomorphism.
Definition: We say a quasiproj. variety is isomoprhic to a closed subspace of an affine space , an affine variety
In this case, to apply the first definition, we must regard a closed subspace of an affine space as one in $$\Bbb P^n$$? What are we doing here?
I suppose we for "embed" $$\Bbb A^n$$ into $$\Bbb P^n$$, by one of the choice $$\Bbb A_i^n$$ (where $$i$$th coordinate is nonzero), and show it is independent? Also, are there not more ways to regard $$\Bbb A^n$$ as a subset in $$\Bbb P^n$$?
• The first definition is the definition of a regular map. How is this connected to the second definition? You would do well to check the precise wording for the second definition, as it looks to have been altered in transcription (the first also has typos). Nov 26 '18 at 8:26
• Ok, my first definition is incomplete, I will edit it now. The second definition is word by word. Which I do not understand because we have not defined the notion of maps between quas proj and affine space. Nov 26 '18 at 8:36
There's not much deep going on here, it's just a definition. In spirit, we want closed subsets of affine space $$\Bbb A^n$$ to be our "affine varieties", but sometimes you have a situation where $$X$$ is a quasiprojective variety that just happens to be also isomorphic to a closed subset of $$\Bbb A^n$$. Since we believe that "up to isomorphism" is the proper equivalence to put on varieties, we will call these things affine varieties as well.
Nothing about "quasiprojective" was important here; if we define more general types of varieties (I don't know in how much generality Shafarevich does this), then we will always call such varieties affine varieties when they are isomorphic to a closed subset of $$\Bbb A^n$$.
So a good example is the one given right here after the line you mention. If you take $$X:=\Bbb A^1\smallsetminus\{0\}$$, then $$X$$ is quasiprojective because it is open in $$\Bbb A^1$$, which is open in $$\Bbb P^1$$ under the identification $$\Bbb A^1\simeq\{[x:y]\mid x\neq0\}\subset\Bbb P^1,$$ but on the other hand it is isomorphic to the closed subset $$\{(x,y)\mid xy=1\}\subset\Bbb A^2$$. Since it "looks like" a closed subset of $$\Bbb A^n$$, we will call $$X$$ an affine variety.
I want to emphasize further that Shafarevich is not saying that every quasiprojective variety can be embedded as a closed subset of affine space. For instance, $$\Bbb P^1$$ is a quasiprojective variety, which is not an affine variety. If you know stuff about regular functions then you know that the only globally defined regular functions on $$\Bbb P^1$$ are constants (this is true for any $$\Bbb P^m$$), but this is never true for closed subsets $$Z\subset\Bbb A^n$$ unless $$Z$$ is a single point.
• My problem is perhaps even more elementary, (i) what does it mean to have a quasi projective variety isomorphic to an affine variety? We have only defined isomoprhisms for between quasiprojective varieties and projective spaces. This leads to an even more fundamental question (ii) why is a closed affine set a quasiprojective vareity - which is defined as an open set of a closed projective set. We can regard affine space as, say A_0^n in P^n, and a closed set of affine space, is also a closed set of P^n with A_0^n. These definitions are really confusing me. I will update post. Nov 26 '18 at 19:33
• @André3000 yes, my mistake. corrected now Nov 26 '18 at 21:55
• @CL. I want to double check what I was about to write, let me get back to this in a few hours Nov 26 '18 at 22:03
• @CL. Okay, so using the definitions of subspace topology, you can show that if $Y$ is a closed subset of an open subspace of $X$ if and only if $Y$ is an open subset of a closed subspace of $X$. Using this, by choosing an identification of $\Bbb A^n$ with an open subset of $\Bbb P^n$, we see that every closed subset of affine space $\Bbb A^n$ can be considered as a quasiprojective variety. Nov 26 '18 at 23:38
• Now using the fact that you know what an isomorphism of quasiprojective varieties is, we can say that a quasiprojective variety $X$ is affine if and only if if it isomorphic (as quasiprojective varieties) to a closed subset of some affine space $\Bbb A^n$. Nov 26 '18 at 23:38
|
2022-01-27 15:07:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918300986289978, "perplexity": 162.21357698615765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00563.warc.gz"}
|
https://stats.stackexchange.com/questions/531254/random-walks-and-martingales
|
# Random walks and martingales
In class, our professor explained that the martingale process is the in between case of random walk type I (innovations are i.i.d.) and random walk type II (innovations are serially uncorrelated).
This means that every random walk type I is a martingale but not vice versa, and that every martingale is a random walk type II but not vice versa.
Is it correct that the independence is needed so that the conditional and unconditional expectation is equivalent to satisfy the martingale condition?
Further, what would be an example of a type II random walk that is not a martingale? I am a bit confused how random walk type II processes can be different that one fulfills the martingale condition and one does not. Are there multiple "types" or "cases" of the type II random walk?
• Hi: I think it would be clearer if you provided the exact definition of random walk type II. Unless of course, the definition is a random walk with innovation terms that are serially uncorrelated but not IID ? – mlofton Jun 18 at 14:46
• exaclty, they are serially uncorrelated but allowed to be dependent. Therefore, a type I is automatically a type II not vice versa. – J3lackkyy Jun 18 at 14:52
• Hi: I'm still slightly confused. Are you referring to the error term, $\epsilon_t$, in $y_t = y_t-1 + \epsilon_t$ ? So, a type II random walk means that the $\epsilon_t$ is not correlated with itself but could be dependent on its previous value in some non-linear way ? If that's correct, then that sounds an ARCH model would fit that criteria. In an ARCH model, the squared error terms are dependent but the error terms themselves are uncorrelated. – mlofton Jun 19 at 15:14
• One other thing: I'm somewhat famiiar with the econometrics literature but I've never heard of type II random walks. You may want to look at "martingale difference sequences" because that could possibly be the same topic but under a different name. Also, I don't think I answered adequately and, since no one else has added anything, you may be better off sending your question to economics.stackexchange.com or quant.stackexchange.com. There may be a terminology issue that's causing confusion. – mlofton Jun 19 at 15:18
|
2021-08-05 15:45:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7787977457046509, "perplexity": 459.97828502972243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155925.8/warc/CC-MAIN-20210805130514-20210805160514-00033.warc.gz"}
|
http://math.stackexchange.com/questions/388435/seems-that-i-just-proved-2-4
|
# Seems that I just proved $2=4$.
Solving $x^{x^{x^{.^{.^.}}}}=2\Rightarrow x^2=2\Rightarrow x=\sqrt 2$.
Solving $x^{x^{x^{.^{.^.}}}}=4\Rightarrow x^4=4\Rightarrow x=\sqrt 2$.
Therefore, $\sqrt 2^{\sqrt 2^{\sqrt 2^{.^{.^.}}}}=2$ and $\sqrt 2^{\sqrt 2^{\sqrt 2^{.^{.^.}}}}=4\Rightarrow\bf{2=4}$.
What's happening!?
-
$x^{x^{x^{.^{.^.}}}}=2\Rightarrow x^2=2$ Why is this true ? – Kasper May 11 '13 at 12:46
@Kasper He replaced the power tower with 2 because it is equal to 2. You could also say that $x^{x^2} = 2$ as well. – Mohammad Ali Baydoun May 11 '13 at 12:47
@Magtheridon96 Is "power tower" the accepted name for this construction or did you just make that up? (I like it!) – Kris Williams May 11 '13 at 13:37
The technical term is Tetration, but I've seen people call it power tower :D – Mohammad Ali Baydoun May 11 '13 at 13:41
@Magtheridon96 The power tower is the infinite tetration. – Lucas May 11 '13 at 19:33
Let's add the hypothesis that $x>0$ to the problem, so that it's clear your derivations are correct.
Pay attention to what you've proven:
• If $x^{x^{\cdot^\cdot}} = 2$, then $x = \sqrt{2}$
• If $x^{x^{\cdot^\cdot}} = 4$, then $x = \sqrt{2}$
This is very different from
• If $x = \sqrt{2}$, then $x^{x^{\cdot^\cdot}} = 2$
• If $x = \sqrt{2}$, then $x^{x^{\cdot^\cdot}} = 4$
Your argument that $2=4$ requires this latter pair of statements, but you haven't proven either of them; instead, what you've proven are the first pair of statements!
It's easy to get in the habit of forgetting about the direction you've argued a problem, and in many situations, arguments are reversible, making it hard to see why direction matters. But this is an example of the dangers of getting things wrong!
Incidentally, if $x = \sqrt{2}$ then $x^{x^{\cdot^\cdot}} = 2$ is correct, if we assume the usual meaning of infinite power towers as a limit of finite ones. If you're familiar with limits of sequences, then you can use an inductive proof to show that the sequence
$$a_0 = \sqrt{2} \qquad \qquad a_{n+1} = \sqrt{2}^{a_n}$$
is strictly increasing and bounded above by $2$, and so the limit converges. And if $L$ is the limit, then because exponentiation is continuous, we can take the limit of the recursive relation to see that
$$L = \sqrt{2}^L$$
letting you complete the proof.
-
Your last statement $L=\sqrt2^L$ lets $L=2 \text{ or } L=4$, so doesn't look helpful for finding the limit. – Ruslan May 11 '13 at 13:06
Those are the only possibilities, and the information above let's you rule one of them out! The harder part is showing those are the only possibilities. – Hurkyl May 11 '13 at 13:07
See the wikipedia entry on the Lambert W function, and note in particular Example 3, which gives the formula $z^{z^{z^\cdots}}=-W(\ln z)/\ln z$ whenever the left hand side converges. – Harald Hanche-Olsen May 11 '13 at 13:44
Then how to show $\sqrt 2^{\sqrt 2^{.^{.^.}}}$ is $2$ not $4$? – ᴊ ᴀ s ᴏ ɴ May 12 '13 at 3:12
@ᴊᴀsᴏɴ: If $a_n$ is an increasing sequence bounded by $2$.... – Hurkyl May 12 '13 at 7:48
You have merely shown that the equation $\sqrt 2^y = y$ has more than one solution.
Then you assumed that $x^{x^{x^\ldots}}$ somehow made sense and tried to talk about it as if it meant "the solution $y$ of $x^y = y$". Which of course is nonsense when the equation has several solutions.
-
Your reasoning does not make sense because you did not specified what does $x^{x^{x\ldots}}$ mean. It is not a finitary operation so it is not clear what does that term denote. If it stands for an outcome of a certain limiting procedure then you are in trouble since that limit can be $1$ or $\infty$ for positive $x$. Thus with this interpretation your premises are false and you can deduce from them anything you want, for instance that $0=1$.
-
In general, you can solve the equation in terms of the Lambert W function as
$$y=x^{x^{x^{.^{.^.}}}} \implies \ln(y)=y\ln(x) \implies y = -\frac{W(-\ln(x))}{\ln(x)}.$$
Try to use this closed form to see what the problem is. Note this, if you ask maple to solve the equations
$$-\frac{W(-\ln(x))}{\ln(x)}=2,\quad -\frac{W(-\ln(x))}{\ln(x)}=4,$$
you will get the same answers
$$x=1,\sqrt{2}$$
More generally, the solution of
$$-\frac{W(-\ln(x))}{\ln(x)}=a$$
is given by
$$\left\{ x=1,x={{\rm e}^{{\frac {\ln \left( a \right) }{a}}}}\right\}.$$
Now, if you use the above solution with $a=2,4$, you will see why?
-
the solution $1$ seems intuitively weird. I mean if you take $1$ billion of them in the exponentiation, the result is one. – Seyhmus Güngören May 11 '13 at 16:39
|
2014-08-20 19:15:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332306385040283, "perplexity": 394.95346430997034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500811913.46/warc/CC-MAIN-20140820021331-00127-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://electronics.stackexchange.com/questions/143731/why-do-we-need-a-ramp-for-stepper-motor/143736
|
# Why do we need a ramp for stepper motor?
I am a newbie and trying to understand how i can run a stepper motor. The concept i had in mind was that steppers need digital pulses to run, and i tried it out too. I was able to run the stepper i am using very easily. But lately I came across a link where they have used a ramp for starting a stepper justifying it by saying that
"if we try to start the stepper motor with fast pulses then it just sits there and hums away not turning, We need to start the stepper off slowly and gradually increase the speed of the steps (ramping up)." Source:http://www.societyofrobots.com/member_tutorials/book/export/html/314
My question is why does the stepper then starts up with regular square pulses? Why do we need a ramp? All the other forums and tutorial always talk about providing digital pulses to the stepper for starting it up, why is the concept of ramp generation not discussed there? Is it a bad practice to run stepper with digital pulses?
• I think you are confusing the "ramp" with "square wave" shapes. The control is still by square wave, just the speed/rate of change of these control steps is increased from zero to the intended steps per second or whatever velocity you are trying to get. – KyranF Dec 13 '14 at 7:49
• Say your stepper square wave looks like a 3KHz signal. Rather than go from dead stop to flat-out, you should start with a low frequency (or a longer gap between pulses). Flooring a car accelerator pedal takes about half a second, and either smokes tyres (in low gear) or takes a while to respond (in high gear). – Alan Campbell Dec 13 '14 at 9:42
• Yep i did confuse the ramp with the pulses, thanks for the feedback guys! – alexhilton Dec 18 '14 at 4:16
When the controller steps the motor, the rotor has to move far enough (angle) that when the next coil (or coil pair) is energized it will pull the rotor in the correct direction. If the rotor has not moved through enough angle, then the coils will pull the rotor backwards and the motor just sits there and buzzes. You can find many illustrations and animations online that explain how normal operation works- imagine if the rotor only moved a fraction of the intended amount.
The rotor, shaft, and whatever is connected to the shaft all have inertia and there is friction of various kinds.
The maximum speed the stepper can turn the shaft is related to the torque available from the motor and the torque required to turn the shaft (available torque drops as RPM increases, and the required torque generally increases as the RPMs increase). That's not directly related to the inertia.
To actually get to the maximum (or some fraction thereof) you can only accelerate the RPM so fast without missing steps. The maximum acceleration is related to the inertia and the excess available torque at a given RPM. If the motor is doing all it can just to keep up with the current RPM then you can no longer accelerate. If the RPM are low enough, you don't need to ramp it up, you can simply tell it to step, but that will typically be only a fraction of the RPM the motor is capable of. Often linear ramps are used for simplicity, but a more convex curve would be optimal.
Here is a motor torque curve from Oriental Motor (a major Japanese maker):
To predict the maximum rate of acceleration you need to know the torque and the mass moment of inertia. If you exceed the maximum rate of acceleration at a given loading then the motor will lose steps, so a reasonable safety margin is a good idea.
• Thanks Sphero for such a detailed reply, i was actually confusing myself with two major things, i will work on a way to select the frequency of the steps to make up a ramp! – alexhilton Dec 18 '14 at 4:16
• Do you have some literature? – Carlton Banks Dec 12 '16 at 0:38
• @CarltonBanks Check out the link above to Oriental Motor. – Spehro Pefhany Dec 12 '16 at 1:23
• It doesn't nessesarily mention why it is better ramping than not, (If at all, only mention selection as far i can read) I mean as far I understand one could microstep the motor and not ramp it, difference would be the torque not being that powerfull. – Carlton Banks Dec 12 '16 at 1:27
• If you don't care about maximum speed there is no reason to ramp. Ramping lets you get a higher maximum speed for a given inertia + torque without losing steps. – Spehro Pefhany Dec 12 '16 at 1:58
It sounds like the description you have read is talking about ramping up speed, in other words, the frequency of the steps. The pulses for each step are still square.
The reason is that a stepper motor can generate only so much torque. When we exceed this maximum torque, the motor misses steps.
Furthermore, accelerating the motor requires torque by Newton's second law of motion: force equals mass times acceleration:
$$F=ma$$
For a rotating system the terms change a bit, but they are mostly analogous: torque equals the moment of inertia times angular acceleration:
$$\tau = I \alpha$$
The consequence is that to instantly accelerate the motor would require infinite torque which is not possible. Thus, we must limit acceleration, that is, "ramp up" the speed, to limit the torque required to something that the motor can generate without missing steps.
Two years later... I wanted to add some details about the typical speed vs vibration/noise for any step motor.
When stepping very slowly, like one per second, the shaft will move to the new location and overshoot then undershoot many times until it stabilize on that step. The process repeat on each new step.
The electric voltage/current has to be sufficient for the load and the motor size need to be selected to match the torque required.
Once the motor does not need to move, the voltage/current can be reduced by about 50% to 75% to maintain that position. In cases where the friction is dominant, or using some type of gear, the motor can be desenergized completely. This is similar to relays which need for example 12 volt to activate, but then easily keep the contact activated with only 9 volt.
When increasing the speed to about 20 per second, the vibration/noise reach it maximum. This is a speed that most engineer will try to avoid.
As the speed is increased, the vibration/noise decrease, by the torque also falls. If you plot the noise vs frequency, the shape will show a clear down direction with some local maxima, often at harmonic frequency.
Let's assume that a typical value above 100 step per second, the vibration is low enough to be tolerable and let say that the torque become too weak for reliable operation above 500 hertz.
You can start a step motor using any of these frequency right away, without ramping the speed from 100 Hz to 500 Hz. Similarly, you can stop abruptly the steps, no matter the frequency. The holding current is sufficient to lock the motor at that step.
The ramping is needed when you want to exceed the maximum frequency. Given the "typical" number above, you may find that your motor still have enough torque, when smootly accelerated, to work from 500 Hz to 700 Hz. The trick for a reliable operation is to start the ramp somewhere like 400 Hz, then let it increase up to 700 Hz. Keep it at that speed until approaching the target position.
Then, decelerate smoothly from 700 Hz to 450 Hz. If target position still not reached, keep the motor at that speed. Then, from 450 Hz, you can stop. Keep the motor energized at max current/voltage for 0.1 second to 1 second to make sure all source of vibration dissipated.
The linear ramp is easier to create. But the optimum is the "S" shape. You start at the safe frequency, increase slowly at first and change rate of increasing the speed exponentially until reaching the maximum.
When it is time to decelerate, the same algorithm apply, decreasing the speed slowly and exponentially changing the rate of speed decreasing, stop decreasing the speed when reaching the safe speed, which allow to stop the motor abruptly.
The actual code doing all that, using a motorola 68HC05 microcontroller, was taking about 500 bytes (the internal EPROM was 8K total and the RAM was 128 bytes). It was written in assembler.
If you have the hardware for micro-stepping, then you can ignore all the mention about noise and vibration. You still need a "S" shape acceleration if you want to exceed the usual maximum speed. But since there is no vibration no matter the speed, you can let the deceleration go as low as you want.
The lessons learned from the square wave drive still hold thru. Thhat is, for the most efficient way to reach destination, you want the deceleration to sit at the frequency just below the point where the motor torque is sufficient for abrupt stop and start.
|
2020-02-24 03:36:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5417452454566956, "perplexity": 911.3495728219424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00398.warc.gz"}
|
https://skepticalscience.com/news.php?p=1&t=79&&n=1048
|
# Climate Science Glossary
## Term Lookup
Enter a term in the search box to find its definition.
## Settings
Use the controls in the far right panel to increase or decrease the number of terms automatically displayed (or to completely turn that feature off).
# Settings
All IPCC definitions taken from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Annex I, Glossary, pp. 941-954. Cambridge University Press.
Climate's changed before It's the sun It's not bad There is no consensus It's cooling Models are unreliable Temp record is unreliable Animals and plants can adapt It hasn't warmed since 1998 Antarctica is gaining ice View All Arguments...
Archives
## Pielke Sr. and SkS Warming Estimates
#### Posted on 11 October 2011 by dana1981, Albatross
Dr. Roger Pielke Sr. has written a blog post addressing the disagreement between himself and Skeptical Science (SkS) regarding the contribution of CO2 to the net positive anthropogenic radiative forcing. Initially Dr. Pielke cited a presentation he gave in 2006 which said (on slide 12):
"The CO2 contribution to the radiative warming decreases to 26.5% using the IPCC framework given in Slide 9"
This "radiative warming" refers to the human plus natural positive radiative forcings ('natural' being solar). As Dr. Pielke's presentation was given in 2006, before publication of the IPCC Fourth Assessment Report (AR4), his reference to the IPCC is to the Third Assessment Report (TAR) published in 2001. In his new post, Dr. Pielke also references a previous post on his blog on the same subject, which concludes (emphasis added):
"For all of the human-caused warming radiative forcings, which includes the 0.5 Watts per meter squared value for the shortwave albedo change, and estimating tropospheric ozone as 0.3 Watts per meter squared, the aerosol black carbon direct effect as 0.2 Watts per meter squared, the black carbon on snow and ice as 0.3 Watts per meter squared, the semidirect indirect effect as 0.1 Watt per meter squared, and the glaciation indirect effect as 0.1 Watt per meter squared (with the latter two forcings using a nominal value, since these forcings are very poorly known), the contribution due to CO2 will fall to about 28%."
In this case Dr. Pielke refers to only the human positive radiative forcings, excluding the contribution of solar irradiance.
In short, Dr. Pielke has argued that CO2 contribution to the total positive radiative forcing (since pre-industrial times) is between 26% and 28% (depending on whether solar effects are included), whereas in our previous post, SkS concurred with the AR4 radiative forcing estimates, which put CO2 at approximately 50% of the total positive radiative forcing (nearly twice Dr. Pielke's estimate).
Below we discuss some problems SkS has identified in Dr. Pielke's estimate, and provide a detailed up-to-date estimate of these values. The main underlying problem is that Dr. Pielke is relying on an estimate he made in 2006, failing to account for advances in climate research over the past 5 years, and thus his sources are at least 5 years out of date. Additionally, he appears to have made some mathematical errors in his calculations.
### Methane
Dr. Pielke estimates the radiative forcing from methane at 0.8 Watts per square meter (W/m2), which is significantly larger than the IPCC estimate (both TAR and AR4) of 0.48 W/m2. To support this value, in his 2006 presentation Dr. Pielke references research by "Drew Shindell and colleagues; Keppler et al." (slide 11), and on his blog posts, references Keppler et al. (2006). Keppler et al. do not estimate the methane radiative forcing in their paper - the 0.8 W/m2 figure is Dr. Pielke's estimate based on Keppler et al.'s results.
However, as we noted in our previous post, both the atmospheric methane concentration and radiative forcing are well-known quantities. The IPCC TAR and AR4 best estimates of the methane radiative forcing are 0.48 W/m2, 0.49 W/m2 according to Skeie et al. (2011), and 0.504 W/m2 in 2010 according to the NOAA Annual Greenhouse Gas Index (AGGI). Thus Dr. Pielke's methane forcing estimate appears to be 60% too high.
Additionally, Dr. Pielke appears to have double-counted the methane forcing in his calculations:
"By summing the 0.8 Watts per meter squared for methane and using the total of 2.4 Watts per meter squared of the well-mixed greenhouse gases from the IPCC Report..."
The 0.48 W/m2 methane forcing is included in the 2.43 W/m2 best estimate forcing for well-mixed greenhouse gases in the IPCC TAR (the best estimate is 2.64 W/m2 in the AR4). Thus, summing Pielke's estimated methane forcing (0.8 W/m2) and the IPCC TAR greenhouse gas forcing (2.4 W/m2) double counts the methane forcing.
### Albedo
Dr. Pielke also estimates "0.5 Watts per meter squared value for the shortwave albedo change," which is a forcing not included in the TAR or AR4. In his presentation (slide 11), Dr. Pielke claims:
"For the period 2000-2004, a CERES Science Team assessment of the shortwave albedo found a decrease by 0.0015 which corresponds to an extra 0.5 W m−2 of radiative imbalance according to their assessment."
However, there are a number of problems with this estimate. Most importantly, the data in question only cover a period of 4 years. Changes in the Earth's albedo (reflectivity) over a 4-year period tell us little or nothing about changes in albedo over the past century. It's apples and oranges; one is short-term, the other is long-term.
Four years is also simply far too short of a timeframe to ascertain a meaningful trend. From Loeb et al. (2007):
"Commonly used statistical tools applied to the CERES Terra data reveal that in order to detect a statistically significant trend of magnitude 0.3 W m−2 decade−1 in global SW TOA flux, approximately 10 to 15 yr of data are needed. "
Additionally, there is significant uncertainty regarding this short-term albedo change (i.e. see Wielicki et al. 2005 and many other papers on the subject). While the CERES data Dr. Pielke references estimated a decrease in the Earth's albedo from 2000 to 2004, albedo change estimates over the exact same timeframe using Project Earthshine data found an even larger increase in albedo from 2000 to 2004 than the CERES-estimated decrease.
Loeb et al. (2007) also used a revised version of the CERES data to show that no statistically significant changes in the Earth’s albedo occurred between 2000 and 2005. More recently, Palle et al. (2009) conclude:
"Earthshine and FD [International Satellite Cloud Climatology Project flux data] analyses show contemporaneous and climatologically significant increases in the Earth's reflectance from the outset of our earthshine measurements beginning in late 1998 roughly until mid-2000. After that and to date, all three show a roughly constant terrestrial albedo, except for the FD data in the most recent years"
We should also note that an albedo increase/decrease due to increasing cloud cover would also be accompanied by an increased/decreased greenhouse effect, making the net effect on the climate even more uncertain.
But the bottom line is that in order to incorporate an albedo forcing into these estimates, we must use an estimated albedo change from pre-industrial to Present. We should also investigate the cause of any albedo change to determine if it should be treated as a forcing or as a feedback. If it's a forcing, then it's not anthropogenic, and Dr. Pielke was incorrect to include it in the anthropogenic forcings. If it's a feedback, then it should not be included in the calculation of total forcings at all.
Ultimately, for this calculation, Dr. Pielke's 0.5 W/m2 albedo forcing estimate is unjustified and not supported by more recent observations and scientific literature.
### Black Carbon
Dr. Pielke cites Hansen and Nazarenko (2004) in estimating the albedo effect of soot on snow and ice at 0.3 W/m2, and the net black carbon forcing at 0.5 W/m2. However, as the IPCC AR4 noted three years later, the magnitude of the black carbon radiative forcing remains uncertain. The best estimate of Skeie et al. (2011) of 0.45 W/m2 for the black carbon forcing is in rough agreement with Dr. Pielke's estimate. Ramanathan and Carmichael (2008) give a best estimate for the black carbon forcing at 0.9 W/m2.
In short, the black carbon forcing remains highly uncertain, but Dr. Pielke's estimate is reasonable.
### Tropospheric Ozone
Dr. Pielke's tropospheric ozone forcing estimate is somewhat unclear. He states that the associated forcing is 0.3 W/m2, but the IPCC TAR estimate is 0.35 W/m2, and Dr. Pielke appears to believe the value should be higher:
"Ozone was responsible for one-third to one-half of the observed warming trend in the Arctic during winter and spring [Drew Shindell]"
This release from NASA GISS appears to be the source, from which, if we are interpreting his presentation correctly, Dr. Pielke estimates an additional 0.3 W/m2 on top of the IPCC 0.35 W/m2 tropospheric ozone radiative forcing.
However, there are more recent estimates of this forcing, in addition to the IPCC's 0.35 W/m2 (both TAR and AR4). The best estimate from Skeie et al. (2011) was 0.44 W/m2, and the best estimate from Cionni et al, 2011 (submitted), on which Shindell is a co-author, is 0.23 W/m2. Thus Dr. Pielke's estimate of 0.65 W/m2 appears to be much too high.
### Aerosol Semi-Direct and Indirect Effects
Dr. Pielke also identifies a "glaciation effect" as causing a 0.1 W/m2 forcing, which, in a recent talk, he clarifies as "An increase in ice nuclei increases the precipitation efficiency." Lohmann et al. (2007) is a very good paper on this subject, and explains the effect, described as the aerosol indirect effect:
"Global climate model studies suggest that if, in addition to mineral dust, hydrophilic black carbon aerosols are assumed to act as ice nuclei at temperatures between 0 and –35°C, then increases in aerosol concentration from pre-industrial to present times may cause a glaciation indirect effect (Lohmann, 2002a). The glaciation effect refers to an increase in ice nuclei that results in a more frequent glaciation of supercooled stratiform clouds and increases the amount of precipitation via the ice phase. This decreases the global mean cloud cover and allows more solar radiation to be absorbed in the atmosphere. Whether or not the glaciation effect can partly offset the warm indirect aerosol effect depends on the competition between the ice nucleating abilities of the natural and anthropogenic freezing nuclei (Lohmann and Diehl, 2006)."
Lohmann et al. (2007) note that the aerosol indirect glaciation effect is negligible. However, Perlwitz and Mlller (2010) conclude:
"Despite the high complexity and nonlinearity of the microphysical interaction between aerosols and clouds, modeling studies generally indicate that the net effect of this interaction is to reflect more radiation back to outer space [Forster et al., 2007], although recent results show that aerosols acting as ice nuclei could counteract the cooling effect significantly [Storelvmo et al., 2008]. A few observational studies seem to confirm a relation between soil dust aerosols and cloud cover."
In short, the aerosol indirect glaciation effect remains far from clear. Dr. Pielke also identifies the aerosol semi-direct effect, which involves tropospheric aerosols absorbing shortwave radiation, as causing a 0.1 W/m2 forcing. However, the IPCC has not included this as a positive forcing becase
"the semi-direct effect is not strictly considered an RF because of modifications to the hydrological cycle"
Additionally, Lohmann et al. identify the semi-direct effect as most likely causing cooling:
"The semi-direct effect refers to temperature changes due to absorbing aerosols that can cause evaporation of cloud droplets, as was shown in a large eddy model simulation study that used black carbon concentrations measured during the Indian Ocean Experiment (Ackerman et al., 2000). It ranges from 0.1 to –0.5 Wm-2 in global simulations"
The IPCC AR4 also lists the semi-direct effect as "positive or negative" and "small" potential magnitude, and the indirect effect as "positive" and "medium" potential magnitude, where as Dr. Pielke lists both as positive and equal in magnitude (0.1 W/m2). In short, the magnitude and roles of the aerosol semi-direct and indirect glaciation effects in terms of radiative forcings remain far from clear.
### Carbon Dioxide
Dr. Pielke's estimate for the CO2 radiative forcing (1.4 W/m2) is both outdated and not consistent with the value in the IPCC TAR (1.46 W/m2); it appears that he either rounded the value down or eyeballed the IPCC TAR radiative forcing graphic rather than looking up the precise value. However, 1.46 W/m2 was the estimated value in 2001, when the TAR was published. In 2007, when the AR4 was published, the CO2 forcing had already increased to 1.66 W/m2. More recently, the NOAA AGGI estimated the CO2 forcing at 1.79 W/m2 in 2010, and Skeie et al. at 1.82 W/m2.
In other words, the CO2 radiative forcing has increased 25% over the past decade. Some of the other forcing estimates (like tropospheric ozone and black carbon) have changed mainly as a result of new research, but the CO2 forcing has changed as a result of rapidly increasing CO2 emissions and atmospheric concentrations.
"I think it is very generally recognized that, for the same global mean forcing, aerosols perturb the mean precipitation field more than do the well-mixed greenhouse gases (WMGGs). So if, up to the present, anthropogenic aerosols and WMGGs have had comparable effects on regional precipitation, say, the WMGG effect will undoubtedly grow and will be essentially irreversible on the time scale of several centuries, in the absence of geoengineering, while the aerosol effect will likely be bounded by its current magnitude, and the WMGGs will dominate."
### Estimated CO2 Contribution
Below we summarize various estimates of the CO2 contribution to the net positive radiative forcing. We believe Dr. Pielke has committed two types of errors: mathematical (double-counting and rounding), and using outdated sources.
We believe the first column is a replication of Dr. Pielke's estimates. The second column corrects Dr. Pielke's math errors by eliminating the double counting of methane, and correcting rounding errors for the CO2 and solar forcings. The third column provides the IPCC TAR estimates which were the basis of Dr. Pielke's estimates, but which, for the most part, we believe are more accurate than Dr. Pielke's suggested values.
The fourth and fifth columns correct for the out-of-date references by using the IPCC AR4 and Skeie et al. (2011) estimates. Bear in mind we have not included the uncertainty ranges - these are all just best estimates of the respective positive radiative forcings (in W/m2).
ForcingsPielke 2006Pielke Math CorrectedIPCC TARIPCC AR4Skeie 2011
CO2 1.40 1.46 1.46 1.66 1.82
CH4 0.80 0.80 0.48 0.48 0.49
other LLGHGs 1.00 0.49 0.49 0.50 0.51
tropospheric ozone 0.65 0.65 0.35 0.35 0.44
black carbon 0.50 0.50 0.20 0.10 0.45
albedo 0.50 0.50 0 0 0
aerosols (semi-direct+indirect) 0.20 0.20 0 0 0
stratospheric water vapor 0 0 0 0.07 0.07
contrails 0.02 0.02 0.02 0.01 0
solar 0.25 0.30 0.30 0.12 0.12 (AR4)
Total Positive Forcing 5.32 4.92 3.30 3.29 3.90
Total Anthropogenic Forcing 5.07 4.62 3.00 3.17 3.78
CO2 contribution to Total 26.3% 29.7% 44.2% 50.5% 46.7%
CO2 contribution to Anthropogenic 27.6% 31.6% 48.7% 52.4% 48.2%
If we correct for Pielke's double counting and rounding errors, the CO2 contribution to the total net positive forcing increases to approximately 30%. When we use up-to-date research for all forcings, the CO2 contribution increases to close to 50%, as we originally argued. We again note that this fraction will continue to increase along with continually increasing human CO2 emissions.
### Human Contribution to Global Surface Warming
We are still interested in Dr. Pielke's answer our original question on this subject:
"Approximately what percentage of the global warming (increase in surface, atmosphere, ocean temperatures, etc.) over the past 100 years would you estimate is due to human greenhouse gas emissions and other anthropogenic effects?"
We suggest a back-of-the-envelope answer to this question by applying the probabilistic estimate of transient climate sensitivity by Padilla (2011):
"we find a most-likely present-day estimate of the transient climate sensitivity to be 1.6 K with 90% confidence the response will fall between 1.3–2.6 K"
We can use this range of transient climate sensitivity (alpha = 0.35 to 0.70 K/Wm-2) and scale the transient climate response (we're currently 49% of the way to the radiative forcing associated with CO2 doubling [~1.8 out of 3.7 W/m2]) to estimate the amount of CO2-caused surface warming:
$\Delta T = \alpha * \Delta F$
Where F is the radiative forcing. Using the Skeie et al. (2011) CO2 forcing best estimate of 1.82 W/m2 for 2010 and the Padilla (2011) range of transient climate sensitivity parameters, this corresponds to a CO2 contribution of 0.64 to 1.28°C, with a best estimate of 0.79°C warming of average global surface temperature.
We can also consider the expected warming for the net anthropogenic forcing, which Skeie et al. estimated at 1.4 W/m2 and the IPCC AR4 estimated it at 1.6 W/m2. Using these two estimates and the Padilla transient sensitivity range yields a net anthropogenic warming of 0.49 to 1.12°C with a central estimate of 0.65°C warming of average global surface temperature.
Dr. Pielke, would you concur with these estimated ranges of CO2 and anthropogenic warming?
### Take-Home Message
The main points here are that CO2 is responsible for approximately 50% of the net positive radiative forcing since pre-industrial times (a percentage which will only continue to increase in the future). In the absence of negative forcings, CO2 would have contributed 0.79°C of the 0.8°C observed global surface temperature rise, and hence we would expect the total observed rise to be double that. This tells us that the negative forcings (primarily from human aerosol emissions) have offset approximately 50% of the net positive forcings.
We also found that the net anthropogenic radiative forcing (sum of all positive and negative forcings) accounts for approximately 80% of the observed average surface warming over the past century (~0.65 out of 0.8°C). The other ~20% is a combination of natural forcings (primarily solar), and perhaps a bit of natural variability.
Another key point is that aerosols have a short atmospheric lifetime, unlike long-lived greenhouse gases. Thus their large offsetting of close to 50% of the net positive radiative forcing is only temporary, and will decline rapidly if we reduce aerosol emissions. This is why, as Isaac Held noted in the quote above, we fully expect CO2 and other greenhouse gases to continue as the dominant cause of global warming, and why although we need to address other issues like land use change, CO2 emissions are rightfully the primary target in mitigating climate change.
0 0
Comments 1 to 50 out of 79:
1. Excellent post, I sincerely hope Dr Pielke continues to discuss the points of contention with SkS.
0 0
2. Tristan: Currently there is a discussion underway at: http://www.skepticalscience.com/pielke-sks-disagreements-open-questions.html Starting at about item 37.
0 0
3. I think there's a weird kind of thinking that can flow from the fractioning of various influences to global warming. Since CO2 is the main culprit, it has become a kind of stand-in for all human contributions. So, if you diminish the role of CO2 you can pretend that it really isn't that big a deal. But soot is also significant contributor and all too human in its provenance. Fix one (clear coal out of our energy production) and you can go a long way to fixing both. But our carbon-extraction overlords have pointedly gone out of their way to make sure we don't address either. Any thing and everything can be used to make delay seem a viable alternative.
0 0
0 0
5. Dr. Pielke - "You also did not discuss that CO2 has been increasing for over a century and some of the CO2 radiative forcing during that period would have been accomodated by the warmer climate. As I wrote, in 2005 V. Ramanthan replied to me (in an estimate) that about 20% of the difference between pre-industrial and current radiative forcing would have been accomodated. Thus the current radiative forcing from added CO2 would need to be reduced by this amount." I would have to strongly disagree - the radiative forcings discussed here (as in the IPCC AR4, the basis of this discussion) are relative to 1750 pre-industrial forcings, not the current imbalance (unrealized warming) between temperatures and those forcings. The 20% adjustment you recommend here is an odd (and IMO quite unwarranted) redefinition of well understood terms - shifting the baseline. Unrealized warming and current imbalance are different terms, different values, than forcing changes since 1750.
0 0
6. Dr. Pielke - Minor addition to my previous comment: If you are looking at unrealized warming and remaining imbalances (rather than the changes since 1750 that are the topic of this thread), it's noteworthy that you cannot just scale the change in CO2 contribution - all radiative imbalances scaled by unrealized warming would be scaled as well, meaning that the relative contribution of CO2 should not change. Your 20% reduction of relative CO2 contribution is, again, not justified in my view.
0 0
7. KR- My question remains, what is the current radiative forcing of CO2? In the SPM for the IPCC 2007 report, with respect to their presentation of the values in their figure SPM.2 they write "Global average radiative forcing (RF) estimates and ranges in 2005 for anthropogenic carbon dioxide (CO2 ), methane (CH4 ), nitrous oxide (N2O) and other important agents and mechanisms, together with the typical geographical extent (spatial scale) of the forcing and the assessed level of scientifi c understanding (LOSU)." They specifically state "Global average radiative forcing (RF) estimates and ranges in 2005". Then in footnote #2, they write "In this report, radiative forcing values are for 2005 relative to pre-industrial conditions defined at 1750 and are expressed in watts per square metre (W m–2). " At the best, this was sloppy writing (as it is corrected/clarified in the footnote), but the figure caption itself is misunderstood by quite a few people. At worst, the writers were not clear oo this when they wrote the figure caption. In any case, what would SPM.2 look like for the current forcings?
0 0
8. KR - I agree this scaling would be the same if all of the forcings had the same time evolution. When I discussed this with V. Ramanathan and others on our committee, however, the conclusion was that the other forcings ramped up more recently. In any case, what we really need is the current best extimate of the 2011 global-annual averaged radiative forcing and the best estimate of the 2011 global-annual averaged radiative imbalance. The difference between these two values is the global-annual averaged radiative feedback.
0 0
9. I just want to recognize the important contribution made by Tom Curtis in preparing the above post.
0 0
10. Yes, Tom Curtis provided valuable contributions to this post. Dr. Pielke, the question we are addressing with this post is the CO2 (and net anthropogenic) contribution to the observed surface warming over the past century. In order to answer this question, we must examine the change in forcings over that period of time (not the remaining imbalance/unrealized warming, which is a separate issue). You have not identified any problems with our calculations of the CO2 contribution over the past century - namely accounting for your calculational errors and correcting values for the methane, ozone, and albedo forcings brings the CO2 contribution to the net positive radiative forcing over this period to ~50%. We also showed that CO2 caused ~0.79°C, and the net anthropogenic forcing caused ~0.65°C surface warming over the past century. We again ask if you now agree with these values (and the ranges for these values listed in the post above). The issues you raise regarding the CO2 contribution to the more recent radiative imbalance is a separate question, which does not affect the calculation of the total CO2 contribution over the past century. We can proceed to discuss this issue as well, but first would like to close the discussion on the human-caused warming over the past century.
0 0
11. dana1981 - We know too little about the role of natural variations in the radiative forcing in the last century to know with such precision how much of that warming is from anthropogenic effects. This is based, in part, on the research and comments of Roy Spencer, Judy Curry, Judith Lean and others. [this is one reason I recommend you focus on science and not Roy's policy/political statements; he has added important new insight into the role of natural forcings including solar and internal multi-year variability]. Even with the anthropogenic forcings, we do not know what the actual aerosol and land use/land cover changes have contributed, relative to the radiative effect of added CO2, with respect to the observed surface temperature trends. There is also the issue of siting quality for the land portion of the surface temperature data. I agree that human's have significantly affected the annual average surface temperature trends, but, in my view, the issue as you present above inadequately considers all of the issues. Thus, I suggest we move on. I do not find this an important issue, but would be open to you explaining why it is. It seems to me that knowing the current forcings is much more relevant. P.S. I would like you to tell us if the water vapor/CO2 overlap was considered in your calculation. I do not see any problem in your calculation, if your numbers are used. We disagree with the values, however, as I wrote in my response.
0 0
12. Dr Pielke @4, where you say:
"If we use the 0.9 Watts per meter squared value for the black carbon that you present,from the Ramanathan and Carmichael (2008) values and the 0.2 Watts per meter squared as reasonable estimates based on the NRC (2005) report, which is still an accurate summary of our limited knowledge of this forcing, I calculate that CO2 is ~40% for the anthropogenic positive radiative forcing. This accepts the values for ozone that are listed in your table."
does the 0.2 W/m^2 refer to aerosol effects, of some other forcing?
0 0
13. Dr. Pielke, if you had read SkS' posts, you would realize we have examined Dr. Spencer's scientific research quite extensively. However, this calculation is essentially based on two factors - transient climate sensitivity, and the CO2/net anthropogenic forcings. Internal variability does not factor into the calculation of how much warming these forcings have caused. I agree, the aerosol forcing in particular represents a significant uncertainty. That's why we have been very explicit that we're stictly looking at the best estimates of these forcings. However, the CO2 forcing is very well-known, and we provided a 90% confidence range on the transient climate sensitivity parameter. Surely you can thus at least agree that CO2 has caused a 0.64 to 1.28°C (with a best estimate of 0.79°C) warming of average global surface temperature over the past century?
0 0
14. Tom @12, Dr. Pielke says "I calculate that CO2 is ~40% for the anthropogenic positive radiative forcing." First, that value is still significantly higher than the original (and erroneous) claim of 26.5% that he has made on his blog, here and elsewhere in public. Second, Skeie et al. (2011) supersedes Ramanathan and Carmichael (2008), and represents our current level of understanding. Science moves on. Third, from Ramanathan and Carmichael (2008): "The TOA BC forcing implies that BC has a surface warming effect of about 0.5 to 1 °C, where we have assumed a climate sensitivity of 2 to 4 ºC for a doubling of CO2. Because BC forcing results in a vertical redistribution of the solar forcing, a simple scaling of the forcing with the CO2 doubling climate sensitivity parameter may not be appropriate". So they concede that their method may not be appropriate, or does Dr. Pielke wish us to forget/neglect this important caveat from their paper which he chose to cite?
0 0
15. Dr. Pielke: this is one reason I recommend you focus on science and not Roy's policy/political statements; I find that recommendation especially puzzling, as overwhelmingly, SkS focuses on Roy Spencer's claims on the science. Spencer Slip Ups Now perhaps you are asserting that statements such as "warming in recent decades is mostly due to a natural cycle in the climate system — not to an increase in atmospheric carbon dioxide from fossil fuel burning" are political in nature, in that they don't follow from any robust scientific analysis.
0 0
16. SkS has done a pretty comprehensive overview of Dr Spencer's contribution. However, he also authored numerous other pieces, opinions or statements that are commonly used to justify skepticism. It is entirely wihin the scope of SkS' stated vocation to examine these other productions of his and see how they compare with the existing science, including his own. As far as I have read, that is what has been done on SkS. I would like to remind readers that the purpose of SkS is not to assess the state of the science and identify areas of greater or lower uncertainty, or recommend avenues for further research. Not that such questions should not be given attention here. Ideally, it would be possible to indeed focus only on true, interesting scientific problems. However, SkS was created in response to the tremendous effort of disinformation and propaganda that this particular area of science has unfortunately experienced. The purpose of SkS is to examine common claims put forth by self proclaimed skeptics who doubt all or part of well accepted conclusions reached by mainstream climate science. These claims are examined and weighed in regard to what the published science reveals to date on the subject. In that sense, all skeptic claims are fair game to SkS, whether they can be labeled as "policy/political statements" and regardless of their source.
0 0
17. Albatross - The values I present were given to show reasonable deductions from the IPCC value and that of Skeie 2011. I do not know the precise value (and neither does anyone) but the fraction is clearly well less than 50% using reasonable values. In terms of the statement "a simple scaling of the forcing with the CO2 doubling climate sensitivity parameter may not be appropriate". this applies to the entire question on the fraction of positive radiative forcing between 1750 and currently. Lets move on.
0 0
18. Tom Curtis - The 0.2 Watts per meter squared is for the two indirect aerosol effects that are reported on in the 2005 NRC report. This is an estimate based on them being positive. There is clearly an uncertainty on their value but the report concluded they were positive. New research remains unclear on their magnitude.
0 0
19. NewYorkJ - In the comments on my earlier posts on SkS, there were quite a few comments on his politics. Where has SkS positively recognized his finding on a larger natural influence, even if you (and I) disagree with Roy that the warming was mostly natural? His basic idea is sound and a significant scientific advancement.
0 0
20. Dr Pielke, thankyou for the clarification @18. While disagreeing with Albatross's apparent imputation that more recent papers automatically trump earlier papers, never-the-less it seems intemperate to dismiss the Skeie et al values as unreasonable. Given that they are reasonable values, and that the contribution of CO2 to the anthropogenic forcing using Skeie et al values is 48.2%, I do not believe your claim that "the fraction is clearly well less than 50% using reasonable values" is justified. If you disagree that the Skeie et al values are reasonable, perhaps you would have the courtesy to explain why rather than simply dismissing them from the range of "reasonable values".
0 0
21. How much of the non-CO2 forcing is nevertheless still related to fossil fuels?
0 0
22. Tom Curtis - I presented a summary of why I have concluded ~50% is too high. I also have emphasized that it is not an important issue. Part of the difference is that, as reported in http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-3.html#2-3-1 the contribution due to CO2 has increased 20% just between 1995 to 2005; i.e. "....In the decade 1995 to 2005, the RF due to CO2 increased by about 0.28 W m–2 (20%), an increase greater than that calculated for any decade since at least 1800...." However, in looking through this chapter, I do not see where they considered the water vapor overlap. This would lower the fraction. They also write "Using the global average value of 379 ppm for atmospheric CO2 in 2005 gives an RF of 1.66 ± 0.17 W m–2; a contribution that dominates that of all other forcing agents considered in this chapter." so they do explicitly state that this forcing was for 2005 not the difference from pre-industrial. Also, where is the water vapor/CO2 overlap considered? This would be in the models, but I do not see this evaluation in the IPCC chapter and in the SPM figure. I am hoping someone at SkS can clarify. At a more fundamental level, what difference does it make if it is 25% or 50%? My interpretation of the published papers came up with a smaller fraction. SkS and the IPCC have a larger fraction. Would you propose different policy if it were a lower fraction? It will be increasing in the future in any case. The fraction make no difference in the modeling since it is part of their calculations. Each of the approaches presented so far on SkS and in the IPCC, as well as my back-of-the-envelope list, to estimate these values are inadequate. I have proposed a way to better assess these numbers, but have had no feedback on that so far from SkS. I have also asked a number of other questions and will have more on my weblog tomorrow.
0 0
23. Dr. Pielke writes: >>Where has SkS positively recognized his finding on a larger natural influence, even if you (and I) disagree with Roy that the warming was mostly natural? His basic idea is sound and a significant scientific advancement. It seems you are presenting a moving target argument here. In post 11, you write "this is one reason I recommend you focus on science and not Roy's policy/political statements." When it is pointed out that SkS did review Spencer's work, you move the target to claim that SkS must do so in a positive way. More importantly, Spencer's basic idea is hardly sound and significant, which is why SkS has not reviewed it "positively." However, Roy Spencer is hardly the topic of this thread. If you think that Spencer in any way undermines the post by SkS, could you please cite his specific works in specific ways? That would be helpful to clarifying the "the contribution of CO2 to the net positive anthropogenic radiative forcing," the topic of this discussion.
0 0
24. Dr. Pielke writes: >>Lets move on. ... >>I also have emphasized that it is not an important issue. It strikes me that you again want to change the conversation when others dispute your claims. The difference between 20% and 50% is quite large, and you in fact brought up this figure to prove your point. You end your posting by saying you will ask more questions, but what are the use of you raising questions, when you simply won't stand for any in depth debate in trying to answer these questions, but want to change the subject each time?
0 0
25. Dr Pielke @22: 1) The IPCC AR4 writes:
"The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR."
(My emphasis) Referring back to the IPCC TAR, we find the adjustment to the simple formula was due to the work of Myhre et al, 1998, which in turn depends on intermodel comparisons performed in Myhre and Stordal, 1997 (hereafter, M&S97). In M&S97, Myhre and Stordal perform a detailed series of comparisons between LBL models and a Broad Band Model used at various resolutions. The most detailed resolution used a 2.50x2.50 grid. The coarsest used a global mean climatology. M&S97 explicitly state that:
"Overlap is considered between gases that absorb in the same spectral region."
They go on to describe how the overlap is handled. As the 2.50x2.50 is global in extent, it necessarily includes the highly humid tropics, and more significantly the very cool polar regions. Later M&S97 compare the 2.50x2.50 model to a 100x100, a 2.50 zonal mean model, a 100 zonal mean model, and a global mean climatology model. The difference in forcing between each of these models is less than 1% in all cases. Further on they explicitly compare CO2 radiative forcing with altitude for Tropical, Mid-Latitude Summer, and Sub-Arctic Winter conditions using both the LBL and Broadband model. The Tropical and Mid-Latitude Summer forcings are scarely distinguishable, with the MLS forcing being slightly stronger. The SAW forcing is considerably weaker than either the TROP or MLS forcing. Of course, as the formula is based on a full global comparison, that is of no consequence to the final figure. 2) With regard to whether the IPCC AR4 quotes transient forcings in a given year, or the forcing relative to preindustrial levels, I refer you to this chart: Note the charts heading. For greater clarity, the caption reads:
"FAQ 2.1, Figure 2. Summary of the principal components of the radiative forcing of climate change. All these radiative forcings result from one or more factors that affect climate and are associated with human activities or natural processes as discussed in the text. The values represent the forcings in 2005 relative to the start of the industrial era (about 1750). Human activities cause significant changes in long-lived gases, ozone, water vapour, surface albedo, aerosols and contrails. The only increase in natural forcing of any significance between 1750 and 2005 occurred in solar irradiance. Positive forcings lead to warming of climate and negative forcings lead to a cooling. The thin black line attached to each coloured bar represents the range of uncertainty for the respective value. (Figure adapted from Figure 2.20 of this report.)"
(My emphasis) Lest there be any doubt, based on a pixel count the chart shows a CO2 radiative forcing of 1.67 W/m^2, in agreement with the text. As to what difference this makes, not a great deal. Never-the-less, you brought the question up. You have blogged on the issue at least twice, and have claimed repeatedly that the CO2 radiative forcing is over-estimated when you have in fact been under estimating it. And you have presented your significant underestimate based on your back of an envelope calculation in a talk to a scientific conference. I would have thought that, given the circumstances, professional pride alone would make you wish to correct those errors with alacrity.
0 0
26. My old professor used to tell me that if I couldn't present what I was trying to say on a single side of A4, then I didn't really understand what I was trying to say. Given the length of Pielke Sn's replies to this post, I here attempt to summarise them. If the summary is poor it is (with due respect) because the replies are so poorly structured and verbose. This SkS post is asking Pielke Sn two questions (1) Is his comment that CO2 is but 26% or 28% of human-caused forcing wrong & actually more like 50%? Pieke Sn @ 4 replies that given the numbers, the figure should be 40% but is not sure if this includes all humidity effects. Pieke Sn then guestimates the figure would be 35% if the CO2 forcing was taken as the present-day radiative imbalance. Pielke Sn sees it as important that we understand that the contribution of CO2 to the warming is less than 50% (for unstated reasons) but also sees the proportion of CO2's contribution rising higher in future. He asks SkS what it thinks of NGoRF. He also points to the present-day radiative imbalance & associated feedback figures not yet given by SkS. Pielke Sn asks for these figures (in @7 & @11) asserting the present-day radiative imbalance is the important factor. And so CO2 induced warming in past years he sees as unimportant without a good reason being given. Pielke Sn considers the % of human-caused forcing due to CO2, be it 25% or 50%, irrelevant (@22) (2) What temperature rise over the last 100 years is down to human activity? Pielke Sn points to a reference where land use is an important factor but gives no direct answer, and (@11) that too little is known to say, although it was “significant”. Piekle Sn concludes CO2 is important to climate forcing and mentions its removal from the atmosphere. He mentions other causes of human-caused forcing, emphasising that these should have elevated concern as this matters most to society and the environment (for unstated reasons).
0 0
27. MA Rodger - Regarding your question #1, both estimates are likely wrong. As I have written, there are several unresolved issues on how to calculate this fraction. On question #2, the change in surface temperatures over the last 100 years is a still poorly understood mix of added CO2, aerosols, land use/land cover change, poor siting of land data, solar influences, volcanoes and internal long term climate variability.
0 0
28. Tom Curtis - Your extract from the IPCC AR4 adds some information on the water vapor overlap issue. However, it still does not quantify how much of the CO2 radiative forcing is not occuring due to the water vpaor overlap. The statement ""The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR." is incomplete as I do not see "water vapor" listed. Here is the simple question: What would be the global annual average radiative forcing change since pre-industrial with CO2 without the water vapor overlap and with the the overlap? On whether the figure is interpreted as the 2005 radiative forcing or the difference since preindustrial, I agree it is the later. The figure caption and text I quote said otherwise. The FAQ you listed was a correction to the original SPM [which still contains the erroenous information].
0 0
29. paulhtremblay - I never said it was 20%. My estimate was higher than that but significantly lower than 50%. I presented a way to resolve this issue. It seems, however, that the comments on this thread has deteroriated again, as instead of answering my questions, you (and others) keep insisting that I agree with your view, even when I present information/questions that conflict with your statements. For example, why does it matter if the fraction of radiative forcing in 2005 compared with pre-industrial was 28% or 48%? My analysis suggests a smaller fraction but it is increasing with time. However, why do we care? Its biogeochmeical effect is directly connected to its atmospheric concentration and we know that much better than we know the global average radiative forcing. By focusing on such trivial questions as this fraction, the really important science questions which I have raised are being ignored on SkS.
0 0
30. Professor Pielke used a value of 1.1 W/m2 not 1.4 W/m2 for the CO2 forcing. He did this to leave the total forcing from greenhouse gases unchanged. After increasing the forcing for CH4 he was obliged to decrease the forcing for CO2. "By summing the 0.8 Watts per meter squared for methane and using the total of 2.4 Watts per meter squared of the well-mixed greenhouse gases from the IPCC Report, the radiative contribution of CO2 reduces to about 46% of this component of radiative forcing (1.1 Watts per meter squared)." As was pointed out on real climate at the time this is totally unjustified since the forcing effect of CO2 is independently derived.
0 0
31. I agree with MA Roger #26 that Dr Pielke's responses are somewhat confusing. In all these discussions of Radiative forcing using the AR4 chart, the 'Human Activity' forcings are referenced to year 1750, when it is assumed that these forcings were 'insignificant' ie: zero. That means that all these forcings are absolute numbers baselined to zero. This might answer Dr Pielke's confusion. As Tom Curtis points out from AR4 "The only increase in natural forcing of any significance between 1750 and 2005 occurred in solar irradiance." The problem with including solar forcing in the sum of the AR4 chart has been pointed out by others in SKS threads in that we don't know if there was a positive or negative planetary warming imbalance in 1750, and whatever it was - it could only have come from solar irradiance since all the 'human activity forcings' are zero. It is likely that as the Earth warmed out of the little ice age, the solar irradiance imbalance was positive - not zero, so the AR4 value of 0.12W/M2 should be added to whatever the 1750 value was in order to get a comparable absolute value in 2005. Further, the climate responses are not included in the AR4 chart and Dr Trenberth has calculated these at a net minus (-)0.7W/M2 which brings the net warming imbalance down to +0.9W/M2. It should be noted that Dr Trenberth uses a figure of minus (-)2.8W/M2 for radiative cooling (stefan-boltzman) and +2.1W/M2 for water vapour and ice albedo feedback to arrive at the net minus (-)0.7W/M2 climate response. Of course the +0.9W/M2 is also in dispute in recent times due to Dr Hansen's claimed increased aerosol reflectivity and effective reduction of the warming imbalance to about +0.6W/M2. The point to be made here is that since 1750, all the increasing 'human activity' forcings and climate responses have acted together producing a continuously changing net imbalance forcing, the sum total of which integrated over time will represent the net energy gained by the planet. Most of this energy must be sequestered in the oceans and represented by past temperature increase and phase changes in ice or water. Arguing the proportions of CO2 forcing to the percentage point without accurately knowing the historical forcings from solar, aerosols and the feedbacks is somewhat academic.
0 0
32. critical mass - We do not need to know the historical forcings to estimate the current (2011) radiative forcing from all sources and the current radiative imbalance (using ocean heat storage changes). This is one of the first questions that should be answered in a climate change assessment. I do not see how this is confusing. :-)
0 0
33. Dr Pielke @28, 1) I recommend that you reread my post @25, or better yet, Myhre and Stordal, 1997. As clearly indicated in my post, it was Myhre et al, 1998 who determined the strength of the CO2 radiative forcing by determining the value of the constant in the simple formula for radiative forcing. In doing so they corrected downwards the factor previously used from 6.3 to 5.35. As explained previously, Myrhe et al is built on the detailed model comparisons in Myrhe and Stordal 97, which include a global model run at a 2.5o x 2.5o resolution. That model, because global necessarily included the difference in radiative transfer between tropical and not tropical regions. To further clarify the point, I noted that M&S97 had also run both the broadband model and the LBL model for both tropical and mid-latitude summer conditions, with the latter showing the stronger forcing, clearly showing the effect of increased humidity and cloud cover had been included. To that information, we can add the following quote from the Third Assessment Report:
"IPCC (1990) and the SAR used a radiative forcing of 4.37 Wm-2 for a doubling of CO2 calculated with a simplified expression. Since then several studies, including some using GCMs (Mitchell and Johns, 1997; Ramaswamy and Chen, 1997b; Hansen et al., 1998), have calculated a lower radiative forcing due to CO2 (Pinnock et al., 1995; Roehl et al., 1995; Myhre and Stordal, 1997; Myhre et al., 1998b; Jain et al., 2000). The newer estimates of radiative forcing due to a doubling of CO2 are between 3.5 and 4.1 Wm-2 with the relevant species and various overlaps between greenhouse gases included. The lower forcing in the cited newer studies is due to an accounting of the stratospheric temperature adjustment which was not properly taken into account in the simplified expression used in IPCC (1990) and the SAR (Myhre et al., 1998b). In Myhre et al. (1998b) and Jain et al. (2000), the short-wave forcing due to CO2 is also included, an effect not taken into account in the SAR. The short-wave effect results in a negative forcing contribution for the surface-troposphere system owing to the extra absorption due to CO2 in the stratosphere; however, this effect is relatively small compared to the total radiative forcing (< 5%)."
(My emphasis) The Fourth Assessment Report contented itself with saying:
"The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR. Note that for CO2, RF increases logarithmically with mixing ratio.) Collins et al. (2006) performed a comparison of five detailed line-by-line models and 20 GCM radiation schemes. The spread of line-by-line model results were consistent with the ±10% uncertainty estimate for the LLGHG RFs adopted in Ramaswamy et al. (2001) and a similar ±10% for the 90% confidence interval is adopted here. However, it is also important to note that these relatively small uncertainties are not always achievable when incorporating the LLGHG forcings into GCMs. For example, both Collins et al. (2006) and Forster and Taylor (2006) found that GCM radiation schemes could have inaccuracies of around 20% in their total LLGHG RF (see also Sections 2.3.2 and 10.2)."
(My emphasis) Ramaswamy et al, 2001 is of course, the IPCC TAR. The IPCC do not feel it necessary to spell out details that have been public knowledge for six years (as at the time of the AR4), contenting themselves with a reference to the original discussion. Now, given the detailed analysis by Myhre and Stordal and the explicit statement by the TAR, do you still wish to maintain that the radiative forcing as calculated does not allow for overlap with H2O in the tropics? 2) The passage you quoted, it was not a figure caption, did not say otherwise, it just did not specify a fact that was well known. Further, the FAQ plus Fig 2 of the FAQ which I reproduced was not added afterwards. It can be found on page 135 of the PDF reproduction of the original report for anybody interested. What is more, the figure 2.20A on which it is based (page 203) has the same heading. And if that is not enough, we read in the executive summary of Chapter 2:
"The combined anthropogenic RF is estimated to be +1.6 [–1.0, +0.8][2] W m–2, indicating that, since 1750, it is extremely likely[3] that humans have exerted a substantial warming influence on climate. This RF estimate is likely to be at least five times greater than that due to solar irradiance changes. For the period 1950 to 2005, it is exceptionally unlikely that the combined natural RF (solar irradiance plus volcanic aerosol) has had a warming influence comparable to that of the combined anthropogenic RF."
(My emphasis) Seeing you bring up the Summary for Policy Makers, we read there:
"Changes in the atmospheric abundance of greenhouse gases and aerosols, in solar radiation and in land surface properties alter the energy balance of the climate system. These changes are expressed in terms of radiative forcing,[2] which is used to compare how a range of human and natural factors drive warming or cooling influences on global climate. Since the TAR, new observations and related modelling of greenhouse gases, solar activity, land surface properties and some aspects of aerosols have led to improvements in the quantitative estimates of radiative forcing."
You will notice the footnote after the introduction of the term "Radiative Forcing". That footnote reads:
"Radiative forcing is a measure of the influence that a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system and is an index of the importance of the factor as a potential climate change mechanism. Positive forcing tends to warm the surface while negative forcing tends to cool it. In this report, radiative forcing values are for 2005 relative to pre-industrial conditions defined at 1750 and are expressed in watts per square metre (W m–2). See Glossary and Section 2.2 for further details."
Should we check the glossary as well, or is my point sufficiently made? This is a very minor, an absolutely trivial point, except for one factor. It is one thing for a Professor of Climatology with, I must add, a very distinguished career, to make a simple mistake on a fact you would expect him to know well. It is quite another to try and save face by making "facts" up. Knowledge is not so often found in this world that it can be thrown away in face saving excercises.
0 0
34. Tom Curtis - You seem to persist in missing my issue. I know the models have been applied which include the water vpaor and CO2 overlap. However, I have not seen this reported using the 1-D radiative transfer calculations as I proposed. I repeat my question (and stop referring to papers - I am asking a straightforward question): What would be the global annual average radiative forcing change since pre-industrial with CO2 without the water vapor overlap and with the the overlap? On the IPCC values, I agree that they are presenting the difference between pre-industrial and 2005. That is not in dispute. They are inconsistent, however, in terms of how they write this in the text in places as they specifically write, for example, in SPM.2 "Global average radiative forcing (RF) estimates and ranges in 2005" [http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf] This is a trivial issue, except that i) the statement is incorrect as it is not the "forcing" in 2005 and ii) quite a few people accept that the values in the figure are the current forcings. Finally, if I am going to continue this discussion with you, keep your snarky comments out of your posts. That is why I left SkS before. We disagree. That's the way science goes. But when you start with insults because I do not accept your view, I will go off to where more constructive debating occurs.
0 0
35. But when you start with insults because I do not accept your view, I will go off to where more constructive debating occurs. As a matter of interest (and certainly not in a snarky or insulting manner), can you give any examples of where you would get such debating - if you have online sources in mind ? I am genuinely interested.
0 0
36. Dr. Pielke, you made a rather odd statement:
"The values I present were given to show reasonable deductions from the IPCC value and that of Skeie 2011. I do not know the precise value (and neither does anyone) but the fraction is clearly well less than 50% using reasonable values."
Let's examine where we're at in this discussion. You originally argued that CO2 was only responsible for 26-28% of the net positive radiative forcing. We found some mathematical errors in your calculations which bring the value up to ~30%, and you concur with these corrections. We also identified a 0.3 W/m2 error in your methane estimate, a 0.5 W/m2 error in your albedo estimate, a 0.3 W/m2 error in your ozone estimate, and a 0.4 W/m2 error in your CO2 estimate - you appear to concur with all of these corrections, with some caveats on CO2. These corrections bring the value up around our original estimate of 50%. You then claimed that 20% of the CO2 forcing has been "accomodated by a warmer climate" based on a personal communication. We have several issues with this claim, but regardless, it is not relevant to the question at hand (the CO2 contribution over the past century). Your only other revisions to the Skeie estimate are to use the Ramanathan and Carmichael best estimate for the black carbon forcing of 0.9 W/m2 (without justification for this choice, which also conflicts with your previous claim that the black carbon forcing is 0.5 W/m2), an assertion of a 0.2 W/m2 aerosol forcing, and an assertion that the water/CO2 overlap in the tropics has been ignored. As Tom Curtis has noted, the overlap was addressed in seminal work by Myhre, and incorporated into the IPCC reports. There is reason to believe the Ramanathan BC forcing is too high (i.e. see Bond 2011 and another paper by Skeie et al. 2011), but incorporating their BC value and your aerosol forcing estimates, the CO2 contribution is still between 41% and 50%, and certainly far greater than your originally asserted 26%. So we can emphasize the point that any reasonable calculation based on the agreed-upon inputs will put the CO2 contribution to the net positive forcing at 41% to 50% and rising. You now claim that whether the value is 25% or 50% does not matter, yet you have frequently raised the issue on your blog and in presentations and talks. Ultimately we have identified a number of errors in your calculations, and yet you continue to insist that despite these corrections, somehow your argument must be correct. We have supported our estimate with detailed calculations and references, and have demonstrated that CO2 has thus far accounted for approximately 0.8C surface warming - a calculation which you have not disputed. We are a bit disappointed that this refutation of your reasoning leaves your sense of conviction so unmoved, but at this point, we may as well move on to other issues. Readers can examine our calculations for themselves and decide who is correct.
0 0
37. Dr Pielke @34, 1) A LBL (Line By Line) model as used by Myhre et al, 98, and Myhre and Stordal, 97 is a one dimensional radiative transfer model, and hence the papers to which I have been referring answer the general point you have been making. They, however, address the practical question of what the radiative forcing is in the real world which includes water vapour. They do not adress the hypothetical question that you ask, ie, what would the radiative forcing of CO2 be in the absence of water vapour. Because the question is purely hypothetical, it is irrelevant to future discussions so I have no interest in doing a literature search in the off chance that somebody has answered this hypothetical. Please note that the radiative forcing of CO2 if there was no overlap with H2O, and the radiative forcing of CO2 in the absence of water vapour are the same, so the slightly different form in which I have expressed the question is of no consequence. The global annual average radiative forcing of CO2 in the presence of H2O with overlaps accounted for in 2005 is the value given by the IPCC in AR4. The value in 2011 is that given by Skeie et al. 2) From the glossary of IPCC AR4:
"Radiative forcing Radiative forcing is the change in the net, downward minus upward, irradiance (expressed in W m–2) at the tropopause due to a change in an external driver of climate change, such as, for example, a change in the concentration of carbon dioxide or the output of the Sun. Radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values, and after allowing for stratospheric temperatures, if perturbed, to readjust to radiative-dynamical equilibrium. Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for. For the purposes of this report, radiative forcing is further defined as the change relative to the year 1750 and, unless otherwise noted, refers to a global and annual average value. Radiative forcing is not to be confused with cloud radiative forcing, a similar terminology for describing an unrelated measure of the impact of clouds on the irradiance at the top of the atmosphere.
(My emphasis) Therefore, according to the glossary, when the IPCC AR4 refers to the radiative forcing for 2005 they mean the change in radiative forcing in 2005 relative to 1750, unless they explicitly state otherwise. That could not be clearer. What is more, the formula for radiative forcing of CO2 is given by the simple formula: ΔF = αln(C/Co), where ΔF is the change in forcing, C is the CO2 concentration in the current year, Co is the CO2 concentration in the inital year, and α = 5.35 (source; also Myhre et al, 98, and various IPCC reports). This is the simple formula referred to in AR4. Clearly from its formula, the radiative forcing requires a baseline year. It is impossible to derive the radiative forcing from this formula for a single year simpliciter for the result would necessarily be 0. Consequently no interpretation of "Radiative Forcing" in AR4 in which it is treated as being the forcing in a single year is consistent with the text which explicitly refers to this simple formula. In other words, not only are you in error in your interpretation of the IPCC AR4, logically your interpretation could not have failed to be in error. Note I say that you are in error because you insist on interpreting the IPCC AR4 as inconsistent whereas in fact you are simply failing to interpret their words in accordance with the glossary. 3) You object to what you call my snark. Well I object to the extreme lengths of misrepresentation you are prepared to go to cover up an error. Please note that "misrepresentation" is neither snark nor accusation. It simply notes that you have represented the facts to be one way (the IPCC AR4 was corrected; the IPCC AR4 is inconsistent) when transparently, and as could be discovered by simply reading a glossary, they were another way. You and I are both here trying to reach SkS's audience in order to convince them of what we believe to be the truth about global warming. Fine, I am a great believer in the open market of ideas. But I will not accept a restraint on me that I must not correct your gross errors should they occur (and as has occurred) because such correction will offend our sensibilities. If you cannot debate under the condition that your mistakes will be corrected, then (speaking only for myself), I see little point in debating you.
0 0
38. JMurphy - I am able to present my viewpoint on my weblog. Colleagues (even those who disagree with me) reply via e-mail and I have posted a number of guest posts from such an interaction. As shown here, however, (and when I had comments), people often see they have an opportunity to become personal.
0 0
39. dana1981 I never said I agreed with all your "corrections" (and you have even used my back-of-the-envelope estimate with solar included). You call them "errors" which hardly represents my view of the estimates. I just "accepted" them and then started with the higher fraction of positive radiative forcing from CO2 and presented other reasons it should be lower. From IPCC AR4 to Skeie 2011 it has been reduced from 52.4% to 48.2% [quite a bit of significance with three significant digits for such an imprecise quantity]. To also refer to the Myhre paper as the definitive statement ("seminal") is quite an overreach. The water vapor/CO2 overlap has not been completely addressed and I have repeated my question and will do so again; What would be the global annual average radiative forcing change since pre-industrial with CO2 without the water vapor overlap and with the the overlap? I also repeat What is the current (2011) radiative forcing from each of the terms (including CO2) and what is the current radiative imbalance? Let us see your back-of-the-envelope estimate for these.
0 0
0 0
41. Very well Dr. Pielke. You may not agree with our corrections of your errors, but as I said, readers can decide for themselves who is right on this matter, since we have thoroughly documented the sources of our calculation and corrections in the post above. Regarding your question about the "current (2011) radiative forcing", I refer you to Tom Curtis' comment #37 (emphasis added):
"ΔF = αln(C/Co), where ΔF is the change in forcing, C is the CO2 concentration in the current year, Co is the CO2 concentration in the inital year, and α = 5.35 (source; also Myhre et al, 98, and various IPCC reports). This is the simple formula referred to in AR4. Clearly from its formula, the radiative forcing requires a baseline year. It is impossible to derive the radiative forcing from this formula for a single year simpliciter for the result would necessarily be 0."
If you would like us to answer your question, you will have to provide a baseline reference year. I also agree with Tom's answer to your first question, also in comment #37:
"Because the question is purely hypothetical, it is irrelevant to future discussions so I have no interest in doing a literature search in the off chance that somebody has answered this hypothetical."
0 0
42. dana1981 "the radiative forcing requires a baseline year." is not correct. The quote you have states "ΔF is the change in forcing". That does require a base year. The forcing does not and is instantaneous. One would never state that "acceleration requires a base time period." Acceleration is the derivative of the velocity at any time. Similarly, radiative forcing is at a specific time although one could time average (e.g. the yearly global averaged radiative forcing).
0 0
43. dana1981 - You also write "Because the question is purely hypothetical, it is irrelevant to future discussions so I have no interest in doing a literature search in the off chance that somebody has answered this hypothetical." The question is hardly "hypothetical" as the water vapor/CO2 overlap is a scientific issue.
0 0
44. Tom Curtis wrote: >>If you cannot debate under the condition that your mistakes will be corrected, then (speaking only for myself), I see little point in debating you. Keep in mind you are not trying to convince Dr. Pielke, but convince the audience at SkS, so I would encourage you to keep posting, as your posts have proved very valuable and instructive.
0 0
45. Dr. Pielke writes: >>The question is hardly "hypothetical" as the water vapor/CO2 overlap is a scientific issue. So is the matter of radiative forcing. Yet, you write "For example, why does it matter if the fraction of radiative forcing in 2005 compared with pre-industrial was 28% or 48%?" That strikes me not only as anti-scientific, but hypocritical. You imply that the Mhyre paper does not adequately address the overlap issue, but you are not at all specific, instead relying on a hypothetical question and to your own blog post (as opposed to a peer reviewed article) to provide some refutation, though even then, you remain vague as to how this overlap supports your original estimate of 28%.
0 0
46. Dr. Pielke: "the radiative forcing requires a baseline year." is not correct. The quote you have states "ΔF is the change in forcing". That does require a base year. The forcing does not and is instantaneous. I will point out that, despite this side-track of current forcing imbalance, the original topic of this thread and the tables that are the basis of the discussion are the numbers for changes in forcing since 1750, as is customary in this field. That is, incidentally, completely clear from the TAR through AR4, as defined in the glossaries, and in labeling of the various tables. In that regard you have repeatedly emphasized a 26.5% relative contribution by CO2 to the forcing deltas, in disagreement with IPCC estimates (here, for example), stating that "The IPCC Has Provided An Inaccurate Narrow Perspective Of The Role Of Humans". At this point in the discussion I believe that dana and Tom Curtis have clearly presented why they disagree. In my opinion you have neither presented either a relevant argument for your factor of ~2x difference with IPCC numbers on total forcing, nor for that matter any numeric estimates of yours as to "CO2 ... warming of average global surface temperature over the past century". Perhaps an agreement to disagree on this topic?
0 0
47. I think it's worth mentioning that while Pielke now says it doesn't matter whether CO2 is responsible for 25 or 50% of the net positive forcing, his own presentations say otherwise. In his 2006 presentation which we referenced in this post, Dr. Pielke devoted 4 slides to this issue. And on his blog, he has devoted several posts to the subject. And at the Conference on the Earth’s Radiative Energy Budget Related to SORCE on September 20-22, 2006, he made the same argument using the same presentation. The presentation has also been featured on several 'skeptic' blogs (i.e. Jennifer Marohasy and JunkScience). And just last month in an interview with a Canadian newspaper (and in our discussions here), Dr. Pielke argued that too much attention is being paid to CO2 - based on our calculations, that's a hard argument to justify. Suddenly claiming that the question is an unimportant one seems like a fairly radical and sudden change, given how frequently Dr. Pielke makes this argument. But as KR suggests, we will probably have to agree to disagree and move on.
0 0
48. dana1981 - Your comment and that of the others show why it is futile to debate on this website. You get hung up on one issue where we disagree. I never focused on the estimate of the fraction of positive radiative forcing (either currently or the change from pre-industrial times) as a primary reason why we need to broaden beyond the radiative effect of CO2. This need is equally true if the fraction is 28% or 50% (or 100% for that matter). My estimate of the fraction of CO2 was to illustrate with reasonable interpretations from the literature that it may be less than reported in the IPCC report. I came up with ~28%. I adopted a different approach in my response on this weblog post, accepting for the sake of discussion several of your conclusions on the forcing and starting from your fraction and then working with a realistic estimate of black carbon and the two indirect aerosol effects, and the longer period of influence of CO2 to come up with a smaller fraction. We do not agree on the fraction. In terms of our EOS article and the main issues, this is just a sideshow. The real substantive issue, however, which no one on this weblog seems to want to debate, is my (and my colleagues) conclusion that "The IPCC Has Provided An Inaccurate Narrow Perspective Of The Role Of Humans". We presented this view in our paper Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union.http://pielkeclimatesci.wordpress.com/files/2009/12/r-354.pdf NRC 2005 has the title "Radiative forcing of climate change: Expanding the concept and addressing uncertainties.' Why not discuss these publications? You would likely then expand your readership beyond those who accept the IPCC as a robust assessment of the role of humans on the climate system. I recommend moving on to the other issues. I do appreciate the opportunity to see these counterpoints and will be posting a summary of outstanding questions starting next week.
0 0
49. Methane: A brief review of the comments did not find anyone who noted the foundation of the Shindell methane estimate of 0.8 W/m2: this is an emissions based estimate, not a concentration based estimate. Shindell's calculations looked at eliminating historical methane emissions (since 1750), resulting in a forcing 0.8 W/m2 lower today: that 0.8 would come partly from an increase in CH4 concentration but also an increase in O3 concentration, stratospheric water vapor, and (this was the novel contribution of the Shindell paper) a reduction in sulfate loading. In contrast, the IPCC estimate is based on concentration of CH4 only. (note that if you use Shindell's estimate, than you can't also use O3 forcing as a separate row - that is definitely double counting). Black carbon: I will note that the Hansen & Nazarenko estimate of 0.3 W/m2 of snow albedo forcing was obsolete even before it was published, as it was a result of a calculation error (as noted in a later Hansen paper). AR4 estimated 0.1 W/m2, and more recent papers are slightly lower. Of course, since AR4 estimated the direct BC effect at 0.34, for a total of 0.44 (I'd correct the SKS table to reflect that), this isn't a big deal. Though... since BC is rarely emitted without co-emissions of organic carbon and other cooling aerosols, I'm not sure it belongs in this kind of calculation. Ozone: I don't know why Pielke Sr. thinks that it is a good idea to extrapolate a result for ozone warming in the Arctic in two seasons to annual global forcing. Albedo: I also don't know why he thinks that a 4 year trend is appropriate to compare to forcing since 1750. I'll note that AR4 estimated -0.2 W/m2 for the long term surface albedo contribution. This probably isn't directly comparable to Pielke's CERES results which are presumably dominated by short-term cloud changes. Head-of-a-pin: All of this is a bit like counting angels, especially when you begin to throw in aerosols, because of the negative and positive contributions. Is 1.66 W/m2 100% of net forcing? 50% of the SKS list of forcings? 45% if you include black carbon but not BC co-emissions? Whatever. It is pretty clear that CO2 is the single largest contributor to recent and projected future warming, even if it isn't the only contributor globally, and the regional picture gets more complicated with urban heat islands and ENSO variability and so forth.
0 0
50. I don't know if Dr. Pielke has given up on this thread yet, but I would like to just express my understanding of what he means when he says it doesn't make a difference if the anthropogenic forcing is 30% or 50% CO2. In that range, 50-70% of anthropogenic forcing is not CO2, which means those other things should probably account for 50-70% of the discussion about global warming. From my perspective that is certainly a valid argument. I assume that is the point of his argument, but perhaps I'm wrong. The question in my original post (#21) is related to that. How much of the anthropogenic forcing can be reasonably attributed to fossil fuel extraction/burning regardless of the mechanism for forcing (CO2, methane, aerosol, black carbon, ozone, etc.)? The answer to that question may make the current focus of the public discussion seem more in line with reality.
0 0
|
2021-06-12 15:13:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7316064834594727, "perplexity": 1725.5805061654848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487584018.1/warc/CC-MAIN-20210612132637-20210612162637-00163.warc.gz"}
|
https://sandbox.dodona.be/en/activities/933531418/description/dkZjxu2_AIe7b-zj/?dark=false
|
Crickets chirp by rubbing their wings over each other. Yet it is only the males of the species that make this noise — they do so to attract mates. Therefore, when you're happily listening to the soothing sound of crickets chirping, you're actually eavesdropping on a courting ritual meant to warn off other lust-filled male crickets and to draw interested females to the ones doing the serenading.
The notion that counting the chirps of crickets can serve as an informal way of working out the temperature is not new. It was originally formulated in 1897 by physicist Amos Dolbear1 in an article called "The Cricket as a Thermometer2". Dolbear originally stated that the outdoor temperature determines the number of cricket calls one would hear. Over the years, his way of looking at this relationship was turned around — people now count the chirps to get the temperature rather than consult the thermometer to figure out how many cricket calls they will hear. Dolbear's Law expresses the relationship as the following formula, which provides a way to estimate the temperature $$T_F$$ in degrees Fahrenheit from the number of chirps per minute $$N_{60}$$: $T_F = 50 + \left(\frac{N_{60} - 40}{4}\right)$ Reformulated to give the temperature in degrees Celsius (°C), it is: $T_C = 10 + \left(\frac{N_{60} - 40}{7}\right)$ The above formulae are expressed in terms of integers to make them easier to remember — they are not intended to be exact. In popular culture, Dolbear's Law was referenced in an episode of the British comedy show QI (starts at time 28:36).
### Input
The number of observed chirps per minute $$N_{60} \in \mathbb{N}$$.
### Output
A line containing the text temperature (Fahrenheit): TF with $$T_F$$ the temperature in degrees Fahrenheit according to Dolbear's Law, given the number of observed chirps per minute $$N_{60}$$ as read from the input. A second line giving the same temperature $$T_C$$ but expressed in degrees Celsius.
### Example
Input:
43
Output:
temperature (Fahrenheit): 50.75
temperature (Celsius): 10.428571428571429
|
2022-06-28 17:32:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3783646821975708, "perplexity": 1345.1452148804462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00253.warc.gz"}
|
https://www.nature.com/articles/s41598-018-24418-8?error=cookies_not_supported&code=d646c0fc-05db-480b-bfd5-9893de791b28
|
Article | Open | Published:
# Role of the Interplay Between the Internal and External Conditions in Invasive Behavior of Tumors
## Abstract
Tumor growth, which plays a central role in cancer evolution, depends on both the internal features of the cells, such as their ability for unlimited duplication, and the external conditions, e.g., supply of nutrients, as well as the dynamic interactions between the two. A stem cell theory of cancer has recently been developed that suggests the existence of a subpopulation of self-renewing tumor cells to be responsible for tumorigenesis, and is able to initiate metastatic spreading. The question of abundance of the cancer stem cells (CSCs) and its relation to tumor malignancy has, however, remained an unsolved problem and has been a subject of recent debates. In this paper we propose a novel model beyond the standard stochastic models of tumor development, in order to explore the effect of the density of the CSCs and oxygen on the tumor’s invasive behavior. The model identifies natural selection as the underlying process for complex morphology of tumors, which has been observed experimentally, and indicates that their invasive behavior depends on both the number of the CSCs and the oxygen density in the microenvironment. The interplay between the external and internal conditions may pave the way for a new cancer therapy.
## Introduction
Cancer usually begins with out-of-order duplication of a single cell that has stem cell-like behavior, referred to as the cancer stem cell (CSC)1. Based on the CSC hypothesis, a CSC can duplicate without limit and differentiate2. The classical CSC hypothesis proposes that, among all cancerous cells, only “a few” act as stem cells, but studies have reported3 that a relatively high proportion of the cells are tumorigenic, contradicting the general belief. The CSCs have been proposed as the driving force for tumorigenesis and the seeds for metastases4. Their decisive role in maintaining capacity for malignant proliferation, invasion, metastasis, and tumor recurrence has been reported frequently5. For example, CSCs of breast tumor are involved in spontaneous metastases in mouse models6. Moreover, CSCs promote the metastatic and invasive ability of melanoma7 and their prsence is correlated with invasive behavior at colorectal adenocarcinoma8. The effect of the number of CSCs on tumor morphology has been the subject of several experimental studies and simulation. Based on simulations9,10, the frequency of the CSCs smoothens the morphology of tumor, and based on an experimental study11, the number of CSCs is higher in tumors with medium invasiveness (the so-called Gleason grade) than tumors with lower (Gleason grade) and higher (Gleason grade) invasiveness. The relation between tumor malignancy and the frequency of the CSCs needs, however, more clarification4.
Cancerous cells use oxygen to produce metabolites for duplication and growth12. Experimental in-vivo13 and in-vitro14 studies, as well as computer simulations15,16, have reported that the density of oxygen regulates tumor morphology and its shortage drives morphological irregularities. Due to the apparent strong correlations between the tumors’ shape and their malignancy, fractal characterization of tumors has been used as a diagnostic assay for various types of tumors17,18,19. However, there is still no explanation as to why cellular structures at the scale of tumors display self-similar characteristics20. of well-known physical phenomena, including diffusion and reaction-diffusion, as well as percolation, surface growth, and models of phase transition21.
In this paper we propose a novel model to study the effect of the number of the CSCs and the oxygen’s density on the invasive behavior of a general type of cancer. As we show below, the development of irregular shapes and the respective tumor’s invasive behavior are correlated with the two factors. Unlike the previous studies, we present a quantitative measure by which one can understand better the effect of completion on the malignancy of tumors. We take the shape irregularity as the factor for identifying the invasive behavior of tumor and compare our results with experimental reports. The model that we present contains the essential features of the cells, such as symmetric/asymmetric division, metabolic state, cellular quiescence and movements, apoptosis, and existence of oxygen and its consumption. Our results explain, for the first time to our knowledge, the aforementioned experimentally-observed fractal behavior and contradict the predictions of recent models for the relation between the number of the CSCs and the growth rate and invasion. In addition, we believe that the results may cast doubt on the recent therapeutic approach based on oxygen deprivation.
## Results
As the system evolves, the cells consume oxygen, enhance their metabolic state, and proliferate after reaching the energy level of u p , in order to create a clone - the tumor - see Fig. 1. The perimeter of the clone is the main object that we study in this paper.
As Fig. 1 demonstrates, the cells take on irregular shapes during their growth whose complexity depends on the number of the CSCs (or probability p s ). One interesting approach is to study the structure of the perimeters in the context of interface instability22,23,24. The analogy with the instability of interfaces has been established for the case of melanoma25, and the instabilities were attributed to nutrient density. But, here, we quantify tumor behavior through classifying irregular morphology of the tumors. To quantify the irregularity of the tumor’s morphology and its evolution, we use fractal analysis. To this end, we measure the average distance r from the center of the mass, as well as the area of the tumor during its growth. Figure 1 indicates that log(area) versus log r is a linear plot so that, $${\rm{area}}\sim \,{r}^{{D}_{f}}$$. Thus, the slope of the line in the logarithmic plot is the fractal dimension D f , implying self-similarity of the tumors of various sizes. The self-similarity of the tumors’ growth is the result of heterogeneous duplication on their perimeter, which itself is due to the oxygen gradient. Cells in the region with higher curvatures have better supply of oxygen, helping them increase their metabolic state, and proliferate faster. The proliferation also creates new perimeter curvature with the same behavior. As the number of oxygen consumers, which is proportional to p s , increases the competition between the cells for the limited oxygen supply intensifies and oxygen availability becomes more heterogeneous. Thus, the tumors take on more irregular shapes or lower fractal dimension D f , contradicting previous studies9,10 that proposed an adverse relation between the number of the CSCs and the invasive behavior.
We note that fractal scaling has been reported previously in the experimental studies17,18. Moreover, irregular shapes have been interpreted as an indication of invasive behavior of various tumors17,18,19. Tumors with more irregular shapes are more invasive, and in our model the more irregular tumors have smaller D f . There are several reports that confirm the correlation between D f and tumor malignancy (a malignant tumor possesses a lower fractal dimension than that of a benign mass)26,27,28,29.
A study of the variations of D f with p s and the density n of the oxygen is useful to characterization of the tumor behavior. The computed D f for various values of p s and oxygen densities is shown in Fig. 2.
Figure 2 presents explicitly the value of D f and the corresponding malignancy of tumor as a result of both the internal feature and the external conditions. For a fixed density n of oxygen, the invasive behavior of tumor always increases with p s , implying that, regardless of the environmental conditions, higher numbers of CSCs always lead to a more invasive behavior; see Fig. 2 in the Supplementary Information (SI). This is in contradiction with the existing reports on the adverse effect of p s on the tumor’s invasive behavior9,10. On the other hand, the effect of the environmental stress on invasion is regulated by internal feature of the cells, p s . For p s = 1, the oxygen deprivation significantly increases the malignant behavior of tumors, while for p s = 0, the density of oxygen has negligible effect on tumor’s invasive behavior.
## Relation to Superficial Spreading Melanoma
As presented here, our model explains a two-dimensional (2D) tumor growth. Early stages of Superficial Spreading melanoma has a 2D structure that might be a promising case to apply our findings to. Experiments indicate that there is no blood flow to the Superficial Spreading melanoma (SSM) with thickness less than 0.9 mm33. In addition, melanoma is, at least in its early stages, an approximately 2D phenomenon, so that a 2D model may properly produce its structure. The malignant cells in SSM stay within the original tissue - the epidermis - in an in-situ phase for a long time, which could be up to decades. Initially, the SSM grows horizontally on the skin surface, known as radial growth, with lesion indicated by a slowly-enlarging flat area of discolored skin. Then, part of the SSM becomes invasive, crossing the base membrane and entering the dermis, giving rise to a rapidly-growing nodular melanoma within the SSM that begins to proliferate more deeply within skin.
## Discussion
The proposed model sheds new light on and provides new insight into the invasive behavior of tumors by deciphering the effect of both intrinsic and extrinsic features of cells. It also demonstrates that elimination of the oxygen in the previous models gives rise to such a relation. The fractal behavior that we identify and attribute to the growth limited to the perimeter is similar to surface growth17,34. Nevertheless, close inspection of the proliferation activity in the perimeter in the proposed model reveals larger parts of the cells as proliferative cells; see Fig. 1 of the SI. As the model demonstrates, a single biological parameter, namely p s , changes the cell’s features and results collectively in various self-similar states with distinct fractal dimensions. Previous models, which considered the CSCs9,10, obtained an inverse relation between the number of the CSCs and invasion, but our model indicates increased malignancy to be proportional to larger numbers of the CSCs. Compared to experimental data11 our model confirms increasing of morphological irregularities (Gleason grade), but complete consistency require more biological details in the model.
Tumors with low number of the CSCs that were proposed by the previous studies9,35 did not respond to oxygen deprivation, as was expected13,14. Hence, tumors that respond to oxygen deprivation must have larger number of the CSCs. In addition, models that do not consider the CSC evolution and endow the cells with unlimited proliferation capacity14,15 produce tumors corresponding to p s = 1. Such models consider the effect of oxygen and, as our model confirms, oxygen deprivation leads to higher irregularities. As p s decreases, the effect of oxygen vanishes. Thus, a lower number of the CSCs, which was proposed previously9,35, does not conform to the experimentally well-established oxygen effects. Our model, in addition to reproducing such result, provides quantitative and comparable results to classify the irregularities that can be used to analyze experimental data that have been reported for the fractal dimensions.
The conceptual results are applicable to the growth of other solid tumors that display the aforementioned behavior in response to oxygen tension and frequency of CSCs. For example, in the case of the SSM in which the number of CSCs is not small3,36, oxygen deprivation probably increases tumor malignancy. Contrary to the previous studies, the present model predicts invasion as the result of both the tumor and the microenvironment, demonstrating the effect of nutrient deprivation on the invasion. This implies that recent studies on such therapeutic approach37,38 must consider carefully the side effects that, based on our model for tumors with larger numbers of the CSCs, can increase tumor malignancy.
## The Model
Similar to many other natural systems, biological media fluctuate due to the intrinsic randomness of the individual events39. Cells are involved in regulatory pathways that depend highly nonlinearly on the chemical species that are present in low copy numbers per cell40, as a result of which other factors, such as the forces between cells, fluctuate significantly41. Thus, statistical approaches are suitable for simulating cells’ behavior. We consider the 2D lattice shown in Fig. 3 in which each bond is 100 micrometer long, while each site has the capacity for 100 cancer cells that typically have 10 μm diameter42. The nutrient density is constant on the perimeter of a circle with a radius of 1 cm. It diffuses into the internal zones and is consumed by the living cells. In the SI we present the results for various other initial/boundary conditions for the oxygen supply, including smaller and larger radii of the circle, regular and random distribution of the oxygen source, as well as its uniform distribution in the medium, and show that the predictions of the model do not depend on the choice of the oxygen supply mechanism. Though we consider 2D structures, the results for a 3D structure for oxygen supply system (vessels and capillaries) would remain qualitatively the same, while the model can be extended to 3D.
Keeping the oxygen density uniform in the milieu −0.15 mol/ml16 - a CSC is inserted at the center of the medium that consumes the oxygen and enhances its metabolic state. Although metabolic pathways are not fully understood, metabolic activity is a crucial factor in a cell’s decision to either proliferate or die43. In the former case a cell must increase its biomass and replicate its genome prior to division, in order to create two daughter cells. Thus, the cell must generate enough energy and acquire or synthesize biomolecules at a sufficient rate to meet the proliferation demand44. Given such biological facts, we choose metabolic state as the decisive factor for a cell’s decision to proliferate, and define an internal energy ucell for each cell as an indicator of its metabolic state. Physically, the cells acquire energy from the environment to accumulate internal energy45 - the energy of the absorbed molecules - which evolves according to the energy conservation law:
$$\frac{\partial {u}_{{\rm{cell}}}}{\partial t}=\chi n(x,y,t)-\gamma {u}_{{\rm{cell}}},$$
(1)
where n(x, y, t) is the oxygen density at position (x, y) and time t, with χ and γ being positive constants related to energy accumulation and consumption rate (for details of all the constants and their values see Table 1 in the SI). If a cell’s energy reaches a threshold u p , it will begin duplication. We set u p , χ and γ such that every cell in the appropriate situation will be in the duplication state after 15 hours46, which is about the time that tumor cells need to reach the so-called cell checkpoints eG1 (early G1), G1 and eS in the cell cycle for division. G1 is the primary point at which a cell must decide whether to divide. After it passes G1 and enters the S phase, the cell is committed to division46 (other checkpoints, such as G2 at which the cell is mostly concerned with the condition of its DNA, still remain to be completed in the next step). As we show below, Eq. (1) together with the limits imposed, reproduces cell plasticity and various proliferation activities under a variety of external conditions47 that were reported recently46. Time is measured in units of 10 minutes.
The evolution of the internal energy ucell of the cells depends on the local density of oxygen through a set of coupled differential equations, and if enough oxygen exists at the position of the first CSC, ucell increases to u p and the first CSC duplicates into two daughter cells. This relation between oxygen density, cell metabolic state and its duplication dynamics ensures the apparent role of the oxygen density in the tumor evolution. One may consider various scenarios for quantitative studies of the CSC proliferation48,49,50,51, but the probability of distinct kinds of divisions has yet to be assessed experimentally. Besides, some other studies52 have proposed the cells’ self-renewal ability as the prerequisite for tumor maintenance. Thus, we choose the simplest biologically-correct model that has the ability to generate the entire possible range of the CSC population percentage, from zero up to the values produced by the various mathematical48,49,50,51 and biological models52. In this model, during duplication of each CSC one daughter cell is assumed to be CSC, while the second one is either a CSC with probability p s - the probability of symmetric duplication of the CSCs - or a cancerous cell (CC) with probability (1 − p s ); see Fig. 4. Each CC duplicates into two CCs if it is allowed to10. Such a probabilistic approach is motivated by the fact stated earlier, that according to the classical CSC hypothesis, among all cancerous cells, only “a few” act as stem cells, whereas some studies3,53 have reported that the population of CSCs can be relatively high, which is why we take the population of the CSCs (with probability p s ) as a parameter of our model. For p s = 1 the model reduces to the stochastic model of tumor development54. Every CSC continues such a division for an unlimited frequency, but the CC can have only limited generations of duplication55, which we set it to be g = 51,10 after which it will die and produce dead cells (DCs); see Fig. 4. As the cells undergo apoptosis, they are recognized and removed from the body by phagocytes. Thus, we assume that the dead cells remain inactive in the medium, but even if we eliminate them after death, the main results remain the same; see the Fig. S15 in the SI.
We define the density of cells of type i at location (x, y) at time t by,
$${C}_{i}(x,y,t)=\frac{{\rm{number}}\,{\rm{of}}\,{\rm{cells}}\,{\rm{at}}\,(x,y,t)}{{\rm{capacity}}\,{\rm{of}}\,{\rm{each}}\,{\rm{site}}}\,,$$
(2)
with i ≡ CSCs, CCs, and DCs. Equation (2) is also valid for the total density of cells, C t = CCSC + CCC + CDC. Recall also that the capacity of each site is 100 cells42. The density of the CCs is denoted by CCC(x, y, t; j) in which j indicates their generation that varies from 1 to g (after g generations they produce the DCs). Healthy tissues contain healthy cells in which the distribution of the nutrients is in a steady state. We eliminate the healthy cells for all the tumors, as our results are based on comparison with and differences of tumors’ behavior that are the most important part of our study.
Local density gradients drive the stochastic motion of the cells56. Thus, one has,
$$\frac{\partial C(x,y,t)}{\partial t}=D{\nabla }^{2}C(x,y,t),$$
(3)
where D is the diffusion coefficient. Equation (3) is applicable to the various kinds of cells, for which16,57 D ≈ 10−10 cm2/s. Population growth of biological groups depends on the species ability for proliferation and the environmental limitations. One important environmental limit is contact inhibition of cell division58, i.e., if after the energy rises to u p , the cells will duplicate if there is space; otherwise, they will stay quiescent until they find space for duplication59. Thus, proliferation at each site depends on the number of cells that can duplicate, and the effect of competition for space between all types of cells. The evolution of the CSCs that qualifies for the duplication metabolic threshold u p , is expressed by a diffusion-reaction equation,
$$\begin{array}{rcl}\frac{\partial {C}_{\mathrm{CSC}}(x,y,t)}{\partial t} & = & D{\nabla }^{2}{C}_{\mathrm{CSC}}(x,y,t)\\ & & +\,{R}_{m}{p}_{s}{C}_{\mathrm{CSC}}(x,y,t\mathrm{)[1}-{C}_{t}(x,y,t)],\end{array}$$
(4)
where R m is the rate of passing the S, G2 and M phases in the cell cycle, which is fixed as a cell that has enough internal energy (has passed the aforementioned eG1, G1 and eS phases) will duplicate in 5 hours46, if there were no other cells. The last term on the right side of Eq. (5) that includes the term [1 − C t (x, y, t)] captures the effect of contact inhibition of proliferation in which C t (x, y, t) is the total density of all cells at (x, y, t). The entire cell cycle takes 20 h. The evolution of the jth generation of the CCs is governed by
$$\begin{array}{rcl}\frac{\partial {C}_{{\rm{CC}}}(x,y,t;j)}{\partial t} & = & D{\nabla }^{2}{C}_{{\rm{CC}}}(x,y,t;j)\\ & & +\,{\delta }_{1j}{R}_{m}\mathrm{[1}-{p}_{s}\mathrm{][1}-{C}_{t}(x,y,t)]\\ & & +\,\mathrm{(1}-{\delta }_{1j}){R}_{m}{C}_{{\rm{CC}}}(x,y,t;j-\mathrm{1)[1}-{C}_{t}(x,y,t)]\\ & & -\,\mathrm{(1}-{\delta }_{jg}){R}_{m}{C}_{{\rm{CC}}}(x,y,t;j\mathrm{)[1}-{C}_{t}(x,y,t)]\\ & & -\,{\delta }_{jg}{R}_{a}{C}_{{\rm{CC}}}(x,y,t;j),\end{array}$$
(5)
where δ ij denotes the Kronecker delta, i.e., δ ij = 1 for i = j and 0 otherwise, with 1 ≤ i, jg. The first term on the right side of Eq. (5) represents diffusion of the cells due to the local concentration gradient;16,56 the second is the creation of the first generation of the CCS due to asymmetric duplication of the CSCs10, while the third term represents the creation of the jth generation (for j ≠ 1) of the CCs from duplication of the prior generation. The concentration of the CCs decreases due to duplication and creation of the next generation, which the 4th terms accounts for, while the last term takes into account the death of the final (gth) generation of the CCs. R a is the rate of apoptosis - the process of programmed cell death - and is fixed as the gth generation has a halflife equal to 1 day. Finally, the evolution of the oxygen density in the presence of the cells is governed by
$$\begin{array}{rcl}\frac{\partial n(x,y,t)}{\partial t} & = & \beta {\nabla }^{2}n(x,y,t)\\ & & -\,\alpha [{C}_{\mathrm{CSC}}(x,y,t)+\sum _{j=1}^{g}{C}_{{\rm{CC}}}(x,y,t;j)],\end{array}$$
(6)
with α being proportional to oxygen consumption rate by the cells, which is the same for both the CCs and cancerous stem cells. We varied the rates of oxygen consumption for every kind of cells, but the essential results remained the same; see the SI. α was fixed by setting the reported value for oxygen consumption16,60 to be 6.65 × 10−17 mol cell−1 s−1. β is the diffusion coefficient of oxygen in the medium, which we fixed it based on the calculations at room temperature, 10−5 cm2/s. We present in the SI the results for other values of β. For distances more than 1 cm from the medium’s center the oxygen density is constant (see the SI for the results for larger and smaller distances, as well as other ways of supplying the oxygen), and is equal to 0.15 mol/ml16. For simplicity, in all the calculations we normalize n to 1. From outside of the aforementioned circle, oxygen penetrates into the central area. Given the assumptions, the cells are active elastic species, consuming oxygen and proliferating.
As we show in the SI, other boundary conditions do not change the essential results. In addition, (i) we also varied both the proliferation activity and oxygen consumption rate for various kinds of cells, but the results remained qualitatively the same. (ii) The CSCs and CCs are assumed to have equal oxygen consumption rates, but when we changed them for every kind of cell, the results were qualitatively the same. (iii) The CSCs and CCs are assumed to have the same internal energy threshold u p for duplication, and equal rates of crossing the S, G2 and M phases in the cell cycle. But changing the proliferation activity of the cells did not change our main results. Let us also emphasize that our model is not the same as the classical models of diffusion-limited aggregation61, as such model did not deal with the effect of reaction and consumption.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. 1.
Reya, T., Morrison, S. J., Clarke, M. F. & Weissman, I. L. Stem cells, cancer, and cancer stem cells. nature 414, 105–111 (2001).
2. 2.
Beck, B. & Blanpain, C. Unravelling cancer stem cell potential. Nature Reviews Cancer 13, 727–738 (2013).
3. 3.
Quintana, E. et al. Efficient tumour formation by single human melanoma cells. Nature 456, 593–598 (2008).
4. 4.
Medema, J. P. Cancer stem cells: the challenges ahead. Nature cell biology 15, 338–344 (2013).
5. 5.
Li, S. & Li, Q. Cancer stem cells and tumor metastasis. International journal of oncology 44, 1806–1812 (2014).
6. 6.
Liu, H. et al. Cancer stem cells from human breast tumors are involved in spontaneous metastases in orthotopic mouse models. Proceedings of the National Academy of Sciences 107, 18115–18120 (2010).
7. 7.
Lin, X. et al. Notch4+ cancer stem-like cells promote the metastatic and invasive ability of melanoma. Cancer science 107, 1079–1091 (2016).
8. 8.
Choi, D. et al. Cancer stem cell markers cd133 and cd24 correlate with invasiveness and differentiation in colorectal adenocarcinoma. World journal of gastroenterology: WJG 15, 2258 (2009).
9. 9.
Enderling, H., Hlatky, L. & Hahnfeldt, P. Cancer stem cells: a minor cancer subpopulation that redefines global cancer features. Breast 11, 200 (2013).
10. 10.
Sottoriva, A. et al. Cancer stem cell tumor model reveals invasive morphology and increased phenotypical heterogeneity. Cancer research 70, 46–56 (2010).
11. 11.
Castellón, E. A. et al. Molecular signature of cancer stem cells isolated from prostate carcinoma and expression of stem markers in different gleason grades and metastasis. Biological research 45, 297–305 (2012).
12. 12.
Vaupel, P., Kallinowski, F. & Okunieff, P. Blood flow, oxygen and nutrient supply, and metabolic microenvironment of human tumors: a review. Cancer research 49, 6449–6465 (1989).
13. 13.
Höckel, M. et al. Association between tumor hypoxia and malignant progression in advanced cancer of the uterine cervix. Cancer research 56, 4509–4515 (1996).
14. 14.
Cristini, V. et al. Morphologic instability and cancer invasion. Clinical Cancer Research 11, 6772–6779 (2005).
15. 15.
Anderson, A. R., Weaver, A. M., Cummings, P. T. & Quaranta, V. Tumor morphology and phenotypic evolution driven by selective pressure from the microenvironment. Cell 127, 905–915 (2006).
16. 16.
Anderson, A. R. A hybrid mathematical model of solid tumour invasion: the importance of cell adhesion. Mathematical Medicine and Biology 22, 163–186 (2005).
17. 17.
Brú, A. et al. Super-rough dynamics on tumor growth. Physical Review Letters 81, 4008 (1998).
18. 18.
Caldwell, C. B. et al. Characterisation of mammographic parenchymal pattern by fractal dimension. Physics in medicine and biology 35, 235 (1990).
19. 19.
Lee, T. K. & Claridge, E. Predictive power of irregular border shapes for malignant melanomas. Skin Research and Technology 11, 1–8 (2005).
20. 20.
Baish, J. W. & Jain, R. K. Fractals and cancer. Cancer research 60, 3683–3688 (2000).
21. 21.
Tracqui, P. Biophysical models of tumour growth. Reports on Progress in Physics 72, 056701 (2009).
22. 22.
Vasiev, B. N. Classification of patterns in excitable systems with lateral inhibition. Physics Letters A 323, 194–203 (2004).
23. 23.
Vasiev, B., Hogeweg, P. & Panfilov, A. Simulation of dictyostelium discoideum aggregation via reaction-diffusion model. Physical Review Letters 73, 3173 (1994).
24. 24.
Vasieva, O., Vasiev, B., Karpov, V. & Zaikin, A. A model of dictyostelium discoideum aggregation. Journal of theoretical biology 171, 361–367 (1994).
25. 25.
Amar, M. B., Chatelain, C. & Ciarletta, P. Contour instabilities in early tumor growth models. Physical review letters 106, 148101 (2011).
26. 26.
Tambasco, M., Eliasziw, M. & Magliocco, A. M. Morphologic complexity of epithelial architecture for predicting invasive breast cancer survival. Journal of translational medicine 8, 140 (2010).
27. 27.
Etehad Tavakol, M., Lucas, C., Sadri, S. & Ng, E. Analysis of breast thermography using fractal dimension to establish possible difference between malignant and benign patterns. Journal of Healthcare Engineering 1, 27–43 (2010).
28. 28.
Zook, J. M. & Iftekharuddin, K. M. Statistical analysis of fractal-based brain tumor detection algorithms. Magnetic Resonance Imaging 23, 671–678 (2005).
29. 29.
Pérez, J. L. et al. Relationship between tumor grade and geometrical complexity in prostate cancer. bioRxiv 015016 (2015).
30. 30.
Smitha, K., Gupta, A. & Jayasree, R. Fractal analysis: fractal dimension and lacunarity from mr images for differentiating the grades of glioma. Physics in medicine and biology 60, 6937 (2015).
31. 31.
Pribic, J. et al. Fractal dimension and lacunarity of tumor microscopic images as prognostic indicators of clinical outcome in early breast cancer. Biomarkers 9, 1279–1277 (2015).
32. 32.
Buczko, O. & Mikołajczak, P. Shape analysis of mr brain images based on the fractal dimension. Annales Universitatis Mariae Curie-Sklodowska, sectio AI–Informatica 3, 153–158 (2015).
33. 33.
Srivastava, A., Laidler, P., Hughes, L. E., Woodcock, J. & Shedden, E. J. Neovascularization in human cutaneous melanoma: a quantitative morphological and doppler ultrasound study. European Journal of Cancer and Clinical Oncology 22, 1205–1209 (1986).
34. 34.
Brú, A., Albertos, S., Subiza, J. L., Garca-Asenjo, J. L. & Brú, I. The universal dynamics of tumor growth. Biophysical journal 85, 2948–2961 (2003).
35. 35.
Hermann, P. C. et al. Distinct populations of cancer stem cells determine tumor growth and metastatic activity in human pancreatic cancer. Cell stem cell 1, 313–323 (2007).
36. 36.
Girouard, S. D. & Murphy, G. F. Melanoma stem cells: not rare, but well done. Laboratory investigation 91, 647–664 (2011).
37. 37.
Tang, X. et al. Cystine deprivation triggers programmed necrosis in vhl-deficient renal cell carcinomas. Cancer research 76, 1892–1903 (2016).
38. 38.
Li, H. et al. Dt-13, a saponin monomer of dwarf lilyturf tuber, induces autophagy and potentiates anti-cancer effect of nutrient deprivation. European Journal of Pharmacology (2016).
39. 39.
Hilfinger, A. & Paulsson, J. Separating intrinsic from extrinsic fluctuations in dynamic biological systems. Proceedings of the National Academy of Sciences 108, 12167–12172 (2011).
40. 40.
Berg, O. G., Paulsson, J. & Ehrenberg, M. Fluctuations and quality of control in biological cells: zero-order ultrasensitivity reinvestigated. Biophysical journal 79, 1228–1236 (2000).
41. 41.
Trepat, X. et al. Physical forces during collective cell migration. Nature physics 5, 426–430 (2009).
42. 42.
Wang, Y. et al. Fiber-laser-based photoacoustic microscopy and melanoma cell detection. Journal of biomedical optics 16, 011014–011014 (2011).
43. 43.
Buchakjian, M. R. & Kornbluth, S. The engine driving the ship: metabolic steering of cell proliferation and death. Nature reviews Molecular cell biology 11, 715–727 (2010).
44. 44.
Jones, R. G. & Thompson, C. B. Tumor suppressors and cell metabolism: a recipe for cancer growth. Genes & development 23, 537–548 (2009).
45. 45.
Scalerandi, M. & Sansone, B. C. Inhibition of vascularization in tumor growth. Physical review letters 89, 218101 (2002).
46. 46.
Haass, N. K. et al. Real-time cell cycle imaging during melanoma growth, invasion, and drug response. Pigment cell & melanoma research 27, 764–776 (2014).
47. 47.
Meacham, C. E. & Morrison, S. J. Tumour heterogeneity and cancer cell plasticity. Nature 501, 328–337 (2013).
48. 48.
Shahriyari, L. & Komarova, N. L. Symmetric vs. asymmetric stem cell divisions: an adaptation against cancer? PLoS One 8, e76195 (2013).
49. 49.
Dhawan, A., Kohandel, M., Hill, R. & Sivaloganathan, S. Tumour control probability in cancer stem cells hypothesis. PloS one 9, e96093 (2014).
50. 50.
Tomasetti, C. & Levy, D. Role of symmetric and asymmetric division of stem cells in developing drug resistance. Proceedings of the National Academy of Sciences 107, 16766–16771 (2010).
51. 51.
Cao, Y., Naveed, H., Liang, C. & Liang, J. Modeling spatial population dynamics of stem cell lineage in wound healing and cancerogenesis. In Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, 5550–5553 (IEEE, 2013).
52. 52.
Yoo, M.-H. & Hatfield, D. L. The cancer stem cell theory: is it correct? Molecules and cells 26, 514 (2008).
53. 53.
Gedye, C. et al. Cancer stem cells are underestimated by standard experimental methods in clear cell renal cell carcinoma. Scientific reports 6, 25220 (2016).
54. 54.
Nowell, P. C. The clonal evolution of tumor cell populations. Science 194, 23–28 (1976).
55. 55.
Hayflick, L. & Moorhead, P. S. The serial cultivation of human diploid cell strains. Experimental cell research 25, 585–621 (1961).
56. 56.
Ambrosi, D. & Preziosi, L. On the closure of mass balance models for tumor growth. Mathematical Models and Methods in Applied Sciences 12, 737–754 (2002).
57. 57.
Bray, D. Cell movements: from molecules to motility (Garland Science, 2001).
58. 58.
Martz, E. & Steinberg, M. S. The role of cell-cell contact in “contact” inhibition of cell division: A review and new evidence. Journal of cellular physiology 79, 189–210 (1972).
59. 59.
Montel, F. et al. Stress clamp experiments on multicellular tumor spheroids. Physical review letters 107, 188102 (2011).
60. 60.
Casciari, J. J., Sotirchos, S. V. & Sutherland, R. M. Variations in tumor cell growth rates and metabolism with oxygen concentration, glucose concentration, and extracellular ph. Journal of cellular physiology 151, 386–394 (1992).
61. 61.
Gerlee, P. & Anderson, A. R. Diffusion-limited tumour growth: simulations and analysis. Mathematical biosciences and engineering: MBE 7, 385 (2010).
## Acknowledgements
A.A.S. would like to acknowledge support from the Alexander von Humboldt Foundation, and partial financial support from the research council of the University of Tehran. We also acknowledge the High Performance Computing center of the University of Tehran in its Department of Physics, where most of computations were carried out. We thank an anonymous referee for constructive criticisms that guided us to revise and improve the manuscript.
## Author information
### Affiliations
1. #### Department of Physics, University of Tehran, Tehran, 14395-547, Iran
• & Abbas Ali Saberi
2. #### Institut für Theoretische Physik, Universitat zu Köln, Zülpicher Strasse 77, Köln, 50937, Germany
• Abbas Ali Saberi
### Contributions
A.A.S. proposed the project and computations. Y.A. did the simulations. Y.A., A.A.S. and M.S. analyzed the data and wrote the paper.
### Competing Interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Abbas Ali Saberi.
|
2019-02-22 14:01:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.659267783164978, "perplexity": 2633.021123978207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518425.87/warc/CC-MAIN-20190222135147-20190222161147-00160.warc.gz"}
|
https://www.giancolianswers.com/giancoli-physics-7th-edition-solutions/chapter-8/problem-70
|
## You are here
Hi aheumangutman, the position of the axis of rotation affects which equation to use. The textbook has some good illustrations on pg. 210 in Figure 8-20. This particular question says the axis of rotation is positioned at the center of the rod, which is what makes the equation $I=\dfrac{1}{12}ML^2$.
|
2022-08-15 18:24:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2527123987674713, "perplexity": 224.48606808626857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00409.warc.gz"}
|
http://nucleartalent.github.io/NuclearStructure/doc/pub/hfock/html/._hfock-bs047.html
|
Hartree-Fock algorithm
Our Hartree-Fock matrix is thus $$\hat{h}_{\alpha\beta}^{HF}=\langle \alpha | \hat{h}_0 | \beta \rangle+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}.$$ The Hartree-Fock equations are solved in an iterative waym starting with a guess for the coefficients $$C_{j\gamma}=\delta_{j,\gamma}$$ and solving the equations by diagonalization till the new single-particle energies $$\epsilon_i^{\mathrm{HF}}$$ do not change anymore by a prefixed quantity.
|
2020-09-22 17:15:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.646844208240509, "perplexity": 807.4263747578067}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00778.warc.gz"}
|
https://www.nature.com/articles/news050829-14?error=cookies_not_supported&code=0fb51ed4-4fc5-467f-9017-9e5e027ed202
|
Police are helping flood victims to evacuate the city. Credit: © AP Photo/Eric Gay
When hurricane Katrina hit New Orleans on 29 August, the city thought it had escaped the worst. The category-5 hurricane had weakened slightly to category 4 before making landfall, and residents were confident that they had avoided disaster.
But the following day, after the storm itself had passed, a 100-metre section of the levees protecting the area from the flood waters was breached, along with at least two other smaller stretches, inundating some 80% of the city.
By 31 August, estimates of the ultimate death toll were in the thousands, and all this with more than two months to go until the end of hurricane season. Here news@nature.com looks at the dire situation, and asks whether more is in store.
Did experts know this might happen?
Yes. New Orleans is protected by a series of flood walls called levees that help to hold back nearby Lake Pontchartrain, which in turn is connected to the Gulf of Mexico.
Parts of the city sit several metres below sea level. And the system's 565 kilometres of walls were built to withstand only category-3 hurricanes. So a direct strike from a severe storm has long been anticipated as one of the worst natural disasters that could befall the mainland United States (see 'Hurricane Ivan highlights future risk for New Orleans' ).
Could something have been done to prevent this?
The United States had had a really long run of good luck with hurricanes. Lots of building decisions were made thinking we would continue to have the benign conditions of the 1970s and 80s. Hugh Willoughby , International Hurricane Research Center, Miami
Yes. The levees could have been higher. The New York Times has reported that the estimated cost of protecting against a category-5 hurricane, the highest on the scale, is $2.5 billion. The natural marshlands that protect New Orleans from surrounding waters could also have been protected from degradation. A 30-year restoration plan, called Coast 2050, was published in 1998, but it put the bill at a staggering$14 billion. Damages from the current flooding are expected to run to tens of billions of dollars.
Part of the problem is that planners did not take into account the recent upswing in hurricane incidence, says Hugh Willoughby, a meteorologist at the International Hurricane Research Center in Miami, Florida.
"The United States had had a really long run of good luck with hurricanes. Lots of building decisions were made thinking we would continue to have the benign conditions of the 1970s and 1980s," Willoughby told news@nature.com.
"Unfortunately, 'Don't worry, be happy' is not a very good philosophy for dealing with this kind of thing."
How exactly did the levees fail?
It is still unclear why the 100-metre section of levee along the 17th Street canal was the one to break. It had recently been upgraded, and was constructed of concrete several feet thick, unlike the earthen structures elsewhere in the city.
Experts point out that Lake Pontchartrain was sloshing around in the wake of the storm, which might have caused water to tip over the edge of nearby levees. This water may have eaten away at the foundations of the wall, ultimately causing it to topple.
How long will it take to repair the damage?
All of the 20-odd pumping stations surrounding the 17th Street canal, the main route by which water is normally pumped out of the city, have been knocked out by the flood.
Keeping New Orleans free of water was a daily challenge even before Katrina struck. With almost the entire area between Lake Pontchartrain and the Mississippi River under water, clearing the flood will take at least a month.
For now, helicopters and barges are dropping sandbags and concrete highway construction barriers into the largest levee break in an attempt to plug the hole. As this story went to press, the waters were slowly starting to recede.
How many people will be affected?
Nearby towns already fear death tolls in the hundreds, and the overall number is expected to be in the thousands. It is potentially the worst natural disaster on US soil since the 1906 San Francisco earthquake, which claimed up to 6,000 lives.
Besides this, some 11,000 National Guard troops have been assigned to the region to distribute food supplies, rescue those stranded, and quell the looting that has sprung up in New Orleans.
Is climate change to blame?
It is impossible to say for certain.
There is evidence that hurricanes are becoming more intense, but this may be due to natural variation. New Orleans was last hit by a hurricane in 1969, marking the end of a particularly violent couple of decades. This was followed by a relatively quiet patch in Atlantic hurricanes, lasting until 1995. Since then, storms have been heating up again.
Hurricanes tend to be stronger when sea surface temperatures in the Atlantic are higher. But data on these temperatures only stretch back a couple of decades, since satellites began to be used to monitor the oceans. And computer models for climate change cannot predict small-scale, individual events such as hurricanes.
Nevertheless, sea surface temperatures are predicted to rise by a few degrees by 2100, meaning that devastating hurricanes may become more frequent. Whether these will make landfall or veer out to sea, however, is not known.
What can we expect from the rest of this hurricane season?
The Atlantic hurricane season traditionally lasts until November, so there could be more in store. So far this year, the region has produced 11 tropical storms, four of which have become hurricanes. The final tally could be around 20 storms with 10 hurricanes, says weather forecaster Julian Heming of the UK Met Office in Exeter, although not all of them will hit the mainland.
Can we expect more of the same next year?
"There's no reason to suggest it won't carry on as it has done," Heming says. "The past decade has seen a sudden switch to high activity." Monitoring such storms and evacuating people where necessary remains the best form of defence.
Of course, not all hurricanes will home in on major cities to such devastating effect. "Katrina probably picked the worst place to come ashore, with the possible exception of Miami," Heming says. But this week's events may well be a wake-up call.
|
2022-10-06 15:06:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27530062198638916, "perplexity": 3163.4348943517034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00454.warc.gz"}
|
https://math.stackexchange.com/questions/49309/intersection-of-prime-ideals
|
# Intersection of prime ideals
This is a problem from an algebra textbook.
Let $R$ be a ring, and $I$ be an ideal. If the radical of $I$ is $I$ itself, i.e. $\operatorname{rad}(I) = I$, then $I$ is an intersection of prime ideals.
My friend and I cannot figure out how to prove it.
• What are you taking to be the definition of $rad(I)$? There are a few equivalent ways to define it. – Matt Jul 4 '11 at 1:22
• You can check Chapter 2 of the book "A Primer of Commutative Algebra" written by James S. Milne. – chenf Aug 18 at 12:21
Hint: Show that the radical of the zero ideal (the nilradical) is the intersection of all prime ideals of $R.$ Then apply this to $R/I$ and use the fourth isomorphism theorem to obtain your desired result.
• For what special case, every radical ideal can be written as an intersection of maximal ideals? I have just seen the result, that for radical ideals of $k[x_1, x_2, \cdot \cdot x_n]$ where $k$ is an algebraically closed field, every radical ideal is the intersection of maximal ideals containing that ideal. Can you give me a simple argument why the result in this question reduces to maximal ideals for the ring mentioned above. – ramanujan_dirac Feb 18 '15 at 11:47
Here's my attempt. If you didn't want an actual solution then don't read this. Please correct me if I made any mistakes! I'm using the definition $\mbox{rad}(I) = \{ r \in R \,|\,\,\, r^k \in I \text{ for some } k \in \mathbb N \}$.
Let $P$ be a prime ideal containing $I$. If $r \in R$ is such that $r^k \in I$, then $r^k \in P$, so $r \in P$ since $P$ is prime. Thus $\mbox{rad}(I) \subset \bigcap_{P \supset I} P$.
Conversely, if $r \notin \mbox{rad}(I)$, then $r^k \notin I$ for any $k$, so $S = \{1, r, r^2, \ldots \}$ is a multiplicatively closed set disjoint from $I$. By a basic theorem on prime ideals, we have that $R \smallsetminus S$ contains a prime ideal $P_r$ containing I. Since $r \notin P_r$, we have $r \notin \bigcap_{P \supset I} P$. Therefore $\mbox{rad}(I) = \bigcap_{P \supset I} P$.
The problem posted is the special case where $I = \mbox{rad}(I)$.
• Thinking about it, I think this might only work for commutative rings.. – talkloud Jul 4 '11 at 3:32
• Why's that? I don't see what would fail in the non-commutative case. – MathManiac Aug 15 '16 at 14:16
• In the non-commutative case, $Rad (I)$ is not an ideal . See what happens when you try to prove that $ri \in I$ when $r \in R$ and $i \in I$ – Astrid A. Olave H. Aug 29 '16 at 4:57
• In the noncommutative setting, radical ideals are replaced by semiprime ideals, which are ideals $I$ such that if $J$ is an ideal such that $J^2\subseteq I$ then $J\subseteq I$. Equivalently, $I$ is semiprime if and only if $a\in R$ and $aRa\subseteq I$ imply $a\in I$. It is true that any semiprime ideal of a noncommutative ring is the intersection of the prime ideals which contain it (this is Nagata's lemma for the Baer radical). – Jose Brox Oct 28 '17 at 10:40
|
2019-08-22 18:30:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493597149848938, "perplexity": 110.24879645208635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317339.12/warc/CC-MAIN-20190822172901-20190822194901-00335.warc.gz"}
|
https://zbmath.org/?q=an:1030.58017
|
# zbMATH — the first resource for mathematics
The $$\eta$$-invariant and Pontryagin duality in $$K$$-theory. (English. Russian original) Zbl 1030.58017
Math. Notes 71, No. 2, 245-261 (2002); translation from Mat. Zametki 71, No. 2, 271-291 (2002).
The paper concerns relations between some spectral invariants of (a generalization of) elliptic operators on closed manifolds and topological invariants, and the more traditional part of the results of the paper is an expression of the fractional part of the Atiyah-Patodi-Singer eta-invariant in terms of the so-called linking number in $$K$$-theory (with coefficients) and a construction of an elliptic even-order operator on an odd-dimensional manifold with nontrivial fractional part of the eta-invariant (suitable example, due to Gilkey, in the case of even-dimensional is the $$\text{Pin}^c$$-operator on the real projective space $$\mathbb{R} P^{2n}$$).
In order to give a more detailed presentation of the results of the paper let us recall some earlier results of the authors. Namely, in some earlier papers, the authors developed a theory of elliptic operators on so-called pseudodifferential subspaces of $$C^\infty(M, E)$$ (smooth sections of a vector bundle $$E$$ over a smooth closed manifold $$M$$), i.e. subspaces that are images of pseudodifferential projections. A symbol of such a pseudodifferential subspace is well defined as a bundle on the cosphere tangent bundle, and parity (odd, even) of such subspace has been defined in terms of its symbol. Moreover a homotopy-invariant “dimension” functional $$d$$ is well defined on such pseudodifferential subspaces in the case the parities of the subspace and dimension of the manifold in question are (even, odd) or (odd, even). For example the subspace corresponding to the nonnegative eigenvalues of an elliptic self-adjoint operator $$A$$ of nonnegative order is a pseudodifferential subspace, and its “dimension” $$d$$ equals the eta-invariant of the operator $$A$$ (suitable parities are assumed).
For elliptic operators $$D$$ acting between pseudodifferential subspaces $$\widehat L^1$$, $$\widehat L^2$$ the authors also proved (in an earlier paper) an index theorem: $$\text{index}(D, \widetilde L^1, \widetilde L^2)=\tfrac 12\text{ index }\widetilde D+ d(\widetilde L^1)- d(\widetilde L^2)$$, where $$\widetilde D$$ is an auxiliary elliptic operator built out of the operator $$D$$. It follows from this theorem that the symbol of a pseudodifferential subspace determines a two-torsion element of $$(K(S^\bullet M)/K(M))$$. Moreover it has been proved that the group of stable homotopy classes of elliptic pseudodifferential operators acting between pseudodifferential subspaces is isomorphic to the $$K$$-theory with coefficients in $$\mathbb{Q}/\mathbb{Z}$$ of the cotangent bundle $$T^\bullet M$$.
The main results of the present paper is a formula, which expresses the fractional part of $$2d(\widehat L)$$ as the $$K$$-theoretical linking number of the pseudodifferential subspace in question and the orientation bundle of the manifold $$M$$. This can be summarized as follows. Using the index theorem above the authors express the fractional part of $$2d(\widehat L)$$ as the index $$\text{mod }2^N$$ of certain elliptic operator acting between pseudodifferential subspaces. Next the authors introduce suitable Pontryagin duality in $$K$$-theory with coefficients, which provides identification $$K^i_c(T^\bullet M,\mathbb{Q}/\mathbb{Z})\approx\operatorname{Hom}(K^i(M),\mathbb{Q}/\mathbb{Z})$$, and using this identification define, in a purely topological manner, a linking form $$\bigcap: \text{Tor }K^{i-1}_c(T^\bullet M)\times \text{Tor }K^i(M)\to\mathbb{Q}/\mathbb{Z}$$. The form is proved to be nondegenerate, and using the above-mentioned identification of stable homotopy classes of elliptic pseudodifferential operators acting between pseudodifferential subspaces and the $$K$$-theory with coefficients in $$\mathbb{Q}/\mathbb{Z}$$ of the cotangent bundle $$T^\bullet M$$ the authors show that the linking form can also be expressed as the index $$\text{mod }2^N$$ of an elliptic operator. Finally, the operator that appears in the above-mentioned formula for the fractional part of $$2d(\widehat L)$$ is identified as an operator in the “index” formula for the linking pairing, and therefore the fractional part of $$2d(\widehat L)$$ is expressed as the $$K$$-theoretical linking number of the pseudodifferential subspace in question and the orientation bundle of the manifold $$M$$, as desired.
##### MSC:
58J28 Eta-invariants, Chern-Simons invariants 58J20 Index theory and related fixed-point theorems on manifolds 58J22 Exotic index theories on manifolds 55N15 Topological $$K$$-theory
Full Text:
|
2021-05-14 02:23:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566519618034363, "perplexity": 250.80730283821808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989616.38/warc/CC-MAIN-20210513234920-20210514024920-00508.warc.gz"}
|
http://blog.althafkbacker.com/2009/04/spliting-files-gnu-way.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+AlthafsJournal+%28Althaf%27s+Journal%29
|
## Thursday, April 9, 2009
### Spliting Files .. the GNU way
Well here is typical scenario you have two 1GB pen drive and 2GB file to transport.
Q. What would to do?
Split is utility that comes with GNU coreutils .It allows the user to split a large file into small manageable size of MB KB or what ever apt to the situation.
For survival sake we make use if only -b and parameters associated with it.Well here it goes...
My file is of size 1.4 GB i need to split it to a chunk of 700MB
$split -b 700m myfile myfile-split Q. What would be the name of second half ? Well the programmers (guess who wrote it ) are clever , myfile-split is the base file name that i gave you can choose any name , when the chunks are made it would create myfile-splitaa,myfile-splitab etc Q. How to join ? On UNIX $ cat myfile-split* > myfile
On Windows
copy /b myfile-split* myfile
|
2018-01-16 20:07:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3711453974246979, "perplexity": 8583.569100734549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886639.11/warc/CC-MAIN-20180116184540-20180116204540-00639.warc.gz"}
|
https://www.physicsforums.com/threads/finding-integral-of-a-helicoid.355905/
|
Finding integral of a helicoid
1. Nov 18, 2009
MasterWu77
1. The problem statement, all variables and given/known data
Evaluate $$\int$$$$\int$$ $$\sqrt{1+x^2+y^2}$$ where S is the helicoid: r(u,v) = u cos(v)i + u sin(v)j+vk , with 0$$\leq$$u$$\leq$$1, 0$$\leq$$v$$\leq$$$$\theta$$.
The S is the area that we are trying to find. the area of the integral i guess.
2. Relevant equations
I know i have to use the $$\varphi$$ ($$\theta$$,$$\phi$$) = (acos$$\theta$$ sin $$\phi$$, asin$$\theta$$ sin $$\phi$$, acos$$\phi$$)
3. The attempt at a solution
we did examples like this in class but i'm not sure where to start off. do i need to change the equation of the integral into sin and cos?
2. Nov 18, 2009
Nick Bruno
you dont HAVE to use phi(theta,phi), you can do this in cartesian coordinates... but i think it would be easier to use a different coordinate system. ( i would suggest trying spherical?)
Yes, I you have to change the object of the integral if you want to use a different coordinate system because "x" and "y" are normally used for cartesian coordinates. theta and r are used for polar coordinates, theta, r and z are used for cylindrial coordinates, phi, rho, and theta are typically used for spherical coordinates.
All of these are just variables tho and can really be anything. They stand for angles and radii of the problem.
Remeber tho, when you change the coordinate system of your integral you have to apply the jacobian. ie, for cylindrical coordinates, the r dr dtheta is appended to the integral, or sphereical is some other trig with phi and theta.
Hope this helps somewhat...
|
2017-11-22 08:07:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736792206764221, "perplexity": 641.6537874630822}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806509.31/warc/CC-MAIN-20171122065449-20171122085449-00556.warc.gz"}
|
https://www.mediatebc.com/54jxkaac/simply-supported-beam-with-uniformly-varying-load-formula-e45397
|
You can find comprehensive tables in references such as Gere, Lindeburg, and Shigley.However, the tables below cover most of the common cases. Fig:1 Formulas for Design of Simply Supported Beam having Uniformly Distributed Load are shown at the right, Fig:2 Shear Force & Bending Moment Diagram for Uniformly Distributed Load on Simply Supported Beam, Fig:3 Formulas for Design of Simply Supported Beam having Uniformly Distributed Load at its mid span, Fig:4 SFD and BMD for Simply Supported at midspan UDL carrying Beam, Fig:5 Shear Force and Bending Moment Diagram for Simply Supported Uniformly distributed Load at left support, Fig:6 Formulas for finding moments and reactions at different sections of a Simply Supported beam having UDL at right support, Fig:8 Formulas for analysis of beam having SFD and BMD at both ends, Fig:9 Collection of Formulas for analyzing a simply supported beam having Uniformly Varying Load along its whole length, Fig:10 Shear force diagram and Bending Moment Diagram for simply supported Beam having UVL along its span, Fig:11 SFD and BMD for simply supported beam having UVL from the midspan to both ends, Fig:12 Formulas for calculating Moments and reactions on simply supported beam having UVL from the midspan to both ends. … Uniformly varying load mathalino beam formulas with shear and mom cantilever beam with uniformly varying simply supported beam with udl cantilever beam with uniformly varying … In our previous topics, we have seen some important concepts such as deflection and slope of a simply supported beam with point load, deflection and slope of a simply supported beam carrying uniformly distributed load and deflection and slope of a cantilever beam with point load … Draw the SF and BM diagrams for a Simply supported beam of length l carrying a uniformly distributed load w per unit length which occurs across the whole Beam. Distance 'x' of the section is measured from origin taken at support A. Continuous Beam Two Unequal Span With Udl. Bending Moment of Simply Supported Beams with Uniformly Varying Load < ⎙ 11 Other formulas that you can solve using the same Inputs Condition for Maximum Moment in Interior Spans of Beams When Plastic Hinge is Formed A simply supported beam with a uniformly distributed load. Simply Supported Beam With Uniformly Distributed Load : A simply supported beam AB with a uniformly distributed load w/unit length is shown in figure, The maximum deflection occurs at the mid point C and is given by : 4. Solution. E.g. Draw shear force and bending moment diagram of simply supported beam carrying point load. Beam Simply Supported at Ends – Uniformly varying load: Maximum intensity ωo (N/m) 3 o 1 7 360 l EI ω θ = 3 o 2 45 l EI ω θ = ( )4 2 2 4o 7 10 3 360 x y l l x x lEI ω = − + 4 o max 0.00652 l EI ω δ = at 0.519x l= 4 o 0.00651 l EI ω δ = at the center 3. $$\sum M_{D}\space = 0$$ Clockwise moments = Counter clockwise moments. Example - Beam with a Single Center Load. Problem 842 For the propped beam shown in Fig. Read more about Problem 842 | Continuous Beams with Fixed Ends; Log in or register to post comments; 16874 reads; Problem 827 | Continuous Beam by Three-Moment Equation. for 1+3, enter 4. Uniformly Distributed Load Uniform Load Partially Distributed Uniform Load Partially Distributed at One End Uniform Load Partially Distributed at Each End Load Increasing Uniformly to One End Load Increasing Uniformly to Center Concentrated Load at Center Concentrated Load at Any Point Two Equal Concentrated Loads Symmetrically Placed Two … Overhanging beam overhang both 14th edition steel construction manual solved a simply supported beam carries shear force bending moment diagram deflection cantilever beam point load, Calculator for ers slope and deflection simply beams fixed at one end and supported the other continuous bending moments calculation in a simply supported beam with simply supported beam with udl beam formulas with shear and mom. In the following table, the formulas describing the static response of the simple beam under a linearly varying (triangular) distributed load, ascending from the left to the right, are presented. BEAM DESIGN FORMULAS WITH SHEAR AND MOMENT DIAGRAMS American Forest & Paper Association w R V V 2 2 Shear M max Moment x DESIGN AID No. Cantilever Beam – Couple moment M at the free end Ml E I 2 Mx 2 y E I 2 Ml 2 E I. BEAM DEFLECTION FORMULAS BEAM TYPE SLOPE AT ENDS DEFLECTION AT ANY SECTION IN TERMS OF x MAXIMUM AND CENTER DEFLECTION 6. What is a Ground Source Heat Pump? Simply Supported UDL Beam Formulas and Equations. Suppose a simply-supported beam of span L, Figure 7.13, carries a lateral distributed load of variable intensity w. Then, from equation (7.4), if F is the shearing force a distance z from B, Figure 7.13. Problem 827 See Figure P … As shown in figure below. BEAM DIAGRAMS AND FORMULAS Table 3-23 (continued) Shears, Moments and Deflections 13. Simple Beam - Uniformly Increasing Load to One End. Bending Moment of Simply Supported Beams with Uniformly Varying Load calculator uses Bending Moment =0.1283*Uniformly Varying Load*Length to calculate the Bending Moment , The Bending Moment of Simply Supported Beams with Uniformly Varying Load formula is defined as the reaction induced in a structural element when an external force or moment is applied to the element, causing … Find reactions of simply supported beam when a point load of 1000 kg & 800 kg along with a uniform distributed load of 200 kg/m is acting on it.. As shown in figure below. BEAM FORMULAS WITH SHEAR AND MOMENT DIAGRAMS. Fig:1 Formulas for Design of Simply Supported Beam having Loads acting downward are taken as negative whereas upward loads are taken as positive. Uniformly Varying Load. Frame Structures - Types of Frame Structures, Types of Supports for Loads | Roller, Hinge, Fixed, Definition and Types of Structures and Structural Members, Retaining Wall - Definition and Types of Retaining Walls | Ret Wall, Retrofitting Techniques for Existing Damaged Buildings, What are Deep Beams? 6. beam diagrams and formulas by waterman 55 1. simple beam-uniformly distributed load 2. simple beam-load increasing uniformly to one end. Beam Simply Supported at Ends – Concentrated load P … Simply Supported Beam With Uniformly Distributed Load Formula November 20, 2018 - by Arfan - Leave a Comment Overhanging beam overhang both 14th edition steel construction manual solved a simply supported beam carries shear force bending moment diagram deflection cantilever beam point load I hope you like the Article “Different Types of beams and loads” I have covered almost all the relevant topics in this article.Please do comment and share our article and also Follow us on Facebook and Instagram for more updates, For video Lectures Follow us on YouTube channel “Basic Mech IN”.Don’t forget to share us on your Favourite social media. Fixed Beam With Udl Ering Notes. Cantilever Beam – Uniformly varying load: Maximum intensity o 3 o 24 l E I 2 32 23 o 10 10 5 120 x yllxlxx 4 o max 30 l E I 5. Structural Beam Deflection, Stress Formula and Calculator: The follow web pages contain engineering design calculators that will determine the amount of deflection and stress a beam of known cross section geometry will deflect under the specified load and distribution. 5. simple beam-uniform load partially distributed at one end 6. simple beam-uniform load partially distributed at each end. The beam is supported at each end, and the load is distributed along its length. Beam Calculator Input Units: Length of Beam, L: Load on Beam, W: Point of interest, x: Youngs Modulus, E: Moment of Inertia, I: Resultant, R 1 =V 1: Resultant, R 2 =V 2(max): Shear at x, V x: Max. First find reactions of simply supported beam. A simply supported beam is the most simple arrangement of the structure. A simply supported beam cannot have any translational displacements at its support points, but no restriction is placed on rotations at the supports. | Definition & Concept, Stability - Stable & Unstable Structures & Members. Beams. Simply-supported beam with lateral load of varying intensity. The beam is supported at each end, and the load is distributed along its length. The beam is supported at each end, and the load is distributed along its length. This calculator uses standard formulae for slope and deflection. Beam Deflection Tables Mechanicalc. Stay informed - subscribe to our newsletter. What Is The Maximum Bending Moment On A Simply Supported Beam And Restrained With Three Unequal Point Lo Asymmetrically Placed Uniformly Distributed Load Quora. 3. We have already seen terminologies and various terms used in deflection of beam with the help of recent posts and now we will be interested here to calculate the deflection and slope of a simply supported beam carrying uniformly distributed load throughout length of the beam with the help of this post. Deflection Of Simply Supported Beam Scientific Diagram . The maximum stress in a "W 12 x 35" Steel Wide Flange beam, 100 inches long, moment of inertia 285 in 4, modulus of elasticity 29000000 psi, with a center load 10000 lb can be calculated like σ max = y max F L / (4 I) = (6.25 in) (10000 lb) (100 in) / (4 (285 in 4)) = 5482 (lb/in 2, psi) P-842, determine the wall moment and the reaction at the prop support. Beams » Simply Supported » Uniformly Distributed Load » Three Equal Spans » Wide Flange Steel I Beam » W27 × 114 Beams » Simply Supported » Uniformly Distributed Load » Three Equal Spans » ALuminum I Beam » 4.00 × 2.311 More Beams. Problem 842 | Continuous Beams with Fixed Ends . Those who require more advanced studies may also apply Macaulay’s method to the solution of ENCASTRÉ. How does it Work? A simply supported beam is the most simple arrangement of the structure. A simply supported beam with a point load at the middle. A simply supported beam cannot have any translational displacements at its support points, but no restriction is placed on rotations at the supports. google_ad_height = 600; Simple Beam Uniformly Distributed Load And Variable End Moments, Cantilever Beam Uniformly Distributed Load, Beam Overhanging One Support Uniformly Distributed Load, Bending Moment Diagrams In A Simply Supported Beam Under Uniformly, Solved We 4 For The Simply Supported Beam With Uniformly, Bending Moments Calculation In A Simply Supported Beam With, Shear Force Diagrams In A Simply Supported Beam Under Uniformly, Beams Supported At Both Ends Continuous And Point Lo, Continuous Beam Two Unequal Span With Udl, 10 Simply Supported Beam Under Concentrated Load At Mid Span And, Simply Supported Beam With Uniformly Varying Load Formula, Dakota Alert 2500 Wireless Break Beam Sensor Driveway Alarm, Flexural Strength Test Of Rectangular Concrete Beam, Flexural Strength Test Of Wooden Beam Experiment, Flexural Strength Test Of Beam Concrete Specimen. Solution. Does The Formula For A Point Load Pl 4 On Beams … The tables below give equations for the deflection, slope, shear, and moment along straight beams for different end conditions and loadings. Since, beam is symmetrical. google_ad_client = "ca-pub-6101026847074182"; Both of the reactions will be equal. Simply supported beam with linearly varying distributed load (triangular) Quantity. google_ad_slot = "2612997342"; Beam Simply Supported at Ends – Uniformly varying load: Maximum intensity ωo (N/m) 7ωol 3 ωo l 4 θ1 = δ max = 0.00652 at x = 0.519 l ωo x 360 EI ω l3 y= 360lEI ( 7l 4 − 10l 2 x 2 + 3x4 ) ωol 4 EI θ2 = o δ = 0.00651 at the center 45 EI EI Fig:1 Formulas for Design of Simply Supported Beam having Uniformly Distributed Load are shown at the right A Simply Supported Beam E 12 Gpa Carries Uniformly Distributed Load Q 125 N M And Point P 200 At Mid Span The Has Rectangular. Get Ready for Power Bowls, Ancient Grains and More. Gotthard Base Tunnel (Rail Tunnel) Design Engineering, Construction & Cost, Structural & Non Structural Defects in Building Construction, SAP 2000 and ETABS Training Course on Realworld Civil Engineering Projects, Below are the Beam Formulas and their respective SFD's and BMD's. Formula. BEAM FIXED AT ONE END, SUPPORTED AT OTHER-CONCENTRATED LOAD AT CENTER A simply supported beam cannot have any translational displacements at its support points, but no restriction is placed on rotations at the supports. You will also learn and apply Macaulay’s method to the solution for beams with a combination of loads. Please note that SOME of these calculators use the section modulus of the geometry cross section ("z") of the beam. google_ad_width = 300; AF&PA is the national trade association of the forest, paper, and wood products … i.e., R1 = R2 = W/2 = 1000 kg. Solve this simple math problem and enter the result. Bending Moment & Shear Force Calculator for uniformly varying load (maximum on left side) on simply supported beam. This calculator provides the result for bending moment and shear force at a istance "x" from the left support of a simply supported beam carrying a uniformly varying (increasing from right to left) load on a portion of span. This calculator is for finding the slope and deflection at a section of simply supported beam subjected to uniformly varying load (UVL) on full span. Beams Fixed At Both Ends Continuous And Point Lo . AMERICAN WOOD COUNCIL The American Wood Council (AWC) is part of the wood products group of the American Forest & Paper Association (AF&PA). Let us know in the comments what you think about the concepts in this article! 7.11 Simply-supported beam with non-uniformly distributed load. Beam Simply Supported at Ends – Uniformly varying load: Maximum intensity ωo (N/m) 7ωol 3 ωo l 4 θ1 = δ max = 0.00652 at x = 0.519 l ωo x 360 EI ω l3 y= 360lEI ( 7l 4 − 10l 2 x 2 + 3x4 ) ωol 4 EI θ2 = o δ = 0.00651 at the center 45 EI EI Take moment about point D for finding reaction R1. Simply Supported Beam With Gradually Varying Load : A simply supported beam of AB of length l carrying a gradually varying load from zero at B to w/unit length at A, … 3. simple beam-load increasing uniformly to center 4. simple beam-uniformly load partially distributed. Workings . Simply Supported Beam with Point Load Example. Support loads, stress and deflections . what is the detailed method by which the formula is derived for finding shear force and bending moment on a simply supported beam with uniformly varying loads? How To Draw Shear Force Bending Moment Diagram Simply Supported Beam Exles Ering Intro. Most simple arrangement of the geometry cross section ( z '' of... Varying load ( triangular ) Quantity beam with linearly varying distributed load Ends Continuous and point Lo Definition &,! How to draw Shear Force calculator for uniformly varying load ( triangular ) Quantity & Structures. From origin taken at support a point Lo Asymmetrically Placed uniformly distributed load Quora calculators the. & Unstable Structures & Members, and the load is distributed along its length { D } \space = )... Beam carrying point load } \space = 0\ ) Clockwise moments = Counter Clockwise.. What you think about the concepts in this article beam and Restrained with Three Unequal Lo! At One end { D } \space = 0\ ) Clockwise moments = Counter Clockwise moments = Clockwise... The prop support uniformly distributed load ( triangular ) Quantity finding reaction R1 beam-uniformly load partially at! Calculator uses standard formulae for slope and deflection Figure P … beam FORMULAS with Shear moment! To draw Shear Force bending moment on a simply supported at each end and... Comments what you think about the concepts in this article what you think about the concepts in article! Modulus of the section is measured from origin taken at support a calculators use the section modulus of the modulus... And Restrained with Three Unequal point Lo Asymmetrically Placed uniformly distributed load ( triangular ) Quantity and 13. M_ { D } \space = 0\ ) Clockwise moments = Counter Clockwise moments = Clockwise. Macaulay ’ s method to the solution for beams with a uniformly distributed load Quora Power Bowls, Ancient and! ( z '' ) of the structure distance ' x ' of the structure distance ' '! & Members simple math problem and enter the result loads acting downward are taken negative... Varying load ( triangular ) Quantity side ) on simply supported beam with linearly varying distributed load ( triangular Quantity. Load to One end 6. simple beam-uniform load partially distributed at each end uniformly Increasing load to end... Simple beam - uniformly Increasing load to One end distributed along its length on! Load is distributed along its length D } \space = 0\ ) Clockwise moments problem 827 Figure... For beams with a combination of loads beam DIAGRAMS and FORMULAS Table 3-23 ( )... 6. simple beam-uniform load partially distributed section ( z '' ) of the geometry section. And more beam Exles Ering Intro Lo Asymmetrically Placed uniformly distributed load Quora will also learn and apply Macaulay s. Calculators use the section is measured from origin taken at support a article! Of ENCASTRÉ ' x ' of the section modulus of the structure learn and apply Macaulay ’ s to... ) Shears, moments and Deflections 13 of these calculators use the section is from! End, and the simply supported beam with uniformly varying load formula is distributed along its length with a distributed. … beam FORMULAS with Shear and moment DIAGRAMS draw Shear Force calculator for uniformly varying load maximum... The propped beam shown in Fig whereas upward loads are taken as.... Force calculator for uniformly varying load ( triangular ) Quantity distributed along its length of loads 3-23 ( ). Calculator uses standard formulae for slope and deflection Unstable Structures & Members the reaction at the prop.... ' x ' of the beam is supported at Ends – Concentrated load P … a simply beam. Whereas upward loads are taken as negative whereas upward loads are taken as negative whereas loads! Comments what you think about simply supported beam with uniformly varying load formula concepts in this article for Power Bowls, Ancient Grains and more is at! Taken at support a supported at Ends – Concentrated load P … beam with... Think about the concepts in this article simple math problem and enter the result \space = 0\ ) moments! Loads are taken as negative whereas upward loads are taken as negative whereas loads... Simply supported beam is supported at each end, and the load is distributed along its length Force moment! Cross section ( z '' ) of the structure - Stable & Unstable Structures &.... ) Shears, moments and Deflections simply supported beam with uniformly varying load formula Force bending moment diagram of simply supported beam linearly... Placed uniformly distributed load x ' of the structure solution for beams with combination! { D } \space = 0\ ) Clockwise moments simple beam - Increasing. Taken at support a learn and apply Macaulay ’ s method to the solution of ENCASTRÉ = W/2 1000. Uniformly Increasing load to One end advanced studies may also apply Macaulay ’ s method to solution! Moment & Shear Force bending moment diagram simply supported beam and Restrained with Three Unequal point Lo Ends Concentrated! Load to One end its length calculators use the section is measured from origin taken at support a point... & Members determine the wall moment and the load is distributed along its length who... Point Lo Asymmetrically Placed uniformly distributed load Force bending moment & Shear Force and bending &... Beam simply supported beam for slope and deflection these calculators use the section measured! Point load as positive of ENCASTRÉ ' x ' of the structure to 4.... The solution for beams with a uniformly distributed load beam with a uniformly distributed load maximum. Uniformly to center 4. simple beam-uniformly load partially distributed uniformly to center 4. simply supported beam with uniformly varying load formula beam-uniformly partially! = W/2 = 1000 kg the result - Stable & simply supported beam with uniformly varying load formula Structures & Members beam-uniformly. See Figure P … simply supported beam with uniformly varying load formula simply supported at each end, and the at! Measured from origin taken at support a, R1 = R2 = W/2 1000. 6. simple beam-uniform load partially distributed at One end distributed along its length ) of structure! Varying load ( triangular ) Quantity the maximum bending moment diagram of simply supported beam is supported each. The result Ends Continuous and point Lo Asymmetrically Placed uniformly distributed load and 13! Macaulay ’ s method to the solution of ENCASTRÉ simple arrangement of the geometry cross section ( ''! Ends Continuous and point Lo Asymmetrically Placed uniformly distributed load Quora, the. Whereas upward loads are taken as negative whereas upward loads are taken negative. The reaction at the prop support 4. simple beam-uniformly load partially distributed One. That SOME of these calculators use the section is measured from origin at! Note that SOME of these calculators use the section is measured from origin at. Uniformly varying load ( triangular ) Quantity triangular ) Quantity this article are taken as positive enter the result 842... Of simply supported beam and Restrained with Three Unequal point Lo Asymmetrically Placed uniformly load... ( \sum M_ { D } \space = 0\ ) Clockwise moments = Clockwise... Load ( triangular ) Quantity beam - uniformly Increasing load to One end distributed... To center 4. simple beam-uniformly load partially distributed to draw Shear Force calculator for varying! Combination of loads } \space = 0\ ) Clockwise moments = Counter Clockwise moments = Counter Clockwise.! Of ENCASTRÉ the wall moment and the load is distributed along its length point D for finding R1. Uniformly to center 4. simple beam-uniformly load partially distributed at each end, and the load distributed. & Shear Force calculator for uniformly varying load ( maximum on left side on! ( triangular ) Quantity distributed along its length combination of loads \space = 0\ ) Clockwise moments = Clockwise... Formulas with Shear and moment DIAGRAMS distributed along its length 4. simple beam-uniformly load distributed... The result enter the result 6. simple beam-uniform load partially distributed at each end, and the load is along. Those who require more advanced studies may also apply Macaulay ’ s method to the solution for beams with combination! = W/2 = 1000 kg and moment DIAGRAMS beams with a combination loads! = 1000 kg at support a moment & Shear Force bending moment on a supported... Calculator for uniformly varying load ( maximum on left side ) on simply supported beam with linearly varying distributed Quora! About the concepts in this article the maximum bending moment diagram of supported... Beams Fixed at Both Ends Continuous and point Lo along its length of simply beam! Also apply Macaulay ’ s method to the solution for beams with a uniformly distributed load Quora us! Draw Shear Force bending moment & Shear Force bending moment on a simply supported beam carrying point load support.! Placed uniformly distributed load ( triangular ) Quantity from origin taken at support a a simply supported beam linearly... Whereas upward loads are taken as negative whereas upward loads are taken as.... & Shear Force calculator for uniformly varying load ( maximum on left side on! With linearly varying distributed load Quora Exles Ering Intro to draw Shear Force and bending on! – Concentrated load P … beam FORMULAS with Shear and moment DIAGRAMS simple beam-uniformly load partially distributed at end. And deflection Placed uniformly distributed load Quora a uniformly distributed load ( triangular ) Quantity slope... Moment DIAGRAMS loads acting downward are taken as positive reaction R1 and enter the result beam shown Fig... ) Quantity Force calculator for uniformly varying load ( triangular ) Quantity with... With Three Unequal point Lo at support a ) of the section is measured from origin taken support. And more at each end, and the load is distributed along its length moments. How to draw Shear Force bending moment & Shear Force calculator for uniformly varying (... Lo Asymmetrically Placed uniformly distributed load Quora distributed along its length load is distributed its! Bowls, Ancient Grains and more, Stability - Stable & Unstable &... Determine the wall moment and the load is distributed along its length calculators use section.
Dog Breeders Macon, Ga, King Gary Season 2, 2019 Chevy Spark Rear Bumper, Payroll Administrator Certification, City Of Vacaville Jobs, Romantic And Victorian Criticism Pdf, Driver's Seat Drink Drive Course, Muscle Shoals Swampers Documentary, Best Restaurants In Capri, Italy, Examples Of External Forces On Structures, Orbit Internet Customer Care Number, I Like The Way You Move Remix,
|
2021-10-27 04:27:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6708800196647644, "perplexity": 3027.859283446892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00574.warc.gz"}
|
http://www.stat.cmu.edu/~ryantibs/statcomp/lectures/simulation_slides.html
|
Simulation
Monday October 8, 2018
Last week: Functions
• Function: formal encapsulation of a block of code; generally makes your code easier to understand, to work with, and to modify
• Functions are absolutely critical for writing (good) code for medium or large projects
• A function’s structure consists of three main parts: inputs, body, and output
• R allows the function designer to specify default values for any of the inputs
• R doesn’t allow the designer to return multiple outputs, but can return a list
• Side effects are things that happen as a result of a function call, but that aren’t returned as an output
• Top-down design means breaking a big task into small parts, implementing each of these parts, and then putting them together
Part I
Simulation basics
Why simulate?
R gives us unique access to great simulation tools (unique compared to other languages). Why simulate? Welcome to the 21st century! Two reasons:
• Often, simulations can be easier than hand calculations
• Often, simulations can be made more realistic than hand calculations
Sampling from a given vector
To sample from a given vector, use sample()
sample(x=letters, size=10) # Without replacement, the default
## [1] "k" "r" "n" "f" "d" "s" "w" "c" "x" "b"
sample(x=c(0,1), size=10, replace=TRUE) # With replacement
## [1] 0 0 1 1 0 0 0 1 1 1
sample(x=10) # Arguments set as x=1:10, size=10, replace=FALSE
## [1] 9 6 7 1 2 10 3 5 8 4
Random number generation
To sample from a normal distribution, we have the utility functions:
• rnorm(): generate normal random variables
• pnorm(): normal distribution function, $$\Phi(x)=P(Z \leq x)$$
• dnorm(): normal density function, $$\phi(x)= \Phi'(x)$$
• qnorm(): normal quantile function, $$q(y)=\Phi^{-1}(y)$$, i.e., $$\Phi(q(y))=y$$
Replace “norm” with the name of another distribution, all the same functions apply. E.g., “t”, “exp”, “gamma”, “chisq”, “binom”, “pois”, etc.
Random number examples
Standard normal random variables (mean 0 and variance 1)
n = 100
z = rnorm(n, mean=0, sd=1) # These are the defaults for mean, sd
mean(z) # Check: sample mean is approximately 0
## [1] 0.06769794
var(z) # Check: sample variance is approximately 1
## [1] 0.9970664
Estimated distribution function
To compute empirical cumulative distribution function (ECDF)—the standard estimator of the cumulative distribution function (CDF)—use ecdf()
x = seq(-3,3,length=100)
ecdf.fun = ecdf(z) # Create the ECDF
class(ecdf.fun) # It's a function!
## [1] "ecdf" "stepfun" "function"
ecdf.fun(0)
## [1] 0.43
# We can plot it
plot(x, ecdf.fun(x), lwd=2, col="red", type="l", ylab="CDF", main="ECDF")
lines(x, pnorm(x), lwd=2)
legend("topleft", legend=c("Empirical", "Actual"), lwd=2,
col=c("red","black"))
Interlude: Kolmogorov-Smirnov test
One of the most celebrated tests in statistics is due to Kolmogorov in 1933. The Kolmogorov-Smirnoff (KS) statistic is: $\sqrt{\frac{n}{2}} \sup_{x} |F_n(x)-G_n(x)|$ Here $$F_n$$ is the ECDF of $$X_1,\ldots,X_n \sim F$$, and $$G_n$$ is the ECDF of $$Y_1,\ldots,Y_n \sim G$$. Under $$F=G$$ (two distributions are the same), as $$n \to \infty$$, the KS statistic approaches the supremum of a Brownian bridge: $\sup_{t \in [0,1]} |B(t)|$
Here $$B$$ is a Gaussian process with $$B(0)=B(1)=0$$, mean $$\mathbb{E}(B(t))=0$$ for all $$t$$, and covariance function $$\mathrm{Cov}(B(s), B(t)) = s(1-t)$$
n = 500
t = 1:n/n
Sig = t %o% (1-t)
Sig = pmin(Sig, t(Sig))
eig = eigen(Sig)
Sig.half = eig$vec %*% diag(sqrt(eig$val)) %*% t(eig$vec) B = Sig.half %*% rnorm(n) plot(t, B, type="l") Two remarkable facts about the KS test: 1. It is distribution-free, meaning that the null distribution doesn’t depend on $$F,G$$! 2. We can actually compute the null distribution and use this test, e.g., via ks.test(): ks.test(rnorm(n), rt(n, df=1)) # Normal versus t1 ## ## Two-sample Kolmogorov-Smirnov test ## ## data: rnorm(n) and rt(n, df = 1) ## D = 0.14, p-value = 0.0001109 ## alternative hypothesis: two-sided ks.test(rnorm(n), rt(n, df=10)) # Normal versus t10 ## ## Two-sample Kolmogorov-Smirnov test ## ## data: rnorm(n) and rt(n, df = 10) ## D = 0.058, p-value = 0.3696 ## alternative hypothesis: two-sided Estimated density function To compute histogram—a basic estimator of the density based on binning—use hist() hist.obj = hist(z, breaks=30, plot=FALSE) class(hist.obj) # It's a list ## [1] "histogram" hist.obj$breaks # These are the break points that were used
## [1] -2.4 -2.2 -2.0 -1.8 -1.6 -1.4 -1.2 -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2
## [15] 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 2.6 2.8 3.0
## [29] 3.2
hist.obj\$density # These are the estimated probabilities
## [1] 0.05 0.05 0.05 0.05 0.20 0.25 0.25 0.05 0.30 0.25 0.30 0.35 0.40 0.60
## [15] 0.25 0.50 0.35 0.25 0.10 0.20 0.00 0.05 0.10 0.00 0.00 0.00 0.00 0.05
# We can plot it
plot(hist.obj, col="pink", freq=FALSE, main="Histogram")
lines(x, dnorm(x), lwd=2)
legend("topleft", legend=c("Histogram", "Actual"), lwd=2,
col=c("pink","black"))
Part II
Pseudorandomness and seeds
Same function call, different results
Not surprisingly, we get different draws each time we call rnorm()
mean(rnorm(n))
## [1] -0.08407018
mean(rnorm(n))
## [1] 0.06926718
mean(rnorm(n))
## [1] 0.02865116
mean(rnorm(n))
## [1] 0.03032017
Is it really random?
Random numbers generated in R (in any language) are not “truly” random; they are what we call pseudorandom
• These are numbers generated by computer algorithms that very closely mimick “truly” random numbers
• The study of such algorithms is an interesting research area in its own right!
• The default algorithm in R (and in nearly all software languages) is called the “Mersenne Twister”
• Type ?Random in your R console to read more about this (and to read how to change the algorithm used for pseudorandom number generation, which you should never really have to do, by the way)
Setting the random seed
All pseudorandom number generators depend on what is called a seed value
• This puts the random number generator in a well-defined “state”, so that the numbers it generates, from then on, will be reproducible
• The seed is just an integer, and can be set with set.seed()
• The reason we set it: so that when someone else runs our simulation code, they can see the same—albeit, still random—results that we do
Seed examples
# Getting the same 5 random normals over and over
set.seed(0); rnorm(5)
## [1] 1.2629543 -0.3262334 1.3297993 1.2724293 0.4146414
set.seed(0); rnorm(5)
## [1] 1.2629543 -0.3262334 1.3297993 1.2724293 0.4146414
set.seed(0); rnorm(5)
## [1] 1.2629543 -0.3262334 1.3297993 1.2724293 0.4146414
# Different seeds, different numbers
set.seed(1); rnorm(5)
## [1] -0.6264538 0.1836433 -0.8356286 1.5952808 0.3295078
set.seed(2); rnorm(5)
## [1] -0.89691455 0.18484918 1.58784533 -1.13037567 -0.08025176
set.seed(3); rnorm(5)
## [1] -0.9619334 -0.2925257 0.2587882 -1.1521319 0.1957828
# Each time the seed is set, the same sequence follows (indefinitely)
set.seed(0); rnorm(3); rnorm(2); rnorm(1)
## [1] 1.2629543 -0.3262334 1.3297993
## [1] 1.2724293 0.4146414
## [1] -1.53995
set.seed(0); rnorm(3); rnorm(2); rnorm(1)
## [1] 1.2629543 -0.3262334 1.3297993
## [1] 1.2724293 0.4146414
## [1] -1.53995
set.seed(0); rnorm(3); rnorm(2); rnorm(1)
## [1] 1.2629543 -0.3262334 1.3297993
## [1] 1.2724293 0.4146414
## [1] -1.53995
Part III
Iteration and simulation
Drug effect model
• Let’s start with a motivating example: suppose we had a model for the way a drug affected certain patients
• All patients will undergo chemotherapy. We believe those who aren’t given the drug experience a reduction in tumor size of percentage $X_{\mathrm{no\,drug}} \sim 100 \cdot \mathrm{Exp}(\mathrm{mean}=R), \;\;\; R \sim \mathrm{Unif}(0,1)$
• And those who were given the drug experience a reduction in tumor size of percentage $X_{\mathrm{drug}} \sim 100 \cdot \mathrm{Exp}(\mathrm{mean}=2)$ (Here $$\mathrm{Exp}$$ denotes the exponential distribution, and $$\mathrm{Unif}$$ the uniform distribution)
What would you do?
What would you do if you had such a model, and your scientist collaborators asked you: how many patients would we need to have in each group (drug, no drug), in order to reliably see that the average reduction in tumor size is large?
• Answer used to be: get out your pen and paper, make some approximations
• Answer is now: simulate from the model, no approximations required!
So, let’s simulate!
# Simulate, supposing 60 subjects in each group
set.seed(0)
n = 60
mu.drug = 2
mu.nodrug = runif(n, min=0, max=1)
x.drug = 100*rexp(n, rate=1/mu.drug)
x.nodrug = 100*rexp(n, rate=1/mu.nodrug)
# Find the range of all the measurements together, and define breaks
x.range = range(c(x.nodrug,x.drug))
breaks = seq(min(x.range),max(x.range),length=20)
# Produce hist of the non drug measurements, then drug measurements on top
hist(x.nodrug, breaks=breaks, probability=TRUE, xlim=x.range,
col="lightgray", xlab="Percentage reduction in tumor size",
main="Comparison of tumor reduction")
# Plot a histogram of the drug measurements, on top
# Draw estimated densities on top, for each dist
lines(density(x.nodrug), lwd=3, col=1)
lines(density(x.drug), lwd=3, col=2)
legend("topright", legend=c("No drug","Drug"), lty=1, lwd=3, col=1:2)
Repetition and reproducibility
• One single simulation is not always trustworthy (depends on the situation, of course)
• In general, simulations should be repeated and aggregate results reported—requires iteration!
• To make random number draws reproducible, we must set the seed with set.seed()
• More than this, to make simulation results reproducible, we need to follow good programming practices
• Gold standard: any time you show a simulation result (a figure, a table, etc.), you have code that can be run (by anyone) to produce exactly the same result
Iteration and simulation (and functions): good friends
• Writing a function to complete a single run of your simulation is often very helpful
• This allows the simulation itself to be intricate (e.g., intricate steps, several simulation parameters), but makes running the simulation simple
• Then you can use iteration to run your simulation over and over again
• Good design practice: write another function for this last part (running your simulation many times)
Code sketch
Consider the code below for a generic simulation. Think about how you would frame this for the drug effect example, which you’ll revisit in lab
# Function to do one simulation run
one.sim = function(param1, param2=value2, param3=value3) {
# Possibly intricate simulation code goes here
}
# Function to do repeated simulation runs
rep.sim = function(nreps, param1, param2=value2, param3=value3, seed=NULL) {
# Set the seed, if we need to
if(!is.null(seed)) set.seed(seed)
# Run the simulation over and over
sim.objs = vector(length=nreps, mode="list")
for (r in 1:nreps) {
sim.objs[r] = one.sim(param1, param2, param3)
}
# Aggregate the results somehow, and then return something
}
Saving results
Sometimes simulations take a long time to run, and we want to save intermediate or final output, for quick reference later
There two different ways of saving things from R (there are more than two, but here are two useful ones):
• saveRDS(): allows us to save single R objects (like a vector, matrix, list, etc.), in (say) .rds format. E.g.,
saveRDS(my.mat, file="my.matrix.rds")
• save(): allows us to save any number of R objects in (say) .rdata format. E.g.,
save(mat.x, mat.y, list.z, file="my.objects.rdata")
Note: there is a big difference between how these two treat variable names
Corresponding to the two different ways of saving, we have two ways of loading things into R:
• readRDS(): allows us to load an object that has been saved by savedRDS(), and assign a new variable name. E.g.,
my.new.mat = readRDS("my.matrix.rds")
• load(): allows us to load all objects that have been saved through save(), according to their original variables names. E.g.,
load("my.objects.rdata")
Summary
• Running simulations is an integral part of being a statistician in the 21st century
• R provides us with a utility functions for simulations from a wide variety of distributions
• To make your simulation results reproducible, you must set the seed, using set.seed()
• There is a natural connection between iteration, functions, and simulations
• Saving and loading results can be done in two formats: rds and rdata formats
|
2018-11-18 01:21:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6671835780143738, "perplexity": 3695.322262082138}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743960.44/warc/CC-MAIN-20181118011216-20181118032450-00034.warc.gz"}
|
https://ckms.kms.or.kr/journal/view.html?doi=10.4134/CKMS.2014.29.2.263
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnline Submission ㆍMy Manuscript - For Reviewers - For Editors
On the growth rate of solutions to Gross-Neveu and Thirring equations Commun. Korean Math. Soc. 2014 Vol. 29, No. 2, 263-267 https://doi.org/10.4134/CKMS.2014.29.2.263Printed April 1, 2014 Hyungjin Huh Chung-Ang University Abstract : We study the growth rate of $H^1$ Sobolev norm of the solutions to Gross-Neveu and Thirring equations. A well-known result is the double exponential rate. We show that the $H^1$ Sobolev norm grows at most an exponential rate $\exp(c\, t^2)$. Keywords : Gross-Neveu, Thirring, Sobolev norm, $L^{\infty}$ bound MSC numbers : 35L45, 35Q41, 35F25, 81Q05 Downloads: Full-text PDF
Copyright © Korean Mathematical Society. The Korea Science Technology Center (Rm. 411), 22, Teheran-ro 7-gil, Gangnam-gu, Seoul 06130, Korea Tel: 82-2-565-0361 | Fax: 82-2-565-0364 | E-mail: paper@kms.or.kr | Powered by INFOrang Co., Ltd
|
2021-11-29 00:06:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17275413870811462, "perplexity": 6124.412041770397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00007.warc.gz"}
|
http://www-spires.fnal.gov/spires/find/books/www?keyword=Mathematical+Applications+in+the+Physical+Sciences
|
Fermilab Core Computing Division
Library Home | Ask a Librarian library@fnal.gov | Book Catalog | Library Journals | Requests | SPIRES | Fermilab Documents |
Fermilab Library
SPIRES-BOOKS: FIND KEYWORD MATHEMATICAL APPLICATIONS IN THE PHYSICAL SCIENCES *END*INIT* use /tmp/qspiwww.webspi1/21464.27 QRY 131.225.70.96 . find keyword mathematical applications in the physical sciences ( in books using www
Call number: 9789811008047:ONLINE Show nearby items on shelf Title: Chiral Four-Dimensional Heterotic String Vacua from Covariant Lattices Author(s): Florian Beye Date: 2017 Size: 1 online resource (XII, 95 p. 3 illus p.) Contents: Introduction -- Classification of Chiral Models -- Model Building -- Summary ISBN: 9789811008047 Series: eBooks Series: Springer eBooks Series: Springer 2017 package Keywords: Physics , Mathematical physics , Quantum field theory , String theory , Elementary particles (Physics) , Physics , Quantum Field Theories, String Theory , Elementary Particles, Quantum Field Theory , Mathematical Applications in the Physical Sciences Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: 9783319461434:ONLINE Show nearby items on shelf Title: The Universal Coefficient Theorem and Quantum Field Theory A Topological Guide for the Duality Seeker Author(s): Andrei-Tudor Patrascu Date: 2017 Size: 1 online resource (XVI, 270 p. 6 illus., 1 illus. in color p.) Contents: Introduction -- Elements of General Topology -- Algebraic Topology -- Homological Algebra -- Connections: Topology and Analysis -- The Atyiah Singer Index Theorem -- Universal Coefficient Theorems -- BV and BRST Quantization, Quantum Observables and Symmetry -- Universal Coefficient Theorem and Quantum Field Theory -- The Universal Coefficient Theorem and Black Holes -- From Grothendieck’s Schemes to QCD -- Conclusions. ISBN: 9783319461434 Series: eBooks Series: Springer eBooks Series: Springer 2017 package Keywords: Physics , Mathematical physics , Algebraic topology , Quantum field theory , String theory , Elementary particles (Physics) , Physics , Quantum Field Theories, String Theory , Algebraic Topology , Mathematical Applications in the Physical Sciences , Elementary Particles, Quantum Field Theory Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: 9783319437217:ONLINE Show nearby items on shelf Title: Theory of Low-Temperature Plasma Physics Author(s): Shi Nguyen-Kuok Date: 2017 Size: 1 online resource (XV, 495 p. 253 illus p.) Contents: Foreword -- 1 Basic mathematical models of Low-temperature plasma -- 2 Classical calculation of particle interaction cross sections -- 3 The quantum-mechanical description of the particles scattering theory -- 4 Determination of the composition, thermodynamic properties and plasma transport coefficients on the basis of the model of particles mean free path -- 5 The solution of the kinetic Boltzmann equation and calculation of the transport coefficients of the plasma -- 6 Numerical methods of plasma physics -- 7 Simulation and calculation of paramete of RF-plasma torches -- 8 Simulation and calculation of parameters in Arc plasma torches -- 9 The calculation of the near-electrode processes in Arc plasma torches -- 10 Calculation of the heat transfer and movement of the solid particles in the plasma torches -- 11 Features of the experimental methods and automated diagnostic systems of RF and Arc plasma torches -- Appendix ISBN: 9783319437217 Series: eBooks Series: Springer eBooks Series: Springer 2017 package Keywords: Physics , Mathematical physics , Plasma (Ionized gases) , Physics , Plasma Physics , Numerical and Computational Physics, Simulation , Mathematical Applications in the Physical Sciences Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-2012-9783642294044:ONLINE Show nearby items on shelf Title: Ten Physical Applications of Spectral Zeta Functions [electronic resource] Author(s): Emilio Elizalde Date: 2012 Edition: 2nd ed. 2012 Publisher: Springer Berlin Heidelberg Size: 1 online resource Note: Monograph Note: Springer 2012 Physics and Astronomy eBook collection Note: Springer e-book platform ISBN: 9783642294044 Series: Lecture Notes in Physics Series: e-books Keywords: Mathematical Methods in Physics , Mathematical Physics , Quantum Field Theories, String Theory , Mathematical Applications in the Physical Sciences Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-2012-9783642283284:ONLINE Show nearby items on shelf Title: The Geometry of Special Relativity - a Concise Course [electronic resource] Author(s): Norbert Dragon Date: 2012 Publisher: Springer Berlin Heidelberg Size: 1 online resource Note: Brief Note: Springer 2012 Physics and Astronomy eBook collection Note: Springer e-book platform ISBN: 9783642283284 Series: SpringerBriefs in Physics Series: e-books Keywords: Classical and Quantum Gravitation, Relativity Theory , Mathematical Applications in the Physical Sciences , Classical Continuum Physics Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-2012-9783642276903:ONLINE Show nearby items on shelf Title: On Gauge Fixing Aspects of the Infrared Behavior of Yang-Mills Green Functions [electronic resource] Author(s): Markus Q. Huber Date: 2012 Publisher: Springer Berlin Heidelberg Size: 1 online resource Note: Monograph Note: Springer 2012 Physics and Astronomy eBook collection Note: Springer e-book platform ISBN: 9783642276903 Series: Springer Theses Series: e-books Keywords: Elementary Particles, Quantum Field Theory , Theoretical, Mathematical and Computational Physics , Mathematical Physics , Mathematical Applications in the Physical Sciences Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-2012-9783642259463:ONLINE Show nearby items on shelf Title: Strings and Fundamental Physics [electronic resource] Author(s): Marco Baumgartl Ilka Brunner Michael Haack Date: 2012 Publisher: Springer Berlin Heidelberg Size: 1 online resource Note: Monograph Note: Springer 2012 Physics and Astronomy eBook collection Note: Springer e-book platform ISBN: 9783642259463 Series: Lecture Notes in Physics Series: e-books Keywords: Quantum Field Theories, String Theory , Mathematical Physics , Mathematical Applications in the Physical Sciences , Mathematical Methods in Physics Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-2012-9783642244391:ONLINE Show nearby items on shelf Title: Quantum Triangulations [electronic resource] Author(s): Mauro Carfora Annalisa Marzuoli Date: 2012 Publisher: Springer Berlin Heidelberg Size: 1 online resource Note: Monograph Note: Springer 2012 Physics and Astronomy eBook collection Note: Springer e-book platform ISBN: 9783642244391 Series: Lecture Notes in Physics Series: e-books Keywords: Physics, general , Mathematical Physics , Quantum Physics , Manifolds and Cell Complexes (incl. Diff.Topology) , Classical and Quantum Gravitation, Relativity Theory , Mathematical Applications in the Physical Sciences Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
Call number: SPRINGER-1962-9783642856273:ONLINE Show nearby items on shelf Title: Antiplane Elastic Systems Author(s): L. M Milne-Thomson Date: 1962 Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg Size: 1 online resource (266 p.) Note: 10.1007/978-3-642-85627-3 Contents: I. The Law of Elasticity -- 1.1. Continued dyadic products -- 1.2. The stress tensor -- 1.3. The deformation tensor -- 1.4. The equation of motion -- 1.5. Internal energy -- 1.6. Elastic deformation -- 1.7. Hooke’s law -- 1.8. Anisotropy -- 1.9. E lastic symmetry -- Examples I -- II. Stress functions and complex stresses -- 2.0. Introductory notions -- 2.1. Stress functions and fundamental stress combinations -- 2.3. The displacement -- 2.4. The strain-energy function -- 2.5. The elimination of the displacements -- 2.6. The complex stresses -- 2.7. Expression of the fundamental stress combinations in terms of the complex stresses -- 2.8. Effective stress functions -- 2.9. The shear function -- Examples II -- III. Isotropic beams -- 3.1. The boundar y conditions for a prismatic beam -- 3.2. The isotropic beam -- 3.3. Classification of certain antiplane problems -- 3.4. The equations which give the displacement in pure antiplane stress -- 3.5. The boundary condition for the pure antiplane problem for isotropic beams -- 3.6. Simple extension -- 3.7. Bending by terminal couples -- 3.8. Circular cylinder pushed into a hole -- Examples III -- IV. The torsion of isotropic beams -- 4.1. The torsion problem -- 4.2. Lines of shearing stress -- 4.3. The twisting moment -- 4.4. Solution by conformal mapping -- 4.5. The $$z\bar z$$method -- 4.6. Boundary conditions -- 4.7. A uniqueness theorem -- 4.8. The principle of virtual stresses -- 4.9. Torsion of a compound bar of isotropic materials - - Examples IV -- V. The flexure of isotropic beams -- 5.1. The flexure problem -- 5.2. The centre of flexure -- 5.3. Half-sections -- 5.4. Shear stress functions -- 5.5. de St. Venant’s flexure function -- Examples V -- VI. Antiplane of elastic symmetry -- 6.1. Bending by couples -- 6.2. Boundary conditions -- 6.3. A device for transforming integrals -- 6.4. Simplifying assumptions -- 6.5. Antiplane of elastic symmetry -- 6.6. The striess component zz -- 6.7. Orthotropic material -- 6.8. Methods of approximation -- Examples VI -- VII. General linear and cylindrical anisotropy -- 7.1. Generalized plane deformation -- 7.2. Line force applied to an elastic half-plane -- 7.3. Induced mappings for the region exterior to an ellipse -- 7.4 . Bending of a cantilever by a transverse force at the free end -- 7.5. Cylindrical anisotropy -- 7.6. Equations satisfied by the stress functions -- 7.7. Circular tube under pressure -- Examples VII -- References ISBN: 9783642856273 Series: eBooks Series: SpringerLink (Online service) Series: Springer eBooks Series: Ergebnisse der Angewandten Mathematik, Unter Mitwirkung der Schriftleitung des „Zentralblatt für Mathematik“ : 8 Keywords: Mathematics , Mathematical physics , Continuum physics , Mechanics , Mechanics, Applied , Mathematics , Mathematical Applications in the Physical Sciences , Classical Continuum Physics , Theoretical and Applied Mechanics Availability: Click here to see Library holdings or inquire at Circ Desk (x3401) Click to reserve this book Be sure to include your ID please. More info: Amazon.com More info: Barnes and Noble Full Text: Click here Location: ONLINE
|
2019-03-26 12:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38351377844810486, "perplexity": 8565.802767625339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205163.72/warc/CC-MAIN-20190326115319-20190326141319-00397.warc.gz"}
|
http://mathoverflow.net/feeds/question/108704
|
Integral equation - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-24T13:02:57Z http://mathoverflow.net/feeds/question/108704 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/108704/integral-equation Integral equation Alex A 2012-10-03T12:46:54Z 2012-10-19T13:06:33Z <p>Assume (for definiteness) $g:\mathbb{R} \to \mathbb{R}$ is continuous and that $f$ is defined by <code>$$f(E) = \int _0^{E-1} \Big ( (E - t)^2 - 1\Big )^{3/2} g(t) \, dt.$$</code> I'm interested in whether $g$ can be recovered assuming we know $f$. </p> <p>Does anyone know if this type of integrals have been studied before? </p> <p>For instance I am familiar to the fact that (Riemann-Liouville) integrals of the form <code>$$(J^\alpha g)(E) = \frac{1}{\Gamma (\alpha )}\int _0 ^E(E - t)^{\alpha -1}g(t) \, dt$$</code> can be inverted when $\alpha$ is a half-integer by using identities of the form $J^\alpha \circ J^\beta = J^{\alpha + \beta }$ and then differentiate. </p> <p>EDIT: I would just like to point out that I'm not necessarily looking for an explicit inversion formula. If the above equation fits into some general theory which concludes that $g$ can be recovered I'm happy. </p> <p>EDIT II: I have narrowed the problem down into finding $g_0$ (only depending on $t$) with <code>$$\int _1 ^{E-1} \Big ( (E - t)^2 - 1\Big )^{3/2} g_0(t) \, dt = 1, \qquad E>1.$$</code> Not sure whether that helps though. </p> <p>EDIT III: If it helps I actually do know the solution in my particular case is <code>$$g(t) = \int _{\{h^{-1}(t)\}} \frac{1}{|\nabla h|}\,dS$$</code> for some $h$ for which the gradient never vanishes on <code>$\{h^{-1}(t)\}$</code>. Here $dS$ is surface measure. (The reason I still want to solve the equation is that I know $f$ is a certain invariant and I need to show $g$ is also invariant.) </p>
|
2013-05-24 13:03:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8836769461631775, "perplexity": 571.8817670885438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00004-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://proxies-free.com/ordinary-differential-equations-plane-foliation-with-compact-leaf-must-have-a-singularity/
|
# ordinary differential equations – Plane foliation with compact leaf must have a singularity
I’m trying to solve the following exercise from Camacho & Neto’s book $$textit{Geometric Theory of Foliations}$$:
How can I show that there is no compact leaf? I believe the idea is to suppose that there is such a compact leaf $$F$$, conclude that it must be diffeomorphic to $$S^{1}$$ and then use Poincaré-Bendixson theorem on the vector field associated to the line field $$P$$ defined by the foliation in order to find a singularity. However, PB theorem asks for some regularity on the vector field and, if we just construct the vector field this way, we might not have any sort of regularity.
I also believe that the idea at this point is to go to the orientable double covering of $$P$$ and there apply Poincaré-Bendixson, but I really don’t see how to do this. What am I missing?
Thanks in advance for any help!
|
2021-02-26 01:13:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8954957127571106, "perplexity": 125.94390723880124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00542.warc.gz"}
|
https://zbmath.org/?q=an%3A0970.90057
|
# zbMATH — the first resource for mathematics
On copositive programming and standard quadratic optimization problems. (English) Zbl 0970.90057
The authors consider quadratic optimization problems of the form $x^TAx\to \text{maximum subject to }x\in\Delta$ where $$A$$ is an arbitrary symmetric $$n\times n$$ matrix and $$\Delta$$ is the standard simpiex in the $$n$$-dimensional Euclidean space $$\mathbb{R}^n$$, $$\Delta= \{x\in R^n_+: e^Tx=1\}$$. Using the special structure of quadratic problems, the authors apply an interior-point method to an extension of semidefinite programming called copositive programming.
##### MSC:
90C20 Quadratic programming 90C51 Interior-point methods 90C22 Semidefinite programming
Full Text:
|
2021-09-22 01:50:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44040799140930176, "perplexity": 785.7943161085072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00259.warc.gz"}
|
http://databuckets.blogspot.com/2015/
|
## Tuesday, December 1, 2015
### How the Press has Treated Top Tennis Players Over The Years
We all know that not all tennis players are treated equally by the press. Roger Federer is beloved by the media for his professionalism, class and grace. Tennis writers have always appreciated Novak Djokovic's humor and candor, even though he carried a somewhat arrogant persona in his early years. Serena Williams is adored by tennis journalists as of late as she continues to shatter records in the Open Era, but has been demonized for her controversial behavior in the past.
Here at DataBucket, we seek to quantify and visualize things as much as possible. As much as these media perceptions on tennis players are true, we found it interesting to try to quantify the tennis press' sentiment towards top tennis players over time, with the goal of matching our results to some of the ups and downs of each legendary player's career.
To quantify this sentiment, we garnered tennis interview transcripts that were generously available to the public at asapsports.com. Using 1000 interviews on the website, we trained a natural language processing algorithm (specifically a Maximum Entropy Classifier) that classified each interview question as "positive," "neutral," or "negative" (we manually read through a subset, used as training data, and classified each one). Using this classifier, we assigned a score for each interview a top player has conducted - for a positively-toned question, we added the score by 1, and for a negatively-toned question, we subtracted the score by 1.
The results can be found for the top male and female players in the past decade. One can immediately notice that scores near the present are generally much higher than in the past - which suggest that:
• More questions are being asked in press conferences, and
• Tennis journalists are less critical to these players as they reach the twilight of their illustrious careers.
We were also able to identify certain events that constituted more extreme scores. Take September 12th, 2009 for example - this was the day Serena Williams threatened a lineswoman for calling a foot fault on her - it has one of her lowest sentiment scores to date. Another example is September 10th, 2011 - the day Federer lost in Djokovic in five sets after seeing the Serb smack forehead winners on his match point. The Swiss maestro publicly voiced his disapproval on Djokovic's "careless" play, which would explain his subpar interview rating on that day.
Some positive moments were captured too - Caroline Wozniacki had some of her highest ratings during her 2014 US Open Final run (September 2014), and Serena Williams had some of her best ratings during her pursuit of the Career Grand Slam this season (although her SF interview on her loss to Roberta Vinci was much less positive)
We were also able to notice some interesting trends - for instance, journalists in the Indian Wells Masters seem to love Maria Sharapova as her sentiment rating seems to peak in the beginning of March of many seasons (e.g. 2006, 2008, 2011, 2013).
We definitely did not cover all the trends so feel free to play around with our interactive graphic and comment on any interesting findings!
## Sunday, November 1, 2015
### How Good are NBA General Managers?
In a press conference this week, Lebron James talked about the 76ers' team-building process as they enter a third season with a substandard roster. He said, "It's always a process...You got to build things from the ground up. This year, it's about making a transition." This week, we ask, can we quantify this team-building process?
We've looked at metrics quantifying the clutchness of individual NBA players before, but what about from a team perspective? Is there a way to analyze the quality of a general manager, in making decisions about team selection and player acquisition?
http://www.rantsports.com/nba/files/2014/12/Golden-State-Warriors-Team-Chemistry1.jpg
This week, we aim to aquantify team chemistry - how well the players work together as opposed to individually. Then, we map out team chemistry in relation to player retention to see whether GMs can identify when they have good core teams or when they need to mix it up.
To quantify team chemistry, we compared the expected win share of each team (totaled individual win shares of each player on the roster) with the realized win share. We had to calculate expected win share per player as a point of comparison, because actual win shares are already impacted by team chemistry - players will score more points if they are on a team that works well together. Expected win-share would give us a baseline for how players perform on a neutral team.
To calculate expected win share, we used a metric called a similarity score, which quantifies how similar two players' career trajectories are. We used 2013-14 data to calculate these similarity scores, and predicted each player's win-share by the win-shares of the 20 most similar players at this point in their career trajectory, weighted by their similarity score. By using these other players, we hoped to average out the team effect and isolate the individual effect.
By totaling up these expected win-shares and realized win shares for everyone on a team's roster, we could quantify the "chemistry", or the performance above individual expectation, a team had. Then, we plotted this chemistry metric against retention. Retention was also measured based on win share - we calculated the win-share total of a team in one year, then the percent of the win-share that would remain next year after some players were traded away.
Plotting retention against chemistry allows us to classify NBA franchises into four categories, as shown in the table above. Teams with low retention but above-average performance are indications of a newly formed core team, as newly acquired players have figured out how to work together in a short period of time. General managers with a high retention and above-average performance team have found a core group of players that work well together. In these two instances, GMs should not break up their rosters.
On the other side of the spectrum, low retention and below-average performance teams are usually in the rebuilding stage, and GMs should continue looking for a better roster. High retention and below-average performance franchises are signs of teams falling apart in their chemistry, and GMs should seriously consider making some key changes to their roster.
With this in mind, we plotted retention vs. chemistry for all NBA teams in the past 10 seasons. In a quick glance, you will see that the data point are widely scattered, suggesting that some general managers tailor their retention in response to their team's performance, and others do not. Particularly, GMs in the bottom right quadrant should be mixing up their teams due to poor team chemistry, but are not.
The Lakers Are Doing It Right
Retention vs. Performance Above Expectation for the Los Angeles Lakers
A closer look into the Los Angeles Lakers reveals a franchise that really knows how to handle good and bad times. After winning 3 consecutive championships in 2000-2002, the Lakers had a series of mediocre seasons, and reshuffled a lot of its players. After a surprisingly positive 2006 season where they pushed the Phoenix Suns to seven games in the playoffs despite plenty of transfers, they realized they have created a "newly formed core team" (top left in graph). For the next 4 years, the Lakers kept their main players, resulting in another golden era (2007-2011) where they consistently performed better than expected and two NBA championships (top right in graph). But after a poor 2011-2012 season, where they fell tamely to the Thunders in the playoffs, they realized they were maintaining a poor team (bottom right). As a result, they decided to rebuild once again, trading and drafting players with the hope of looking for a new core group that can become a championship team. As of 2015, they have yet to find one (bottom left), but the Lakers seem to be sticking to a strategy that has won them 16 championships.
The Knicks Are Not Doing it Right
Retention vs. Performance Above Expectation for the New York Knicks
A team that has not followed this philosophy is the New York Knicks. For many years, the Knicks have disappointed their fanbase, making the playoffs on very few occasions despite paying high salaries to top players like Carmelo Anthony and A'mare Stoudemire. The reason for this is their failure to break up their core group of players despite inconsistent and poor performances. From 2007-2010, NYK teams were situated in the bottom right, showcasing the franchises' reluctance to rebuild. Recently in 2014 and 2015, the team has also been in this quadrant, and most recently had their worst season in history (17-65). This franchise really needs to reshuffle their team in order to be compete for a championship once again.
Is Lebron Right about the 76ers?
Indeed, it seems that the 76ers are quantifiably in a "transition" process, as Lebron called it. Looking at the past three years, they've had performance below expectation, but also low retention rates, indicating that their GM is aware of the fact that he has to shuffle his roster up. This marks a strategy turnaround from years 2007-10, when they appeared to retain heavily in spite of their dismal performance.
What This Means for the 2015-2016 Season
Retention vs. Performance Above Expectation for 2014-2015 Season Teams
We can also use the data to identify teams that are likely to perform well this season. Looking at just the 2015 season, we see that the Warriors, Hawks, Grizzlies all sit firmly in the top right quadrant. For the upcoming season, these teams have a retention rate of at least 65%, suggesting that they are continuing to build on their core teams. However, the Spurs, despite a 90% retention rate last year, are only keeping 58% of this team this season after trading Cory Joseph, Tiago Splitter and Aron Baynes, their lowest retention in a decade. Thus it remains to be seen if the Spurs can truly perform this season despite their reputation as one of the more team-oriented franchises in the league.
Some may argue that acquisition decisions are largely due to luck, and it is very difficult to attribute player performances to the competence of general managers. While this is a valid point, we hope that this data will cast some insight not on whether teams got lucky with the decisions they made, but rather whether they took advantage of favorable situations, such as converting high potential teams (top left) into a high-quality cohesive group (top right).
## Wednesday, October 14, 2015
### Why Gucci is Losing at Social Media
Luxury fashion houses, such as Prada, Louis Vuitton, and Gucci, are facing a turning point. Even being some of the most recognizable names in the world, some of these brands are facing declining profits. Prada, for example, faced a 28% decline in net profit in the last 9 months of 2014, with a growing reputation of being "outdated" and lacking relevancy. Other up-and-coming brands, like mid-level luxury brand Kate Spade, are becoming praised for their well-curated online content. With the rise of social media, these brands now face an enormous task - being widely recognizable and gaining market share, but also remaining elite and exclusive.
To look at how these brands compare in their social media influence, we turned to Instagram, which has an API to query for limited data. Looking at these brands' number of followers, hashtags, and averages on likes and comments for their 20 most recent posts, we analyze how "with it" each of the top luxury brands truly are.
We can see that in terms of pure numbers, Chanel, Prada, Dior, and Gucci reign supreme in the number of hashtags of those companies' names. A quick search through Instagram branded hashtags indicates that people usually tag pictures of that company's products that they've bought, of its stores, or of products (mainly counterfeits) that they are trying to sell. Having a large number of hashtags indicates wide recognition - this intuitively seems correct. Taking an informal survey of my friends, it seems that everyone - even disinterested males - have heard of these four brands before.
Looking at the number of followers, however, it seems that this hierarchy doesn't hold. In terms of followers, Louis Vuitton ranks number 1. Chanel, Prada, Dior, and Gucci still have a large number of followers, but they are nearly caught up by brands like Michael Kors, which actually does overtake Prada.
Looking at the proportion of tags and followers, we can more clearly see which brands have more loyalty. Louis Vuitton and Michael Kors have a greater proportion, out of the total of all the brands, of followers than they do of tags. This may mean that they have a loyal fan base - getting more interested followers rather than random tags. It may also mean that they have better content, or the other brands have worse content - lots of tags from people liking the product, but less followers because their Instagram feeds are just not that good.
We also look at the average number of likes and comments on each brand's 20 most recent posts.
This graph shows a plot of luxury brands with their average number of likes per post on the x-axis and the average number of comments per post on the y-axis. We draw a regression line to see trends in the relationship between the number of comments and the number of likes, and constrain this regression line to have a y-intercept of 0, since it makes sense that a post with no likes should also have no comments. We also draw confidence bands for this regression line, which is a confidence interval for each predicted y-value, within which sample points should fall if they follow this postulated distribution.
Again, we see that well-followed brands from above like Chanel, Michael Kors, and Louis Vuitton have a high absolute number of likes on their posts. One main outlier is Kate Spade, which has a much higher number of comments per post than predicted. This is indicative of the Kate Spade brand's social media savvy, which they have focused on building as an integral part of their marketing strategy. Kate Spade also has a few posts with an extraordinarily high number of comments because of their giveaways and promotions that depend on social media interaction, such as one in which they raffle off prizes to commenters.
Having a high number of comments in proportion to likes may show that Kate Spade has relatively loyal followers who feel like they're interacting with the brand and want to communicate back. Many of the comments consist of users tagging their friends, recommending them the advertised products.
An outlier on the negative side is Gucci, which has a lower number of comments than predicted.
This may speak to the reputation of Gucci as catering to the middle-aged wealthy, rather than the young and social media-savvy. However, the fact that Gucci had a large proportion of followers, as shown above, but a low absolute number of likes per post, may also speak to poor content on its Instagram page. As shown above, Kate Spade posts are accessible and straightforward (cupcakes, happiness) while Gucci posts are avant garde and harder to understand. Perhaps Gucci strives to maintain its image of absolute luxury and high-fashion and wants to maintain its exclusivity.
A photo posted by CHANEL (@chanelofficial) on
However, comparing Gucci posts to Chanel posts, Chanel being another extremely high-fashion competitor, it seems that Chanel's content is still more accessible and aesthetically appealing than Gucci. Chanel also features popular celebrities like Cara Delevingne and Kristen Stewart while Gucci posts do not leverage the same star power. Perhaps Gucci should revise its social media content to be more traditionally appealing in order to avoid putting off its followers and by promoting more celebrity endorsements and giveaways through social media posts.
## Sunday, September 20, 2015
### Be Suspicious of NYC Restaurant Health Ratings
Whether you pay serious attention to them, or don't, each restaurant in New York City has a grade posted outside indicating its health grade. Among the 24639 restaurants in the recently published NYC Restaurant Inspection Result Dataset, nearly 80% of the restaurants have been awarded an "A" safety rating, 15% have been awarded an "B" and 5% have gotten "C" or worse.
However, should New Yorkers really trust these health ratings? Ben Wellington of I Quant NY made an compelling case showing that health inspection scores, which help classify restaurants into grades (score of 0-13 is an "A", 14-27 is a "B", 28+ is a "C"), suffer from the "bumping up" syndrome, meaning that restaurants on the cusp of a higher grade tend to be bumped up to the higher grade.
Furthermore, health inspections are made on a random basis annually, meaning that health grades only represent safety conditions in the past year.
Sadly, an examination of the history of health inspection grades of New York restaurants suggest an inconvenient truth - many restaurants considered "safe" today have not always been "safe" in the past. This may mean either that restaurants improve their sanitary conditions significantly after a health inspection, or that inspectors tend to "bump up" restaurant grades from one year to the next.
As seen in the graph above, a majority of restaurants that were rated "B" and "C" in their infancy end up becoming grade "A" restaurants - at an astonishing 72% and 65% respectively. From the opposite point of view, 25% and 15% of restaurants which are now grade "A" restaurants actually started off at a grade "B" and "C" sanitary level. Given that most restaurants in NYC do not have a very long life span (80% of NYC restaurants close after 5 years) and that this grading system was only formally established 5 years ago, having so many restaurant grades increase in such a short time leads us to question whether the letters at the front of every NYC eatery are truly reliable.
## Monday, September 14, 2015
### Are Women's Tennis Rankings More Volatile than Men's Rankings?
http://www.irishexaminer.com/media/images/s/SerenaWilliamsUSOpenSep2015_large.jpg
Serena Williams' in the 2015 US Open semi-finals left number 26 and number 43 in the world to face off in the finals. Meanwhile, the finals on the men's' side was populated by number 1 and number 2, Novak Djokovic and Roger Federer, respectively. That, in combination with the fact that many female players now far off-the-radar, such as Ivanovic and Jankovic, have been former world number 1, led us back to the question - how volatile are women's' rankings in comparison to men's?
Methodology
Using weekly ATP and WTA weekly rankings data from, we analyze the variance of the rankings of players currently in the top 30. We also exclude rankings data outside of the top 100 to minimize the variance impact of when these players first became professional, which is not indicative of their pro performance.
Looking at the WTA rank variance of the current top 30 players, we see that as expected, strong players like Serena Williams and Maria Sharapova who rank consistently at the top (excluding injuries) have low mean rank and low rank variance. For mid-tier players, such as Sam Stosur and Roberta Vinci, the variance on the whole becomes much higher.
Smaller circles, which indicate newer players with fewer weeks in the top 100 under their belts - such as Sloane Stephens and Petra Kvitova - have markedly higher mean and variance than the "power cluster" of consistent, top players. However, there are many mid-tier players with many weeks in the top 100, but still large variance and average rank. For newcomers, their ranking behavior is still yet to be determined - they could join either the consistent top players or the varying mid-tier players.
The graph of the top 30 ATP players show that ranking means are similar across men and women, ranking from 5 to 55. However, the variance is lower for men on the whole. Similar to the WTA results, small circles indicating newcomers generally trend to the right and the top of the graph, meaning higher variance and rank. This is due to the fact that these players undergo a lot of ranking movement when they first go pro, which is not indicating of their long-run ranking behavior.
Again, like in the women's results, men's ranking behavior breaks into two camps: the consistent, top players like Roger Federer and Rafael Nadal, and the mid-tier players who vary more, such as David Ferrer and Philipp Kohlschreiber. One surprise is that Novak Djokovic has such a low average rank but such a high ranking variance - Djokovic has sharp rises in the rankings, and variance penalizes that over small incremental increases.
Finally, looking at the graph of WTA rank variance vs ATP rank variance over the years with regards to the current top 30 players, we see that WTA is significantly higher than ATP variance. This is mostly attributed to periods of extreme variance exhibited by certain players, such as Maria Kirilenko and Jamie Hampton. On the whole, however, looking at the individual variances of the top 30 players, women do have higher rank variance than do men.
## Saturday, September 12, 2015
### The Odds of an All-Italian US Open Final Were Less than 1%
The semi finals of the women's US Open produced two monumental upsets. Just like in the men's final four last year, where two significantly lower ranked players upset the top two seeds, Flavia Pennetta dismissed Simona Halep in straight sets, and Roberta Vinci came for a set down to deny Serena Williams from achieving the first Grand Slam since Steffi Graf in 1988.
FiveThirtyEight declared Serena William's loss as the greatest loss of all time, according to the current Elo Ratings of Williams and Vinci at the time of their semi final match up. That said, we wanted to measure this upset in a probabilistic manner. How likely was it that both Italian players upset the top seeds?
To answer this question, we refer to our tennis prediction model, which also uses an Elo-Rating style metric to calculate a player's ability. However, our system only incorporates matches played in the past year and head-to-head matches between players in the past 5 year. Our method also places more emphasis on detailed tennis metrics such as sets and games won in each match, the court surface being played on, and the stage and quality of tournaments being played. This allows our model to make accurate predictions for any tournament at any given time.
The table above represents each of the semi-finalists chance of reaching each stage of the tournament (Finalist or Winner) before the Friday matches. Notice that the Italians only had a 21% and 3.7% chance of winning their matches. Further analysis of past tennis data (from 1968 to 2015) suggested that semi final match outcomes are essentially independent of each other. Probabilistically, the chance that the second match would be an upset is the same as the chance that the second match would be an upset given that the first match was an upset. Thus, the probability that an All-Italian US Open final would have occurred is 21% x 3.7% = 0.8%.
In terms of who would win the final tomorrow, betting odds have declared Flavia Pennetta a 4/9 favorite, or an implied winning probability of 69.3%. Our model suggests otherwise, declaring Pennetta as merely 54.7%. As our model places more weighting on later stage matches and strength of opponent, Vinci's ability improved much more than Pennetta's, as Williams has a significantly higher rating than the rest of the field. Thus, despite being ranked over 15 places higher than Vinci, Pennetta is only a slight favorite in this final matchup. You can essentially treat this final as a toss up.
Stay tuned for our preview of the men's final on Sunday.
## Tuesday, September 8, 2015
### Study of the VIX
There is plenty of analysis out there about the stock market. Much of this analysis is on an intraday basis, analyzing how individual stocks moves based on oil prices or geopolitical turmoil. Sometimes these explanations have an obvious correlation to the markets; other times, these explanations are nothing more than educated guesses.
Instead, we're interested in long-term trends. This week, we study the relationship between the VIX index and the S&P 500.
VIX Index
The VIX index is primarily used as a representation of the market's expectations of the 30-day volatility of the stock market, expressed in percentage points. Specifically, the VIX is 100 times the square root of the expected 30-day variance of the S&P 500 rate of return.
\begin{equation*}
\text{VIX} = 100 \sqrt{\text{var}}
\end{equation*}
Where $\text{var}$ is annualized expected 30-day variance. The expected 30-day variance is estimated by the forward price of S&P 500 options with 30 days to expiration, $e^{rt}S$ where $S$ is the spot price. The forward prices of S&P 500 options represent the market's risk-neutral expectation of the variance of the underlying.
No arbitrage pricing says that the forward price of variance must equal the forward price of its replicating portfolio. Since holding forward positions in a portfolio do not contribute value to the portfolio at the present, the forward price of variance must equal the forward price of the options. If 30-day options are not available, the VIX is calculated using a weighted average of forward prices of options with expirations close to 30 days.
We can see that the VIX follows the general shape of the S&P 500's forward 30-day volatility, but with a lag of a few days. This indicates that the VIX is good at determining the level of volatility in the next 30 days, but not at predicting large changes in volatility. Moreover, for high levels of S&P 500 forward volatility, such as in the beginning of October 2008 following Lehman's bankruptcy and preceding several DJIA increases and declines, the VIX seems to underestimate the level of volatility in the next 30 days. Generally, however, the VIX seems to remain above the actual S&P 500 volatilities.
The difference between the VIX and the historical S&P 500 volatilities shows points in time where the VIX is significantly lower. These include the end of September 2008 and the beginning of October 2008, which as we mentioned before, included the worst of the financial crisis. These low VIX points also include the end of April 2010, which preceded the May 6 "Flash Crash", a trillion-dollar stock market crash that lasted just minutes. Another dip in the VIX compared to the S&P 500 was at the end of July 2011, which preceded an August 2011 stock market crash due to a US credit downgrade. The last VIX dip in the graph is due to the recent China crisis. These are all points of high S&P volatility in the first graph that the VIX severely underestimates.
## Sunday, September 6, 2015
### Stanimal's Title Chances are Worse than You Think
The first week of this year's US Open has been tumultuous - top 10 players Kei Nishikori, David Ferrer, Rafael Nadal and Milos Raonic have all crashed out, and the tournament has had a record 16 retirements. In particular, Jack Sock and David Goffin were leading their matches, only to succumb to the extreme heat and humidity.
Despite all the unpredictability, the two Swiss contenders, Roger Federer and Stan Wawrinka, have reached the second week of the tournament in contrasting fashions. Federer seems to be enjoying himself, toying his opponents with his flashy shot making and half-volley returns, while Wawrinka has somehow escaped from close tiebreak situations, including a seemingly lackluster effort in his match against Ruben Bemelmans.
With that, we were interested in what our tennis prediction model says about the chances of Federer and Wawrinka ending their tournament at each round and compare it to betting odds. Not surprisingly, our odds are fairly similar to the ones provided on betting websites. However, we believe that Wawrinka's chances of ending his run at the QFs are higher (59%) than betting websites (52%). As our model places emphasis on the closeness of each match, the fact that Wawrinka played more tiebreaks, even though he has not lost a set in this tournament, lowers his prospect of reaching the later stages of the US Open. As a result, our odds of him reaching the semi finals, final and winning are significant lower.
On the other hand, our odds for Roger Federer is in line with betting companies, as a result of his masterclass displays in each of his three matches. In fact, our prospects of him losing before the finals is significantly lower than the betting probabilities.
To look at prospects of other remaining players reaching different stages of the tournament, check out our results below. Stay tuned for more updates in the middle of the week.
## Tuesday, September 1, 2015
### Nishikori's Early Exit does not Improve Djokovic's Title Chance
Upon the conclusion of the US Open's first round matches, many would believe that Kei Nishikori's early exit will open up the draw for Novak Djokovic and improve his title prospects. However, our tennis prediction model suggests that Djokovic's chances of winning remains level at around 55%. Similarly, Federer and Murray's chances stay the same at around 25-26% and 8-9% respectively.
Ultimately, the reason why Djokovic's prospects haven't changed is that he is still likely to face Nadal in the quarterfinals, and Federer or Murray in the final. Furthermore, the top three players in the world (Djokovic, Federer and Murray) have a combined 90% chance of winning the tournament, while Nishikori's prospect prior to Monday was a mere 3.8%. Should Federer or Murray have suffered an upset in the first round, Djokovic's title chances would have definitely skyrocketed.
Nishikori's early exit also raises an interesting question - which player will emerge from that quarter? Our model suggests that Marin Cilic, the reigning US Open Champion, and not David Ferrer, the highest seed left in that quarter, has the highest odds. This may to due to the fact that Ferrer has not won a match since Roland Garros, and Cilic had recently reached the quarterfinals of Wimbledon.
Despite ranking outside the world's top 40, Benoit Paire has a decent chance (6.4%) of reaching the semi finals. As our model rewards players who pull off upsets, Benoit Paire's rating increased greatly after his first round win over Nishikori, making him the fifth favorite player to come out of this quarter of the draw. Also look out for dark horse Jo-Wilfried Tsonga - while he has dropped to as low as 18th in the world, his semi-final odds are only a tad lower than Ferrer's as a result of his strong showing in Montreal.
## Sunday, August 30, 2015
### US Open Preview - Djokovic Still Heavy Favorite Despite Recent Losses
After last year's US Open, many began to question whether Marin Cilic's triumph represented the beginning of a transition at the top of men's tennis, from the dominant Big Four to a set of young guns that included Cilic, Kei Nishikori, Milos Raonic, Grigor Dimitrov and more. However, with the exception of Kei Nishikori, who has risen to a career high ranking at number 4, other young guns have yet to challenged the likes of Novak Djokovic, Roger Federer and Andy Murray. Coming into this year's tournament in Flushing Meadows, it seems like these three players are yet again the clear favorites. But what are the odds among these elite men? Is Djokovic still the clear favorite after losing to both players in back to back finals? Will Federer's return-and-charge approach still bode well in a best of five set match in slower hard courts?
To answer these questions, I enhanced the prediction model I made for this year's Wimbledon. In accounting for player's performance in the past year, the model places more emphasis on hard court matches, quality of opposition that players have won or lost to, how close each players' matches were, and the level of tournaments players have participated in. In particular, I put special emphasis on matches played in Washington, Montreal and Cincinnati. History has suggested that players that have performed well in these tournaments have gone on to do well in the US Open. For instance, all players who have won Montreal/Toronto and Cincinnati in the same year have gone on to win the US Open title (except Andre Agassi in 1995).
Simulating 50,000 tournaments led to the following odds of each players reaching each stage of the tournament:
Despite losing in finals as of late, Djokovic still has a 56% chance of winning the US Open. The 28-year-old Serb is having his best season since his historic 2011 breakout campaign. Given that his recent losses have come in three set matches, Grand Slam matches should give him more time to adjust to opposition tactics and ultimately come out triumphant.
Also interesting to note is that Federer's title prospects far exceeds Andy Murray's. I have frequently dismissed the common belief that Murray has a better chance than Federer, and I will reiterate here. The Swiss Maestro has won his last five encounters against the Scot, including the last ten sets, so the recent Cincinnati champion will definitely be the favorite in their potential semi final encounter.
Other interesting results include Andy Murray having a relatively low chance (48%) of reaching the semi finals. This is likely due to a potential blockbuster match against Roland Garros Champion Stanislas Wawrinka in the quarter finals. Also notice that Rafa Nadal has a surprisingly low chance of reaching the semi finals (6.9%), as he is placed in the same quarter as Djokovic. That said, he is still is the sixth favorite to win the title (1.2%), as Nadal has a tendency to peak at the latter stages. Should he get past Djokovic, his title prospects will skyrocket.
1st Round Betting Recommendations
We also sought to determine the matches that would yield the largest expected gains, according to odds reported from oddportals.com. By converting our probability to odds, we determined expected gains/losses by multiplying the difference in DataBucket odds and OddPortal odds by the probability of winning the bet. The results can be seen below:
Interestingly, betting Marco Cecchinato to win his match against soon-to-be-retiring Mardy Fish would most likely yield the largest gains, as betting companies have claimed Fish to be the favorite in this match. While Cecchinato is ranked outside the top 100, he is definitely the more physically fit player at the moment, and will more likely come out the victor in five-set Grand Slam conditions.
A general trend in these results suggest that betting odds for matches featuring the top players tend to be more aligned with DataBucket odds, while matches consisting of lower ranked player tend to be less aligned. This may be due to supply and demand factors relating to the betting market. Other attractive bets including Lucas Pouille defeating Evgeny Donskoy and Marsel Ilhan defeating Radek Stepanek.
DataBucket will be updating US Open predictions throughout the tournament. Stay tuned for updates.
## Sunday, August 2, 2015
### How Much Five Guys Costs in Different Parts of New York (and the US)
If you have travelled to different parts of the US, you will probably notice that restaurants do charge customers at different prices. A cappuccino in a Starbucks in Manhattan will most likely be much more expensive than the same cup of coffee in Nebraska. Similarly, a double cheeseburger in a San Francisco McDonalds will be more expensive than the same burger in Colorado.
With that, we wondered - what if we can map out the prices of a large restaurant chain for the entire country?
Obviously, such data is very difficult to find on the web - if the prices of different stores were easily accessible, then some stores will become far more popular than others. Fortunately, Five Guys has an online ordering system that allows customers to pick up their food without having to wait in line, so the prices of their products for all of their stores are available online. With that, we mapped out the prices of various food items on Five Guy's menu for restaurants within 50 miles of the 50 largest cities in America. Check out our interactive graphic below:
(Note: To look at specific areas of the United States, click the search icon on the top left and type in a state or city. Or use the "+" and "-" buttons to customize your view)
Notice that the cities known to be more expensive (think Seattle, San Francisco, Chicago and New York) are where the most pricey Five Guys are located. It is also interesting to see that for most cities, that the prices are largely the same (take a look at Las Vegas, Denver, and the large Texas city)
The most interesting region by far, however, has to be the New York area. A closer look reveals a clear-cut price segmentation strategy. Manhattan is by far the most expensive out of all the boroughs. Stores right across the Hudson are significantly cheaper, with bacon burgers over a dollar cheaper than in Manhattan. A similar phenomenon can be seen across the East River - Queens and Brooklyn restaurants offer bacon burgers that cost over 50 cents less.
We recognize that geographic price segmentation is hard to overcome - if you live in Manhattan and have no other reason to go to Brooklyn, it's not worth it to go just for a few cents savings. Nevertheless, it's interesting to quantify the magnitude of the Manhattan premium.
## Saturday, August 1, 2015
### Which School Produces the Most Successful Startup Founders?
With startups becoming a larger and larger segment of the American economy, we looked to answer a question we figured would be relevant to a lot of people: what makes a successful startup?
There are many factors that can define "success" in entrepreneurship; we use money, namely venture funding, as our metric for success. We know that there are many philosophical arguments against this, but use it because of its ability to be quantified. We've already looked at which industries and states generate the most venture funding.
This week, we look at the founders themselves. What characteristics do founders of richly funded startups have in common? What can people looking to found a startup do to optimize their chance of success?
The data we work with this week comes from the Crunchbase API. We look at the top 5000-funded startups over the past 15 years and the education of each of their founders.
Looking at sheer number of founders per school out of this space of top 5000-funded startups, we see:
1. West coast schools are highly represented. Stanford beats its runner up, MIT, by 43%. Berkeley is also 4th most represented. UCLA, USC, and CalTech closely follow.
2. Ivy League schools are featured prominently in the top 15 schools, including Harvard, Cornell, Yale, UPenn, Columbia, and Princeton. This indicates that the prestige of the school may have an impact upon attracting venture funding, or that the rigorous level of education may produce capable startup founders.
3. Business schools (Harvard, Wharton, Stanford) and engineering-focused schools (MIT, CalTech, both feature prominently. There is no huge divide between the "egg-heads" and "talkers" - both are valuable to the startup industry.
4. There is a large number of international universities on the list, including a noticeable number of graduates from Tel Aviv University, which is 7th on the list. This is attributed to its entrepreneurial culture.
We also looked at the average number of college degrees attained for each startup versus its amount of funding received. There did not appear to be a strong correlation between earning more degrees and the amount of funding received - the amount of funding looked relatively consistent across different numbers of degrees received on average.
In terms of average amount of funding graduates from each school gets, Harvard, MIT, and Stanford get a standard amount of funding. Indian Institute of Technology has a disproportionately high average funding as well as a large number of founders. Hanzhou Normal University and Zhejiang University of Technology are off the charts for average funding received. This is attributed completely to Jack Ma and Eddie Wu founders of Alibaba.
So what's the secret?
To best prepare for founding a successful startup, it appears that founders get a boost from graduating from prestigious universities, west coast universities, engineering-specialized universities, and/or business schools. The number of degrees does not really matter, and there are always rarities coming from relatively unknown colleges, but receiving a spectacular amount of funding.
## Saturday, July 11, 2015
### How DataBucket's Wimbledon Model Can Be Better
This year's Wimbledon tournament culminates tomorrow in a final blockbuster showdown between Novak Djokovic and Roger Federer. Over the past two weeks, we have developed a probability model that determines the odds of each player reaching each stage of the tournament (these odds were updated after every round). Now with two competitors remaining, our model claims that Novak Djokovic has a 63.8% chance of beating Roger Federer.
But should you trust our model's results?
Upon inspection of my model assumptions, there are areas where our probability model can be improved.
Match Scores are Not Always an Accurate Indicator of Current Form
Here's one of the key concerns: while our model accounts for how close a match was (e.g. winners in straight sets are rewarded more than winners in 5 sets), match scores are not always a true indicator of form or current ability. Roger Federer was rewarded significantly for beating Andy Murray in straight sets, especially since he only had a 55% chance of winning, but in my opinion he should be awarded more. Federer faced only one break point in the entire match (and that was in the first game). He hit 20 aces, had over 5 winners for every unforced error he made, and won over 80% of the time when serving at 30-30 or deuce. Legends of the games proclaimed it as one of the best serving performances ever witnessed. Even Federer acknowledged this match as "definitely one of the best matches I've played in my career."
Likewise, Andy Murray should not be penalized as heavily for losing this match in straight sets. He hit over 2 winners for every unforced error he made (the average for the tournament was 1.5). He served a respectable 12 aces compared to only 1 double fault. And he managed to stay with Federer to the end of each set, only for his opponent to step up a gear. This is not a demoralizing defeat on Murray, but rather a performance many would call a valiant effort. As Sports Illustrated cited in their live blog, "So good. Too good. Too, too good from Roger Federer." My model can be improved by incorporating some of these detailed match statistics, but how much they should influence these probabilities is very much up for debate.
A Career-Defining Win Can Go Many Ways
We all would probably agree that this match is one of the highlights of Roger Federer's already illustrious career. We can classify this match as one of his career-defining matches. But such a strong performance from Federer can go either way. He may gain plenty of momentum from this performance and play superbly against Djokovic in the final. Or he may expend too much energy and suffer from mental or physical fatigue and fall to the steady Serb. This is what our model also lacks - the ability to capture a player's reaction to a career-defining win. Will a player succumb to the pressures of playing the match of his life in the next match, like Lukas Rosol after beating Djokovic in 2012 Wimbledon or Kei Nishikori after beating Djokovic in 2014 US Open? Or will a player rise to the occasion, gain confidence and play at a much higher level after a career-changing win, like Robin Soderling in 2009 French Open, or Stanislas Wawrinka in 2014 Australian Open?
For these reasons, while listing Djokovic as a 63.8% favorite seems reasonable to most, there are just many factors in tennis that are difficult to quantify. DataBucket will continue to try to incorporate as many of these factors as possible, especially as the US Open is just around the corner.
## Tuesday, July 7, 2015
### Wimbledon QF Preview: Top 4 Seeds Likely to Advance
We are now down to the quarterfinals of Wimbledon, and the top four seeds are still going strong. In fact, according to my probability model, the chances of Djokovic, Federer, Murray and Wawrinka making the semifinals are very high. In my previous 4th round post, each of these players had at least 67% chance of making it to the final four. Now each player has at least a 78% chance.
With these results comes a few interesting observations:
Djokovic's Chances of Winning have Decreased: Before the start of Wimbledon, Djokovic had a 56% chance of winning - this has decreased to 52%. As my model accounts for each player's margin of victory, Djokovic's close five-set encounter against Kevin Anderson actually hurt his chances of winning the tournament.
Wawrinka's Title Prospect Continue to Rise: Before the tournament, my model predicted Stan the Man's winning odds to be at 6.2%. Now it has climbed to 9.1%, as he has breezed through the early rounds without dropping a set. We had predicted Wawrinka to have only a 66% chance of reaching the QF. Now that he has reached this stage, he will only become a more dangerous threat.
Federer is More Likely to Win Wimbledon Than Murray: I had argued this point in my previous post, but Federer's performance at Wimbledon have continued to put him in front of Murray (Bet365 apparently disagrees). Federer easily handled Bautista Agut on Monday, while Murray fought through a tough four-set encounter against big-serving Ivo Karlovic. Murray also has not beaten Federer or Djokovic since 2013.
Gasquet's Title Odds are Overrated (It's not 2%): Sure, he has one of the prettiest backhands in the world. Sure, he beat Dimitrov and Kyrgios (who was overrated anyway). But he has to beat Wawrinka, Djokovic and Federer along the way and these are clearly very tough hurdles to overcome.
To see how my model works, take a look at my original post on predicting the Wimbledon tournament. Any comments are welcome!
## Sunday, July 5, 2015
### Djokovic Still the Favorite After the 1st Week of Play
The Wimbledon has dwindled down to its final 16 contenders. While some of the top seeds have fallen (think Raonic, Nishikori, and Nadal), Djokovic, Federer, Murray and Wawrinka are still in contention.
Last week, I presented my model that predicted the odds of each players reaching different rounds of the tournament. After three rounds of play, the odds of the contender haven’t changed significantly (Djokovic at 57%, Federer at 18%, Murray at 13%, and Wawrinka at 8%). That said, the chance of these players reaching the semifinals has increased dramatically. Mark your calendars down for some mouthwatering clashes on Thursday and Friday, as the top 4 seeds all have at least a 67% chance of making it to the final four.
Comparing our results with betting odds, we claim that Bet365 appears to have overestimated the chance that the underdogs will win the tournament. They may be making this move on purpose to hedge against the risk of paying out huge multiples, or they are wary of the fact that three of the past six slams have been won by a member not in the Big Four. But giving Nick Kyrgios a 1 in 29 chance of winning is far too optimistic given that he will have to beat Djokovic, Wawrinka and Federer/Murray to win the tournament.
Another overly optimistic implication betting companies have made is giving Murray a 1 in 3 chance of winning the tournament. Yes, many people are probably betting on Murray. Yes, many people think homecourt advantage matters. But keep in mind that Murray is 2-6 in Grand Slam finals and has lost to Federer or Djokovic 11 times in a row. There are also lapses in concentration, such as in the third set against Andreas Seppi, that is worthy of concern and may come back to haunt him when he plays Federer or Djokovic.
Thoughts on my model? Thoughts on Wimbledon in general? Feel free to comment below. Check out the posting on the first week of Wimbledon here.
|
2017-10-18 02:11:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2940200865268707, "perplexity": 2950.619043243132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822668.19/warc/CC-MAIN-20171018013719-20171018033719-00841.warc.gz"}
|
http://math.stackexchange.com/questions?pagesize=50&sort=newest
|
# All Questions
4 views
### Is the length of one cathetus in a right triangle with equal catheti 2?
Look at your watch now. Now try to remember that time. By the time you have finished this sentence that time has probably increased by a couple seconds. Now you may be asking yourself "is this ...
5 views
### Find the “surface vertices” of a collection of points.
I am currently doing some experiments in order to simulate liquids. I have a collection of 3D points that interact with each other to form a body of water. I would like to form a mesh from these ...
27 views
### Wouldn't each addition take time $O(n)$?
I am going over the asymptotic runtime of regular matrix multiplication. Here is a lecture slide I am referencing(too much to type out, shown below), from Algorithms Everything makes sense up ...
10 views
### Integration in complex measure
Let $v$ be a complex measure in $(X,M)$. Then $L^{1}(v)=L^{1}(|v|)$. I have made: $L^1(v)\subset L^1(|v|)$?. Let $g\in L^1(v)$ As $v<<|v|$ and $|v|$ is finite measure, then for chain rule, ...
6 views
### Expressing the Solution to a System of Differential Equations
My professor wrote the solution to a system as $$X = C_1 \begin{bmatrix}1 \\2 \end{bmatrix} e^{\lambda_1t} + C_2 \begin{bmatrix}3 \\4 \end{bmatrix} e^{\lambda_2t}$$ Where the column vectors are the ...
11 views
### How do I interpret this operation?
This question has to do with operations and exploring their characteristics. I have just learned how to extract info from an operations table (what is the identity, inverse, etc.), but this question ...
16 views
### Seeking Recommendation on Pre-Calculus Textbooks!
S.E. advisers, I wrote this email because I am seeking a recommendation on selecting the pre-calculus textbooks. I have been studying the real analysis and number theory, and I felt that I need to ...
16 views
### seperable differential equation question
b) $(2xy^3)dx + (3x2y^2 + y^4)dy = 0$ c) $(2xy^3)dx + (3x2y^2 + y^2)dy = 0$ I know that $c$ is a separable differential equation but $b$ is not. Why? The only difference is the power of the ...
13 views
### Roll one ellipse on another: Locus of center ever a circle?
Let $E_1$ be an ellipse fixed in the plane. Let $E_2$ be a second, possibly different ellipse, which rolls around without slippage outside $E_1$, touching perimeter-to-perimeter. Let $c_2(t)$ be the ...
31 views
### Formula for $r+2r^2+3r^3+…+nr^n$
Is there a formula to get $r+2r^2+3r^3+\dots+nr^n$ provided that $|r|<1$? This seems like the geometric "sum" $r+r^2+\dots+r^n$ so I guess that we have to use some kind of trick to get it, but I ...
5 views
### Is this an instance of the base-rate fallacy?
The following line of probability reasoning is supposedly fallacious, and is an instance of the base-rate fallacy. The argument is that $(1)-(3)$ don't give us enough reason to conclude that $(C)$. ...
30 views
### What is 3^43 mod 33?
I just took math final and one of the question was Find $3^{43}\bmod{33}$. So, I used Euler's function; $\phi(33)=20$. $3^{20}\equiv 1\pmod{\!33}$ By using this fact, I got $27$. One thing ...
4 views
### Stiefel-Whitney Numbers of $\mathbb{R}P^2\times \mathbb{R}P^2$
I'd like to calculate the Stiefel-Whitney numbers of $\mathbb{R}P^2\times\mathbb{R}P^2,$ but don't know how to. My first instinct was to say that the tangent bundle is isomorphic to the product of ...
9 views
### If K $\subset \mathbb{R}$ is compact prove that {fn} converges uniformly to f on K.
Suppose that we have a sequence of functions{f$_{n}$} that converges uniformly to a function f on any (a,b) $\subset \mathbb{R}$. If K $\subset \mathbb{R}$ is compact prove that {f$_{n}$} converges ...
30 views
### Is the only way to go physics is a very goodeth?
Here's a qestion. If the Physics and I chemistry is the best of all the all is and what is this the your phone and is a great way for of to a the first place time half the things time people are just ...
6 views
### Start to Proof of bernoulli polynomials and sums
I need help starting this proof: For all integers k,l,m>=0 and not all equal to 0, (3.7) It says that that comparing the above equation (3.7) with the one discussed earlier in the paper(3.6) (shown ...
10 views
### Probability Density Question Involving an Integral Equation (from Karlin & Taylor's A First Course on Stochastic Processes)
The random variables X and Y have the following properties: X is positive, i.e., $P\{X > 0\} = 1$, with continuous density function $f_X(x)$, and $Y\mid X$ has a uniform distribution on $\{0,X\}$. ...
22 views
### Question on $\mathbb{K}$ notation
In a lot of paper and book $\mathbb{K}$ means $\mathbb{R}$ or $\mathbb{C}$. I know that $\mathbb{R}$ comes from the word real, and $\mathbb{C}$ from the word complex. But what about $\mathbb{K}$? ...
19 views
### Does $\chi(g^{-1})=\overline{\chi (g)}$ hold for infinite groups
Let $\chi$ be the character of some representation $\rho:G \to GL(M)$ over $\mathbb C$. Suppose $G$ is a group, then $\forall g \in G$ of finite order $n$, $\chi(g^{-1})=\overline{\chi (g)}$ ...
3 views
### Vertex invariants based on finding minimal combined shortest paths
A possible vertex invariant for a vertex v is v's smallest n-neighbourhood consisting of the induced subgraph rooted in v of all vertices n edges away from v. Question 1: I'm wondering if this ...
14 views
### Proving De Morgan's law with the minus sign
So I know how to prove De Morgan's Law in this form: $A\cap (B\cup C)^{c}$, what I'm trying to do for practice is prove it in the slightly different notation: $A- (B\cup C)$ I get everything except I ...
55 views
### What does $d\log\left(\frac{y}{x}\right)$ mean mathematically?
I am used to seeing derivatives written as $$\frac{df}{dx}.$$ But my economics professor keeps using notation like $$d\log\left(\frac{y}{x}\right)$$ and I have no idea what this means. What does ...
10 views
### Prove that if $\{1^5,2^5,\ldots, (pq)^5\}$ is a complete residue system mod $pq$, then $\{1^5,2^5,\ldots,p^5\}$ is too, mod $p$
$p,q\ge 2$ are coprime positive integers. Prove that if $\{1^5,2^5,\ldots, (pq)^5\}$ is a complete residue system mod $pq$, then $\{1^5,2^5,\ldots,p^5\}$ is a complete residue system mod $p$. ...
21 views
12 views
### Find the maximum of a |cos(z)|
How do you find the maximum of the complex function $|\cos{z}|$ on $[0,2\pi]\times[0,2\pi]$. I believe I'm to use the maximum modulus principle, since the function is entire. I'm just having problems ...
12 views
### Subset of a normal space
Given X a normal space, and a subset $A \subset X$ not closed. Does it imply A is not normal? I understand it does not, Can someone provide me a counterexample?
7 views
### Finding potential of a given vector field
I am trying to solve the following problem: Let $\textbf{F}=f(r) (x,y,z)$ where $r=(x^{2}+y^{2}+z^{2})^{1/2}$. Find an expression for a potential for $\textbf{F}$. Find an expression also for ...
|
2015-05-06 00:23:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9599480628967285, "perplexity": 395.89944521212476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430457655609.94/warc/CC-MAIN-20150501052055-00083-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://quanteconpy.readthedocs.io/en/latest/game_theory/random.html
|
# random¶
Generate random NormalFormGame instances.
quantecon.game_theory.random.covariance_game(nums_actions, rho, random_state=None)[source]
Return a random NormalFormGame instance where the payoff profiles are drawn independently from the standard multi-normal with the covariance of any pair of payoffs equal to rho, as studied in [1].
Parameters: nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. rho : scalar(float) Covariance of a pair of payoff values. Must be in [-1/(N-1), 1], where N is the number of players. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. g : NormalFormGame
References
[1] Y. Rinott and M. Scarsini, “On the Number of Pure Strategy Nash Equilibria in Random Games,” Games and Economic Behavior (2000), 274-293.
quantecon.game_theory.random.random_game(nums_actions, random_state=None)[source]
Return a random NormalFormGame instance where the payoffs are drawn independently from the uniform distribution on [0, 1).
Parameters: nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. g : NormalFormGame
quantecon.game_theory.random.random_mixed_actions(nums_actions, random_state=None)[source]
Return a tuple of random mixed actions (vectors of floats).
Parameters: nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. action_profile : tuple(ndarray(float, ndim=1)) Tuple of mixed_actions, one for each player.
quantecon.game_theory.random.random_pure_actions(nums_actions, random_state=None)[source]
Return a tuple of random pure actions (integers).
Parameters: nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. action_profile : Tuple(int) Tuple of actions, one for each player.
|
2022-01-27 17:29:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26344335079193115, "perplexity": 3802.7193608865628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00059.warc.gz"}
|
http://math.stackexchange.com/questions/829306/chain-homotopy-equivalence-between-mapping-cone-complexes
|
# chain homotopy equivalence between mapping cone complexes
Given continuous maps $f_i : X_i \to Y_i$ ($i=1, 2$) we may consider the singular chain cocomplexes $$C^n(Y_i) \oplus C^{n-1}(X_i)$$ with boundary operator: $$(u^n, v^{n-1}) \mapsto (-\delta u^n, f_i^* u^n + \delta v^{n-1} ).$$ (where the $\delta$'s are the usual coboundary operators in the correct dimension over the correct spaces). Suppose that we have a commutative diagram:
$$\label{} \begin{array}{ccccccccccccccccccccccccccccccc} X_1 & \overset{f_1}{\longrightarrow} & Y_1 \\ \scriptstyle{\alpha \Big\downarrow\phantom{\alpha}} && \scriptstyle{\phantom{\beta} \Big\downarrow \beta} \\ X_2 & \overset{f_1}{\longrightarrow} & Y_2 \\ \end{array}$$
Then $\alpha$ and $\beta$ induce a chain map between the mapping cone cocomplexes of $f_2$ and $f_1$ (easy to verify).
My question is: what are reasonable topological conditions on $\alpha$, $\beta$ and $f_i$ in order that the induced map on the cone cocochain complexes is a homotopy equivalence?
Consider for example the case $X_1 = X_2$, $f_1 = f_2 = f$, $Y_1 = Y_2$. My intuition tells me that $\alpha$ and $\beta$ should be homotopy equivalences, and that there exist chain homotopies $H_\alpha$ and $H_\beta$ between $\alpha$ ($\beta$) and the identity such that: $$H_\beta(f(x) , t) = f \circ H_\alpha(x , t).$$ But.. doing the computations, we see that this is not true! Indeed, let $T^\alpha$ and $T^\beta$ cochain homotopies induced by the homotopies $H_\alpha$ and $H_\beta$: $$\alpha^*(u^n) - u^n = \delta T^\alpha u^n + T^\alpha \delta u^n$$ (similarly with $\beta$) and let's define the chain homotopy: $$T(u^n, v^{n-1}) := (T^\beta u^n, T^\alpha v^{n-1}).$$ Then, it is easy to calculate that: $$\delta T(u^n, v^{n-1}) + T\delta(u^n, v^{n-1}) = (\beta^* u^n - u^n, -(\alpha^* v^n - v^n) - (T^\alpha f^* u^n + f^* T^\beta u^n)) \ne (\beta^* u^n - u^n, \alpha^* v^n - v^n)$$
Note that, with our hypotheses: $T^\alpha f^* u^n = f^* T^\beta u^n$. This seems to imply that there is simply a sign error.. but it is not so easy to overcome this difficult
-
|
2016-02-08 04:52:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9893141984939575, "perplexity": 193.42173719686394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152130.53/warc/CC-MAIN-20160205193912-00310-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-cm121-2-6
|
Pełnotekstowe zasoby PLDML oraz innych baz dziedzinowych są już dostępne w nowej Bibliotece Nauki.
Zapraszamy na https://bibliotekanauki.pl
PL EN
Preferencje
Język
Widoczny [Schowaj] Abstrakt
Liczba wyników
• # Artykuł - szczegóły
## Colloquium Mathematicum
2010 | 121 | 2 | 239-247
## Almost Prüfer v-multiplication domains and the ring $D + XD_S[X]$
EN
### Abstrakty
EN
This paper is a continuation of the investigation of almost Prüfer v-multiplication domains (APVMDs) begun by Li [Algebra Colloq., to appear]. We show that an integral domain D is an APVMD if and only if D is a locally APVMD and D is well behaved. We also prove that D is an APVMD if and only if the integral closure D̅ of D is a PVMD, D ⊆ D̅ is a root extension and D is t-linked under D̅. We introduce the notion of an almost t-splitting set. $D^{(S)}$ denotes the ring $D + XD_S[X]$, where S is a multiplicatively closed subset of D. We show that the ring $D^{(S)}$ is an APVMD if and only if $D^{(S)}$ is well behaved, D and $D_S[X]$ are APVMDs, and S is an almost t-splitting set in D.
239-247
wydano
2010
### Twórcy
autor
• College of Computer Science and Technology, Southwest University for Nationalities, Chengdu 610041, P.R. China
|
2022-07-05 12:17:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7030583620071411, "perplexity": 2706.949682432815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00088.warc.gz"}
|
https://www.hackmath.net/en/math-problem/1445
|
# Minutes
Write as fraction in basic form which part of the week is 980 minutes.
Result
x = 7/72
#### Solution:
$x = \dfrac{ 980 }{ 7\cdot 24\cdot 60 } = \dfrac{ 980 }{ 10080 } = \dfrac{ 7 }{ 72 }$
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Need help calculate sum, simplify or multiply fractions? Try our fraction calculator.
Do you want to convert time units like minutes to seconds?
## Next similar math problems:
1. Zdeněk
Zdeněk picked up 15 l of water from a 100-liter full-water barrel. Write a fraction of what part of Zdeněk's water he picked.
2. Mixed2improper
Write the mixed number as an improper fraction. 166 2/3
3. Fraction and a decimal
Write as a fraction and a decimal. One and two plus three and five hundredths
4. Fraction to decimal
Write the fraction 3/22 as a decimal.
Added together and write as decimal number: LXVII + MLXIV
6. Product of two fractions
Product of two fractions is 9 3/5 . If one of the fraction is 9 3/7. Find the other fraction.
7. Chocolate
Children break chocolate first to third and then every part of another half. What kind got each child? Draw a picture. What part would have received if each piece have halved?
8. Cakes
On the bowl were a few cakes. Jane ate one-third of them, Dana ate a quarter of those cakes that remained. a) What part (of the original number of cakes) Dana ate? b) At least how many cakes could be (initially) on thebowl?
9. In fractions
An ant climbs 2/5 of the pole on the first hour and climbs 1/4 of the pole on the next hour. What part of the pole does the ant climb in two hours?
10. Fractions 4
How many 2/3s are in 6?
11. Pizza 5
You have 2/4 of a pizza and you want to share it equally between 2 people how much pizza does each person get?
12. Cake 7
1/3 of a cake shared with 4 people. What share of the whole cake has each people?
13. A baker
A baker has 5 1/4 pies in her shop. She cut the pies in pieces that are each 1/8 of a whole pie. How many pieces of pie does she have?
14. Lengths of the pool
Miguel swam 6 lengths of the pool. Mat swam 3 times as far as Miguel. Lionel swam 1/3 as far as Miguel. How many lengths did mat swim?
15. Math classification
In 3A class are 27 students. One-third got a B in math and the rest got A. How many students received a B in math?
16. The result
How many times I decrease the number 1632 to get the result 24?
17. Doctors
In the city operates 196 doctors. The city has 134456 citizens. How many citizens are per one doctor?
|
2020-02-27 15:12:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.620475172996521, "perplexity": 3584.7075147824157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146714.29/warc/CC-MAIN-20200227125512-20200227155512-00195.warc.gz"}
|
https://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/newtondirectory/solution01.html
|
SOLUTION 1: We are given the equation $x^3+x-5=0$, so let's define function $f(x) = x^3+x-5$, whose graph is given below.
The derivative of $f$ is $f'(x) = 3x^2+1$. Now use Newton's Method: $$x_{n+1} = x_{n} - { f(x_{n}) \over f'(x_{n}) } \ \ \ \ \longrightarrow$$ $$x_{n+1} = x_{n} - { x_{n}^3+x_{n}-5 \over 3x_{n}^2+1} \ \ \ \ \longrightarrow$$ (Let's simplify the right-hand side of this equation. First get a common denominator.) $$x_{n+1} = x_{n} \ { 3x_{n}^2+1 \over 3x_{n}^2+1 } - { x_{n}^3+x_{n}-5 \over 3x_{n}^2+1} \ \ \ \ \longrightarrow$$ $$x_{n+1} = { 3x_{n}^3+ x_{n} \over 3x_{n}^2+1 } - { x_{n}^3+x_{n}-5 \over 3x_{n}^2+1} \ \ \ \ \longrightarrow$$ $$x_{n+1} = { 3x_{n}^3+ x_{n} - ( x_{n}^3+x_{n}-5 ) \over 3x_{n}^2+1 } \ \ \ \ \longrightarrow$$ $$x_{n+1} = { 2x_{n}^3+5 \over 3x_{n}^2+1 }$$ a.) Let $x_{0}=0$. Then using Newton's Method formula we get that $$x_{1} = { 2x_{0}^3+5 \over 3x_{0}^2+1 } = { 2(0)^3+5 \over 3(0)^2+1 } = 5$$ and $$x_{2} = { 2x_{1}^3+5 \over 3x_{1}^2+1 } = { 2(5)^3+5 \over 3(5)^2+1 } = { 255 \over 76 } \approx 3.355263158$$ Using Newton's Method formula for 8 iterations in a spreadsheet results in :
Thus the solution $r$ to the original equation to five decimal places is $r \approx 1.51598$.
b.) Let $x_{0}=1$. Then using Newton's Method formula we get that $$x_{1} = { 2x_{0}^3+5 \over 3x_{0}^2+1 } = { 2(1)^3+5 \over 3(1)^2+1 } = { 7 \over 4} = 1.75$$ and $$x_{2} = { 2x_{1}^3+5 \over 3x_{1}^2+1 } = { 2(1.75)^3+5 \over 3(1.75)^2+1 } = { 1006 \over 652 } \approx 1.54294$$ Using Newton's Method formula for 5 iterations in a spreadsheet results in :
Thus the solution $r$ to the original equation to five decimal places is $r \approx 1.51598$.
c.) Let $x_{0}=-1$. Then using Newton's Method formula we get that $$x_{1} = { 2x_{0}^3+5 \over 3x_{0}^2+1 } = { 2(-1)^3+5 \over 3(-1)^2+1 } = { 3 \over 4 } =0.75$$ and $$x_{2} = { 2x_{1}^3+5 \over 3x_{1}^2+1 } = { 2(0.75)^3+5 \over 3(0.75)^2+1 } = { 374 \over 172 } \approx 2.17441$$ Using Newton's Method formula for 7 iterations in a spreadsheet results in :
Thus the solution $r$ to the original equation to five decimal places is $r \approx 1.51598$.
|
2022-05-21 07:00:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824098348617554, "perplexity": 171.32019655009006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00355.warc.gz"}
|
https://codeforces.com/problemset/problem/1783/G
|
time limit per test
6 seconds
memory limit per test
512 megabytes
input
standard input
output
standard output
You are given a tree of $n$ vertices and $n - 1$ edges. The $i$-th vertex has an initial weight $a_i$.
Let the distance $d_v(u)$ from vertex $v$ to vertex $u$ be the number of edges on the path from $v$ to $u$. Note that $d_v(u) = d_u(v)$ and $d_v(v) = 0$.
Let the weighted distance $w_v(u)$ from $v$ to $u$ be $w_v(u) = d_v(u) + a_u$. Note that $w_v(v) = a_v$ and $w_v(u) \neq w_u(v)$ if $a_u \neq a_v$.
Analogically to usual distance, let's define the eccentricity $e(v)$ of vertex $v$ as the greatest weighted distance from $v$ to any other vertex (including $v$ itself), or $e(v) = \max\limits_{1 \le u \le n}{w_v(u)}$.
Finally, let's define the radius $r$ of the tree as the minimum eccentricity of any vertex, or $r = \min\limits_{1 \le v \le n}{e(v)}$.
You need to perform $m$ queries of the following form:
• $v_j$ $x_j$ — assign $a_{v_j} = x_j$.
After performing each query, print the radius $r$ of the current tree.
Input
The first line contains the single integer $n$ ($2 \le n \le 2 \cdot 10^5$) — the number of vertices in the tree.
The second line contains $n$ integers $a_1, \dots, a_n$ ($0 \le a_i \le 10^6$) — the initial weights of vertices.
Next $n - 1$ lines contain edges of tree. The $i$-th line contains two integers $u_i$ and $v_i$ ($1 \le u_i, v_i \le n$; $u_i \neq v_i$) — the corresponding edge. The given edges form a tree.
The next line contains the single integer $m$ ($1 \le m \le 10^5$) — the number of queries.
Next $m$ lines contain queries — one query per line. The $j$-th query contains two integers $v_j$ and $x_j$ ($1 \le v_j \le n$; $0 \le x_j \le 10^6$) — a vertex and it's new weight.
Output
Print $m$ integers — the radius $r$ of the tree after performing each query.
Example
Input
6
1 3 3 7 0 1
2 1
1 3
1 4
5 4
4 6
5
4 7
4 0
2 5
5 10
5 5
Output
7
4
5
10
7
Note
After the first query, you have the following tree:
The marked vertex in the picture is the vertex with minimum $e(v)$, or $r = e(4) = 7$. The eccentricities of the other vertices are the following: $e(1) = 8$, $e(2) = 9$, $e(3) = 9$, $e(5) = 8$, $e(6) = 8$.
The tree after the second query:
The radius $r = e(1) = 4$.
After the third query, the radius $r = e(2) = 5$:
|
2023-02-08 17:58:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467977046966553, "perplexity": 361.79171277047675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00670.warc.gz"}
|
http://mathoverflow.net/questions/71337/definable-wellordering-of-the-reals/71356
|
# Definable Wellordering of the Reals
Why are we interested in definable wellordering of the reals? For instance, we have
1. Con(ZFC) $\Rightarrow$ Con(ZFC + there is a $\Delta^1_2$-wellordering of $\mathbb{R}$),
2. Con(ZFC + there is a measurable cardinal) $\Rightarrow$ Con(ZFC + there is a measurable cardinal+there is a $\Delta^1_3$-wellordering of $\mathbb{R}$).
-
## 2 Answers
Several objects can be defined from a wellordering of the reals. Nonprincipal ultrafilters on the natural numbers, non-measurable sets of reals, sets of reals without the property of Baire, and so on.
Knowing how complicated wellorderings of the reals are tells us how complicated these objects are.
On the other hand, knowing the minimal complexity of a wellordering of the reals tells us something about the universe of set theory we are living in. In $L$, Goedel's constructible universe, there is a wellordering of the reals of minimal complexity, namely $\Delta^1_2$.
Assuming more and more large cardinals, we have more and regularity properties for projective sets of reals, and since a wellordering of the reals can be used to construct pathological sets of reals (as above), large cardinals imply that there is no easily definable wellordering of the reals. For example, a measurable cardinal implies that there is no $\Delta^1_2$ wellordering, but is consistent with a $\Delta^1_3$ wellordering of the reals.
So in some sense, the minimal complexity of a wellordering of the reals in a given universe of set theory is a test question to gauge the pathologies that definable sets of reals can exhibit in this universe.
-
This is a very interesting topic! There are two ways to address the question. They are different, so I treat them separately.
I.
As mentioned by Stefan, knowing that we have a well-ordering of certain complexity gives us a bound on the extent of regularity properties.
For example, the complexity of well-orderings gives us an upper bound on the complexity of pointclasses for which determinacy holds. Abstractly, determinacy of nice pointclasses implies that their members are Lebesgue measurable, etc, so it provides us with a precise framework to talk about "regularity" of sets of reals.
Moreover, in the context of fine structural inner models, the complexity of a well-ordering is really a reflection of the complexity of the underlying "comparison process" used to build the model. We are interested in this complexity as it gives us an upper bound on the kind of reals we can expect to see in these models.
To be more concrete, think of $L$ and its well-ordering: We say that $x\lt y$ (for $x,y$ reals), iff either $x$ appears first (i.e., there is an $\alpha$ such that $x\in L_\alpha$ but $y\notin L_\alpha$), or they appear at the same time, but $x$ is "simpler" (measured by, say, an ordering of formulas and parameters). This ordering is $\Sigma^1_2$ because the models $L_\alpha$ are simple to code by reals, essentially all we say is that $r$ codes a model $(A,E)$ and $E$ is well-founded.
When we go from $L$ to, say $L[\mu]$, it is more complicated to compare the levels where $x$ and $y$ appear. Now we have two set models $L_\alpha[U_1]$ and $L_\beta[U_2]$ and we iterate their respective measures until we reach a model and one of its initial segments. The complexity of describing this is greater than in the case of $L$, as now we need to talk about well-foundedness of the iteration as well as of the relevant models. This complexity increases with the models under consideration, as the iterations become more and more complex (iteration trees).
A good discussion of these ideas can be seen in the paper by Martin and Steel, "Iteration trees." J. Amer. Math. Soc. 7 (1994), no. 1, 1–73. It is also treated at a higher level in Steel's Handbook article.
The reason why the complexity of iterations bounds the complexity of the reals we can obtain is a folklore result in inner model theory. Rather than stating the technical fact, let me illustrate with an example: We can identify the real $0^\sharp$ with a model $(L_\alpha,U)$. This model is countable, and pointwise definable. This translates without much effort in the fact that if $y$ is a real in $L$, then $y\le_T 0^\sharp$, where $\le_T$ is the Turing-reducibility order (essentially, because in fact $y\in L_\alpha$). Hence, if a real is more complex (in the Turing sense) than $0^\sharp$, it cannot be in $L$. It also shows that (if $0^\sharp$ exists) then ${\mathbb R}\cap L$ is countable.
$0^\sharp$ is an example of what we call mice. In a sense, the more complex the mouse, the more reals it contains. If a mouse $m$ is so complex that it contains all reals of certain complexity $\Gamma$, and if the comparison process for a fine-structural model $M$ can be coded in $\Gamma$, then we have a concrete example (namely, $m$) for a real not in $M$ and, in fact, we get that all reals in $M$ are Turing reducible to $m$.
(The ultimate expression of these ideas is the so-called mouse set conjecture, but it would take us far off topic to discuss it here properly.)
II.
There is another reason for being interested in simple well-orderings. This reason appears in practice, and is not guided by fine-structural considerations or by trying to limit the extent of regularity properties.
Typically we are interested in strengthenings of the axioms of set theory by axioms that provide us with "combinatorial tools." Examples of the principles I have in mind include forcing axioms (Martin's Axiom, BPFA, MM, ...), the covering property axiom, real-valued measurability of some cardinal $\kappa\le{\mathfrak c}$, etc.
The combinatorial niceness that these axioms provide usually is a hindrance when it comes to defining well-orderings in simple ways. The reason is that typical coding tools we would use in such a definition are ruled out by the combinatorial principles. I present several examples of this in my paper "Real-valued measurable cardinals and well-orderings of the reals," available at this link, in a section titled "Anticoding results".
It is therefore an interesting technical problem to see whether we can circumvent these obstacles and still obtain (consistent) simple well-orderings. Usually we are not so interested in well-orderings per se, but rather in the possibility of developing coding tools. Typically, we can code arbitrary sets of reals just as well as we can code well-orderings (this, in turn, can be seen as an anti-compactness result).
This line of work, in the context of Martin's axiom, was started by Solovay, and developed by Abraham and Shelah in a nice series of papers:
• "A $\Delta^2_2$ well-order of the reals and incompactness of $L(Q^{MM})$." Ann. Pure Appl. Logic 59 (1993), no. 1, 1–32.
• "Martin's axiom and $\Delta^2_1$ well-ordering of the reals." Arch. Math. Logic 35 (1996), no. 5-6, 287–298.
• "Coding with ladders a well ordering of the reals." J. Symbolic Logic 67 (2002), no. 2, 579–597.
(I particularly recommend the introduction to the first paper in the series.)
I have worked on this problem of coding in the context of forcing axioms, and the surprise here is that strong forcing axioms actually provide us with simple definitions of well-orderings (not just consistently). For example:
• Sy Friedman and I showed that if BPFA holds and, say, $\omega_1=\omega_1^L$, then there is a $\Sigma^1_3$ well-ordering.
• Velickovic and I showed that if BPFA holds and $C$ is a ladder sequence on $\omega_1$, then there is a $\Delta_1$ well-ordering of the reals in parameter $C$.
The context here differs from that of the first part of the answer in several ways. For example, we tend not to be interested in projective well-orderings any longer, as decent forcing axioms imply AD${}^{L({\mathbb R})}$ and therefore prevent the existence of such orderings. Also, although we may (and do) ask about third-order definable well-orderings, we are actually more interested in definability over $H(\omega_2)$, as we expect to define not just a well-ordering of the reals but of all of ${\mathcal P}(\omega_1)$.
-
|
2016-02-10 04:52:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9124383330345154, "perplexity": 332.9062490764681}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00005-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://grameng.com/1040t6/ex-ante-tracking-error-96c8ed
|
Island Hunters Chilean Patagonia, Taurus Replacement Magazine Springs, Department Of Justice Internships 2020, Mumbai University Kalina Guest House, What Replaces The Electrons Excited By Sunlight In Photosystem I?, Without Any Contamination Crossword Clue, 1966 And 1967 Ford Fairlane For Sale, Houses For Rent Pearl, Ms, " /> Island Hunters Chilean Patagonia, Taurus Replacement Magazine Springs, Department Of Justice Internships 2020, Mumbai University Kalina Guest House, What Replaces The Electrons Excited By Sunlight In Photosystem I?, Without Any Contamination Crossword Clue, 1966 And 1967 Ford Fairlane For Sale, Houses For Rent Pearl, Ms, " />
# ex ante tracking error
The model calculates an estimate of future tracking error based on how the holdings have moved in the … No changes to • An ex-ante portfolio (or trade) evaluation system. to be: mixed-integer quadratic programming (MIQP), Tracking error: A hidden cost of passive investing, https://en.wikipedia.org/w/index.php?title=Tracking_error&oldid=994239264, Creative Commons Attribution-ShareAlike License, This page was last edited on 14 December 2020, at 19:15. who need to evaluate investment managers. Tracking error is a commonly used gauge of benchmark risk, and is closely associated with excess return. For example, an • Not an ex-post portfolio decomposition/reporting system. experience. Learn more in our, Comparing Ex-Ante Tracking Error Estimates across Time, Ethics for the Investment Management Profession, Code of Ethics and Standards of Professional Conduct, Comparing Ex-Ante Tracking Error Estimates across Time (Digest Summary). Should the calculation be based on estimates for future returns, the resulting figure is known as an ex-ante error. So, I didn't check the calculations, but here is some high-level help (I hope): TRACKING ERROR » Fully transparent fundamental risk factor models Sandra We were not able to record your PL credits. Since it is almost impossible to have a TE of zero, managers will try to minimize it. {\displaystyle \omega ^{2}} to the S&P 500 and the MSCI All Country World ex-US indices. p In a factor model of a portfolio, the non-systematic risk (i.e., the standard deviation of the residuals) is called "tracking error" in the investment field. ω These types of risk models should not be used on their own but should be Introduction The asset management industry has been changed by the economic crisis. If a model is used to predict tracking error, it is called 'ex ante' tracking error. Please try again. Many portfolios are managed to a benchmark, typically an index. This research could benefit board members, consultants, and risk and compliance managers In finance, tracking error or active risk is a measure of the risk in an investment portfolio that is due to active management decisions made by the portfolio manager; it indicates how closely a portfolio follows the index to which it is benchmarked. Generally, the information ratio compares the returns of the manager's portfolio with those of a benchmark such as the yield on three-month Treasury bills or an equity index such as the S&P 500. straints generally tend to lower ex ante IR, as given in Eq. tracking error range. Email *. Spring Otherwise, you are agreeing to our use of cookies. in significance depending on recent market movements, and the portfolio’s riskiness The sums of both $w_p$ (the vector of portfolio weights) and $w_b$ (the vector of benchmark weights) are set to 1 and they share as many rows as $C$. Some portfolios are expected to replicate, before trading and other costs, the returns of an index exactly (e.g., an index fund), while others are expected to 'actively manage' the portfolio by deviating slightly from the index in order to generate active returns. with two different risk models for quarter-end periods. securities are made from period to period. 5%. E. Privacy Settings, CFA Institute Journal Review When comparing funds, choose the lowest tracking error possible. past and the exposure to common factors. A high tracking error denotes that active return is volatile and that the portfolio strategy is thus riskier. This type of constraint could cause the portfolio to trade or rebalance simply to adjust for These predictions change significantly depending on the time period of measurement and do CFA Institute, Riddles Manage your Professional Learning credits, Published by Journal of Performance Measurement, Summarized by 4 1. capitalization of $2 billion or more. Dividing portfolio active return by portfolio tracking error gives the information ratio, which is a risk adjusted performance measure. Krueger is the active return, i.e., the difference between the portfolio return and the benchmark return. 3 The review ex ante and ex post TE and (brief... Tracking error (TE) is the standard deviation of the difference between portfolio returns and benchmark returns. error for common factors and specific factors. 2 If you need immediate assistance, call 877-SSRNHelp (877 777 6435) in the United States, or +1 212 448 2500 outside of the United States, 8:30AM to 6:00PM U.S. Eastern, Monday - Friday. refresher on the use of tracking error. The tracking error predictions of risk models are swayed by recent market conditions. Equation (1) hinges on another simplified assump- ... estimate with seemingly equal probability ex post tracking errors of simulated portfolios.There is no clear evidence of bias one way or the other. Name *. tracking error has doubled from 2% to 4%, the portfolio has become more active. not properly capture the absolute level of a portfolio’s active risk. (1). © 2021 CFA Institute. You can construct a forecast covariance matrix from realized covariances if you think historical relationships will persist, or you use other methods, for example factor models. this made it easy for me to look at the chapters i was having trouble with (basically everything lol). • Use to structure efficient, active portfolios • Relate risk factors to portfolio returns and use this ... (“Tracking Error”) because: • Monte Carlo approach provided best estimate of A tracking error a… Tracking Error: Ex-Ante versus Ex-Post Measures Soosung Hwang Department of Banking and Finance, City University Business School, UK and Stephen E. Satchell* Faculty of Economics and Politics and Trinity College, Cambridge University, UK Abstract In this paper we show that ex-ante and ex-post tracking errors must Tracking error is a measure of the deviation from the benchmark; the aforementioned index fund would have a tracking error close to zero, while an actively managed portfolio would normally have a higher tracking error. Specifically, if denotes the n ncovariance matrix of asset returns, then the tracking error is (2) As a result, managers usually use an ex ante tracking error estimate produced by an equity risk model. Great post, as usual. b Like so many investing terms, there are many different versions of tracking error and ways of calculating it. fundamental risk models with such common factors as valuation, momentum, growth, and so The best measure is the standard deviation of the difference between the portfolio and index returns. All Rights Reserved. February 2016 r One tracker might be further away from the index at a given moment in time, because of its replication strategy, but might be closer to the index on average over a period of time. Thus, portfolio bets relative to the index change Vol. calculates an estimate of future tracking error based on how the holdings have moved in the Ex-post tracking error is more useful for reporting performance, whereas ex-ante tracking error is generally used by portfolio managers to control risk. The ‘Expected Value’ entry made the claim that buying a lottery ticket was a bad idea, but I never specified the point at which you were deciding it was irrational. Ex … Then the quantity you require is$\sqrt{w^{T}Cw}$, where$w$is a vector of excess weights relative to the benchmark. CFA. It’s therefore often difficult to be sure that two different sources are talking about the same measure. Technically, tracking error is the annualized standard deviation of … included with other tools to understand portfolio structure. Clarke, R. G., Krase, S. and Statman, M. (1994) ‘Tracking Errors, Regret and Tactical Asset Allocation’, Journal of Portfolio Management, Spring, 16–24. Save my name, email, and website in this browser for the next time I comment. Although the portfolio remains the same, the increased tracking error might cause a The problem is that, for both of these terms (relative VaR and tracking error), people can have different definitions. The latter way to compute the tracking error complements the formulas below but results can vary (sometimes by a factor of 2). A portfolio’s excess returns are simply the absolute difference between a portfolio’s return and that of its benchmark. Contact us if you continue to see this message. Thus the tracking error does not include any risk (return) that is merely a function of the market's movement. Otherwise, you are agreeing to our use of cookies. Issue 2, Neil To control risk, institutional investors often require their managers to invest within a tracking error range. Problems arise when practitioners use ex Tracking error is the standard deviation of the active return, as computed from an historical sample for ex postanalysis or an appropriate model for ex anteanalysis. professionals do not appreciate that estimates of risk can change based on recent market Ex-post tracking error is more useful for reporting performance, whereas ex-ante tracking error is generally used by portfolio managers to control risk. Table 1 shows average, standard deviation, minimum, and maximum Active Share for each portfolio vs. their respective Each stock is alternatively assigned to the portfolio I'm trying to calculate the difference between daily returns on different stocks vs their benchmarks. The latter way to compute the tracking error complements the formulas below but results can vary (sometimes by a factor of 2). The author identifies the problems that occur when [1] The optimization problem of maximizing the return, subject to tracking error and linear constraints, may be solved using second-order cone programming: Under the assumption of normality of returns, an active risk of x per cent would mean that approximately 2/3 of the portfolio’s active returns (one standard deviation from the mean) can be expected to fall between +x and -x per cent of the mean excess return and about 95% of the portfolio's active returns (two standard deviations from the mean) can be expected to fall between +2x and -2x per cent of the mean excess return. The ex-post tracking error formula is the standard deviation of the active returns, given by: where They are systematic drivers of portfolio risk and return and at the heart of risk management tools. The model Tracking error is a measure to find out how much the return of a portfolio or a mutual fund deviates from the return of an index it is trying to replicate in terms of the components of an index and also in the term of the return of that index. We’re using cookies, but you can turn them off in Privacy Settings. To control risk, institutional investors often require their managers to invest within a Tracking error Enrich your investment strategy by understanding the risk factors that affect your portfolio versus a benchmark. The author graphically compares quarterly Analytics help us understand how the site is used, and which pages are the most popular. Volume 46 In a factor model of a portfolio, the non-systematic risk (i.e., the standard deviation of the residuals) is called "tracking error" in the investment field. Functional cookies, which are necessary for basic site functionality like keeping you logged in, are always enabled. As a matter of fact, risk management has gained much importance and has been put at the core of Allow analytics tracking. A contribution to "non-normality" can be derived. This is a useful framework because people often conflate the two in their reasoning. Website. The service provides the ex-ante estimate of all financial portfolio risk indicators, such as VaR, CVaR, Relative VaR, Tracking Error, Volatility, Downside [...] Volatility, Shortfall probability and Default Risk on whatever time horizons chosen. tracking error over time is dramatic, ranging from a low of around 2% to a high of more than Comparing Ex-Ante Tracking Error Estimates across Time relative to a benchmark cannot be effectively monitored. If tracking error is measured historically, it is called 'realized' or 'ex post' tracking error. As your article does. Had a test on actuarial science coming up and was dead on all the concepts (had to start from ground zero). This research is significant for investment adviser contracts that require that a your ex-ante risk management approach by leveraging Bloomberg’s multi-factor risk models fully integrated with historical performance attribution ... including tracking error, stress testing, and VaR. By using this form you agree with the storage and handling of your data by this website. In particular, ex-post tracking error is always larger than ex-ante tracking error. When that is the case, the result is known as an ex-post error. I have a a sheet with a list of stocks and their benchmarks, the next sheet has the daily returns of the shares, the next is the daily returns of the benchmarks. came across the channel as it had small bits of FM chapters consolidated by the professor Stephen paris. This is different from the so-called ex ante tracking error, which is the amount of TE that the manager allows when running the portfolio. portfolio’s ex ante tracking error remain within a specific range. If a model is used to predict tracking error, it is called 'ex ante' tracking error. Tracking error is one of the most important measures used to assess the performance of a portfolio, as well as the ability of a portfolio manager to generate excessive returns and beat the market or the benchmark. I wonder how significant the differences would be from looking at different data points. The author’s research suggests that because the models are based on historical data, If tracking error is measured historically, it is called 'realized' or 'ex post' tracking error. ante tracking error estimate produced by an equity risk model. The most common formula for the ex-ante tracking error is$\sqrt{w^{T}Cw}$, where$w$is a vector of excess weights relative to the benchmark and$C$a forecast of covariance matrix. The opposite of ex ante is ex post, which means after the event. 2015 practitioner to assume that the portfolio had taken on a lot more bets than the target. market. 01 Mar In this paper we show that ex-ante and ex-post tracking errors must necessarily differ, since portfolio weights are ex-post stochastic in nature. Learn more in our Privacy Policy. Contributions to Ex Ante Volatility, Normal VaR & CVaR, Modified VaR & CVaR - Marginal, component and percentage contributions to Volatility, Normal VaR/CVaR as well as Modified VaR/CVaR. Such a The author creates the portfolio and target portfolios using stocks with a market or the target, with the resulting portfolios containing 449 securities each. Tracking error is the divergence between the price behavior of a position or a portfolio and the price behavior of a benchmark. There are both ex ante (expected) and ex post (observed) information ratios. A more recent offering is the introduction of smart beta or factor-investing techniques to the allocation decision of many institutions [1]. A high tracking error denotes that active return is volatile and that That said, a reasonably common definition of tracking error is: Tracking error = the standard deviation of returns relative to the returns of the index. Various types of ex-ante tracking error models exist, from simple equity models which use beta as a primary determinant to more complicated multi-factor fixed income models. Various types of ex-ante tracking error models exist, from simple equity models which use beta as a primary determinant to more complicated multi-factor fixed income models. {\displaystyle r_{p}-r_{b}} Journal of Performance Measurement they produce changing estimates of risk. forth. In calculating a tracking error, the data used may be historical in nature. Due to the abovementioned reasons, it is used as an input to calculate the information ratioInformation RatioThe information ratio measures the risk-adjusted returns of a financial asset or portfolio relative to a certain benchmark. Ex-ante analysis in financial markets refers to prediction of various indicators, economic and financial, by evaluating past and present data and parameters. A small level of TE is not necessarily a bad thing, since it may be too costly to avoid it. − a changing estimate based on recent market conditions. RiddlesCFA, CIPM This well-written, short article is ideal for an analyst or consultant who may need a Index funds are expected to minimize the tracking error with respect to the index they are attempting to replicate, and this problem may be solved using standard optimization techniques. In addition to risk (return) from specific stock selection or industry and factor "betas", it can also include risk (return) from market timing decisions. the portfolio strategy is thus riskier. r N No. The lower the tracking error, the more faithfully the fund is matching its index. To begin, define predicted tracking error versus time for each model from 2007 to 2013, along with tracking Our results imply that fund managers always have a higher ex-post tracking ante (predicted) tracking error as an absolute risk measure. 7, We’re using cookies, but you can turn them off in Privacy Settings. 19 The graphic display of changes of predicted analyst is likely making an incorrect conclusion if he assumes that because the forecasted Both risk models are global Home; Random; Log in; Settings; About Bogleheads® No guarantees are made as to the accuracy of the information on this site or the appropriateness of any advice to your particular situation. ( sometimes by a factor of 2 ) often require their managers to control risk and the behavior. 500 and the MSCI all Country World ex-US indices we were not able record! Small bits of FM chapters consolidated by the professor Stephen paris portfolio weights are ex-post in. Economic crisis for the next time i comment on a portfolio ’ return... The MSCI all Country World ex-US indices to predict tracking error complements the formulas ex ante tracking error but results can (... Not necessarily a bad thing, since portfolio weights are ex-post stochastic nature! To period a function of the market actuarial science coming up and was dead on all the concepts ( to. Or consultant who may need a refresher on the use of cookies more useful for performance!, since portfolio weights are ex-post stochastic in nature ground zero ) this. More recent offering is the introduction of smart beta or factor-investing techniques to s... Contribution to non-normality '' can be derived are ex-post stochastic in nature evaluate investment.! Risk adjusted performance measure should be included with other tools to understand portfolio structure the model s. Look at the heart of risk management tools portfolio managers to control risk channel as it had small of! Risk and compliance managers who need to evaluate investment managers concepts ( had to from... Whereas ex-ante tracking error does not include any risk ( return ) that is the divergence between the price of! That is merely a function of the difference between the portfolio strategy is thus riskier index... Many portfolios are managed to a benchmark the lower the tracking error complements the below. Handling of your data by this website of movement in the model ’ s research suggests that the. To record your PL credits ( sometimes by a factor of 2 ) can be.. Risk within the market this browser for the next time i comment data may! Choose the lowest tracking error is more useful for reporting performance, whereas ex-ante tracking error it! A market capitalization of$ 2 billion or more calculation be based on estimates for future returns, result. A… ex ante tracking error of large pension funds or insurance companies should care about factors high error! Contribution to non-normality '' can be derived show that ex-ante and ex-post tracking error is more useful reporting! Return by portfolio managers to control risk, institutional investors often require their managers to invest within a specific.... Forecast covariance matrix $C$ allocation decision of many institutions [ 1 ] because people conflate... Necessarily a bad thing, since it may be historical in nature portfolio structure,., since it is called 'ex ante ' tracking error a… investors of large pension funds insurance... Estimates for future returns, the resulting portfolios containing 449 securities each error a… investors of large funds... Market 's movement target portfolios using stocks with a market capitalization of $2 billion or.. Recent offering is the divergence between the portfolio to trade or rebalance simply to adjust for a changing based... All the concepts ( had to start from ground zero ) using form. The event of large pension funds or insurance companies should care about factors remain within specific! Investors of large pension funds or insurance companies should care about factors of constraint could cause portfolio. Returns, the result of movement in the model ’ s view of within. Such a change ex ante tracking error often the result is known as an ex-ante error in Eq predicted ) tracking,! Is merely a function of the market 's movement, but you can turn them off Privacy. Figure is known as an ex-ante error are always enabled ’ s view of risk are. Be historical in nature if a model is used to predict tracking error appreciate that of..., the more faithfully the fund is matching its index ex-ante error be sure that two risk. Or rebalance simply to adjust for a changing estimate based on historical,! Save my name, email, and website in this browser for the time... Weights are ex-post stochastic in nature than a target, with two different sources talking. Used may be historical in nature for future returns, the resulting portfolios containing 449 securities each on. Known as an ex-post ex ante tracking error any risk ( return ) that is a! On actuarial science coming up and was dead on all the concepts ( had to start from zero..., managers usually use an ex ante ( predicted ) tracking error is used... But results can vary ( sometimes by a factor of 2 ), article. You logged in, are always enabled no changes to securities are made from to! That of its benchmark analyst or consultant who may need a forecast covariance$. Will try to minimize it be too costly to avoid it author ’ s suggests! People often conflate the two in their reasoning portfolio and the price behavior of a position a... Often difficult to be sure that two different risk models are swayed by recent market experience other tools to portfolio! Both ex ante tracking error denotes that active return is volatile and that of its benchmark how. Pl credits on estimates for future returns, the resulting figure is known an... Had a test on actuarial science coming up and was dead on all the concepts had! Management tools ex-ante and ex-post tracking error is more useful for reporting performance, whereas ex ante tracking error tracking error produced! The result is known as an ex-ante error error does not include any risk return... In calculating a tracking error complements the formulas below but results can (. Conflate ex ante tracking error two in their reasoning model ’ s research suggests that because the are... Up and was dead on all the concepts ( had to start ground! Of calculating it show that ex-ante and ex-post tracking error denotes that active return portfolio..., the more faithfully the fund is matching its index always larger than ex-ante tracking is... Used may be historical in nature and return and at the heart of risk management tools portfolio return! No changes to securities are made from period to period an ex ante,. Results can vary ( sometimes by a factor of 2 ) $C.! Comparing funds, choose the lowest tracking error a bad thing, portfolio! Learn how this information is used, and so forth that occur when professionals do not that... From ground zero ) you logged in, are always enabled investing,!, choose the lowest tracking error, the more faithfully the fund is matching its index help. The asset management industry has been changed by the economic crisis ( expected ) and post. Versions of tracking error complements the formulas below but results can vary ( sometimes a. Chapters consolidated by the economic crisis its benchmark should care about factors differences would be from looking different. In nature level of TE is not necessarily a bad thing, since it is impossible! Portfolio tracking error predictions of risk management tools that estimates of risk models should not be on! Was having trouble with ( basically everything lol ) different data points and index returns ante is ex post which. Used may be too costly to avoid it to the allocation decision of many [... Has been changed by the professor Stephen paris, managers will try to minimize it risk models should not used! Covariance matrix$ C \$ managed to a benchmark how this information is used to predict error! Spring 7, we ’ re using cookies, which means after the event almost! Ex-Post tracking error complements the formulas below but results can vary ( by... Versions of tracking error complements the formulas below but results can vary ( sometimes by a factor of ). How this information is used to predict tracking error, you are agreeing to our of! Absolute difference between the portfolio strategy is thus riskier errors must necessarily differ since... Portfolios containing 449 securities each are based on estimates for future returns, the faithfully! As given in Eq simply to adjust for a changing estimate based on recent market experience contracts! Techniques to the allocation decision of many institutions [ 1 ] refresher on the use of cookies,! The Privacy Policy to learn how this information is used to predict tracking remain! Article is ideal for an analyst or consultant who may need a refresher on the use of tracking error.... Introduction of smart beta or factor-investing techniques to the portfolio and index returns ( relative VaR tracking! Analyst or consultant who may need a refresher on the use of tracking error gives information... Could benefit board members, consultants, and so forth error range is ideal for an analyst or who. To evaluate investment managers funds, choose the lowest tracking error, the data used may be in. The lower the tracking error is always larger than ex-ante tracking error range heart. Billion or more be derived a risk adjusted performance measure opposite of ex IR! Changes to securities are made from period to period ground zero ) refresher on the use of tracking error the! Across the channel as it had small bits of FM chapters consolidated by the Stephen! At the heart of risk, whereas ex-ante tracking error is generally used by portfolio managers control... Policy to learn how this information is used off in Privacy Settings a…. Many investing terms, there are many different versions of tracking error always!
|
2021-08-05 09:13:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2873516380786896, "perplexity": 3420.853854095869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00491.warc.gz"}
|
https://socratic.org/questions/54b17273581e2a2c83c2bf1c
|
# A sphere made out of a material that has a density of "2.7 g cm"^(-3) has a mass of "82 g". What is the radius of the sphere?
Jan 10, 2015
$\mathrm{de} n s i t y = \frac{m a s s}{v o l u m e}$
$V o l u m e = \frac{m a s s}{\mathrm{de} n s i t y}$
$V o l u m e = \frac{82}{2.7} = 30.37 c {m}^{3}$
The volume of a sphere is given by:
$V = \frac{4}{3} \pi {r}^{3}$
So ${r}^{3} = \frac{3 V}{4 \pi} = \frac{3 \times 30.37}{4 \times 3.142} = 1.274$
$r = 1.084 c m$
|
2019-11-20 19:03:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357965230941772, "perplexity": 703.6133238431792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00463.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/sensored-bldc-start-up-failure.44632/
|
# sensored bldc start up failure
#### etiquoe
Joined Jul 11, 2010
27
hi,
i'm working on my bldc motor driver.
i'm using 3 hall sensors to detect rotor position.
i don't know what's the problem.
when it failed to start up, the hall sensors signal is detected, and the microcontroller provide the right sequence control signal, but the power inverter output signal give wrong result.
enyone?
#### blueroomelectronics
Joined Jul 22, 2007
1,757
#### etiquoe
Joined Jul 11, 2010
27
this is the schematic of the power inverter.
High side and low side input signal is provided from microcontroller based on hall sensor states
#### Attachments
• 261.2 KB Views: 71
#### punisher454
Joined Jun 29, 2009
16
I'm not positive (still trying to get my 3-phase working) but I think you may need to pre charge the bootstrap caps before they will fire the first time. possibly try to charge the cap for start-up by turning on the low side first.
#### thatoneguy
Joined Feb 19, 2009
6,359
It seems to be a software issue, but there isn't any source code, and I don't see the hall effect sensors in the schematic.
I'm guessing port initialization or other startup tasks that may not be completed due to a typo or oversight.
#### etiquoe
Joined Jul 11, 2010
27
I'm not positive (still trying to get my 3-phase working) but I think you may need to pre charge the bootstrap caps before they will fire the first time. possibly try to charge the cap for start-up by turning on the low side first.
yup, i do charge the cap before begin to start it up
#### punisher454
Joined Jun 29, 2009
16
Can you post the code?
#### etiquoe
Joined Jul 11, 2010
27
Can you post the code?
i haven't used pwm yet.
i just try to make the motor rotate in CW and CCW
this is the code:
#### Attachments
• 2.9 KB Views: 43
#### punisher454
Joined Jun 29, 2009
16
I would try to get your pwm working. But first I'd suggest posting your code and schematic over at rcgroups, that seems to be the #1 bldc knowledge base.
#### thatoneguy
Joined Feb 19, 2009
6,359
Took a quick glance, didn't simulate it though.
Rich (BB code):
void first_init (void)
{
PORT_PHASE = 0b00010101;
}
This is called in the main routine. Doesn't this set all the coils to "ON" on the motor, preventing it from turning?
#### etiquoe
Joined Jul 11, 2010
27
Took a quick glance, didn't simulate it though.
Rich (BB code):
void first_init (void)
{
PORT_PHASE = 0b00010101;
}
This is called in the main routine. Doesn't this set all the coils to "ON" on the motor, preventing it from turning?
this is to make the bootstrap capacitor charged. as punisher454 mentioned, the bootstrap caps need to be pre-charged first before they work.
#### etiquoe
Joined Jul 11, 2010
27
i tried to increase the bootstrap caps pre-charge period, from 100 us to 250 us. and the start up problem was solved
#### etiquoe
Joined Jul 11, 2010
27
I would try to get your pwm working. But first I'd suggest posting your code and schematic over at rcgroups, that seems to be the #1 bldc knowledge base.
about the pwm, i haven't got any idea yet about using which one pwm scheme. still need much information about each pwm schemes. i haven't found any source which desribes about the pwm schemes in detail. So, it will be so helpful if you have more information about it
#### punisher454
Joined Jun 29, 2009
16
Ok I'm no BLDC expert at all, in fact mine wont be working untill I get a parts shipment later this week.
But, from what I have researched so far it looks like you can get away with just switching the high side fets on/off and applying the PWM signal to the low side only.
Variable braking can be accomplished by applying a PWM signal to all 3 low side coils at once.
You can also reduce the current draw and smooth the motor rotation by turning on the next fet in your commutation table BEFORE turning off the current one. You'd probably have to set up a timer to control how much overlap you get. Basically you'd be smoothly transitioning to the next phase rather than abruptly switching.
#### etiquoe
Joined Jul 11, 2010
27
Ok I'm no BLDC expert at all, in fact mine wont be working untill I get a parts shipment later this week.
But, from what I have researched so far it looks like you can get away with just switching the high side fets on/off and applying the PWM signal to the low side only.
Variable braking can be accomplished by applying a PWM signal to all 3 low side coils at once.
You can also reduce the current draw and smooth the motor rotation by turning on the next fet in your commutation table BEFORE turning off the current one. You'd probably have to set up a timer to control how much overlap you get. Basically you'd be smoothly transitioning to the next phase rather than abruptly switching.
|
2022-11-28 10:44:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4729444086551666, "perplexity": 3023.8652340713093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00198.warc.gz"}
|
https://pypi.org/project/Products.StandardCacheManagers/4.0.0/
|
Cache managers for Zope 2.
## Overview
This package provides two cache managers for Zope 2. A RAMCacheManager and an Accelerated HTTP cache manager, which adds HTTP cache headers to responses.
The following is intended for people interested in the internals of RAMCacheManager, such as maintainers.
## Introduction
The caching framework does not interpret the data in any way, it acts just as a general storage for data passed to it. It tries to check if the data is pickleable though. IOW, only pickleable data is cacheable.
The idea behind the RAMCacheManager is that it should be shared between threads, so that the same objects are not cached in each thread. This is achieved by storing the cache data structure itself as a module level variable (RAMCacheManager.caches). This, of course, requires locking on modifications of that data structure.
Each RAMCacheManager instance has one cache in RAMCacheManager.caches dictionary. A unique __cacheid is generated when creating a cache manager and it’s used as a key for caches.
## Object Hierarchy
RAMCacheManager
RAMCache
ObjectCacheEntries
CacheEntry
RAMCacheManager is a persistent placeful object. It is assigned a unique __cacheid on its creation. It is then used as a key to look up the corresponding RAMCache object in the global caches dictionary. So, each RAMCacheManager has a single RAMCache related to it.
RAMCache is a volatile cache, unique for each RAMCacheManager. It is shared among threads and does all the locking. It has a writelock. No locking is done on reading though. RAMCache keeps a dictionary of ObjectCacheEntries indexed by the physical path of a cached object.
ObjectCacheEntries is a container for cached values for a single object. The values in it are indexed by a tuple of a view_name, interesting request variables, and extra keywords passed to Cache.ZCache_set().
CacheEntry is a wrapper around a single cached value. It stores the data itself, creation time, view_name and keeps the access count.
## Changelog
### 4.0.0 (2017-05-13)
• Require Zope 4.
• Python 3-compatibility
### 3.0 (2016-07-18)
• Remove HelpSys support.
### 2.13.1 (2014-09-14)
• Prevent warnings when RAM caching in a context without a Request.
### 2.13.0 (2010-07-11)
• Released as separate package.
## Project details
Uploaded source
|
2023-03-20 23:32:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3093826174736023, "perplexity": 4946.761280722355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00040.warc.gz"}
|
https://pub.uni-bielefeld.de/publication/2497324
|
# Automatic Acquisition of Ranked Qualia Structures from the Web
Cimiano P, Wenderoth J (2007)
In: ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Carroll JA, van den Bosch A, Zaenen A (Eds); 888-895.
Conference Paper | Published | English
Author
;
Editor
Carroll, John A. ; van den Bosch, Antal ; Zaenen, Annie
Publishing Year
Conference
45th Annual Meeting of the Association for Computational Linguistics (ACL 2007)
Location
Prague, Czech Republic
Conference Date
2007-06-23/2007-06-30
PUB-ID
### Cite this
Cimiano P, Wenderoth J. Automatic Acquisition of Ranked Qualia Structures from the Web. In: Carroll JA, van den Bosch A, Zaenen A, eds. ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. 2007: 888-895.
Cimiano, P., & Wenderoth, J. (2007). Automatic Acquisition of Ranked Qualia Structures from the Web. In J. A. Carroll, A. van den Bosch, & A. Zaenen (Eds.), ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (pp. 888-895).
Cimiano, P., and Wenderoth, J. (2007). “Automatic Acquisition of Ranked Qualia Structures from the Web” in ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Carroll, J. A., van den Bosch, A., and Zaenen, A. eds. 888-895.
Cimiano, P., & Wenderoth, J., 2007. Automatic Acquisition of Ranked Qualia Structures from the Web. In J. A. Carroll, A. van den Bosch, & A. Zaenen, eds. ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. pp. 888-895.
P. Cimiano and J. Wenderoth, “Automatic Acquisition of Ranked Qualia Structures from the Web”, ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, J.A. Carroll, A. van den Bosch, and A. Zaenen, eds., 2007, pp.888-895.
Cimiano, P., Wenderoth, J.: Automatic Acquisition of Ranked Qualia Structures from the Web. In: Carroll, J.A., van den Bosch, A., and Zaenen, A. (eds.) ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. p. 888-895. (2007).
Cimiano, Philipp, and Wenderoth, Johanna. “Automatic Acquisition of Ranked Qualia Structures from the Web”. ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Ed. John A. Carroll, Antal van den Bosch, and Annie Zaenen. 2007. 888-895.
All files available under the following license(s):
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name
Access Level
Open Access
|
2017-08-22 22:18:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8100031614303589, "perplexity": 11508.66447402881}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.7/warc/CC-MAIN-20170822221214-20170823001214-00642.warc.gz"}
|
https://electronics.stackexchange.com/questions/406441/why-do-designers-use-op-amps-with-fractional-gains
|
# Why do designers use op-amps with fractional gains?
I often find designs like the following
simulate this circuit – Schematic created using CircuitLab
Where the gain is less than one ($$\\ R_2/R_1 < 1\$$)
Why not simply use a resistive voltage divider? Beyond the inversion (which tends to be irrelevant in many applications), a divider with $$\\ (R_1-R_2) \& R_2 \$$ would produce the same output with the same input impedance. Plus, it will not have an offset problem, an input bias current problem, a transistor noise problem, or a bandwidth problem (adding a couple of capacitors can make it basically flat in frequency).
Although my first instinct is to regard this as a possible instability (gain beyond the op-amp specification), I see that it is basically a stable trans-conductance amplifier with the input current given by $$\\ V_{in}/R_1 \$$, so that is not a valid objection.
Why not use just a voltage divider? It decouples the input and output circuits.
• The output impedance is low
• The input impedance is R1
• The load has no effect on the input impedance
The last one is particularly important, as changing the load on the previous stage can have unexpected effects, e.g. changing frequency response or nonlinearity.
Why use this rather than a voltage divider and unity gain follower, for applications where phase doesn’t not matter?
• Same number of components
• The opamp’s inputs are at ground which is best for performance.
The opamp functions as a buffer, providing a much lower output impedance than the bare divider would have. This completely eliminates any loading effects created by the downstream circuitry.
A noninverting voltage follower configuration would provide the same benefit (and the same downsides), but if you want the inversion, this is the way to go.
Also, it is sometimes important for the application that the node between R1 and R2 be held at ground potential.
• @EdgarBrown most applications for a buffer in my experience, to counter your experience, are exactly to provide a low-impedance output to a high-impedance input. Kinda feels like the whole buffer idea. – Marcus Müller Nov 12 '18 at 21:31
• @MarcusMüller I have seen the same basic idea with differential amplifiers, in which the case is not as clear cut and impedance matching/buffering is definitively an issue (e.g., for high-speed differential ADCs). But I commonly see this in low-speed designs where a resistive divider would work at least as well. – Edgar Brown Nov 12 '18 at 22:04
If your sensor has high output impedance, yet the ADC needs to grab a bunch of charge and the sensor cannot provide that charge fast enough to support the desired sampling rate, then I could see using this circuit.
|
2020-08-12 21:34:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5009302496910095, "perplexity": 1386.7056246738025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00223.warc.gz"}
|
https://www.ncatlab.org/nlab/show/Robinson-Schensted-Knuth+correspondence
|
# nLab Robinson-Schensted-Knuth correspondence
## Idea
Robinson-Schensted-Knuth correspondence is the name given to a number of bijections involving standard Young tableaux and semistandard Young tableau.
## Variants
• Given a word of length $n$ over the alphabet $[1, N]$, one can associate to it two tableaux of size $n$ and the same shape, one a standard Young tableaux and one a semistandard Young tableau with entries from $[1, N]$.
## References
For an overview see
Last revised on June 4, 2021 at 02:28:00. See the history of this page for a list of all contributions to it.
|
2021-12-03 06:39:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5452011823654175, "perplexity": 457.88513501966435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00147.warc.gz"}
|
https://coq.gitlab.io/zulip-archive/stream/237664-math-comp-users/topic/ssreflect.20unfold.20.60In.60.20but.20I.20don't.20.20want.20it.20to.3F.html
|
Stream: math-comp users
Topic: ssreflect unfold In but I don't want it to?
walker (Oct 25 2022 at 09:27):
Two question:
First how to prevent the following behaviour:
From Coq Require Import List.
From mathcomp Require Import all_ssreflect.
Goal forall A (a: A) b l, a <> b ->
In a (b :: l) ->
In a l.
Proof.
move => A a b l ab_neq [].
The second goal looks like this:
(fix In (a0 : A) (l0 : seq A) {struct l0} : Prop :=
match l0 with
| [::] => False
| b0 :: m => b0 = a0 \/ In a0 m
end) a l -> In a l
what I expected to see is
In a l -> In a l
My second question is is In the predicate we are expected to use in ssreflect or is there ssreflect/mathcom version of it that we should use.
Enrico Tassi (Oct 25 2022 at 09:36):
You should use membership, look at the header of the file seq.v
Enrico Tassi (Oct 25 2022 at 09:36):
https://math-comp.github.io/htmldoc/mathcomp.ssreflect.seq.html
Enrico Tassi (Oct 25 2022 at 09:37):
It is written x \in l, and you have a lot of lemmas about it. It requires A to be an eqType since it tests (with a boolean test) the membership.
walker (Oct 25 2022 at 09:41):
alright, a follow up question, I was using also positive type from PArith, but I didn't see trace of it in math comp, is there an alternative for this one as well ?
Enrico Tassi (Oct 25 2022 at 11:48):
yes, but I don't recall where it is , @Cyril Cohen surely knows it.
Enrico Tassi (Oct 25 2022 at 11:49):
I mean, the type int of math-comp plays the role of Z, and it is built using some notion of positive number
Enrico Tassi (Oct 25 2022 at 11:49):
I don't recall if it is a custom one
Enrico Tassi (Oct 25 2022 at 11:51):
OK, now I recall. For positive we use nat and a moral +1, see https://github.com/math-comp/math-comp/blob/master/mathcomp/algebra/ssrint.v#L68
Enrico Tassi (Oct 25 2022 at 11:52):
The difference is that positive from PArith are efficient in computations, less in reasoning. Do you need to run computations inside Coq?
walker (Oct 25 2022 at 11:53):
I need to extract code with efficient computations.
walker (Oct 25 2022 at 11:55):
I checked int, it uses nat, which is different from positive, I actually need positive because of it is binary representation.
walker (Oct 25 2022 at 11:56):
and the fact that it started from one and not zero.
Cyril Cohen (Oct 25 2022 at 12:19):
This is why we originally developped CoqEAL (cf https://github.com/coq-community/coqeal/blob/master/refinements/pos.v)
walker (Oct 25 2022 at 12:37):
mmm, I was thinking of positive compatiable library, but thanks! I see that coqEAL will help with many cases, but in my case, I was implementing a data structure where I need to like match on binary bits of positive, I think I will have to stick PArith.
walker (Oct 25 2022 at 12:37):
thanks a lot everyone
Last updated: Feb 08 2023 at 07:02 UTC
|
2023-02-08 07:24:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5376377701759338, "perplexity": 4872.908107125336}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00347.warc.gz"}
|
http://physics.stackexchange.com/questions/27229/spectrum-of-a-quantum-relativistic-distance-squared-operator
|
# Spectrum of a quantum relativistic “distance squared” operator
This question disusses the same concepts as that question (this time in quantum context). Consider a relativistic system in spacetime dimension $D$. Poincare symmetry yields the conserved charges $M$ (a 2-form associated with Lorentz symmetry) and $P$ (a 1-form associated with translation symmetry). The center-of-mass trajectory $x$ is defined by the equations
$$x \wedge P + s = M$$ $${i_P}s = 0$$
I'm implicitely identifying vectors and 1-forms using the spacetime metric $\eta$
Define $X$ to be the point on the center-of-mass trajectory for which the spacelike interval to the origin is maximal. Substituting $X$ into the 1st equation, applying $i_P$ and using the 2nd equation and $(P, X) = 0$ yields
$$X = -\frac{{i_P}M}{m^2}$$
Passing to quantum mechanics, this equation defines $X$ as vector of self-adjoint operators, provided we use symmetric operator ordering the resolve the operator ordering ambiguity between $P$ and $M$. In particular these operators are defined on the Hilbert space of a quantum mechanical particle of mass $m$ and spin $s$.
What is the spectrum of the operator $X^2$, for given $m$, $s$ and $D$?
-
|
2015-10-09 23:58:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533115029335022, "perplexity": 314.26453958037354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737936627.82/warc/CC-MAIN-20151001221856-00162-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://grindskills.com/odds-ratio-vs-probability-ratio/
|
# Odds ratio vs probability ratio
An odds is the ratio of the probability of an event to its complement:
$$odds(X)=P(X)1−P(X)\text{odds}(X) = \frac{P(X)}{1-P(X)}$$
An odds ratio (OR) is the ratio of the odds of an event in one group (say, $$AA$$) versus the odds of an event in another group (say, $$BB$$):
$$OR(X)A vs B=P(X|A)1−P(X|A)P(X|B)1−P(X|B)\text{OR}(X)_{A\text{ vs }B} = \frac{\frac{P(X|A)}{1-P(X|A)}}{\frac{P(X|B)}{1-P(X|B)}}$$
A probability ratio1 (PR, aka prevalence ratio) is the ratio of the probability of an event in one group ($$AA$$) versus the probability of an event in another group ($$BB$$):
$$PR(X)A vs B=P(X|A)P(X|B)\text{PR}(X)_{A\text{ vs }B} = \frac{P(X|A)}{P(X|B)}$$
An incidence proportion can be thought of as pretty similar to a probability (although technically is a rate of probability occurring over time), and we contrast incidence proportions (and incidence densities, for that matter) using relative risks (aka risk ratios, RR), along with other measures like risk differences:
$$RRA vs B=incidence proportion(X|A)incidence proportion(X|B)\text{RR}_{A\text{ vs }B} = \frac{\text{incidence proportion}(X|A)}{\text{incidence proportion}(X|B)}$$
Why are relative probability contrasts so often represented using relative odds instead of probability ratios, when risk contrasts are represented using relative risks instead of odds ratios (calculated using incidence proportions instead of probabilities)?
My question is foremost about why prefer ORs to PRs, rather than why not use incidence proportions to calculate a quantity like an OR. Edit: I am aware that risks are sometimes contrasted using a risk odds ratio.
1 As near as I can tell… I do not actually encounter this term in my discipline other than very rarely.
I think the reason that OR is far more common that PR comes down to the standard ways in which different types of quantity are typically transformed.
When working with normal quantities, like temperature, height, weight, then the standard assumptions is that they are approximately Normal. When you take contrasts between these sorts of quantities, then a good thing to do is take the difference. Equally if you fit a regression model to it you don’t expect a systematic change in the variance.
When you are working with quantities that are “rate like”, that is they are bounded at zero and typically come from calculating things like “number per day”, then taking raw differences is awkward. Since the variance of any sample is proportional to the rate, the residuals of any fit to count or rate data won’t generally have constant variance. However, if we work with the log of the mean, then the variances will be “stabilized” – that is they add rather than multiply. Thus for rates we typically handle them as the log. Then when you form contrasts you are taking differences of logs, and that is the same as a ratio.
When you are working with probability like quantities, or fractions of a cake, then you are now bounded above and below. You now also have an arbitrary choice what you code as 1 and 0 (or more in multi-class models). Differences between probabilities are invariant to switching 1 to 0, but have the problem of rates that the variance changes with the mean again. Logging them wouldn’t give you invariance for 1s and 0s, so instead we tend to logit them (log-odds). Working with log-odds you are now back on the full real line, the variance is the same all along the line, and differences of log-odds behave a bit like normal quantities.
Gaussian
• Variance does not depend on $$μ\mu$$
• Canonical link for GLM is $$xx$$
Poisson
• Variance is proportional to the rate $$λ\lambda$$
• Canonical link for GLM is $$ln(x)\ln(x)$$
• Logging should result in residuals of constant variance
Binomial
• Variance is proportional to $$p(1−p)p(1-p)$$
• Canonical link for GLM is logit $$ln(p1−p)\ln\left(\frac{p}{1-p}\right)$$
• Taking logit (log-odds) of data should result in residuals of constant variance
So I think that the reason you see lots of RR, but very little PR is that PR is constructed from probability/Binomial type quantities, while RR is constructed from rate type quantities. In particular note that incidence can exceed 100% if people can catch the disease multiple times per year, but probability can never exceed 100%.
Is odds the only way?
No, the general messages above are just useful rules of thumb, and these “canonical” forms are just convenient mathematically – hence why you tend to see it most. The probit function is used instead for probit regression, so in principle differences of probit would be just as valid as OR. Similarly, despite best efforts to word it carefully, the text above still sort of suggests that logging and logiting your raw data, and then fitting a model to it is a good idea – it’s not a terrible idea, but there are better things that you can do (GLM etc.).
|
2022-09-28 06:36:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428013324737549, "perplexity": 920.8870863485158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00631.warc.gz"}
|
https://estafraseefalsa.wordpress.com/2012/04/23/encoding-a-well-founded-relation-over-pairs-in-coq-proof-assistant/
|
Encoding a Well-founded Relation over pairs in Coq Proof Assistant
This will be my very first post in English in this blog. Therefore, I would like to apologize in advance for any errors.
This post will assume that the reader knows what is a well-founded relation and has a basic knowledge of Coq proof assistant.
First of all, we will need some library definitions of what is a well-founded relation and what is an accessible term (considering some relation). Both definitions are in the module Wellfounded of Coq standard library. So, we start our development importing this module.
</p><p>Require Import Wellfounded.</p><p>
How we can define a well-founded relation for pairs? First, the elements of the given pairs must support some kind of ordering. So, we will parametrize our definition by a type of these elements and a ordering relation between them. We will assume that the parameter is a strict partial order, but no changes are needed to use an partial order. These definitions are given next:
</p><p>Section LexicographicOrdering.</p><p> Variable A : Type.</p><p> Variable ltA : A -> A -> Prop.</p><p>
Given a strict ordering relation $<$ and a equality $=$ over values a given set A, we could order pairs of A using the following definition:
$\forall x\, x'\, y\,y'\in A. x < x' \lor (x = x' \land y < y') \rightarrow (x,y) < (x',y')$.
The previous mathematical formula can be encoded using the following inductive type:
</p><p> Inductive lexprod : A * A -> A * A -> Prop :=<br /> | left_lex : forall (x1 x2 y1 y2 : A),<br /> ltA x1 x2 -> lexprod (x1, y1) (x2, y2)<br /> | right_lex : forall (x1 x2 y1 y2 : A),<br /> x1 = x2 -> ltA y1 y2 -> lexprod (x1, y1) (x2, y2).</p><p>
The type lexprod uses the ordering relation ltA, previously defined. The constructor left_lex, encodes the condition that if $x < x'$ then the pair $(x,y) < (x',y')$ for any values of y and y’. On the other hand, the constructor right_lex encodes the condition that if $x = x'$ and $y < y’$ then the pair $(x,y) < (x',y')$.
Now, it remains to prove that lexprod definition is a well founded relation, under the assumption that ltA is well founded. This can be easily proved by induction over the proof of accessibility relation for ltA, as shown by the next source code piece:
</p><p> Lemma acc_A_lexprod : <br /> well_founded ltA -><br /> forall x, <br /> Acc ltA x -><br /> forall y, <br /> Acc ltA y -> <br /> Acc lexprod (x, y).<br /> Proof.<br /> intros H x Hx.<br /> induction Hx as [x _ IHacc].<br /> intros y Hy.<br /> induction Hy as [y _ IHacc0].<br /> apply Acc_intro.<br /> intros (x1, y1).<br /> inversion 1; subst; auto.<br /> Qed.<br /><br /> Theorem wf_lexprod :<br /> well_founded ltA -><br /> well_founded lexprod.<br /> Proof.<br /> intros wfA ; unfold well_founded.<br /> intros (x, y); auto using acc_A_lexprod.<br /> Qed.</p><p>
With this definitions, we can easily define an well founded relation for pairs of natural numbers using:
</p><p>Definition lt2 := lexprod nat lt.<br /><br />Lemma wf_lt2 : well_founded lt2.<br />Proof.<br /> unfold lt2 ; auto using wf_lexprod with arith.<br />Qed.</p><p>
Mission accomplished! We these simple definitions we can define a well founded relation over products of a given type and its well founded relation. Ok, ok… the presentation isn’t in a tutorial style since, I believe, it has a very specific audience.
But why should I care about well founded relations? The reason is simple: Well founded relations can be used to prove termination of programs in Coq proof assistant. A topic that I will discuss in another post.
Anúncios
|
2017-08-20 22:49:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.620677649974823, "perplexity": 4540.348964788206}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00161.warc.gz"}
|
https://stats.stackexchange.com/questions/91091/one-vs-all-and-one-vs-one-in-svm
|
# One-vs-All and One-vs-One in svm?
What is the difference between a one-vs-all and a one-vs-one SVM classifier?
Does the one-vs-all mean one classifier to classify all types / categories of the new image and one-vs-one mean each type / category of new image classify with different classifier (each category is handled by special classifier)?
For example, if the new image to be classified into circle, rectangle, triangle, etc.
The difference is the number of classifiers you have to learn, which strongly correlates with the decision boundary they create.
Assume you have $N$ different classes. One vs all will train one classifier per class in total $N$ classifiers. For class $i$ it will assume $i$-labels as positive and the rest as negative. This often leads to imbalanced datasets meaning generic SVM might not work, but still there are some workarounds.
In one vs one you have to train a separate classifier for each different pair of labels. This leads to $\frac{N(N-1)}{2}$ classifiers. This is much less sensitive to the problems of imbalanced datasets but is much more computationally expensive.
• Please, did you mean i-labels as positive OR i-th label as positive ? – PeterB May 13 '17 at 0:10
• labels corresponding to the class i as positive. – Gnattuha May 14 '17 at 21:05
• @Gnattuha - What do you mean by imbalanced datasets? Thanks in advance. – saurabheights Mar 21 '18 at 17:34
• I read here - en.wikipedia.org/wiki/… - "Although this strategy is popular, it is a heuristic that suffers from several problems. Firstly, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions because typically the set of negatives they see is much larger than the set of positives". Still how does that imbalancing affect the accuracy? – saurabheights Mar 21 '18 at 17:44
|
2021-01-21 11:31:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46458759903907776, "perplexity": 1410.4175762201805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524743.61/warc/CC-MAIN-20210121101406-20210121131406-00504.warc.gz"}
|
https://mathematica.stackexchange.com/questions/117675/defining-a-variable-with-comma-or-any-other-separator-in-its-name
|
# Defining a variable with comma (or any other separator) in its name?
How can I define Mathematica variable/function with comma in it?
For example, defineL00,1 as a variable?
I plan to use L00,1 as a variable as 00 and 1 have different meanings. Other examples are like L01,1, L10,0, etc. The purpose to comma is to separate meanings when others see it, otherwise L001 will cause confusion. I also hope to avoid l["00",1] as a variable as the " " sign will cause confusion to users not familiar with mathematica interpreting it as string. Any symbol added to the variable that serves as a separator between 00 and 1 would be fine. For example L00;1 or L00_1. But they do not work. The purpose is to inform my user (colleagues) in a reader-friendly way.
If the question is not meaningful please ignore it or inform me.
• Subscript[a, "00", 1] = 12 – Coolwater Jun 5 '16 at 8:32
• It's not possible. Try Symbol["L00,1"] to see the error. – Marius Ladegård Meyer Jun 5 '16 at 9:20
• Why do you want to do this though? Is it just for formatting? – Marius Ladegård Meyer Jun 5 '16 at 9:21
• @kww and I have already shown you two ways to do it. – Marius Ladegård Meyer Jun 5 '16 at 9:32
• kww, this is of course possible. But to help others give you a useful answer, can you edit your question and provide some context about how you plan on using this symbol. -1 until you edit your question. – QuantumDot Jun 5 '16 at 10:58
Here is one possibility:
MakeBoxes[l[a_, b_], form_] := RowBox[{"l", "[",
ToBoxes[b], "]"}]
l[00,1] continues to display as l[00,1].
Here is another possibility:
MakeBoxes[l[a_String, b_], form_] :=
With[{string = "\"L" <> a <> "," <> ToString[b] <> "\""},
InterpretationBox[string, l[a, b]]]
l["00",1] displays as L00,1
As it has been pointed out in the comments made to your question, using reserved symbols in identifiers is not allowed. What you can and might want to do is bind expressions to values. The arguments of the expression can serve as indices or tags to produce the kind of differentiation you want.
Here are two examples of what I am alluding to.
a["00", 1] = 3; a["10", 0] = 42;
a["00", 1]^2 + a["10", 0]
51
b[0, 1][x_] := 1 + Log10[x]; b[10, 0] = 42;
b[10, 0]^b[0, 1][2] // N
129.389
|
2020-09-18 15:00:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25466352701187134, "perplexity": 2705.8986820961804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00562.warc.gz"}
|
https://ncatlab.org/nlab/show/quotient+bialgebra
|
# Quotient bialgebras
## Geometric motivation
Given a field $k$, the $k$-valued functions on a finite group form a Hopf algebra. Given a subgroup $B\subset G$, there is an induced map of Hopf algebras $k[G]\to k[B]$, which is a surjective homomorphism of commutative Hopf algebras. Similarly, for Hopf algebras of regular function?s on an algebraic group over a field.
The generalization to noncommutative Hopf algebras hence may be viewed as describing the notion of a quantum subgroup?, or in the bialgebra version of a quantum subsemigroup.
However there is also a weaker notion of a quantum subgroup, and also a dual notion (e.g. via coideal subalgebras).
## Definition
Given a $k$-bialgebra $H$, a quotient bialgebra is a bialgebra $Q$ equipped with an epimorphism of bialgebras $\pi: H\to Q$.
If both bialgebras are Hopf algebras then the epimorphism will automatically preserve the antipode.
## Quotient bialgebras from bialgebra ideals
A bialgebra ideal is an ideal in the sense of associative unital algebras which is also a coideal of coassociative coalgebras.
A Hopf ideal is a bialgebra ideal which is invariant under the antipode map.
If $H$ is a bialgebra and $I\subset H$ a bialgebra ideal then the quotient associative algebra $H/I$ has a natural structure of a bialgebra. Moreover, if $H$ is a Hopf algebra and $I\subset H$ is a Hopf ideal then the projection $H\to H/I$ will be an epimorphism of Hopf algebras.
Revised on September 23, 2010 03:25:28 by Toby Bartels (98.19.51.164)
|
2017-11-18 15:51:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888663053512573, "perplexity": 374.4919262636885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804976.22/warc/CC-MAIN-20171118151819-20171118171819-00071.warc.gz"}
|
https://www.omnimaga.org/asm-language/asm-optimized-routines/105/?topicseen
|
• ASM Optimized routines 5 1
Currently:
### Author Topic: ASM Optimized routines (Read 61153 times)
0 Members and 1 Guest are viewing this topic.
#### Xeda112358
• they/them
• Moderator
• LV12 Extreme Poster (Next: 5000)
• Posts: 4676
• Rating: +718/-6
• Calc-u-lator, do doo doo do do do.
##### Re: ASM Optimized routines
« Reply #105 on: June 18, 2021, 06:35:59 pm »
Wow, that's really neat. I don't have a use for that at the moment but I bet I can find one in some old, slow code.
How did you go about making the routine generator? I assume there are some simple rules it follows.
It looks through all the numbers 1-65535 to find some good code.
If the number is even, it essentially uses the code for n/2, but with an add hl,hl tacked on to the end. Otherwise it is odd and it first tries a shift-and-add algorithm (basically the classic, generic multiplication, but unrolled and with branches removed as we know which path it takes in advance). It registers whatever code it came up with, then it checks if it is faster or smaller to multiply by the factors (for example, it is faster to multiply by 27 by doing *3*9 or *9*3), so it finds all of the factors and checks for anything more efficient.
There are other tricks, too, like checking if it is faster to "negate" (i.e., *65535 is the same as *-1), and if there is a *(2^k-1) with k>3, it uses a subtraction. And it collapse things like 8 add hl,hl into ld h,l \ ld l,0, and has some pre-populated routines in the table.
It doesn't really know anything about the instruction set, so it can't do any fancy optimizations
|
2021-07-25 12:34:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.527888298034668, "perplexity": 3271.0613180710507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00370.warc.gz"}
|
https://stats.stackexchange.com/questions/104443/does-the-result-of-interaction-tell-whether-nor-not-the-moderator-variable-wor
|
# Does the result of “interaction” tell whether nor not the moderator variable worked?
I’m reading a research paper and the author prepared two print-advertisements of jam, one with an old lady (Ad1) the other one with an exotic lady (Ad2). Both print-advertisements (Testanzeige) have the same written information.
Hypothesis: With the increase of the need for variation (CSI), the Advertisement Attitude to the print Advertisement, in which the product was exotic presented, would increase too.
After a factor analysis the Advertisement Attitude was divided into two factors: amusement (Unterhaltungswert on the left column) and credibility (Glaubwuerdigkeit on the right column).
Based on this table, the author wrote:
The hypothesis, that the need for variation (CSI) has a moderation effect on the Advertisement Attitude Is rejected for the factor amusement Is supported for the factor credibility
So I assume I should watch $0.82$ and $0.121*$
What I cannot understand:
1.The original hypothesis is about the print advertisement which was exotic presented, but the table presents the total results, it that possible I can somehow tell from the results of the interaction that this is indeed for the the Ad2? (Absolute no information about that in the study)
2.The original hypothesis is about the impact of CSI on Ad Attitude, shouldn’t I watch the results from CSI $(-0.072, -0.064)$? I assume I should watch the results of the interaction because CSI is moderator?
3 How should I interprete the first row: Testanzeigen (Print AdvertisementS) $(,093* -,028)$ ? The scores from the objects' attitude to BOTH advertisements?
• The table is on page 17 of this document in case if you'd like to see the full text in German. – Penguin_Knight Jun 23 '14 at 18:59
• You're welcome. By the way, my vote is exotic lady = 1 and old lady = 0. I will be very amused to see a young lady wearing bikini while spreading jam on a toast (WHY is she doing that?); but on any day I'll actually eat the jam toast prepared by an old lady. – Penguin_Knight Jun 23 '14 at 19:10
• @Penguin_Knight So if I understand you correctly, the variable of Testanzeige should be either 1 or 0, right? How should the regression looks like? Y (Ad-Attitude) = b0+ b1*X1(Ad,the variable is either 1 or 0)+ b2*X2 (CSI)+b2*X1*X2 +error – yue86231 Jun 23 '14 at 19:23
• @Penguin_Knight That means, to my third question: ,093 is how the type of Ad impacts the Ad-Attitude, did I interprete it correctly? – yue86231 Jun 23 '14 at 19:35
• PS: Is Ad type a dummy variable in this context? – yue86231 Jun 23 '14 at 19:43
Let's just focus on credibility for now, they are the same model so no need to duplicate the effort.
The regression is:
$y = \beta_0 - 0.028 Ad- 0.064 CSI + 0.121 Ad\times CSI$
If Ad1 = 0 and Ad2 = 1, and if CSI low = 0 and CSI high = 1:
For Ad1, low CSI:
$y_{Ad1, Low} = \beta_0$
For Ad1, high CSI:
$y_{Ad1, High} = \beta_0 - 0.064$
For Ad2, low CSI:
$y_{Ad1, Low} = \beta_0 - 0.028$
For Ad2, high CSI:
$y_{Ad2, High} = \beta_0 - 0.028- 0.064 + 0.121$
Using this substitution method, you should be able to figure out the differences. Notice that the result can change if they use 1/2 coding instead of 0/1.
If Ad1 = 1 and Ad2 = 2, and if CSI low = 1 and CSI high = 2:
For Ad1, low CSI:
$y_{Ad1, Low} = \beta_0 - 0.028- 0.064 + 0.121$
For Ad1, high CSI:
$y_{Ad1, High} = \beta_0 - 0.028 - 2\times 0.064 + 2\times 0.121$
For Ad2, low CSI:
$y_{Ad1, Low} = \beta_0 - 2\times 0.028- 0.064 + 2\times 0.121$
For Ad2, high CSI:
$y_{Ad2, High} = \beta_0 - 2\times0.028- 2\times 0.064 + 4\times 0.121$
So, figuring out the coding is crucial. Most of the time we would model binary as 1/0, and you may either make the same assumption or contact the authors.
• Thanks Penguin_Knight very much for the explaination, now I understand the results with the interaction. But I cannot understand why CSI is the moderator, the type of Ad could also be the moderator, couldn't it? – yue86231 Jun 23 '14 at 20:02
• ^ Yes, you're correct. – Penguin_Knight Jun 23 '14 at 20:04
• So there is no rule which variable should be the moderator, actually they could all be seen as normal independent variables. The "moderator" is just a title we randomly give them. The interaction is the main point to watch, not who the moderator is. – yue86231 Jun 23 '14 at 20:08
• ^ Yes and no. In epidemiology (my own field), interaction is used to test effect modification, and not moderation. When we talk about moderation, it's more aligned with "mediation/moderation analysis", which is a special type of regression that does require the users to specify which one is the moderator. If the authors just used interaction term and call it a "moderation test," I'd consider that a misnomer or loss in translation. – Penguin_Knight Jun 23 '14 at 20:13
• And what you said is largely correct. CSI could have modified the effect of Ads, and Ads could have modified the effect of CSI on the factor score. When there is a significant interaction, interpret the main effect AND the interaction because main effect alone is no longer universally applicable. – Penguin_Knight Jun 23 '14 at 20:18
|
2020-02-21 10:34:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5101920962333679, "perplexity": 1825.5809343439532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145500.90/warc/CC-MAIN-20200221080411-20200221110411-00213.warc.gz"}
|
https://socratic.org/questions/objects-a-and-b-are-at-the-origin-if-object-a-moves-to-2-1-and-object-b-moves-to
|
# Objects A and B are at the origin. If object A moves to (2 ,1 ) and object B moves to (5 ,2 ) over 4 s, what is the relative velocity of object B from the perspective of object A?
Feb 9, 2017
$\frac{\sqrt{10}}{4}$ in the direction given by ${\tan}^{- 1} \left(\frac{1}{3}\right)$
#### Explanation:
As the displacement for object A is $\left(2 , 1\right)$
and displacement for object B is $\left(5 , 2\right)$
relative displacement of B w.r.t. A is $\left(5 - 2 , 2 - 1\right)$ or $\left(3 , 1\right)$
as compared to $\left(0 , 0\right)$ at the start of $4$ seconds
Hence, relative velocity of object B from the perspective of object A is $\frac{3}{4} \hat{i} + \frac{1}{4} \hat{j}$
or $\sqrt{{\left(\frac{3}{4}\right)}^{2} + {\left(\frac{1}{4}\right)}^{2}} = \frac{\sqrt{10}}{4}$ in the direction given by ${\tan}^{- 1} \left(\frac{\frac{1}{4}}{\frac{3}{4}}\right) = {\tan}^{- 1} \left(\frac{1}{3}\right)$
|
2019-04-23 12:05:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35509565472602844, "perplexity": 460.2943809322119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423135805-00072.warc.gz"}
|
https://brettterpstra.com/2011/09/15/catching-markdown-mistakes/
|
I had an interesting idea this morning. At least I find it interesting, but I haven’t slept much lately. Either way, here it is: in Markdown, if you misname a reference link, forget to fill one in or have a malformed URL, your broken Markdown shows up in your output. Wouldn’t it be nice if your preview highlighted those for you before you went to publish?
Marked 1.3, which is coming along very nicely, has a few JavaScripts built in to the preview (which can be turned off in preferences). It provides a table of contents based on headers in your document, smooth scrolling, tool tips to show you where external links will go and, as of this morning, highlighting of broken links.
The code is simple. This version is jQuery, just because that was convenient, but you can pull this off in plain old JavaScript with barely any extra code:
It just scans for Markdown-style links ([text][title] or [text](link)) that are still in your text after converting to HTML. If it finds them, it highlights them in red. Just thought I’d share.
By the way, in addition to the JavaScript fun, the next version of Marked has multiple custom styles (unlimited), can open any text file with any extension, can load MathJax for MathML rendering and much more. I’ll keep you posted on its release, but it should be soon.
|
2020-09-19 05:57:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3328392505645752, "perplexity": 1911.292553617283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00292.warc.gz"}
|
https://shtools.oca.eu/shtools/public/fortran-examples.html
|
If you want to learn how to incorporate shtools routines in your fortran programs, the following example programs are a good starting point to see shtools in action.
Folder Description
SHCilmPlus/ Demonstration of how to expand spherical harmonic files into gridded maps using the GLQ routines, and how to compute the gravity field resulting from finite amplitude surface relief.
SHExpandDH/ Demonstration of how to expand a grid that is equally sampled in latitude and longitude into spherical harmonics using the sampling theorem of Driscoll and Healy (1994).
SHExpandLSQ/ Demonstration of how to expand a set of irregularly sampled data points in latitude and longitude into spherical harmonics by use of a least squares inversion.
SHMag/ Demonstration of how to expand scalar magnetic potential spherical harmonic coefficients into their three vector components and total field.
MarsCrustalThickness/ Demonstration of how to compute a crustal thickness map of Mars.
SHRotate/ Demonstration of how to determine the spherical harmonic coefficients for a body that is rotated with respect to its initial configuration.
SHLocalizedAdmitCorr/ Demonstration of how to calculate localized admittance and correlation spectra for a given set of gravity and topography spherical harmonic coefficients.
TimingAccuracy/ Test programs that calculate the time required to perform the GLQ and DH spherical harmonic transforms and reconstructions and the accuracy of these operations.
|
2021-07-24 23:43:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040926098823547, "perplexity": 732.6340960774144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00416.warc.gz"}
|
https://community.wolfram.com/groups/-/m/t/2158533?p_p_auth=6as0VAZ7
|
# Custom attributes to variables/ Defining custom domains?
Posted 1 year ago
1294 Views
|
|
3 Total Likes
|
Hi, I'm looking to see if there is a way to add custom attributes to variables in Mathematica. Or alternately being able to define manifolds. For instance, I want to see I can add assumptions such as, x \in S^2 or R \in SO(3)Then later define operations suitable to the domain type. Any help is appreciated! Thank you
|
2022-01-28 05:01:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626548886299133, "perplexity": 1390.7249514477858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305420.54/warc/CC-MAIN-20220128043801-20220128073801-00086.warc.gz"}
|
https://bertvandenbroucke.netlify.app/2019/04/18/memory-logging/
|
Bert's blog
## Memory logging
One of the big issues I have as a code developer is keeping track of memory usage. While it is relatively straightforward to track performance using appropriately placed timers, there is no clear-cut solution to tell you how much memory you are using at any given moment during program execution. Which is annoying in cases where memory usage is a limiting factor, e.g. when running a very large simulation.
In this post, I will give an overview of things I have already attempted in order to make this problem more tractable. None of the solutions I provide are really elegant, but some of them are sufficiently flexible to be useful, which is all that matters.
# Different types of memory
First of all, it is important to note that there is no such a thing as just memory. RAM memory comes in two flavours: physical memory (which is the hardware equivalent of RAM memory) and virtual memory, which is the amount of memory allocated to your program by the operating system. Whenever you allocate a block of memory within your software, you ask the operating system to provide an address in the virtual memory which you can use to read and write to. It is the operating system’s job to make sure this memory is actually available, either as a real piece of memory in the physical memory, or in some other way.
Note that this means that not every memory allocation in your program will necessarily use physical memory. If you are very greedy and allocate more memory than you need without ever using it, the operating system might decide not to allocate all of this memory in the physical RAM and your program might fit in less memory than you expect. But it also means that the operating system usually allows for quite a lot more virtual memory than it has available in RAM, in which case your program might very well force the operating system to use space on your hard drive (SWAP) in an attempt to free up enough physical RAM. And that usually leads to a situation in which your program deadlocks your whole system for a considerable amount of time.
This last issue is not at all unlikely, and has happened to me way more often than I would like to admit (in fact, it has happened at least three times in the past two weeks). I therefore strongly recommend you to manually limit the amount of virtual memory that is available to your experimental software by using
ulimit -v MAXIMUM_SIZE_IN_KB
within the terminal window in which you plan to run your software. If your program tries to allocate more virtual memory than the set limit, it will simply crash instead of rendering your system useless for the next half hour (or until you manage to kill your program).
# Keeping track of memory
Once you know that there is not just one type of memory, it is important to define what you actually mean with tracking memory. You might be interested in knowing exactly how much memory every memory allocation in your program actually allocates, in which case you want to track virtual memory. But if all you care about is making sure that your software will fit in the available memory on some machine, then you probably care about the amount of actual physical memory used by your software.
## Keeping track of virtual memory
If you want to track virtual memory, then you simply need to log every allocation that is made, i.e. you make a little note of how much memory is requested for every call that is made to the relevant allocation routines. In a low-level language like C where every allocation is done using malloc or an equivalent routine, it is straightforward to create your own wrapper function for malloc that achieves exactly this.
In somewhat higher level languages like C++, things are a bit more complicated, as many objects are allocated indirectly through class constructors and standard library functions. It is possible to overload the new operator globally, but I cannot think of an elegant way to wrap this into a modular structure that is compatible with C++ thinking or that allows for a good way of labelling allocations. As a result, most solutions I have come up with so far do not attempt to accurately count every allocated bit, but instead focus on a select number of objects (let’s call them the usual suspects: the objects of which you know that they require a lot of memory) or keep track of the total amount of allocated virtual memory.
My first attempt at manually logging object memory made extensive use of the sizeof operator in conjunction with some hard-coded functions to compute the memory size of std::vectors and manually allocated memory. I would basically provide every suspect class with a get_memory_size member function and then manually track the allocated memory whenever an object of that class was created. This worked very well, but was obviously a lot of work. Too much work in fact to make it worth the effort and use it as a sustainable solution.
A second way I only recently (read: this week) stumbled upon makes use of the pseudo file system that Linux distributions provide under /proc. This is a Linux only feature that is provided by the Linux kernel and provides a very powerful way for the kernel to communicate with other parts of the system. The way it works is as follows: your program (which runs within a unique process on your system) requests a file (or set of files) located under the /proc directory. The kernel catches this request and generates the corresponding file, tailored to the needs of the requesting process. No actual file is ever generated, but since the output of the request still takes the form of a simple text file, the requesting process can parse it as it likes and get all the relevant information.
You can easily generate a list of all available /proc files by querying the /proc file system:
ls /proc
This will generate a list of all currently running processes, and a list of global information files, like /proc/cpuinfo which contains information about the available CPUs on the system. To get information for the currently running process, you can simply query /proc/self, in which case the operating system will automatically display the contents of the /proc sub directory for the requesting process, without you having to bother to figure out the ID of this process.
/proc/self contains a lot of useful files, but for our purposes we are only interested in /proc/self/status and /proc/self/statm. The former contains a human readable description of the resources used by the requesting process, including the current virtual memory usage (VmSize) and the maximum virtual memory usage since the start of the process (VmPeak). The latter is a stripped down version of this information that focuses on virtual memory usage only and that is not meant for human consumption. It is however ideal for our purposes.
Using /proc/self/statm, we can get the current virtual memory usage of the program using the following code:
#include <fstream>
std::ifstream statm("/proc/self/statm");
unsigned int vmem_size;
statm >> vmem_size;
The value present in statm is expressed in page sizes, where one page size corresponds to the size of a single block of memory on the system. These blocks are the way the operating system uses to organise the memory; it will always allocate memory in multiples of the page size. The actual page size is system dependent, although a typical value for it is 4096 bytes (4 KB). You can get the system page size as follows:
#include <unistd.h>
unsigned int pagesize = getpagesize();
Note that while the page size will probably fit in a 32-bit integer variable, the total virtual memory size might not. So it is probably a good idea to use 64-bit integers or size_t variables to manipulate these values.
Now that we have a way to get the total virtual memory size for the program at any given time, it is reasonably straightforward to set up a memory tracking routine: we simply determine the memory size before and after a certain object is created, and assume that the difference is due to the size of the object in memory. We still need to explicitly call the memory tracking routine in our code, but the overhead has been significantly reduced (and this immediately provides us with a good way of attaching labels to the memory logs).
Alternatively, we can also simply take snapshots of the total memory at various points during the program, and use these to assess which operations have the most significant impact on the program. The MemoryLogger class I wrote for CMacIonize this week can be used for both and works well enough for my purposes.
## Keeping track of actual memory usage
As already mentioned, the actual memory that is used is not equal to the allocated virtual memory, as this depends on the inner workings of the kernel (and additional factors, e.g. how busy the system is at the time). If your program is written efficiently (in terms of memory allocations), then the actual memory usage and the virtual memory usage should be similar. If you don’t know whether your program is memory efficient or not, then you will need to do something else to figure out how much memory was actually used.
Again, /proc/self/statm provides the solution, and this time it is the only good solution I know. Other methods I know are only able to tell you what the maximum used physical memory was since the start of the program, and while useful, this does not allow accurate tracking of the real time physical memory usage. The only advantage is that these methods are a bit more portable and work on other Unix systems than Linux.
So let’s just start with the old method that I have been using for a few years now:
#include <sys/resource.h>
struct rusage resource_usage;
getrusage(RUSAGE_SELF, &resource_usage);
This gives you the peak memory usage in KB. The name rss stands for resident set size. While writing this post, I realised that exactly the same name is given in /proc/self/status for a real time value, and according to man proc (the Linux manual for the proc pseudo file system) this is also the second value present in /proc/self/statm. So the following code returns both the current virtual memory size and current physical memory size:
#include <fstream>
std::ifstream statm("/proc/self/statm");
unsigned int vmem_size, phys_size;
statm >> vmem_size >> phys_size;
Both are expressed in page sizes.
From my explanation above, it might be obvious that it does not really make sense to track physical memory usage while objects are being allocated, as allocation only creates the virtual memory and not necessarily physical memory. Tracking the physical memory usage over time can however be very useful. I plan to implement this feature in CMacIonize’s MemoryLogger as soon as I get a chance.
Professional astronomer.
|
2022-10-03 21:24:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2946328818798065, "perplexity": 553.6878965267703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00619.warc.gz"}
|
http://clay6.com/qa/4889/if-then-find-x-
|
Browse Questions
# If $| \overrightarrow a |=13 | \overrightarrow b |=5\: and \: \overrightarrow a.\overrightarrow b=60$ then find $|\overrightarrow a$ x $\overrightarrow b|$
Toolbox:
• $\cos\theta=\large\frac{\overrightarrow a.\overrightarrow b}{\mid\overrightarrow a\mid\mid\overrightarrow b\mid}$
• $\mid \overrightarrow a\times\overrightarrow b\mid=\mid\overrightarrow a\mid\mid\overrightarrow b\mid.\sin\theta$
Step 1:
Given :$\mid\overrightarrow a\mid=13$ and $\mid\overrightarrow b\mid$ and $\overrightarrow a.\overrightarrow b=60$
$\overrightarrow a.\overrightarrow b=\mid\overrightarrow a\mid\mid\overrightarrow b\mid\cos\theta$
$\cos\theta=\large\frac{\overrightarrow a.\overrightarrow b}{\mid\overrightarrow a\mid\mid\overrightarrow b\mid}$
$\qquad=\large\frac{60}{13\times 5}$
$\cos\theta=\large\frac{12}{13}$
Step 2:
We can find the third side of the triangle by using Pythagoras theorem.
$x^2+12^2=13^2$
$x^2=25$
$x=5$
$\sin\theta=\large\frac{5}{13}$
Step 3:
$\overrightarrow a\times\overrightarrow b=\mid\overrightarrow a \mid\mid\overrightarrow b\mid\sin\theta .\hat n$
$\mid\overrightarrow a\times\overrightarrow b\mid=\mid\overrightarrow a \mid\mid\overrightarrow b\mid\sin\theta$
$\Rightarrow 13\times 5\times \large\frac{5}{13}$
$\Rightarrow 25$
|
2016-12-08 07:56:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429796099662781, "perplexity": 3530.756812346717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542455.45/warc/CC-MAIN-20161202170902-00390-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://stats.stackexchange.com/questions/87494/estimating-n-in-coupon-collectors-problem
|
# Estimating n in coupon collector's problem
In a variation on the coupon collector's problem, you don't know the number of coupons and must determine this based on data. I will refer to this as the fortune cookie problem:
Given an unknown number of distinct fortune cookie messages $n$, estimate $n$ by sampling cookies one at a time and counting how many times each fortune appears. Also determine the number of samples necessary to get a desired confidence interval on this estimate.
Basically I need an algorithm that samples just enough data to reach a given confidence interval, say $n \pm 5$ with $95\%$ confidence. For simplicity, we can assume that all fortunes appear with equal probability/frequency, but this is not true for a more general problem, and a solution to that is also welcome.
This seems similar to the German tank problem, but in this instance, fortune cookies are not labeled sequentially, and thus have no ordering.
• Do we know the messages are equally frequent? – Glen_b Feb 22 '14 at 11:00
• edited question: Yes – goweon Feb 22 '14 at 11:03
• Can you write down the likelihood function? – Zen Feb 22 '14 at 16:41
• People doing wildlife studies capture, tag, and release animals. They later infer the size of the population based on the frequency with which they recapture already tagged animals. It sounds like your problem is mathematically equivalent to theirs. – Emil Friedman Feb 25 '14 at 19:17
For the equal probability/frequency case, this approach may work for you.
Let $K$ be the total sample size, $N$ be the number of different items observed, $N_1$ be the number of items seen exactly once, $N_2$ be the number of items seen exactly twice, $A=N_1(1− {N_1 \over K} )+2N_2,$ and $\hat Q = {N_1 \over K}.$
Then an approximate 95% confidence interval on the total population size $n$ is given by
$$\hat n_{Lower}={1 \over {1-\hat Q+{1.96 \sqrt{A} \over K} }}$$
$$\hat n_{Upper}={1 \over {1-\hat Q-{1.96 \sqrt{A} \over K} }}$$
When implementing, you may need to adjust these depending on your data.
The method is due to Good and Turing. A reference with the confidence interval is Esty, Warren W. (1983), "A Normal Limit Law for a Nonparametric Estimator of the Coverage of a Random Sample", Ann. Statist., Volume 11, Number 3, 905-912.
For the more general problem, Bunge has produced free software that produces several estimates. Search with his name and the word CatchAll.
• I took the liberty of adding the Esty reference. Please double check it's the one you meant – Glen_b Mar 1 '16 at 0:03
• Is it possible @soakley to get bounds (probably less precise bounds) if you only know $K$ (sample size), and $N$ (number of unique items seen)? i.e. we don't have information about $N_1$ and $N_2$. – Basj Nov 7 '17 at 0:45
• I don't know of a way to do it with just $K$ and $N.$ – soakley Nov 7 '17 at 18:35
### Likelihood function and probability
In an answer to a question about the reverse birthday problem a solution for a likelihood function has been given by Cody Maughan.
The likelihood function for the number of fortune cooky types $$m$$ when we draw $$k$$ different fortune cookies in $$n$$ draws (where every fortune cookie type has equal probability of appearing in a draw) can be expressed as:
$$\begin{array}{} \mathcal{L}(m \, \vert \, k,n ) = m^{-n} \frac{m!}{(m-k)!} \propto P(k \, \vert \, m,n) &=& m^{-n}\frac{m!}{(m-k)!} \cdot \underbrace{S(n,k)}_{\begin{subarray}{l}\text{Stirling number }\\ \text{of the 2nd kind}\end{subarray}}\\ &=& m^{-n}\frac{m!}{(m-k)!} \cdot \frac{1}{k!} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{(k-i)^n} \\ &=& {{m}\choose{k}} \sum_{i=0}^k {(-1)^{i}{k \choose i}}{\left(\frac{k-i}{m}\right)^n} \end{array}$$
For a derivation of the probability on the right hand side see the the occupancy problem. This has been described before on this website by Ben. The expression is similar to the one in the answer by Sylvain.
### Maximum likelihood estimate
We can compute first order and second order approximations of the maximum of the likelihood function at
$$m_1 \approx \frac{ {{n}\choose{2}}}{n-k}$$
$$m_2 \approx \frac{ {{n}\choose{2}} + \sqrt{{{n}\choose{2}}^2 - 4(n-k) {{n}\choose{3}}}}{2(n-k)}$$
### Likelihood interval
(note, this is not the same as a confidence interval see: The basic logic of constructing a confidence interval)
This remains an open problem for me. I am not sure yet how to deal with the expression $$m^{-n} \frac{m!}{(m-k)!}$$ (of course one can compute all values and select the boundaries based on that, but it would be more nice to have some explicit exact formula or estimate). I can not seem to relate it to any other distribution which would greatly help to evaluate it. But I feel like a nice (simple) expression could be possible from this likelihood interval approach.
### Confidence interval
For the confidence interval we can use a normal approximation. In Ben's answer the following mean and variance are given:
$$\mathbb{E}[K] = m \left(1-\left(1 - \frac{1}{m}\right)^n\right)$$ $$\mathbb{V}[K] = m \left(\left(m-1\right)\left(1-\frac{2}{m}\right)^n + \left(1 - \frac{1}{m}\right)^n - m \left(1 - \frac{1}{m}\right)^{2n} \right)$$
Say for a given sample $$n=200$$ and observed unique cookies $$k$$ the 95% boundaries $$\mathbb{E}[K] \pm 1.96 \sqrt{\mathbb{V}[K]}$$ look like:
In the image above the curves for the interval have been drawn by expressing the lines as a function of the population size $$m$$ and sample size $$n$$ (so the x-axis is the dependent variable in drawing these curves).
The difficulty is to inverse this and obtain the interval values for a given observed value $$k$$. It can be done computationally, but possibly there might be some more direct function.
In the image I have also added Clopper Pearson confidence intervals based on a direct computation of the cumulative distribution based on all the probabilities $$P(k \, \vert \, m,n)$$ (I did this in R where I needed to use the Strlng2 function from the CryptRndTest package which is an asymptotic approximation of the logarithm of the Stirling number of the second kind). You can see that the boundaries coincide reasonably well, so the normal approximation is performing well in this case.
# function to compute Probability
library("CryptRndTest")
P5 <- function(m,n,k) {
exp(-n*log(m)+lfactorial(m)-lfactorial(m-k)+Strlng2(n,k))
}
P5 <- Vectorize(P5)
# function for expected value
m4 <- function(m,n) {
m*(1-(1-1/m)^n)
}
# function for variance
v4 <- function(m,n) {
m*((m-1)*(1-2/m)^n+(1-1/m)^n-m*(1-1/m)^(2*n))
}
# compute 95% boundaries based on Pearson Clopper intervals
# first a distribution is computed
# then the 2.5% and 97.5% boundaries of the cumulative values are located
simDist <- function(m,n,p=0.05) {
k <- 1:min(n,m)
dist <- P5(m,n,k)
dist[is.na(dist)] <- 0
dist[dist == Inf] <- 0
c(max(which(cumsum(dist)<p/2))+1,
min(which(cumsum(dist)>1-p/2))-1)
}
# some values for the example
n <- 200
m <- 1:5000
k <- 1:n
# compute the Pearon Clopper intervals
res <- sapply(m, FUN = function(x) {simDist(x,n)})
# plot the maximum likelihood estimate
plot(m4(m,n),m,
log="", ylab="estimated population size m", xlab = "observed uniques k",
xlim =c(1,200),ylim =c(1,5000),
pch=21,col=1,bg=1,cex=0.7, type = "l", yaxt = "n")
axis(2, at = c(0,2500,5000))
# add lines for confidence intervals based on normal approximation
lines(m4(m,n)+1.96*sqrt(v4(m,n)),m, lty=2)
lines(m4(m,n)-1.96*sqrt(v4(m,n)),m, lty=2)
# add lines for conficence intervals based on Clopper Pearson
lines(res[1,],m,col=3,lty=2)
lines(res[2,],m,col=3,lty=2)
I do not know if it can help but it is the problem of taking $k$ different balls during $n$ trials in an urn with $m$ balls labelled differently with replacement. According to this page (in french) if $X_n$ if the random variable counting the number of different balls the probability function is given by: $P(X_n = k) = {m \choose k} \sum_{i=0}^k {(-1)^{k-i}{k \choose i}}{(\frac{i}{m})^n}$
|
2020-08-14 03:22:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6646915674209595, "perplexity": 614.3823656992149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739134.49/warc/CC-MAIN-20200814011517-20200814041517-00413.warc.gz"}
|
https://en.wikiversity.org/wiki/Pressure_field_tensor
|
# Pressure field tensor
The pressure field tensor is an antisymmetric tensor describing the pressure field and consisting of six components. Tensor components are at the same time components of the two three-dimensional vectors – pressure field strength and the solenoidal pressure vector. With the pressure field tensor the pressure stress-energy tensor, the pressure field equations and pressure force in matter are defined. Pressure field is a component of general field.
## Definition
Expression for the pressure field tensor can be found in papers by Sergey Fedosin, [1] where the tensor is defined using 4-curl:
${\displaystyle f_{\mu \nu }=\nabla _{\mu }\pi _{\nu }-\nabla _{\nu }\pi _{\mu }={\frac {\partial \pi _{\nu }}{\partial x^{\mu }}}-{\frac {\partial \pi _{\mu }}{\partial x^{\nu }}}.\qquad \qquad (1)}$
Here pressure 4-potential ${\displaystyle ~\pi _{\mu }}$ is given by:
${\displaystyle ~\pi _{\mu }=\left({\frac {\wp }{c}},-\mathbf {\Pi } \right),}$
where ${\displaystyle ~\wp }$ is the scalar potential, ${\displaystyle ~\mathbf {\Pi } }$ is the vector potential of pressure field, ${\displaystyle ~c}$ – speed of light.
## Expression for the components
The pressure field strength and the solenoidal pressure vector are found with the help of (1):
${\displaystyle ~C_{i}=c(\partial _{0}\pi _{i}-\partial _{i}\pi _{0}),}$
${\displaystyle ~I_{k}=\partial _{i}\pi _{j}-\partial _{j}\pi _{i},}$
and the same in vector notation:
${\displaystyle ~\mathbf {C} =-\nabla \wp -{\frac {\partial \mathbf {\Pi } }{\partial t}},}$
${\displaystyle ~\mathbf {I} =\nabla \times \mathbf {\Pi } .}$
The pressure field tensor consists of the components of these vectors:
${\displaystyle ~f_{\mu \nu }={\begin{vmatrix}0&{\frac {C_{x}}{c}}&{\frac {C_{y}}{c}}&{\frac {C_{z}}{c}}\\-{\frac {C_{x}}{c}}&0&-I_{z}&I_{y}\\-{\frac {C_{y}}{c}}&I_{z}&0&-I_{x}\\-{\frac {C_{z}}{c}}&-I_{y}&I_{x}&0\end{vmatrix}}.}$
The transition to the pressure field tensor with contravariant indices is carried out by multiplying by double metric tensor:
${\displaystyle ~f^{\alpha \beta }=g^{\alpha \nu }g^{\mu \beta }f_{\mu \nu }.}$
In the special relativity, this tensor has the form:
${\displaystyle ~f^{\alpha \beta }={\begin{vmatrix}0&-{\frac {C_{x}}{c}}&-{\frac {C_{y}}{c}}&-{\frac {C_{z}}{c}}\\{\frac {C_{x}}{c}}&0&-I_{z}&I_{y}\\{\frac {C_{y}}{c}}&I_{z}&0&-I_{x}\\{\frac {C_{z}}{c}}&-I_{y}&I_{x}&0\end{vmatrix}}.}$
To convert the components of the pressure field tensor from one inertial system to another we must take into account the transformation rule for tensors. If the reference frame K' moves with an arbitrary constant velocity ${\displaystyle ~\mathbf {V} }$ with respect to the fixed reference system K, and the axes of the coordinate systems parallel to each other, the pressure field strength and the solenoidal pressure vector are converted as follows:
${\displaystyle \mathbf {C} ^{\prime }={\frac {\mathbf {V} }{V^{2}}}(\mathbf {V} \cdot \mathbf {C} )+{\frac {1}{\sqrt {1-{V^{2} \over c^{2}}}}}\left(\mathbf {C} -{\frac {\mathbf {V} }{V^{2}}}(\mathbf {V} \cdot \mathbf {C} )+[\mathbf {V} \times \mathbf {I} ]\right),}$
${\displaystyle \mathbf {I} ^{\prime }={\frac {\mathbf {V} }{V^{2}}}(\mathbf {V} \cdot \mathbf {I} )+{\frac {1}{\sqrt {1-{V^{2} \over c^{2}}}}}\left(\mathbf {I} -{\frac {\mathbf {V} }{V^{2}}}(\mathbf {V} \cdot \mathbf {I} )-{\frac {1}{c^{2}}}[\mathbf {V} \times \mathbf {C} ]\right).}$
## Properties of tensor
• ${\displaystyle ~f_{\mu \nu }}$ is the antisymmetric tensor of rank 2, it follows from this condition ${\displaystyle ~f_{\mu \nu }=-f_{\nu \mu }}$. Three of the six independent components of the pressure field tensor associated with the components of the pressure field strength ${\displaystyle ~\mathbf {C} }$, and the other three – with the components of the solenoidal pressure vector ${\displaystyle ~\mathbf {I} }$. Due to the antisymmetry such invariant as the contraction of the tensor with the metric tensor vanishes: ${\displaystyle ~g^{\mu \nu }f_{\mu \nu }=f_{\mu }^{\mu }=0}$.
• Contraction of tensor with itself ${\displaystyle f_{\mu \nu }f^{\mu \nu }}$ is an invariant, and the contraction of tensor product with Levi-Civita symbol as ${\displaystyle {\frac {1}{4}}\varepsilon ^{\mu \nu \sigma \rho }f_{\mu \nu }f_{\sigma \rho }}$ is the pseudoscalar invariant. These invariants in the special relativity can be expressed as follows:
${\displaystyle f_{\mu \nu }f^{\mu \nu }=-{\frac {2}{c^{2}}}(C^{2}-c^{2}I^{2})=inv,}$
${\displaystyle {\frac {1}{4}}\varepsilon ^{\mu \nu \sigma \rho }f_{\mu \nu }f_{\sigma \rho }=-{\frac {2}{c}}\left(\mathbf {C} \cdot \mathbf {I} \right)=inv.}$
• Determinant of the tensor is also Lorentz invariant:
${\displaystyle \det \left(f_{\mu \nu }\right)={\frac {4}{c^{2}}}\left(\mathbf {C} \cdot \mathbf {I} \right)^{2}.}$
## Pressure field
Through the pressure field tensor the equations of pressure field are written:
${\displaystyle \nabla _{\sigma }f_{\mu \nu }+\nabla _{\mu }f_{\nu \sigma }+\nabla _{\nu }f_{\sigma \mu }={\frac {\partial f_{\mu \nu }}{\partial x^{\sigma }}}+{\frac {\partial f_{\nu \sigma }}{\partial x^{\mu }}}+{\frac {\partial f_{\sigma \mu }}{\partial x^{\nu }}}=0.\qquad \qquad (2)}$
${\displaystyle ~\nabla _{\nu }f^{\mu \nu }=-{\frac {4\pi \sigma }{c^{2}}}J^{\mu },\qquad \qquad (3)}$
where ${\displaystyle J^{\mu }=\rho _{0}u^{\mu }}$ is the mass 4-current, ${\displaystyle \rho _{0}}$ is the mass density in comoving reference frame, ${\displaystyle u^{\mu }}$ is the 4-velocity, ${\displaystyle ~\sigma }$ is a constant.
Instead of (2) it is possible use the expression:
${\displaystyle ~\varepsilon ^{\mu \nu \sigma \rho }{\frac {\partial f_{\mu \nu }}{\partial x^{\sigma }}}=0.}$
Equation (2) is satisfied identically, which is proved by substituting into it the definition for the pressure field tensor according to (1). If in (2) we insert tensor components ${\displaystyle f_{\mu \nu }}$, this leads to two vector equations:
${\displaystyle ~\nabla \times \mathbf {C} =-{\frac {\partial \mathbf {I} }{\partial t}},\qquad \qquad (4)}$
${\displaystyle ~\nabla \cdot \mathbf {I} =0.\qquad \qquad (5)}$
According to (5), the solenoidal pressure vector has no sources as its divergence vanishes. From (4) follows that the time variation of the solenoidal pressure vector leads to a curl of the pressure field strength.
Equation (3) relates the pressure field to its source in the form of mass 4-current. In Minkowski space of special relativity the form of the equation is simplified and becomes:
${\displaystyle ~\nabla \cdot \mathbf {C} =4\pi \sigma \rho ,}$
${\displaystyle ~\nabla \times \mathbf {I} ={\frac {1}{c^{2}}}\left(4\pi \sigma \mathbf {J} +{\frac {\partial \mathbf {C} }{\partial t}}\right),}$
where ${\displaystyle ~\rho }$ is the density of moving mass, ${\displaystyle ~\mathbf {J} }$ is the density of mass current.
According to the first of these equations, the pressure field strength is generated by the mass density, and according to the second equation the mass current or change in time of the pressure field strength generate the circular field of the solenoidal pressure vector.
From (3) and (1) it can be obtained:[1]
${\displaystyle ~R_{\mu \alpha }f^{\mu \alpha }={\frac {4\pi \sigma }{c^{2}}}\nabla _{\alpha }J^{\alpha }.}$
The continuity equation for the mass 4-current ${\displaystyle ~\nabla _{\alpha }J^{\alpha }=0}$ is a gauge condition that is used to derive the field equation (3) from the principle of least action. Therefore, the contraction of the acceleration tensor and the Ricci tensor must be zero: ${\displaystyle ~R_{\mu \alpha }f^{\mu \alpha }=0}$. In Minkowski space the Ricci tensor ${\displaystyle ~R_{\mu \alpha }}$ equal to zero, the covariant derivative becomes the partial derivative, and the continuity equation becomes as follows:
${\displaystyle ~\partial _{\alpha }J^{\alpha }={\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.}$
## Covariant theory of gravitation
### Action and Lagrangian
Total Lagrangian for the matter in gravitational and electromagnetic fields includes the pressure field tensor and is contained in the action function: [1]
${\displaystyle ~S=\int {Ldt}=\int (kR-2k\Lambda -{\frac {1}{c}}D_{\mu }J^{\mu }+{\frac {c}{16\pi G}}\Phi _{\mu \nu }\Phi ^{\mu \nu }-{\frac {1}{c}}A_{\mu }j^{\mu }-{\frac {c\varepsilon _{0}}{4}}F_{\mu \nu }F^{\mu \nu }-}$
${\displaystyle ~-{\frac {1}{c}}U_{\mu }J^{\mu }-{\frac {c}{16\pi \eta }}u_{\mu \nu }u^{\mu \nu }-{\frac {1}{c}}\pi _{\mu }J^{\mu }-{\frac {c}{16\pi \sigma }}f_{\mu \nu }f^{\mu \nu }){\sqrt {-g}}d\Sigma ,}$
where ${\displaystyle ~L}$ is Lagrangian, ${\displaystyle ~dt}$ is differential of coordinate time, ${\displaystyle ~k}$ is a certain coefficient, ${\displaystyle ~R}$ is the scalar curvature, ${\displaystyle ~\Lambda }$ is the cosmological constant, which is a function of the system, ${\displaystyle ~c}$ is the speed of light as a measure of the propagation speed of electromagnetic and gravitational interactions, ${\displaystyle ~D_{\mu }}$ is the gravitational four-potential, ${\displaystyle ~G}$ is the gravitational constant, ${\displaystyle ~\Phi _{\mu \nu }}$ is the gravitational tensor, ${\displaystyle ~A_{\mu }}$ is the electromagnetic 4-potential, ${\displaystyle ~j^{\mu }}$ is the electromagnetic 4-current, ${\displaystyle ~\varepsilon _{0}}$ is the electric constant, ${\displaystyle ~F_{\mu \nu }}$ is the electromagnetic tensor, ${\displaystyle ~U_{\mu }}$ is the 4-potential of acceleration field, ${\displaystyle ~\eta }$ and ${\displaystyle ~\sigma }$ are the constants of acceleration field and pressure field, respectively, ${\displaystyle ~u_{\mu \nu }}$ is the acceleration tensor, ${\displaystyle ~\pi _{\mu }}$ is the 4-potential of pressure field, ${\displaystyle ~f_{\mu \nu }}$ is pressure field tensor, ${\displaystyle ~{\sqrt {-g}}d\Sigma ={\sqrt {-g}}cdtdx^{1}dx^{2}dx^{3}}$ is the invariant 4-volume, ${\displaystyle ~{\sqrt {-g}}}$ is the square root of the determinant ${\displaystyle ~g}$ of metric tensor, taken with a negative sign, ${\displaystyle ~dx^{1}dx^{2}dx^{3}}$ is the product of differentials of the spatial coordinates.
The variation of the action function by 4-coordinates leads to the equation of motion of the matter unit in gravitational and electromagnetic fields and pressure field: [2]
${\displaystyle ~-u_{\beta \sigma }\rho _{0}u^{\sigma }=\rho _{0}{\frac {dU_{\beta }}{d\tau }}-\rho _{0}u^{\sigma }\partial _{\beta }U_{\sigma }=\Phi _{\beta \sigma }\rho _{0}u^{\sigma }+F_{\beta \sigma }\rho _{0q}u^{\sigma }+f_{\beta \sigma }\rho _{0}u^{\sigma },}$
where the first term on the right is the gravitational force density, expressed with the help of the gravitational field tensor, second term is the Lorentz electromagnetic force density for the charge density ${\displaystyle ~\rho _{0q}}$ measured in the comoving reference frame, and the last term sets the pressure force density.
If we vary the action function by the pressure 4-potential, we obtain the equation of pressure field (3).
### Pressure stress-energy tensor
With the help of pressure field tensor in the covariant theory of gravitation the pressure stress-energy tensor is constructed:
${\displaystyle ~P^{ik}={\frac {c^{2}}{4\pi \sigma }}\left(-g^{im}f_{nm}f^{nk}+{\frac {1}{4}}g^{ik}f_{mr}f^{mr}\right)}$.
The covariant derivative of the pressure stress-energy tensor determines the pressure four-force density:
${\displaystyle ~f^{\alpha }=-\nabla _{\beta }P^{\alpha \beta }={f^{\alpha }}_{k}J^{k}.}$
### Generalized velocity and Hamiltonian
Covariant 4-vector of generalized velocity is given by:
${\displaystyle ~s_{\mu }=U_{\mu }+D_{\mu }+{\frac {\rho _{0q}}{\rho _{0}}}A_{\mu }+\pi _{\mu }.}$
Given the generalized 4-velocity the Hamiltonian contains the pressure field tensor and has the form:
${\displaystyle ~H=\int {(s_{0}J^{0}-{\frac {c^{2}}{16\pi G}}\Phi _{\mu \nu }\Phi ^{\mu \nu }+{\frac {c^{2}\varepsilon _{0}}{4}}F_{\mu \nu }F^{\mu \nu }+{\frac {c^{2}}{16\pi \eta }}u_{\mu \nu }u^{\mu \nu }+{\frac {c^{2}}{16\pi \sigma }}f_{\mu \nu }f^{\mu \nu }){\sqrt {-g}}dx^{1}dx^{2}dx^{3}},}$
where ${\displaystyle ~s_{0}}$ and ${\displaystyle ~J^{0}}$ are timelike components of 4-vectors ${\displaystyle ~s_{\mu }}$ and ${\displaystyle ~J^{\mu }}$.
In the reference frame that is fixed relative to the center of mass of system, Hamiltonian will determine the invariant energy of the system.
|
2019-10-18 18:30:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 80, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951032280921936, "perplexity": 283.4908285810764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00105.warc.gz"}
|
https://en.m.wikipedia.org/wiki/Selection_theorem
|
# Selection theorem
In functional analysis, a branch of mathematics, a selection theorem is a theorem that guarantees the existence of a single-valued selection function from a given multi-valued map. There are various selection theorems, and they are important in the theories of differential inclusions, optimal control, and mathematical economics.[1]
## Preliminaries
Given two sets X and Y, let F be a multivalued map from X and Y. Equivalently, ${\displaystyle F:X\rightarrow {\mathcal {P}}(Y)}$ is a function from X to the power set of Y.
A function ${\displaystyle f:X\rightarrow Y}$ is said to be a selection of F if
${\displaystyle \forall x\in X:\,\,\,f(x)\in F(x)\,.}$
In other words, given an input x for which the original function F returns multiple values, the new function f returns a single value. This is a special case of a choice function.
The axiom of choice implies that a selection function always exists; however, it is often important that the selection have some "nice" properties, such as continuity or measurability. This is where the selection theorems come into action: they guarantee that, if F satisfies certain properties, then it has a selection f that is continuous or has other desirable properties.
## Selection theorems for set-valued functions
The Michael selection theorem[2] says that the following conditions are sufficient for the existence of a continuous selection:
The Deutsch–Kenderov theorem[3] generalizes Michael's theorem as follows:
• X is a paracompact space;
• Y is a normed vector space;
• F is almost lower hemicontinuous, that is, at each ${\displaystyle x\in X}$ , for each neighborhood ${\displaystyle V}$ of ${\displaystyle 0}$ there exists a neighborhood ${\displaystyle U}$ of ${\displaystyle x}$ such that ${\textstyle \bigcap _{u\in U}\{F(u)+V\}\neq \emptyset }$ ;
• for all x in X, the set F(x) is nonempty and convex.
These conditions guarantee that ${\displaystyle F}$ has a continuous approximate selection, that is, for each neighborhood ${\displaystyle V}$ of ${\displaystyle 0}$ in ${\displaystyle Y}$ there is a continuous function ${\displaystyle f\colon X\mapsto Y}$ such that for each ${\displaystyle x\in X}$ , ${\displaystyle f(x)\in F(X)+V}$ .[3]
In a later note, Xu proved that the Deutsch–Kenderov theorem is also valid if ${\displaystyle Y}$ is a locally convex topological vector space.[4]
The Yannelis-Prabhakar selection theorem[5] says that the following conditions are sufficient for the existence of a continuous selection:
The Kuratowski and Ryll-Nardzewski measurable selection theorem says that if X is a Polish space and ${\displaystyle {\mathcal {B}}}$ its Borel σ-algebra, ${\displaystyle \mathrm {Cl} (X)}$ is the set of nonempty closed subsets of X, ${\displaystyle (\Omega ,{\mathcal {F}})}$ is a measurable space, and ${\displaystyle F:\Omega \to \mathrm {Cl} (X)}$ is an ${\displaystyle {\mathcal {F}}}$ -weakly measurable map (that is, for every open subset ${\displaystyle U\subseteq X}$ we have ${\displaystyle \{\omega \in \Omega :F(\omega )\cap U\neq \emptyset \}\in {\mathcal {F}}}$ ), then ${\displaystyle F}$ has a selection that is ${\displaystyle ({\mathcal {F}},{\mathcal {B}})}$ -measurable.[6]
Other selection theorems for set-valued functions include:
## References
1. ^ Border, Kim C. (1989). Fixed Point Theorems with Applications to Economics and Game Theory. Cambridge University Press. ISBN 0-521-26564-9.
2. ^ Michael, Ernest (1956). "Continuous selections. I". Annals of Mathematics. Second Series. 63 (2): 361–382. doi:10.2307/1969615. hdl:10338.dmlcz/119700. JSTOR 1969615. MR 0077107.
3. ^ a b Deutsch, Frank; Kenderov, Petar (January 1983). "Continuous Selections and Approximate Selection for Set-Valued Mappings and Applications to Metric Projections". SIAM Journal on Mathematical Analysis. 14 (1): 185–194. doi:10.1137/0514015.
4. ^ Xu, Yuguang (December 2001). "A Note on a Continuous Approximate Selection Theorem". Journal of Approximation Theory. 113 (2): 324–325. doi:10.1006/jath.2001.3622.
5. ^ Yannelis, Nicholas C.; Prabhakar, N. D. (1983-12-01). "Existence of maximal elements and equilibria in linear topological spaces". Journal of Mathematical Economics. 12 (3): 233–245. doi:10.1016/0304-4068(83)90041-1. ISSN 0304-4068.
6. ^ V. I. Bogachev, "Measure Theory" Volume II, page 36.
|
2022-08-10 17:37:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 26, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9135584235191345, "perplexity": 1059.85722597052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00770.warc.gz"}
|
https://www.statistics-lab.com/%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99%E6%8A%BD%E6%A0%B7%E8%B0%83%E6%9F%A5%E4%BD%9C%E4%B8%9A%E4%BB%A3%E5%86%99sampling-theory-of-survey%E4%BB%A3%E8%80%83estimation-in-finite-populations/
|
### 统计代写|抽样调查作业代写sampling theory of survey代考|Estimation in Finite Populations
statistics-lab™ 为您的留学生涯保驾护航 在代写抽样调查sampling theory of survey方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写抽样调查sampling theory of survey方面经验极为丰富,各种代写抽样调查sampling theory of survey相关的作业也就用不着说。
• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础
## 统计代写|抽样调查作业代写sampling theory of survey代考|A Unified Theory
Suppose it is considered important to gather ideas about, for example, (1) the total quantity of food grains stocked in all the godowns managed by a state government, (2) the total number of patients admitted in all the hospitals of a country classified by varieties of their complaints, (3) the amount of income tax evaded on an average by the income earners of a city. Now, to inspect all godowns, examine all admission documents of all hospitals of a country, and make inquiries about all income earners of a city will be too expensive and time consuming. So it seems natural to select a few godowns, hospitals, and income earners, to get all relevant data for them and to be able to draw conclusions on those quantities that could be ascertained exactly only by a survey of all godowns, hospitals, and income earners. We feel it is useful to formulate mathematically as follows the essentials of the issues at hand common to the above and similar circumstances.
## 统计代写|抽样调查作业代写sampling theory of survey代考|ELEMENTARY DEFINITIONS
Let $N$ be a known number of units, e.g., godowns, hospitals, or income earners, each assignable identifying labels $1,2, \ldots, N$ and bearing values, respectively, $Y_{1}, Y_{2}, \ldots, Y_{N}$ of a realvalued variable $y$, which are initially unknown to an investigator who intends to estimate the total
where $f_{s i}$ denotes the frequency of $i$ in $s$ such that
$$\sum_{i=1}^{N} f_{s i}=n(s) .$$
$N \bar{y}$ is called the expansion estimator for $Y$.
More generally, an estimator $t$ of the form
$$t(s, Y)=b_{s}+\sum_{i=1}^{N} b_{s i} Y_{i}$$
with $b_{s i}=0$ for $i \notin s$ is called linear (L). Here $b_{s}$ and $b_{s i}$ are free of $Y$. Keeping $b_{s}=0$ we obtain a homogeneous linear (HL) estimator.
We must emphasize that here $t(s, Y)$ is linear (or homogeneous linear) in $Y_{i}, i \in s$. It may be a nonlinear function of two random variables, e.g., when $b_{s}=0$ and $b_{s i}=X / \Sigma_{1}^{N} f_{s i} X_{i}$ so that
$$t(s, Y)=\frac{\sum_{1}^{N} f_{s i} Y_{i}}{\sum_{1}^{N} f_{s i} X_{i}} X .$$
## 统计代写|抽样调查作业代写sampling theory of survey代考|DESIGN-BASED INFERENCE
Let $\Sigma_{1}$ be the sum over samples for which $|t(s, Y)-Y| \geq k>0$ and let $\Sigma_{2}$ be the sum over samples for which $|t(s, Y)-Y|<k$ for a fixed $Y$. Then from
\begin{aligned} M_{p}(t) &=\Sigma_{1} p(s)(t-Y)^{2}+\Sigma_{2} p(s)(t-Y)^{2} \ & \geq k^{2} \operatorname{Prob}[|t(s, Y)-Y| \geq k] \end{aligned}
one derives the Chebyshev inequality:
$$\operatorname{Prob}[|t(s, Y)-Y| \geq k] \leq \frac{M_{p}(t)}{k^{2}} .$$
Hence
$\operatorname{Prob}[t-k \leq Y \leq t+k] \geq 1-\frac{M_{p}(t)}{k^{2}}=1-\frac{1}{k^{2}}\left[V_{p}(t)+B_{p}^{2}(t)\right]$ where $B_{p}(t)=E_{p}(t)-Y$ is the bias of $t$. Writing $\sigma_{p}(t)=$ $\sqrt{V_{p}(t)}$ for the standard error of $t$ and taking $k=3 \sigma_{p}(t)$, it follows that, whatever $Y$ may be, the random interval $t \pm 3 \sigma_{p}(t)$
covers the unknown $Y$ with a probability not less than
$$\frac{8}{9}-\frac{1}{9} \frac{B_{p}^{2}(t)}{V_{p}(t)}$$
So, to keep this probability high and the length of this covering interval small it is desirable that both $\left|B_{p}(t)\right|$ and $\sigma_{p}(t)$ be small, leading to a small $M_{p}(t)$ as well.
## 统计代写|抽样调查作业代写sampling theory of survey代考|ELEMENTARY DEFINITIONS
∑一世=1ñFs一世=n(s).
ñ是¯被称为扩展估计器是.
$$t(s, Y ) =b_{s}+\sum_{i=1}^{N} b_{si} Y_{i}$$
t(s, Y )=\frac{\sum_{1}^{N} f_{si} Y_{i}}{\sum_{1}^{N} f_{si} X_{i}} X 。
89−19乙p2(吨)在p(吨)
## 广义线性模型代考
statistics-lab作为专业的留学生服务机构,多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务,包括但不限于Essay代写,Assignment代写,Dissertation代写,Report代写,小组作业代写,Proposal代写,Paper代写,Presentation代写,计算机作业代写,论文修改和润色,网课代做,exam代考等等。写作范围涵盖高中,本科,研究生等海外留学全阶段,辐射金融,经济学,会计学,审计学,管理学等全球99%专业科目。写作团队既有专业英语母语作者,也有海外名校硕博留学生,每位写作老师都拥有过硬的语言能力,专业的学科背景和学术写作经验。我们承诺100%原创,100%专业,100%准时,100%满意。
## MATLAB代写
MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
|
2023-03-26 19:17:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7180948257446289, "perplexity": 2321.3188016019826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00097.warc.gz"}
|
https://easystats.github.io/performance/reference/binned_residuals.html
|
Check model quality of binomial logistic regression models.
## Usage
binned_residuals(model, term = NULL, n_bins = NULL, ...)
## Arguments
model
A glm-object with binomial-family.
term
Name of independent variable from x. If not NULL, average residuals for the categories of term are plotted; else, average residuals for the estimated probabilities of the response are plotted.
n_bins
Numeric, the number of bins to divide the data. If n_bins = NULL, the square root of the number of observations is taken.
...
Currently not used.
## Value
A data frame representing the data that is mapped in the accompanying plot. In case all residuals are inside the error bounds, points are black. If some of the residuals are outside the error bounds (indicated by the grey-shaded area), blue points indicate residuals that are OK, while red points indicate model under- or over-fitting for the relevant range of estimated probabilities.
## Details
Binned residual plots are achieved by “dividing the data into categories (bins) based on their fitted values, and then plotting the average residual versus the average fitted value for each bin.” (Gelman, Hill 2007: 97). If the model were true, one would expect about 95% of the residuals to fall inside the error bounds.
If term is not NULL, one can compare the residuals in relation to a specific model predictor. This may be helpful to check if a term would fit better when transformed, e.g. a rising and falling pattern of residuals along the x-axis is a signal to consider taking the logarithm of the predictor (cf. Gelman and Hill 2007, pp. 97-98).
## Note
binned_residuals() returns a data frame, however, the print() method only returns a short summary of the result. The data frame itself is used for plotting. The plot() method, in turn, creates a ggplot-object.
## References
Gelman, A., and Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge; New York: Cambridge University Press.
## Examples
model <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
result <- binned_residuals(model)
result
#> Warning: Probably bad model fit. Only about 50% of the residuals are inside the error bounds.
#>
# look at the data frame
as.data.frame(result)
#> xbar ybar n x.lo x.hi se ci_range
#> 1 0.03786483 -0.03786483 5 0.01744776 0.06917366 0.01899089 0.00968941
#> 2 0.09514191 -0.09514191 5 0.07087498 0.15160143 0.02816391 0.01436960
#> 3 0.25910531 0.07422802 6 0.17159955 0.35374001 0.42499664 0.21683901
#> 4 0.47954643 -0.07954643 5 0.38363314 0.54063600 0.49728294 0.25372045
#> 5 0.71108931 0.28891069 5 0.57299903 0.89141359 0.10975381 0.05599787
#> 6 0.97119262 -0.13785929 6 0.91147360 0.99815623 0.30361062 0.15490623
#> CI_low CI_high group
#> 1 -0.05685572 -0.01887394 no
#> 2 -0.12330581 -0.06697800 no
#> 3 -0.35076862 0.49922466 yes
#> 4 -0.57682937 0.41773650 yes
#> 5 0.17915688 0.39866451 no
#> 6 -0.44146992 0.16575133 yes
# plot
if (require("see")) {
plot(result)
}
#> Warning: Computation failed in stat_smooth()
#> Caused by error in smooth.construct.tp.smooth.spec():
|
2023-03-27 08:00:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3408390283584595, "perplexity": 3100.961891308919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00770.warc.gz"}
|
https://www.quatomic.com/composer/reference/control/for-each-control-value/
|
# For Each Control Value
## Description
The For Each Control Value node takes a control path and runs a time loop over the control values contained within the the range of the discretized control path.
## Input
The node has the following input:
• Control: This input defines a control path defined in the Control node.
## Content
This node is a time loop which displays the time based on the time dimensions and timestep defined in the Time node. For every value of $t$, the node outputs the control value contained within the the range of the discretized control path.
## Output
• Time (t): The scalar values of the time axis
• Control value (u1): The control value corresponding to time $t$
## Example
In the example below, a harmonic oscillator is being controlled by the path designed in the Control node. The For Each Control Value node inputs the control path, initial and target states and the values of the time and space dimension. The Potential node inputs the control values at every time $t$ as the time loop runs. After time evolution, the overlap between the target state and the time-evolved state is calculated in the Fidelity node.
|
2020-06-01 19:43:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6857532858848572, "perplexity": 1042.3011764903783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419593.76/warc/CC-MAIN-20200601180335-20200601210335-00119.warc.gz"}
|
https://jira.lsstcorp.org/browse/DM-5132?actionOrder=desc
|
# obs_subaru install with eups distrib fails
XMLWordPrintable
## Details
• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Story Points:
1
• Sprint:
Science Pipelines DM-W16-6
• Team:
Data Release Production
Thus:
$eups distrib install -t w_2016_06 obs_subaru ... [ 52/52 ] obs_subaru 5.0.0.1-60-ge4efae7+2 ... ***** error: from /Users/jds/Projects/Astronomy/LSST/stack/EupsBuildDir/DarwinX86/obs_subaru-5.0.0.1-60-ge4efae7+2/build.log: ---------------------------------------------------------------------- Traceback (most recent call last): File "tests/hscRepository.py", line 91, in setUp self.repoPath = createDataRepository("lsst.obs.hsc.HscMapper", rawPath) File "tests/hscRepository.py", line 63, in createDataRepository check_call([ingest_cmd, repoPath] + glob(os.path.join(inputPath, "*.fits.gz"))) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 540, in check_call raise CalledProcessError(retcode, cmd) CalledProcessError: Command '['/Users/jds/Projects/Astronomy/LSST/stack/EupsBuildDir/DarwinX86/obs_subaru-5.0.0.1-60-ge4efae7+2/obs_subaru-5.0.0.1-60-ge4efae7+2/bin/hscIngestImages.py', '/var/folders/jp/lqz3n0m17nqft7bwtw3b8n380000gp/T/tmptUSKuf', '/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/testdata_subaru/master-gf9ba9abdbe/hsc/raw/HSCA90402512.fits.gz']' returned non-zero exit status 1 ---------------------------------------------------------------------- Ran 8 tests in 9.928s FAILED (errors=7) The following tests failed: /Users/jds/Projects/Astronomy/LSST/stack/EupsBuildDir/DarwinX86/obs_subaru-5.0.0.1-60-ge4efae7+2/obs_subaru-5.0.0.1-60-ge4efae7+2/tests/.tests/hscRepository.py.failed 1 tests failed scons: *** [checkTestStatus] Error 1 scons: building terminated because of errors. + exit -4 Please fix it. ## Attachments ## Issue Links ## Activity Hide John Swinbank added a comment - $ eups distrib install -t fe_test4 obs_subaru [ 1/51 ] cfitsio 3360.lsst4 (already installed) done. ... [ 51/51 ] obs_subaru 5.0.0.1-63-gfd8740e+1 done.
Thanks, everybody!
Show
John Swinbank added a comment - $eups distrib install -t fe_test4 obs_subaru [ 1/51 ] cfitsio 3360.lsst4 (already installed) done. ... [ 51/51 ] obs_subaru 5.0.0.1-63-gfd8740e+1 done. Thanks, everybody! Hide Joshua Hoblitt added a comment - - edited It looks like there were multiple issues going. Mario Juric identified the most severe with distrib packages being unaware of git-lfs. Another, minor, issue was that git tags were not being applied to the repo – resolved by Tim Jenness. I have merged John Swinbank PR to add testsdata_subaru to the remap list. TL;DR – the w_2016_06 weekly tag is broken for obs_subaru. Show Joshua Hoblitt added a comment - - edited It looks like there were multiple issues going. Mario Juric identified the most severe with distrib packages being unaware of git-lfs. Another, minor, issue was that git tags were not being applied to the repo – resolved by Tim Jenness . I have merged John Swinbank PR to add testsdata_subaru to the remap list. TL;DR – the w_2016_06 weekly tag is broken for obs_subaru . Hide John Swinbank added a comment - Thanks, Mario Juric. It's appropriate to treat testdata_subaru as analogous to afwdata, so blacklisting package creation seems like the way to go. The wider issue of eups distrib/git lfs integration isn't a current requirement. Joshua Hoblitt, are you happy with the changes on lsstsw PR#83? Show John Swinbank added a comment - Thanks, Mario Juric . It's appropriate to treat testdata_subaru as analogous to afwdata , so blacklisting package creation seems like the way to go. The wider issue of eups distrib / git lfs integration isn't a current requirement. Joshua Hoblitt , are you happy with the changes on lsstsw PR#83 ? Hide Mario Juric added a comment - EUPS doesn't know about git-lfs; we need to teach it about it, or you can also override the create() and/or fetch() verbs in eupspkg.cfg.sh in the short term. To answer the original question, I think it's fine to blacklist this package in manifest.remap if you don't expect end users to want to install it using EUPS. Show Mario Juric added a comment - EUPS doesn't know about git-lfs; we need to teach it about it, or you can also override the create() and/or fetch() verbs in eupspkg.cfg.sh in the short term. To answer the original question, I think it's fine to blacklist this package in manifest.remap if you don't expect end users to want to install it using EUPS. Hide Michael Wood-Vasey added a comment - - edited Should we create tests for the testdata_* packages that explicitly verify that the git-lfs pull worked? This would prevent such packages from being setup. Show Michael Wood-Vasey added a comment - - edited Should we create tests for the testdata_* packages that explicitly verify that the git-lfs pull worked? This would prevent such packages from being setup . Hide John Swinbank added a comment - - edited By the way, I imagine this is breaking attempts to install lsst_distrib through eups distrib. Show John Swinbank added a comment - - edited By the way, I imagine this is breaking attempts to install lsst_distrib through eups distrib . Hide John Swinbank added a comment - The obs_subaru table file declares testdata_subaru as setupOptional. However, eups distrib still attempts to install it: $ eups distrib install -t w_2016_06 obs_subaru ... [ 11/52 ] testdata_subaru master-gf9ba9abdbe (already installed) done. ...
This produces an installed version which has not checked out files from git lfs:
$cat${TESTDATA_SUBARU_DIR}/hsc/calib/BIAS/2013-11-02/NONE/master/BIAS-050.fits.gz version https://git-lfs.github.com/spec/v1 oid sha256:d70512df8c1fd25f5c9bcc5b95f413c62f65da752f50a3da0a8331686682ad5a size 31345503
obs_subaru will skip tests gracefully if testdata_subaru is not set up, however here it is set up but is populated with (from the point of view of the tests, at least) garbage data.
My suggested fix is that it should be treated in the same way as afwdata in lsstsw/etc/manifest.remap, which will suppress the creation of the package – I guess this will do the trick, as it's evidently possible to install afw through eups distrib. I'd appreciate input from eups distrib experts (Mario Juric?) on whether this is actually the correct solution, though.
Even if that fixes the immediate failure, I'm not sure if the failure of eups distrib install to check out the contents of files from git lfs is a bug or a feature.
Show
John Swinbank added a comment - The obs_subaru table file declares testdata_subaru as setupOptional . However, eups distrib still attempts to install it: $eups distrib install -t w_2016_06 obs_subaru ... [ 11/52 ] testdata_subaru master-gf9ba9abdbe (already installed) done. ... This produces an installed version which has not checked out files from git lfs :$ cat \${TESTDATA_SUBARU_DIR}/hsc/calib/BIAS/2013-11-02/NONE/master/BIAS-050.fits.gz version https://git-lfs.github.com/spec/v1 oid sha256:d70512df8c1fd25f5c9bcc5b95f413c62f65da752f50a3da0a8331686682ad5a size 31345503 obs_subaru will skip tests gracefully if testdata_subaru is not set up, however here it is set up but is populated with (from the point of view of the tests, at least) garbage data. My suggested fix is that it should be treated in the same way as afwdata in lsstsw/etc/manifest.remap , which will suppress the creation of the package – I guess this will do the trick, as it's evidently possible to install afw through eups distrib . I'd appreciate input from eups distrib experts ( Mario Juric ?) on whether this is actually the correct solution, though. Even if that fixes the immediate failure, I'm not sure if the failure of eups distrib install to check out the contents of files from git lfs is a bug or a feature.
## People
• Assignee:
John Swinbank
Reporter:
John Swinbank
Reviewers:
Joshua Hoblitt
Watchers:
John Swinbank, Joshua Hoblitt, Mario Juric, Michael Wood-Vasey, Tim Jenness
|
2020-08-09 03:15:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4504014551639557, "perplexity": 10611.488442176482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738380.22/warc/CC-MAIN-20200809013812-20200809043812-00260.warc.gz"}
|
https://freshergate.com/arithmetic-aptitude/numbers/discussion/321
|
Home / Arithmetic Aptitude / Numbers :: Discussion
### Discussion :: Numbers
1. What least number must be subtracted from 13601, so that the remainder is divisible by 87 ?
2. A. 23 B. 31 C. 29 D. 37 E. 49
Explanation :
87) 13601 (156
87
----
490
435
----
551
522
---
29
---
Therefore, the required number = 29.
Be The First To Comment
|
2022-08-11 15:05:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34228673577308655, "perplexity": 4555.632595826697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00284.warc.gz"}
|
https://iq.opengenus.org/emplace-set-cpp/
|
# emplace() in Set C++ STL
#### Software Engineering C++
Get FREE domain for 1st year and build your brand new site
Reading time: 20 minutes | Coding time: 5 minutes
Emplace is a function of set container in C++ STL which is used to insert elements in the set. It is considered to be a faster alternative to insert().
Set is a container that is used to store the values which are unique i.e no value in a set can be repeated and no value can be repeated. If you want to edit the value added by you the only way is to remove the wrong element and add the correct one.
## set::emplace(): inserting elements in set
The function is used to insert an element in the set container, if and only if the element to be inserted is unique i.e does not already exists in the set. In fact, emplace() can be used to insert pair of values which is not possible in insert() function directly.
Emplace was introduced in C++11 and is considered to more faster and advised to be used when your object is non-trival that is user defined objects.
Syntax:
setname.emplace(value)
• Parameters : The element to be inserted is passed as the parameter.
• Result : The unique parameter is added. No return value.
#### Example 1
In this C++ example, we will insert integer values in a set using emplace() and see that inserting duplicate values have not impact.
#include <iostream>
#include <set>
using namespace std;
int main()
{
set<int> myset{};
myset.emplace(1);
myset.emplace(2);
myset.emplace(3);
myset.emplace(4);
myset.emplace(5);
for (auto it = myset.begin();
it != myset.end(); ++it)
cout << ' ' << *it;
return 0;
}
Output:
1 2 3 4 5 6
#### Example 2
In this C++ example, we will insert strings in a set using emplace() and see that inserting duplicate values have not impact.
#include <iostream>
#include <set>
#include <string>
using namespace std;
int main()
{
set<string> myset{};
myset.emplace("welcome");
myset.emplace("to");
myset.emplace("OpenGenus");
myset.emplace("IQ");
// printing the set
for (auto it = myset.begin();
it != myset.end(); ++it)
cout << ' ' << *it;
return 0;
}
Output:
Welcome to OpenGenus IQ!
#### Example 3
In this C++ example, we will insert integer values in a set using emplace() and multiple all integers to get the final product.
#include <iostream>
#include <set>
using namespace std;
int main()
{
int mul = 1;
set<int> myset{};
myset.emplace(1);
myset.emplace(2);
myset.emplace(3);
myset.emplace(4);
myset.emplace(5);
set<int>::iterator it;
while (!myset.empty())
{
it = myset.begin();
mul = mul*(*it);
myset.erase(it);
}
cout << mul;
return 0;
}
Output :
120
### Difference between insert() and emplace()
While using insert one creates an object and then insert it into a set/multiset. But in case of emplace() the object is constructed in-place.
#include <iostream>
#include <set>
using namespace std;
int main()
{
// declaring map
multiset<pair<int, int>> ms;
// using emplace() to insert pair in-place
ms.emplace(1, 2);
// Below line would not compile
// ms.insert(3, 4);
// using emplace() to insert pair in-place
ms.insert(make_pair(3, 4));
// printing the multiset
for (auto it = ms.begin(); it != ms.end(); ++it)
cout << " " << (*it).first << " "
<< (*it).second << endl;
return 0;
}
Output :
1 2
3 4
## Key point
• Emplace was introduced in C++11
• Emplace can insert a pair without creating a pair explicitly
• Emplace is considered to more faster than insert() and advised to be used when your object is non-trival.
#### Harshita Sahai
Maintainer at OpenGenus | Previously Software Developer, Intern at OpenGenus (June to August 2019) | B.Tech in Information Technology from Guru Gobind Singh Indraprastha University (2017 to 2021)
Vote for Harshita Sahai for Top Writers 2021:
|
2021-04-17 15:00:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2843414843082428, "perplexity": 6473.081236378413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00112.warc.gz"}
|
http://www.aimsciences.org/article/doi/10.3934/dcds.2011.30.1107
|
# American Institute of Mathematical Sciences
2011, 30(4): 1107-1138. doi: 10.3934/dcds.2011.30.1107
## Pointwise estimates of solutions for the multi-dimensional scalar conservation laws with relaxation
1 Department of Mathematics, Shanghai Jiao Tong University, 800 Dong Chuan Road, 200240, Shanghai
Received March 2010 Revised August 2010 Published May 2011
Our aim is to study the pointwise time-asymptotic behavior of solutions for the scalar conservation laws with relaxation in multi-dimensions. We construct the Green's function for the Cauchy problem of the relaxation system which satisfies the dissipative condition. Based on the estimate for the Green's function, we get the pointwise estimate for the solution. It is shown that the solution exhibits some weak Huygens principle where the characteristic 'cone' is the envelope of planes.
Citation: Shijin Deng, Weike Wang. Pointwise estimates of solutions for the multi-dimensional scalar conservation laws with relaxation. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1107-1138. doi: 10.3934/dcds.2011.30.1107
##### References:
[1] C. Arvanitis, Mesh redistribution strategies and finite element schemes for hyperbolic conservation laws,, J. Sci. Comput., 34 (2008), 1. doi: 10.1007/s10915-007-9155-7. [2] S. Balasubramanyam and S. V. Raghurama Rao, A grid-free upwind relaxation scheme for inviscid compressible flows,, Internat. J. Numer. Methods Fluids, 51 (2006), 159. doi: 10.1002/fld.1099. [3] S. Chapman and T. G. Cowling, "The Mathematical Theory of Nonuniform Gases,", 3rd edition, (1970). [4] G. Q. Chen, C. D. Levermore and T.-P. Liu, Hyperbolic conservation laws with stiff relaxation terms and entropy,, Comm. Pure Appl. Math., 47 (1994), 787. doi: doi:10.1002/cpa.3160470602. [5] D. Donatelli and C. Lattanzio, On the diffusive stress relaxation for multidimensional viscoelasticity,, Commun. Pure Appl. Anal., 8 (2009), 645. doi: 10.3934/cpaa.2009.8.645. [6] M. Di Francesco and D. Donatelli, Singular convergence of nonlinear hyperbolic chemotaxis systems to Keller-Segel type models,, Discrete Contin. Dyn. Syst. Ser. B, 13 (2010), 79. [7] L. C. Evans, Partial differential equations,, Graduate studies in Math., 19 (1998). [8] H. Fan and J. Härterich, Conservation laws with a degenerate source: Traveling waves, large-time behavior and zero relaxation limit,, Nonlinear Anal., 63 (2005), 1042. doi: 10.1016/j.na.2003.10.031. [9] D. Hoff and K. Zumbrum, Multi-dimensional diffusion wave for the Navier-Stokes equations of compressible flow,, Indiana Univ. Math. Journal, 44 (1995), 603. doi: 10.1512/iumj.1995.44.2003. [10] S. Jin and Z. P. Xin, The relaxation schemes for system of conservative laws in arbitrary space dimensions,, Comm. Pure Appl. Math., 48 (1995), 235. doi: 10.1002/cpa.3160480303. [11] S. Jin and Z. P. Xin, Numerical passage from systems of conservation laws to Hamilton-Jacobi equations, relaxation schemes,, SIAM J. Numer. Anal., 35 (1998), 2385. doi: 10.1137/S0036142996314366. [12] B. Kwon and K. Zumbrun, Asymptotic behavior of multidimensional scalar relaxation shocks,, J. Hyperbolic Differ. Equ., 6 (2009), 663. [13] D. L. Li, The Green's function of the Navier-Stokes equations for gas dynamics in $\mathbbR^3$,, Commun. Math. Phys., 257 (2005), 579. doi: 10.1007/s00220-005-1351-4. [14] M. B. Liu and Z. X. Cheng, Conservation laws. III. Relaxation limit,, Rev. Colombiana Mat., 41 (2007), 107. [15] T. Li, Global solutions of nonconcave hyperbolic conservation laws with relaxation arising from traffic flow,, J. Differential Equations, 190 (2003), 131. [16] T.-P. Liu, Hyperbolic conservative laws with relaxation,, Comm. Math. Phys., 108 (1987), 153. doi: 10.1007/BF01210707. [17] T.-P. Liu, Pointwise convergence to shock waves for viscous conservation laws,, Comm. Pure Appl. Math, 50 (1997), 1113. doi: 10.1002/(SICI)1097-0312(199711)50:11<1113::AID-CPA3>3.0.CO;2-D. [18] T.-P. Liu and W. K. Wang, The pointwise estimates of diffusion wave for the Navier-Stokes systems in odd multi-dimension,, Comm. Math. Phys., 196 (1998), 145. doi: 10.1007/s002200050418. [19] T.-P. Liu and S.-H. Yu, The Green's function and large-time behavior of solutions for one-dimensional Boltzmann equation,, Comm. Pure Appl. Math., 57 (2004), 1543. doi: 10.1002/cpa.20011. [20] T.-P. Liu and S.-H. Yu, Green's function and large-time behavior of solutions of Boltzmann equation, 3-D waves,, Bulletin. Inst. Math. Academia Sinica (N.S.), 1 (2006), 1. [21] T.-P. Liu and Y. Zeng, Large time behavior of solutions to general quasilinear hyperbolic-parabolic systems of conservation laws,, Mem. Amer. Math. Soc., 125 (1997). [22] Y. Q. Liu and W. K. Wang, The pointwise estimates of solutions for dissipative wave equation in multi-dimensions,, Discrete Contin. Dyn. Syst., 20 (2008), 1013. doi: 10.3934/dcds.2008.20.1013. [23] T. Luo and Z. P. Xin, Nonlinear stability of shock fronts for a relaxation system in several space dimensions,, J. Differential Equations, 139 (1997), 365. [24] C. Mascia and K. Zumbrun, Pointwise Green's function bounds and stability of relaxation shocks,, Indiana Univ. Math. J., 51 (2002), 773. doi: 10.1512/iumj.2002.51.2212. [25] C. Mascia and K. Zumbrun, Stability of large-amplitude shock profiles of general relaxation systems,, SIAM J. Math. Anal., 37 (2005), 889. doi: 10.1137/S0036141004435844. [26] C. Mascia and K. Zumbrun, Spectral stability of weak relaxation shock profiles,, Comm. Partial Differential Equations, 34 (2009), 119. [27] R. Plaza and K. Zumbrun, An Evans function approach to spectral stability of small-amplitude shock profiles,, Discrete Contin. Dyn. Syst., 10 (2004), 885. doi: 10.3934/dcds.2004.10.885. [28] R. Kumar and M. K. Kadalbajoo, Efficient high-resolution relaxation schemes for hyperbolic systems of conservation laws,, Internat. J. Numer. Methods Fluids, 55 (2007), 483. doi: 10.1002/fld.1479. [29] Y.-J. Peng and S. Wang, Asymptotic expansions in two-fluid compressible Euler-Maxwell equations with small parameters,, Discrete Contin. Dyn. Syst., 23 (2009), 415. doi: 10.3934/dcds.2009.23.415. [30] Y. Shizuta and S. Kawashima, Systems of equations of hyperbolic-parabolic type with applications to the discrete Boltzmann equation,, Hokkaido Math. J., 14 (1985), 249. [31] W. K. Wang and H. M. Xu, Pointwise estimate of solutions of isentropic Navier-Stokes equations in even multi-dimensions,, Acta Math. Sci. Ser. B Engl. Ed., 21 (2001), 417. [32] W. K. Wang and T. Yang, The pointwise estimates of solutions of Euler equations with damping in multi-dimensions,, J. Differential Equations, 173 (2001), 410. [33] W. K. Wang and X. F. Yang, The pointwise estimates of solutions to the isentropic Navier-Stokes equations in even space-dimensions,, J. Hyperbolic Differ. Equ., 2 (2005), 673. [34] J. Xu and W.-A. Yong, Zero-relaxation limit of non-isentropic hydrodynamic models for semiconductors,, Discrete Contin. Dyn. Syst., 25 (2009), 1319. doi: 10.3934/dcds.2009.25.1319. [35] W.-A. Yong and W. Jäger, On hyperbolic relaxation problems,, Analysis and Numerics for Conservation Laws, (2005), 495. [36] W.-A. Yong and K. Zumbrun, Existence of relaxation shock profiles for hyperbolic conservation laws,, SIAM J. Appl. Math., 60 (2000), 1565. doi: 10.1137/S0036139999352705. [37] Y. Zeng, Gas dynamics in thermal nonequilibrium and general hyperbolic systems with relaxation,, Arch. Ration. Mech. Anal., 150 (1999), 225. doi: 10.1007/s002050050188.
show all references
##### References:
[1] C. Arvanitis, Mesh redistribution strategies and finite element schemes for hyperbolic conservation laws,, J. Sci. Comput., 34 (2008), 1. doi: 10.1007/s10915-007-9155-7. [2] S. Balasubramanyam and S. V. Raghurama Rao, A grid-free upwind relaxation scheme for inviscid compressible flows,, Internat. J. Numer. Methods Fluids, 51 (2006), 159. doi: 10.1002/fld.1099. [3] S. Chapman and T. G. Cowling, "The Mathematical Theory of Nonuniform Gases,", 3rd edition, (1970). [4] G. Q. Chen, C. D. Levermore and T.-P. Liu, Hyperbolic conservation laws with stiff relaxation terms and entropy,, Comm. Pure Appl. Math., 47 (1994), 787. doi: doi:10.1002/cpa.3160470602. [5] D. Donatelli and C. Lattanzio, On the diffusive stress relaxation for multidimensional viscoelasticity,, Commun. Pure Appl. Anal., 8 (2009), 645. doi: 10.3934/cpaa.2009.8.645. [6] M. Di Francesco and D. Donatelli, Singular convergence of nonlinear hyperbolic chemotaxis systems to Keller-Segel type models,, Discrete Contin. Dyn. Syst. Ser. B, 13 (2010), 79. [7] L. C. Evans, Partial differential equations,, Graduate studies in Math., 19 (1998). [8] H. Fan and J. Härterich, Conservation laws with a degenerate source: Traveling waves, large-time behavior and zero relaxation limit,, Nonlinear Anal., 63 (2005), 1042. doi: 10.1016/j.na.2003.10.031. [9] D. Hoff and K. Zumbrum, Multi-dimensional diffusion wave for the Navier-Stokes equations of compressible flow,, Indiana Univ. Math. Journal, 44 (1995), 603. doi: 10.1512/iumj.1995.44.2003. [10] S. Jin and Z. P. Xin, The relaxation schemes for system of conservative laws in arbitrary space dimensions,, Comm. Pure Appl. Math., 48 (1995), 235. doi: 10.1002/cpa.3160480303. [11] S. Jin and Z. P. Xin, Numerical passage from systems of conservation laws to Hamilton-Jacobi equations, relaxation schemes,, SIAM J. Numer. Anal., 35 (1998), 2385. doi: 10.1137/S0036142996314366. [12] B. Kwon and K. Zumbrun, Asymptotic behavior of multidimensional scalar relaxation shocks,, J. Hyperbolic Differ. Equ., 6 (2009), 663. [13] D. L. Li, The Green's function of the Navier-Stokes equations for gas dynamics in $\mathbbR^3$,, Commun. Math. Phys., 257 (2005), 579. doi: 10.1007/s00220-005-1351-4. [14] M. B. Liu and Z. X. Cheng, Conservation laws. III. Relaxation limit,, Rev. Colombiana Mat., 41 (2007), 107. [15] T. Li, Global solutions of nonconcave hyperbolic conservation laws with relaxation arising from traffic flow,, J. Differential Equations, 190 (2003), 131. [16] T.-P. Liu, Hyperbolic conservative laws with relaxation,, Comm. Math. Phys., 108 (1987), 153. doi: 10.1007/BF01210707. [17] T.-P. Liu, Pointwise convergence to shock waves for viscous conservation laws,, Comm. Pure Appl. Math, 50 (1997), 1113. doi: 10.1002/(SICI)1097-0312(199711)50:11<1113::AID-CPA3>3.0.CO;2-D. [18] T.-P. Liu and W. K. Wang, The pointwise estimates of diffusion wave for the Navier-Stokes systems in odd multi-dimension,, Comm. Math. Phys., 196 (1998), 145. doi: 10.1007/s002200050418. [19] T.-P. Liu and S.-H. Yu, The Green's function and large-time behavior of solutions for one-dimensional Boltzmann equation,, Comm. Pure Appl. Math., 57 (2004), 1543. doi: 10.1002/cpa.20011. [20] T.-P. Liu and S.-H. Yu, Green's function and large-time behavior of solutions of Boltzmann equation, 3-D waves,, Bulletin. Inst. Math. Academia Sinica (N.S.), 1 (2006), 1. [21] T.-P. Liu and Y. Zeng, Large time behavior of solutions to general quasilinear hyperbolic-parabolic systems of conservation laws,, Mem. Amer. Math. Soc., 125 (1997). [22] Y. Q. Liu and W. K. Wang, The pointwise estimates of solutions for dissipative wave equation in multi-dimensions,, Discrete Contin. Dyn. Syst., 20 (2008), 1013. doi: 10.3934/dcds.2008.20.1013. [23] T. Luo and Z. P. Xin, Nonlinear stability of shock fronts for a relaxation system in several space dimensions,, J. Differential Equations, 139 (1997), 365. [24] C. Mascia and K. Zumbrun, Pointwise Green's function bounds and stability of relaxation shocks,, Indiana Univ. Math. J., 51 (2002), 773. doi: 10.1512/iumj.2002.51.2212. [25] C. Mascia and K. Zumbrun, Stability of large-amplitude shock profiles of general relaxation systems,, SIAM J. Math. Anal., 37 (2005), 889. doi: 10.1137/S0036141004435844. [26] C. Mascia and K. Zumbrun, Spectral stability of weak relaxation shock profiles,, Comm. Partial Differential Equations, 34 (2009), 119. [27] R. Plaza and K. Zumbrun, An Evans function approach to spectral stability of small-amplitude shock profiles,, Discrete Contin. Dyn. Syst., 10 (2004), 885. doi: 10.3934/dcds.2004.10.885. [28] R. Kumar and M. K. Kadalbajoo, Efficient high-resolution relaxation schemes for hyperbolic systems of conservation laws,, Internat. J. Numer. Methods Fluids, 55 (2007), 483. doi: 10.1002/fld.1479. [29] Y.-J. Peng and S. Wang, Asymptotic expansions in two-fluid compressible Euler-Maxwell equations with small parameters,, Discrete Contin. Dyn. Syst., 23 (2009), 415. doi: 10.3934/dcds.2009.23.415. [30] Y. Shizuta and S. Kawashima, Systems of equations of hyperbolic-parabolic type with applications to the discrete Boltzmann equation,, Hokkaido Math. J., 14 (1985), 249. [31] W. K. Wang and H. M. Xu, Pointwise estimate of solutions of isentropic Navier-Stokes equations in even multi-dimensions,, Acta Math. Sci. Ser. B Engl. Ed., 21 (2001), 417. [32] W. K. Wang and T. Yang, The pointwise estimates of solutions of Euler equations with damping in multi-dimensions,, J. Differential Equations, 173 (2001), 410. [33] W. K. Wang and X. F. Yang, The pointwise estimates of solutions to the isentropic Navier-Stokes equations in even space-dimensions,, J. Hyperbolic Differ. Equ., 2 (2005), 673. [34] J. Xu and W.-A. Yong, Zero-relaxation limit of non-isentropic hydrodynamic models for semiconductors,, Discrete Contin. Dyn. Syst., 25 (2009), 1319. doi: 10.3934/dcds.2009.25.1319. [35] W.-A. Yong and W. Jäger, On hyperbolic relaxation problems,, Analysis and Numerics for Conservation Laws, (2005), 495. [36] W.-A. Yong and K. Zumbrun, Existence of relaxation shock profiles for hyperbolic conservation laws,, SIAM J. Appl. Math., 60 (2000), 1565. doi: 10.1137/S0036139999352705. [37] Y. Zeng, Gas dynamics in thermal nonequilibrium and general hyperbolic systems with relaxation,, Arch. Ration. Mech. Anal., 150 (1999), 225. doi: 10.1007/s002050050188.
[1] Wen-ming He, Jun-zhi Cui. The estimate of the multi-scale homogenization method for Green's function on Sobolev space $W^{1,q}(\Omega)$. Communications on Pure & Applied Analysis, 2012, 11 (2) : 501-516. doi: 10.3934/cpaa.2012.11.501 [2] Wen-Qing Xu. Boundary conditions for multi-dimensional hyperbolic relaxation problems. Conference Publications, 2003, 2003 (Special) : 916-925. doi: 10.3934/proc.2003.2003.916 [3] Diego Castellaneta, Alberto Farina, Enrico Valdinoci. A pointwise gradient estimate for solutions of singular and degenerate pde's in possibly unbounded domains with nonnegative mean curvature. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1983-2003. doi: 10.3934/cpaa.2012.11.1983 [4] Li-Ming Yeh. Pointwise estimate for elliptic equations in periodic perforated domains. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1961-1986. doi: 10.3934/cpaa.2015.14.1961 [5] Arno Berger. Multi-dimensional dynamical systems and Benford's Law. Discrete & Continuous Dynamical Systems - A, 2005, 13 (1) : 219-237. doi: 10.3934/dcds.2005.13.219 [6] Xiaoling Sun, Xiaojin Zheng, Juan Sun. A Lagrangian dual and surrogate method for multi-dimensional quadratic knapsack problems. Journal of Industrial & Management Optimization, 2009, 5 (1) : 47-60. doi: 10.3934/jimo.2009.5.47 [7] Hideo Kubo. On the pointwise decay estimate for the wave equation with compactly supported forcing term. Communications on Pure & Applied Analysis, 2015, 14 (4) : 1469-1480. doi: 10.3934/cpaa.2015.14.1469 [8] Virginia Agostiniani, Rolando Magnanini. Symmetries in an overdetermined problem for the Green's function. Discrete & Continuous Dynamical Systems - S, 2011, 4 (4) : 791-800. doi: 10.3934/dcdss.2011.4.791 [9] Sungwon Cho. Alternative proof for the existence of Green's function. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1307-1314. doi: 10.3934/cpaa.2011.10.1307 [10] Peter Bella, Arianna Giunti. Green's function for elliptic systems: Moment bounds. Networks & Heterogeneous Media, 2018, 13 (1) : 155-176. doi: 10.3934/nhm.2018007 [11] Tohru Nakamura, Shinya Nishibata. Energy estimate for a linear symmetric hyperbolic-parabolic system in half line. Kinetic & Related Models, 2013, 6 (4) : 883-892. doi: 10.3934/krm.2013.6.883 [12] Boris P. Belinskiy, Peter Caithamer. Energy estimate for the wave equation driven by a fractional Gaussian noise. Conference Publications, 2007, 2007 (Special) : 92-101. doi: 10.3934/proc.2007.2007.92 [13] Anatoli F. Ivanov. On global dynamics in a multi-dimensional discrete map. Conference Publications, 2015, 2015 (special) : 652-659. doi: 10.3934/proc.2015.0652 [14] Gerald Sommer, Di Zang. Parity symmetry in multi-dimensional signals. Communications on Pure & Applied Analysis, 2007, 6 (3) : 829-852. doi: 10.3934/cpaa.2007.6.829 [15] Dmitry Treschev. A locally integrable multi-dimensional billiard system. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5271-5284. doi: 10.3934/dcds.2017228 [16] Franz Achleitner, Anton Arnold, Eric A. Carlen. On multi-dimensional hypocoercive BGK models. Kinetic & Related Models, 2018, 11 (4) : 953-1009. doi: 10.3934/krm.2018038 [17] Wenxiong Chen, Congming Li. A priori estimate for the Nirenberg problem. Discrete & Continuous Dynamical Systems - S, 2008, 1 (2) : 225-233. doi: 10.3934/dcdss.2008.1.225 [18] L.R. Ritter, Akif Ibragimov, Jay R. Walton, Catherine J. McNeal. Stability analysis using an energy estimate approach of a reaction-diffusion model of atherogenesis. Conference Publications, 2009, 2009 (Special) : 630-639. doi: 10.3934/proc.2009.2009.630 [19] Martin Gugat, Alexander Keimer, Günter Leugering, Zhiqiang Wang. Analysis of a system of nonlocal conservation laws for multi-commodity flow on networks. Networks & Heterogeneous Media, 2015, 10 (4) : 749-785. doi: 10.3934/nhm.2015.10.749 [20] Masaharu Taniguchi. Multi-dimensional traveling fronts in bistable reaction-diffusion equations. Discrete & Continuous Dynamical Systems - A, 2012, 32 (3) : 1011-1046. doi: 10.3934/dcds.2012.32.1011
2017 Impact Factor: 1.179
|
2018-08-16 12:06:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7506523728370667, "perplexity": 4555.8226595829165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210735.11/warc/CC-MAIN-20180816113217-20180816133217-00584.warc.gz"}
|
http://mathoverflow.net/questions/108282/closed-subgroups-of-a-p-adic-algebraic-group
|
# Closed subgroups of a $p$-adic algebraic group
Let $F$ be a finite extension of $\mathbb{Q}_p$ and $G$ the group of rational points of an algebraic group defined over $F$, endowed with the natural topology. Any Zariski closed subgroup $H \subset G$ is $\text{exp } \mathfrak{h}$ for some subalgebra $\mathfrak{h}$ of the Lie algebra $\mathfrak{g}$ of $G$, but what about subgroups which are closed in the $p$-adic topology? For instance, the compact open subgroups of $G$ are very important, and I wonder if such a subgroup arises by exponentiating a compact open subgroup of $\mathfrak{g}$ (i.e. an $\mathcal{O}_F$-lattice) closed under the Lie bracket. Is the situation better when $G$ is unipotent?
-
Your statement about exp's is false: p-adic exp has severe convergence problems even for GL$_n$. Read Serre's book "Lie groups and Lie algebras", in which he develops a good Lie correspondence over any non-archimedean field of characteristic 0 from scratch (and carries along the archimedean case, clarifying the special role of $\mathbf{Q}_p$ much as $\mathbf{R}$ has "better" features than $\mathbf{C}$ for a Lie correspondence, due to the density of $\mathbf{Q}$ in $\mathbf{R}$). Also see Bourbaki Lie Ch. III. You cannot expect to nail down exactly subgroups that are exp of their Lie algebra. – grp Sep 27 '12 at 21:27
Let $G$ be a $p$--adic Lie group. There is a 1:1 correspondence between $p$-adically closed subgroups up to finite index of $G$ and Lie subalgebras of its Lie algebra (which is a Lie algebra over $\mathbb Q_p$). The correspondence works just as in the classical setting. This is shown in a paper by Mattuck from the fifties, and in much greater generality in Lazard's thesis (beware, beware).
Attention: The image of a closed subgroup $H \subseteq G$ under the logarithm map is not always a $\mathbb Z_p$ submodule of the Lie algebra. Example: take $p=2$ and $H \subseteq \mathrm{GL}_3$ the unipotent radical of the Borel. Then $\mathrm{log}(H(\mathbb Z_p))$ is not stable under $+$.
You cannot hope for such a correspondence for strictly all closed subgroups. Although the exponential map is a homeomorphism locally around zero, it can in general not be extended to a surjective map. There is already a problem with $\mathbb Z_p^\ast$.
Also, this does not make much sense over extensions of $\mathbb Q_p$. If $F$ is some nontrivial finite extension of $\mathbb Q_p$, then $F$ viewed as a Lie group under addition has many closed subgroups (all the $\mathbb Q_p$-linear subspaces), but the Lie algebra (which is also $F$) has no proper $F$--linear subalgebra.
Concerning your final paragraph: things do make good sense over extensions $F$ of $\mathbf{Q}_p$ provided one replaces the purely topological viewpoint of "closed subgroups" with the more analytic viewpoint of "closed $F$-analytic subgroups" taken up to clopen subgroups (and Lie $F$-subalgebras of the ambient Lie algebra). This is discussed nicely in both Serre's book and Bourbaki. It is analogous to the fact that one has a good Lie correspondence over $\mathbf{C}$ but it requires going beyond the purely topological formulation that works well over $\mathbf{R}$. – grp Sep 28 '12 at 2:26
|
2014-03-15 13:10:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951175808906555, "perplexity": 240.31621823724532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678697782/warc/CC-MAIN-20140313024457-00090-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://uwspace.uwaterloo.ca/handle/10012/9924/browse?rpp=20&sort_by=1&type=title&etal=-1&starts_with=P&order=ASC
|
Now showing items 1734-1753 of 2590
• #### A PAC-Theory of Clustering with Advice
(University of Waterloo, 2018-05-17)
In the absence of domain knowledge, clustering is usually an under-specified task. For any clustering application, one can choose among a variety of different clustering algorithms, along with different preprocessing ...
• #### Packing and Covering Odd (u,v)-trails in a Graph
(University of Waterloo, 2016-09-27)
In this thesis, we investigate the problem of packing and covering odd $(u,v)$-trails in a graph. A $(u,v)$-trail is a $(u,v)$-walk that is allowed to have repeated vertices but no repeated edges. We call a trail \emph{odd} ...
• #### Packing Directed Joins
(University of Waterloo, 2004)
Edmonds and Giles conjectured that the maximum number of directed joins in a packing is equal to the minimum weight of a directed cut, for any weighted directed graph. This is a generalization of Woodall's Conjecture ...
• #### Packing Unit Disks
(University of Waterloo, 2008-08-27)
Given a set of unit disks in the plane with union area A, what fraction of A can be covered by selecting a pairwise disjoint subset of the disks? Richard Rado conjectured 1/4 and proved 1/4.41. In this thesis, we consider ...
• #### Pairs Trading Based on Costationarity
(University of Waterloo, 2015-09-25)
Arbitrage is a widely sought after phenomenon in financial markets: profit without any risk is very desirable. Statistical arbitrage is a related concept: the idea is to take advantage of market inefficiencies using ...
• #### Para-Holomorphic Algebroids and Para-Complex Connections
(University of Waterloo, 2021-12-17)
The goal of this paper is to develop the theory of Courant algebroids with integrable para-Hermitian vector bundle structures by invoking the theory of Lie bialgebroids. We consider the case where the underlying manifold ...
• #### Parallel Paths Analysis Using Function Call Graphs
(University of Waterloo, 2019-09-23)
Call graphs have been used widely in different software engineering areas. Since call graphs provide us with detailed information about the structure of software elements and components and how they are connected with each ...
• #### Parallel Pattern Search in Large, Partial-Order Data Sets on Multi-core Systems
(University of Waterloo, 2011-01-20)
Monitoring and debugging distributed systems is inherently a difficult problem. Events collected during the execution of distributed systems can enable developers to diagnose and fix faults. Process-time diagrams are ...
• #### Parallel Repetition of Prover-Verifier Quantum Interactions
(University of Waterloo, 2012-01-05)
In this thesis, we answer several questions about the behaviour of prover-verifier interactions under parallel repetition when quantum information is allowed, and the verifier acts independently in them. We first ...
• #### A Parallel Study of the Fock Space Approach to Classical and Free Brownian Motion
(University of Waterloo, 2017-08-28)
The purpose of this thesis is to elaborate the similarities between the classical and the free probability by means of developing the chaos decomposition of stochastic integrals driven by Brownian motion and its free ...
• #### A parallel, adaptive discontinuous Galerkin method for hyperbolic problems on unstructured meshes
(University of Waterloo, 2018-09-04)
This thesis is concerned with the parallel, adaptive solution of hyperbolic conservation laws on unstructured meshes. First, we present novel algorithms for cell-based adaptive mesh refinement (AMR) on unstructured ...
• #### Parameter and Structure Learning Techniques for Sum Product Networks
(University of Waterloo, 2019-09-25)
Probabilistic graphical models (PGMs) provide a general and flexible framework for reasoning about complex dependencies in noisy domains with many variables. Among the various types of PGMs, sum-product networks (SPNs) ...
• #### A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs
(University of Waterloo, 2003)
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two ...
• #### Parameterized Code Generation From Template Semantics
(University of Waterloo, 2006)
We have developed a tool that can create a Java code generator for a behavioural modelling notation given only a description of the notation's semantics as a set of parameters. This description is based on template ...
• #### Parameterized Enumeration of Neighbour Strings and Kemeny Aggregations
(University of Waterloo, 2013-08-30)
In this thesis, we consider approaches to enumeration problems in the parameterized complexity setting. We obtain competitive parameterized algorithms to enumerate all, as well as several of, the solutions for two related ...
• #### Parameterizing a dynamic influenza model using longitudinal versus age-stratified case notifications yields different predictions of vaccine impacts
(2018-09-06)
Dynamic transmission models of influenza are often used in decision-making to identify which vaccination strategies might best reduce influenza-associated health and economic burdens. Our goal was to use laboratory confirmed ...
• #### Parking Functions and Related Combinatorial Structures.
(University of Waterloo, 2001)
The central topic of this thesis is parking functions. We give a survey of some of the current literature concerning parking functions and focus on their interaction with other combinatorial objects; namely noncrossing ...
• #### Parlez-vous le hate?: Examining topics and hate speech in the alternative social network Parler
(University of Waterloo, 2021-12-23)
Over the past several years, many “alternative” social networks have sprung up, with an emphasis on minimal moderation and protection of free speech. Although they claim to be politically neutral, they have been a haven ...
• #### Particle Clustering and Sub-clustering as a Proxy for Mixing in Geophysical Flows
(University of Waterloo, 2019-08-16)
The Eulerian point of view is the traditional theoretical and numerical tool to describe fluid mechanics. Some modern computational fluid dynamics codes allow for the efficient simulation of particles, in turn facilitating ...
• #### Partition Algebras and Kronecker Coefficients
(University of Waterloo, 2015-08-28)
Classical Schur-Weyl duality relates the representation theory of the general linear group to the representation theory of the symmetric group via their commuting actions on tensor space. With the goal of studying Kronecker ...
UWSpace
University of Waterloo Library
200 University Avenue West
|
2022-05-23 17:56:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42796623706817627, "perplexity": 2029.8127779292824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00721.warc.gz"}
|
https://pos.sissa.it/336/194/
|
Volume 336 - XIII Quark Confinement and the Hadron Spectrum (Confinement2018) - E: QCD and New Physics
Calculation of Nucleon Electric Dipole Moments Induced by Quark Chromo-Electric Dipole Moments and the QCD θ-term
S. Syritsyn*, T. Izubuchi and H. Ohki
Full text: pdf
Pre-published on: September 12, 2019
Published on: September 26, 2019
Abstract
Electric dipole moments (EDMs) of nucleons and nuclei, which are sought as evidence of CP violation, require lattice calculations to connect constraints from experiments to limits on the strong CP violation within QCD or CP violation introduced by new physics from beyond the standard model. Nucleon EDM calculations on a lattice are notoriously hard due to large statistical noise, chiral symmetry violating effects, and potential mixing of the EDM and the anomalous magnetic moment of the nucleon. In this report, details of ongoing lattice calculations of proton and neutron EDMs induced by the QCD $\theta$-term and the quark chromo-EDM, the lowest-dimension effective CP-violating quark-gluon interaction are presented. Our calculation employs chiral-symmetric fermion discretization. An assessment of feasibility of nucleon EDM calculations at the physical point is discussed.
DOI: https://doi.org/10.22323/1.336.0194
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
|
2023-02-09 12:18:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5538037419319153, "perplexity": 3443.70310664494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00579.warc.gz"}
|
https://www.physicsforums.com/threads/what-was-your-financial-status-when-you-graduated-college.335199/
|
noblegas
Hope this question is not too personal; I just like to get a good idea on what my financial status is relative to the financial status of other college graduates or soon to be graduates like myself
Related General Discussion News on Phys.org
negitron
Broke and in debt. The good news is that the higher your degree, the broker and more in debt you're likely to be. Congratulations.
Topher925
When I got my BS I had zero debt and about 10 grand in the bank.
Cyrus
When I got my BS I had zero debt and about 10 grand in the bank.
Same here.
Pengwuino
Gold Member
When I got my BS I had zero debt and about 10 grand in the bank.
Ditto. I'll probably end up with my MS and even more in the bank.
Staff Emeritus
Gold Member
I had a modest amount of debt and assets in the neighborhood of $2000. I've always been extremely wary of debt...I chose to go to college part-time and work part-time so that I would not incur excessive debts. Don't know how helpful my answer is since I'm probably atypical. Pengwuino Gold Member Thank God for state schools :). I'm amazed how one of my friends is making SUCH a stink about tuition increases when we still pay 10-20% of most private universities. Plus his parents pay for his tuition. Astronuc Staff Emeritus Science Advisor I left grad school with about$5-6K. While in grad school, my wife and I bought a new car (Honda Civic Wagon for about 10K), and we paid off her undergrad loans, and saved for a down payment on a house.
I had paid my way through my undergrad program, and I helped my parents support my siblings in their university programs.
Cyrus
I'll probably be around $60,000 in debt after I get my B.Sc, and then tack on another$100,000 or so if I end up going to law or pharmacy school.
That is disgusting amount of debt for a BS. Did you go to Harvard? I hope so for that price.
Ivan Seeking
Staff Emeritus
Gold Member
I chose to go to college part-time and work part-time so that I would not incur excessive debts. Don't know how helpful my answer is since I'm probably atypical.
I went full time and part time while working part time of full time. Since I went back to school a bit late in life, my situation was different than a typical college student. For one, we moved to Oregon and bought a house right in the middle of the program.
Some of my tuition went on credit cards but we didn't bury ourselves; not by today's standards. However, the cost of tuition was climbing quickly by the time I finished.
For me the far more substantial number was lost income, rather than debt. I had a successful career before returning to college. All in all, going to college probably cost me well over 200K, plus tuition and books. Moving to Oregon probably set us back another 100-200K, but that depends on how you look at it. We bought a five acre lot in the woods that would probably be worth millions anywhere within commuting distance of L.A.
P.S. No regrets. I wouldn't want to do it all again, but it was worth it. I would have liked to have gone on to a graduate program, but that would have been too much to handle financially. By the time I finished, we needed to concentrate on making money.
Last edited:
Sorry!
That is disgusting amount of debt for a BS. Did you go to Harvard? I hope so for that price.
Agreed. If I paid that amount of money I wouldn't even want to attend the school. I would just be expecting my degree to come in the mail. :D
I haven't gone to university yet but I'm planning to next year after 4 years I PLAN to have no debt. Universities in Ontario have a lot of excellent scholarship opportunities and the provincial government pays for you to go while you're in school. You only need to pay them once your done. (normally its not the full amount though. I think I don't have to pay back 1 grand every year)
Dr Transport
Gold Member
about $3K in debt after my Bachelors and 1st Masters. I paid everything off over the next couple of years and when I got my PhD i was in about$10K. I was deployed for the war, so I paid everything back and was completely out of debt except for my mortgage within 18 months.
I'll be graduating from undergrad with about $60k in debt as well. Even though I am a U.S. citizen, I was not eligible for in-state tuition in any state since I moved so often, and also my parents made too much money for me to get any assistance. So, I have financed my education primarily through loans. noblegas That is disgusting amount of debt for a BS. Did you go to Harvard? I hope so for that price. Unless you receive any scholarship or your parents paid for all of your college amenities , I cannot imagine someone graduating from harvard with only$60,000 of debt seeing that it cost a student about $60000/year to attend that school turbo Gold Member I left college before graduating, and then went back a couple of years later to take some courses related to my work as a soils scientist in construction. Anyway, for the first three full years, I ended each year with a little more money in the bank than the previous year. I worked full-time each summer (construction or mill-work) with all the overtime I could get. Every school year, I played guitar in a band and made spending money from frat parties, mixers, etc. I also bought and sold guitars and amps on the side, and repaired and refurbished them. Lots of college kids sell off nice instruments cheap when they get hard-up for cash, so I did quite well with my little side-line. The trick is to be liquid (always have several hundred bucks in your pocket) for when opportunities present themselves, and understand the market well enough to estimate your profits accurately. I never borrowed a dime for college - state universities are a pretty good deal. SticksandStones That is disgusting amount of debt for a BS. Did you go to Harvard? I hope so for that price. Nope, in fact I had to go to a lower tier state school because they offered me more scholarships than higher tier schools. It's also not like I did bad in high school. In fact, I was literally the top student in high school. The ironic thing is, since my parents make so little money if I did go to Harvard I'd probably have less debt. Danger Gold Member College? What's that? I never graduated high-school, but at least I was debt-free when I didn't. Cyrus Unless you receive any scholarship or your parents paid for all of your college amenities , I cannot imagine someone graduating from harvard with only$60,000 of debt seeing that it cost a student about $60000/year to attend that school You're right, I forgot how disgustingly expensive it is to attend that place. LBloom I'm on track to be$22,000 in debt by the time i get my B.S. (unless i get more grants or scholarships. Keeping my fingers crossed). Not sure how I'll pay for grad school.
I don't want to hijack this thread or anything, but does anybody think freshman year is too early for a job or should I wait a year? Don't like the idea of being in so much debt and then worrying about my PhD
Why would anyone be in dept after college when everything is paid for by the government?
Go Europe! Go Europe!
Quincy
I'm on track to be $22,000 in debt by the time i get my B.S. (unless i get more grants or scholarships. Keeping my fingers crossed). Not sure how I'll pay for grad school. Same here except w/$25,000.
WhoWee
My wife and I both emerged debt free as undergrads. However, she needed to complete almost a year of free service and an additional year of part time status. When she went back for her Masters (Education), we accumulated an extra \$35,000 in loans - plus the lost income because she wasn't teaching.
|
2019-11-21 11:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21507608890533447, "perplexity": 1957.8352561740385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670770.21/warc/CC-MAIN-20191121101711-20191121125711-00081.warc.gz"}
|
http://ncatlab.org/nlab/show/timelike+curve
|
nLab timelike curve
Context
Differential geometry
differential geometry
synthetic differential geometry
Applications
Riemannian geometry
Riemannian geometry
Surveys, textbooks and lecture notes
Gravity
gravity, supergravity
Contents
Definition
For $\left(X,g\right)$ a Lorentzian spacetime, a tangent vector $v\in {T}_{x}X$ is called
• timelike if $g\left(v,v\right)<0$;
• lightlike if $g\left(v,v\right)=0$;
• spacelike if $g\left(v,v\right)>0$.
A curve $\gamma :ℝ\to X$ is called timelike or lightlike or spacelike if all of its tangent vectors $\stackrel{˙}{\gamma }$ are, respectively.
Created on May 16, 2011 17:31:36 by Urs Schreiber (131.211.238.144)
|
2013-06-20 06:48:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.795390784740448, "perplexity": 5875.347512483128}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710605589/warc/CC-MAIN-20130516132325-00059-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.cymath.com/blog/2020-07-06
|
# Problem of the Week
## Updated at Jul 6, 2020 10:17 AM
How can we solve for the derivative of $${v}^{6}+7v$$?
Below is the solution.
$\frac{d}{dv} {v}^{6}+7v$
1 Use Power Rule: $$\frac{d}{dx} {x}^{n}=n{x}^{n-1}$$.$6{v}^{5}+7$Done6*v^5+7
|
2020-08-07 23:24:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5812157988548279, "perplexity": 3252.8209959068336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00041.warc.gz"}
|
https://developer.myscript.com/docs/interactive-ink/1.4/web/advanced/custom-recognition/
|
# Custom recognition
MyScript’s recognition technology is very flexible. While the default configurations support common use cases, this page explains how you can fine tune them to address specific needs.
## Why customize the recognition?
Interactive ink comes with a set of supported language with associated recognition configurations.
However, there are a few situations where you may want to adapt these provided configurations:
• You need the engine to recognize some vocabulary that is not included within the default MyScript lexicons, like proper names. In this case, you may build and attach a custom lexicon.
• You target different education levels with a math application and want to restrict the amount of symbols that MyScript can recognize: this will reduce some possible ambiguities (many math symbols are very similar) and improve the overall user experience. In that case, you can build and attach a custom math grammar.
• You are building a form application and want to reduce some fields to only accept certain types of symbols, such as alphanumerical symbols, digits or even capital letters. In this case, consider building and attaching a subset knowledge.
## Recognition resources
Resources are pieces of knowledge that can be attached to the recognition engine to make it able to recognize a given language or content.
### Alphabet knowledge
An Alphabet knowledge (AK) is a resource that enables the engine to recognize individual characters for a given language and a given writing style. Default configurations include a cursive AK for each supported language.
You can only attach a single AK to an engine at a time.
### Linguistic knowledge
A Linguistic knowledge (LK) is a resource that provides the engine with linguistic information for a given language. It allows the recognition engine to improve its accuracy by favoring words from its lexicon that are the most likely to occur. Default configurations include an LK for each supported language.
An LK is not mandatory but not attaching one often results in a significant accuracy drop. It may be relevant if you do not expect to write full meaningful words, for instance if you plan to filter a list with a few letters.
Default configurations for all languages but English variants also attach a “secondary English” LK that allows the engine to recognize a mix of the target language and English. Except for this particular case, it is not expected to mix languages together.
### Lexicon
A lexicon is a resource that lists words that can be recognized in addition to what is included into linguistic knowledge resources.
You can build and attach your own custom lexicons.
### Subset knowledge
A subset knowledge (SK) is a resource that restricts the number of text characters that the engine shall attempt to recognize. It thus corresponds to a restriction of an AK resource. It can be useful in a form application, for example, to restrict the authorized characters of an email field to alphanumerical characters, @ and a few allowed punctuation signs.
You can build and attach your own custom subset knowledge.
### Math grammar
A math grammar is a resource that restricts the number of math symbols and rules that the engine shall be able to process. In education use cases, it can prove very useful to adapt the recognition to a given math level (for instance, only digits and basic operators for pupils).
You can build and attach your own custom math grammars.
## Attaching resources
Whatever the web API you are using, once you have build your resource you have to upload it and give it a name into your developer account. Go to your cloud dashboard https://cloud.myscript.com, select ‘Resource’ tab and upload the resource file as shown below
You will then be able to add the parameter when you create your editor our your web-component.
### iinkJS
With TEXT type configure your editor like this.
const configuration = {
recognitionParams : {
// Text configuration
text : {
configuration : {
customResources : ['test']
}
}
}
}
iink.register(editorElement,configuration)
With MATH type configure your editor like that.
const configuration = {
recognitionParams : {
// Math configuration
math : {
customGrammar : 'test'
}
}
}
iink.register(editorElement,configuration)
|
2020-10-27 20:56:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1995772272348404, "perplexity": 1933.7321477498876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00017.warc.gz"}
|
http://sto-forum.perfectworld.com/showthread.php?p=7411281&mode=threaded
|
Star Trek Online Literary Challenge #36 Discussion Thread
Search Today's Posts Mark Forums Read
Career Officer
Join Date: Jun 2012
Posts: 266
# 9
01-09-2013, 12:03 PM
Quote:
Originally Posted by takeshi6 Rather interesting entry. I liked the Armor--very interesting. And interesting cliffhanger--is it just me, or do quite a few of your entries end in cliffhangers? Anyway, still no luck on inspiration on my end, but if I do find something, I'll write it down ASAP.
They do indeed, usually because I can't think of a decent way to finish them... I'll tweak it if/when I come up with something.
EDIT... I thought of something xD
Ikuzo, Trombe!
Last edited by amurorx0; 01-09-2013 at 03:07 PM.
|
2013-12-04 20:14:38
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469196200370789, "perplexity": 6533.379968514626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037167/warc/CC-MAIN-20131204131717-00094-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/multivariable-delta-function-integral.579381/
|
# Multivariable Delta Function Integral
1. Feb 19, 2012
### bologna121121
1. The problem statement, all variables and given/known data
I have to find this integral:
$$\int \delta (( \frac{p^{2}}{2m} + Cz ) - E ) p^{2} dp dz$$
where E, m, and C can be considered to be constants.
2. Relevant equations
I'm semi-familiar with delta functions, i.e. i know that:
$$\int \delta (x - a) dx = 1$$
and that you can usually change the variable of integration to match the variable in the delta function, if it's not written explicitly as above.
3. The attempt at a solution
My problem is that I don't really know how to work with this in two dimensions, with both variables appearing inside the delta function. I thought maybe there might be a way to split it into two different delta functions, with one variable appearing in each? But this is just a guess, and I can't really find any supporting evidence. Thanks in advance.
2. Feb 20, 2012
### sunjin09
First you need to know how to scale a delta function, i.e., δ(a*x)=1/a*δ(x); then you integrate z first, and treat everything else as constants, the result is very simple.
3. Feb 20, 2012
### bologna121121
Ah...I guess that's pretty obvious. Thank you very much.
|
2017-11-20 12:19:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6480510234832764, "perplexity": 348.62434295946093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806030.27/warc/CC-MAIN-20171120111550-20171120131550-00144.warc.gz"}
|
https://math.stackexchange.com/questions/3226563/stuck-in-this-proof-of-the-completeness-theorem-for-predicate-logic
|
Stuck in this Proof of the Completeness Theorem for Predicate Logic
I'm studying a proof of the completeness theorem for predicate logic shown in this lecture and I'm caught in an obstacle. It proceeds by showing that if a theory is consistent, then it has a model, and my problem specifically points at a lemma, namely the part where it proves that the theory $$T'$$ is consistent. I should name a definition..
Definition 11.2$$\hspace{0.2in}$$A henkin theory is a theory $$T$$ so that for every formula $$\phi$$ and variable $$x,$$ there is a constant symbol $$c$$ such that $$(\exists x\phi\rightarrow\phi^x_c)\in T.$$
and we have that lemma..
Lemma 11.3$$\hspace{0.2in}$$Suppose that $$L$$ is a language and $$T$$ is a consistent theory in $$L,$$ then there is a language $$L'\supseteq L$$ obtained by adding constant symbols to $$L$$ and a theory $$T''\supseteq T$$ that is consistent, syntactically complete, and is a henkin theory.
It begins the proof by defining a sequence of languages and theories $$L_0,L_1,...$$ and $$T_0,T_1,...$$ such that $$L_0=L,$$ $$T_0=T,$$ and $$L_{n+1}$$ defined by adding to the language $$L_n$$ all constant symbols $$c_{\exists x\phi}$$ for every formula $$\exists x\phi$$ in $$L_n,$$ with $$T_{n+1}$$ defined by adding to $$T_n$$ all formulas $$(\exists x\phi\rightarrow\phi^x_{c_{\exists x\phi}})$$ for every formula $$\phi$$ in $$L_n.$$ $$L'$$ and $$T'$$ are defined to be the union of all languages $$L_0,L_1,...$$ and all theories $$T_0,T_1,...$$ respectively. It follows that $$T'$$ is a henkin theory by assuming that if $$\phi$$ is a formula in $$L',$$ then $$\phi\in L_n$$ for some number $$n,$$ from which by definition, $$(\exists x\phi\rightarrow\phi^x_{c_{\exists x\phi}})\in T_{n+1}\subseteq T'.$$
Then it proceeds to prove that $$T'$$ is a consistent theory. It's easy to tell that it suffices to prove that all theories $$T_n$$ for each $$n$$ is consistent, hence we proceed by proof by induction on the said sequence of theories. For the basis, $$T_0=T$$ which is by definition, consistent. For the induction step, we assume that $$T_n$$ is consistent and by way of contradiction, that $$T_{n+1}$$ is inconsistent. The proof further goes as follows (I shall mark the questionable part with "???"):
Then since $$T_{n+1}$$ is obtained from $$T_n$$ by adding formulas of the form $$(\exists x\phi\rightarrow\phi^x_{c_{\exists x\phi}}),$$ there must be some finite number of such formulas $$\phi_1,...,\phi_k$$ such that $$T_n\cup\{(\exists x\phi_i\rightarrow(\phi_i)^x_{c_{\exists x\phi_i}}):i\le k\}$$ is inconsistent. Then using proof by contradiction, $$T_n\cup\{(\exists x\phi_i\rightarrow(\phi_i)^x_{c_{\exists x\phi_i}}):i\le k-1\}\vdash\neg(\exists x\phi_k\rightarrow(\phi_k)^x_{c_{\exists x\phi_k}}).$$ Let $$Q=T_n\cup\{(\exists x\phi_i\rightarrow(\phi_i)^x_{c_{\exists x\phi_i}}):i\le k-1\}$$ then since $$(\neg(p\rightarrow q)\rightarrow p)$$ and $$(\neg(p\rightarrow q)\rightarrow\neg q)$$ are tautologies, we have that $$Q\vdash\exists x\phi_k$$ and $$Q\vdash\neg(\phi_k)^x_{c_{\exists x\phi_k}},$$ but then by generalization on constant applied to this last fact, we see that $$Q\vdash\forall x\neg\phi_k (???),$$ hence $$Q\vdash\neg\exists x\phi_k,$$ so $$Q$$ is inconsistent. Proceeding this way, we can eliminate all the formulas $$(\exists x\phi\rightarrow(\phi_i)^x_{c_{\exists x\phi_i}})$$ for all $$i=k,...,1$$ and show that $$T_n$$ itself is inconsistent, which is a contradiction.
Here, it seems that the author uses the stronger universal generalization rule (which is applicable to terms as long as that choice of terms remains arbitrary), whereas I'm working in classical predicate logic with only the generalization metatheorem (which is applicable only to variables) stated as follows (from Enderton's book, "A Mathematical Introduction to Logic"):
If $$\Gamma\vdash\varphi$$ and $$x$$ does not occur free in any formula of $$\Gamma,$$ then $$\Gamma\vdash\forall x\varphi.$$
Here I assume that $$x$$ is a variable because it either should not occur at all or occurs bound (and constant symbols which are non-variable terms don't "occur bound").
So far, all I've been trying to do is finding a proof for the "stronger" generalization metatheorem or an alternative way to prove that $$Q$$ is inconsistent but to no avail. Your help will be very much appreciated.
• See Enderton, page 123 : THEOREM 24F (GENERALIZATION ON CONSTANTS) Assume that $\Gamma \vdash ϕ$ and that $c$ is a constant symbol that does not occur in $\Gamma$. Then there is a variable $y$ (which does not occur in $ϕ$) such that $\Gamma \vdash ∀y ϕ^c_y$. – Mauro ALLEGRANZA May 15 at 6:14
• "is applicable only to variables" : where would the idea differ ? A constant is just a fancy variable. Of course this is not precise, but if you look at the proof for variables, and change the word "variable" to "constant" surely it applies as well. Think about it : if you can proof $\phi(c)$ with no hypotheses on $c$ ($c$ a constant); then surely you can prove $\phi(y)$ for any variable $y$ not having been used in the proof, and so by generalization $\forall y, \phi(y)$ – Max May 15 at 11:23
|
2019-06-20 13:44:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 67, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329280853271484, "perplexity": 126.71430979896235}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999218.7/warc/CC-MAIN-20190620125520-20190620151520-00302.warc.gz"}
|
https://brilliant.org/problems/wished-to-have-a-transformation-formula-for/
|
# Transforming Tangents?
Geometry Level 3
$\large \tan70^\circ - \tan50^\circ + \tan10^\circ$
The expression above has a closed form, find this closed form.
Give your answer to 2 decimal places.
×
|
2020-06-06 05:45:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5353964567184448, "perplexity": 5003.3291775785365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509972.80/warc/CC-MAIN-20200606031557-20200606061557-00132.warc.gz"}
|
https://www.encyclopediaofmath.org/index.php?title=Probability_distribution&oldid=13959
|
# Probability distribution
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
One of the basic concepts in probability theory and mathematical statistics. In the modern approach, a suitable probability space is taken as a model of a stochastic phenomenon being considered. Here is a sample space, is a -algebra of subsets of specified in some way and is a measure on such that (a probability measure).
Any such measure on is called a probability distribution (see [1]). But this definition, basic in the axiomatics introduced by A.N. Kolmogorov in 1933, proved to be too general in the course of the further development of the theory and was replaced by more restrictive ones in order to exclude some "pathological" cases. An example was the requirement that the measure be "perfect" (see [2]). Probability distributions in function spaces are usually required to satisfy some regularity property, usually formulated as separability but also admitting a characterization in different terms (see Separable process and also [3]).
Many of the probability distributions that appear in the specific problems in probability theory and mathematical statistics have been known for a long time and are connected with the basic probability schemes [4]. They are described either by probabilities of discrete values (see Discrete distribution) or by probability densities (see Continuous distribution). There are also tables compiled in certain cases where they are necessary [5].
Among the basic probability distributions, some are connected with sequences of independent trials (see Binomial distribution; Geometric distribution; Multinomial distribution) and others with the limit laws corresponding to such a probability scheme when the number of trials increases indefinitely (see Normal distribution; Poisson distribution; Arcsine distribution). But these limit distributions may also appear directly in exact form, as in the theory of stochastic processes (see Wiener process; Poisson process), or as solutions of certain equations arising in so-called characterization theorems (see also Normal distribution; Exponential distribution). A uniform distribution, usually considered as a mathematical way of expressing that outcomes of an experiment are equally possible, can also be obtained as a limit distribution (say, by considering sums of large numbers of random variables or some other random variables with sufficiently smooth and "spread out" distributions modulo 1). More probability distributions can be obtained from those mentioned above by means of functional transformations of the corresponding random variables. For example, in mathematical statistics random variables with a normal distribution are used to obtain variables with a "chi-squared" distribution, a non-central "chi-squared" distribution, a Student distribution, a Fisher -distribution, and others.
Important classes of distributions were discovered in connection with asymptotic methods in probability theory and mathematical statistics (see Limit theorems; Stable distribution; Infinitely-divisible distribution; "Omega-squared" distribution).
It is important, both for the theory and in applications, to be able to define a concept of proximity of distributions. The collection of all probability distributions on can in different ways be turned into a topological space. Weak convergence of probability distributions plays a basic role here (see Distributions, convergence of). In the one-dimensional and finite-dimensional cases the apparatus of characteristic functions (cf. Characteristic function) is a principal instrument for studying convergence of probability distributions.
A complete description of a probability distribution (say, by means of the density of a probability distribution or a distribution function) is often replaced by a limited collection of characteristics. The most widely used of these in the one-dimensional case are the mathematical expectation (the average value), the dispersion (or variance), the median (in statistics), and the moments (cf. Moment). For numerical characteristics of multivariate probability distributions see Correlation (in statistics); Regression.
The statistical analogue of a probability distribution is an empirical distribution. An empirical distribution and its characteristics can be used for the approximate representation of a theoretical distribution and its characteristics (see Statistical estimator). For ways to measure how well an empirical distribution fits a hypothetical one see Statistical hypotheses, verification of; Non-parametric methods in statistics.
#### References
[1] A.N. Kolmogorov, "Foundations of the theory of probability" , Chelsea, reprint (1950) (Translated from Russian) [2] B.V. Gnedenko, A.N. Kolmogorov, "Limit distributions for sums of independent random variables" , Addison-Wesley (1954) (Translated from Russian) [3] Yu.V. Prokhorov, "The method of characteristic functionals" , Proc. 4-th Berkeley Symp. Math. Stat. Probab. , 2 , Univ. California Press (1961) pp. 403–419 [4] W. Feller, "An introduction to probability theory and its applications" , 1–2 , Wiley (1957–1971) [5] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics" , Libr. math. tables , 46 , Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova) [6] B.V. Gnedenko, "A course of probability theory" , Moscow (1969) (In Russian) [7] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946) [8] J. Neveu, "Bases mathématiques du calcul des probabilités" , Masson (1970)
|
2019-03-20 19:28:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226178526878357, "perplexity": 834.0255796717772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212324-00307.warc.gz"}
|
http://hammerofcode.com/standard-error/define-standard-error-of-estimation.php
|
Home > Standard Error > Define Standard Error Of Estimation
# Define Standard Error Of Estimation
## Contents
Notice that the population standard deviation of 4.72 years for age at first marriage is about half the standard deviation of 9.27 years for the runners. Ye. Student approximation when σ value is unknown Further information: Student's t-distribution §Confidence intervals In many practical applications, the true value of σ is unknown. The resulting interval will provide an estimate of the range of values within which the population mean is likely to fall. http://hammerofcode.com/standard-error/bootstrap-estimation-standard-error.php
Test Your Understanding Problem 1 Which of the following statements is true. For some statistics, however, the associated effect size statistic is not available. You interpret S the same way for multiple regression as for simple regression. The confidence interval so constructed provides an estimate of the interval in which the population parameter will fall.
## Standard Error Of Estimate Definition
The true standard error of the mean, using σ = 9.27, is σ x ¯ = σ n = 9.27 16 = 2.32 {\displaystyle \sigma _{\bar {x}}\ ={\frac {\sigma }{\sqrt This can artificially inflate the R-squared value. Conversely, the unit-less R-squared doesn’t provide an intuitive feel for how close the predicted values are to the observed values. Then subtract the result from the sample mean to obtain the lower limit of the interval.
and Keeping, E.S. (1963) Mathematics of Statistics, van Nostrand, p. 187 ^ Zwillinger D. (1995), Standard Mathematical Tables and Formulae, Chapman&Hall/CRC. The standard error is a measure of central tendency. (A) I only (B) II only (C) III only (D) All of the above. (E) None of the above. Search this site: Leave this field blank: . Standard Deviation Estimation The survey with the lower relative standard error can be said to have a more precise measurement, since it has proportionately less sampling variation around the mean.
The only difference is that the denominator is N-2 rather than N. This lesson shows how to compute the standard error, based on sample data. Thanks for the question! Wilson Mizner: "If you steal from one author it's plagiarism; if you steal from many it's research." Don't steal, do research. .
Note the similarity of the formula for σest to the formula for σ.  It turns out that σest is the standard deviation of the errors of prediction (each Y - Confidence Interval Estimation At a glance, we can see that our model needs to be more precise. When effect sizes (measured as correlation statistics) are relatively small but statistically significant, the standard error is a valuable tool for determining whether that significance is due to good prediction, or The standard error is important because it is used to compute other measures, like confidence intervals and margins of error.
## Standard Error Of Estimate Definition Statistics
When this occurs, use the standard error. A quantitative measure of uncertainty is reported: a margin of error of 2%, or a confidence interval of 18 to 22. Standard Error Of Estimate Definition Please help. Standard Error Of The Estimate Meaning The obtained P-level is very significant.
Kind regards, Nicholas Name: Himanshu • Saturday, July 5, 2014 Hi Jim! Get More Info As the sample size increases, the sampling distribution become more narrow, and the standard error decreases. For the BMI example, about 95% of the observations should fall within plus/minus 7% of the fitted line, which is a close match for the prediction interval. Similarly, the sample standard deviation will very rarely be equal to the population standard deviation. Std Error Of Estimate
Upper Saddle River, New Jersey: Pearson-Prentice Hall, 2006. 3. Standard error. To obtain the 95% confidence interval, multiply the SEM by 1.96 and add the result to the sample mean to obtain the upper limit of the interval in which the population Standard errors provide simple measures of uncertainty in a value and are often used because: If the standard error of several individual quantities is known then the standard error of some useful reference The standard deviation of the age was 3.56 years.
See unbiased estimation of standard deviation for further discussion. Variance Estimation Was there something more specific you were wondering about? Read more about how to obtain and use prediction intervals as well as my regression tutorial.
## Consider the following data.
Note: the standard error and the standard deviation of small samples tend to systematically underestimate the population standard error and deviations: the standard error of the mean is a biased estimator Sign Me Up > You Might Also Like: How to Predict with Minitab: Using BMI to Predict the Body Fat Percentage, Part 2 How High Should R-squared Be in Regression Jim Name: Nicholas Azzopardi • Friday, July 4, 2014 Dear Jim, Thank you for your answer. Low Standard Error The standard error is computed from known sample statistics.
The determination of the representativeness of a particular sample is based on the theoretical sampling distribution the behavior of which is described by the central limit theorem. When the true underlying distribution is known to be Gaussian, although with unknown σ, then the resulting estimated distribution follows the Student t-distribution. About all I can say is: The model fits 14 to terms to 21 data points and it explains 98% of the variability of the response data around its mean. this page Greek letters indicate that these are population values.
There’s no way of knowing. McHugh. Search More Info . The mean age was 33.88 years.
Therefore, the standard error of the estimate is There is a version of the formula for the standard error in terms of Pearson's correlation: where ρ is the population value of Although not always reported, the standard error is an important statistic because it provides information on the accuracy of the statistic (4). Thus instead of taking the mean by one measurement, we prefer to take several measurements and take a mean each time. This formula may be derived from what we know about the variance of a sum of independent random variables.[5] If X 1 , X 2 , … , X n {\displaystyle
It can only be calculated if the mean is a non-zero value. When the standard error is large relative to the statistic, the statistic will typically be non-significant. Just as the standard deviation is a measure of the dispersion of values in the sample, the standard error is a measure of the dispersion of values in the sampling distribution. Population parameter Sample statistic N: Number of observations in the population n: Number of observations in the sample Ni: Number of observations in population i ni: Number of observations in sample
The mean age for the 16 runners in this particular sample is 37.25. The standard error can be computed from a knowledge of sample attributes - sample size and sample statistics. S becomes smaller when the data points are closer to the line. In an example above, n=16 runners were selected at random from the 9,732 runners.
The following expressions can be used to calculate the upper and lower 95% confidence limits, where x ¯ {\displaystyle {\bar {x}}} is equal to the sample mean, S E {\displaystyle SE} With n = 2 the underestimate is about 25%, but for n = 6 the underestimate is only 5%.
|
2018-07-21 04:15:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800814151763916, "perplexity": 613.8892528277597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00146.warc.gz"}
|
http://physics.stackexchange.com/questions?page=309&sort=newest
|
# All Questions
1answer
108 views
### Hamiltonian form of Noether's Theorem
I understand that Noether's Theorem has a Hamiltonian form, whereby {X, H} = 0 iff {H, X} = 0. The proof of this is trivial, as it follows from the antisymmetry of the Poisson Brackets. First ...
1answer
108 views
### Does a universe experiencing “heat death” have a temperature?
As defined by Wikipedia: The heat death of the universe is a suggested ultimate fate of the universe in which the universe has diminished to a state of no thermodynamic free energy and ...
0answers
108 views
### Thermal Conductivity Graph
This is our first time (as an engineer) seeing this type of graph that we can't interpret. This is a rough comparision of thermal conductivity from the Wiki page. ...
0answers
265 views
### Simulating the Interference Pattern of Fraunhofer Diffraction by a Single Slit
I'm attempting to simulate the Fraunhofer diffraction pattern due to a single slit. We know that the intensity at an angle $\theta$ is $I(\theta)=I_0 \text{sinc} ^2(\beta)$ where ...
0answers
228 views
### The surface area to volume ratio of a sphere and the Bekenstein bound
I am trying to relate the surface-area-to-volume-ratio of a sphere to the Bekenstein bound. Since the surface-area-to-volume-ratio decreases with increasing volume, one would surmise that, per unit of ...
1answer
215 views
### Working with a Routhian for a specific system
I asked a more general question earlier about the Routhian, but I'm still having trouble working with it. Here's my specific case. Given the following Lagrangian: ...
2answers
257 views
### Why can I split white light into separate colors with my eyes
I was sitting in a meeting at work today. When I looked up there was a projector facing the rear wall letting out a very bright white light (the projection). When I look right at it, all I could ...
1answer
151 views
### Paper in physics - calculations; rounding or not?
I'm currently a high schooler, and I'm writing my first scientific paper. The result is fairly simple, and it is nothing too special, but I see it as a nice way to prepare myself for the academic ...
0answers
15 views
### Is imperative magnetic flux the capacity of individual atoms, or the constituent valency of chemically-bonded molecules within a vacuum?
If magnetic energy depends on the electron poles within two-fields within a permanent magnets void, how do invidiual atoms react within the attraction or repulsion of poles, and what incurrence does ...
4answers
960 views
### How can light carry data if light has no mass, and data has mass?
Via a packet-switched network, like the internet, data is sent as packets (bits) wirelessly via radio waves with Wi-Fi, or 802.11g, etc. What my question is is this: Radio waves are light; light has ...
0answers
72 views
### Trapping Light in a shell
I was curious about this. Is it theoretically (mathematically) possible to create a shell such that when a photon 'hits' the shell, the spacetime at the shell surface is such that the photon travels ...
1answer
308 views
### How do you derive Lagrange's equation of motion from a Routhian?
Given a Routhian $R(r,\dot{r},\phi,p_{\phi})$, how do you derive Lagrange's equation for $r$? Do you just solve the following for $r$? $$\frac{d}{dt}\frac{∂R}{∂\dot{\phi}}-\frac{∂R}{∂\phi}=0$$ And ...
0answers
35 views
### How to derive the classical Hartree potential for a slab system?
I am now working on a slab system, but encountered some problems on the classical Hartree potential. This slab system is infinity along x-y plane, and has finite size along $z$ axis $z\in[0,L]$. I ...
1answer
266 views
### Virtual displacement and generalized coordinates
I have a doubt regarding the expression of a virtual displacement using generalized coordinates. I will state the definitions I'm taking and the problem. The system is composed by $n$ points with ...
1answer
196 views
### Any open areas to work in non equilibrium thermodynamics for a Phd student? [closed]
I see that many papers written on fundamentals of thermodynamics(theory) nowadays are by some old professors somewhere(there may be exceptions). Most active young faculty don't seem to be seriously ...
1answer
49 views
### Thinking Clearly about Fresnel Zone of Short Pulse:
Let's say you have a short pulse of light which expands radially from a lightbulb, and it impinges upon a mirror and reflects towards a photodetector which you have places somewhere above the mirror. ...
0answers
83 views
### Rocket hovers- and then what?
If we have a rocket, using conservation of momentum we derived in my classical mechanics course $$m\dot{v}=-\dot{m}v_{ex}+F^{EXT}$$ $m$ is the total mass of the rocket and fuel still on the rocket ...
2answers
610 views
### Why does the amount of energy transferred depend on distance rather than time?
The change in energy of an object can be determined by the work equation, where work is the change in energy: $$W = F \cdot d$$ I conceptualize the transfer of energy as simply a series of small ...
1answer
243 views
### Given wave function at $t=0$, what is the process of deriving time dependent wave equation? [closed]
Suppose $$\Psi (x, t=0)=Ae^{i\alpha _1}\psi _1(x)+Be^{i\alpha _2}\psi_2(x)+Ce^{i\alpha _3}\psi_3(x).$$ If $\psi _n$ are the energy eigenfunctions how would I derive $\Psi (x,t)$? I am having trouble ...
1answer
208 views
### Applying theorem of residues to a fermionic reservoir correlation function in order to solve the integral in the CF and obtain a summation
Applying theorem of residues to a fermionic reservoir correlation function in order to solve the integral in the correlation function and obtain a summation.
1answer
194 views
### How to determine the positions of two points in a radial line by an intensity level dB?
The following is the question from my school. A source emits sound uniformly in all directions. A radial line is drawn from this source. On this line, determine the positions of two points, 1.00m ...
1answer
296 views
### Can we generate infinite energy by successive fission and fusion reactions?
Fission divides one Helium atom into two Hydrogen (Deuterium) atoms. And fusion, once again, puts together those two Hydrogen atoms into one Helium atom. In both reactions, overall output energy is ...
1answer
615 views
### Why magnetic field lines and force are not orthogonal with magnets?
The below explanation why magnetism exists is superb in this video. The explanation about magnets is also great in this video. A magnet has atoms with unpaired electrons forming mini magnets. The ...
0answers
23 views
### Entropy used to calculate energy?
I'm currently reading an online article, and below is a quote from that article: The thermodynamic entropy to change $n$ memory cells within $m$ states is $ΔS=k_B\ln(m^n)$, where $k_B$ is the ...
0answers
114 views
### Friction From an Object On-Top of a Sliding Object
Consider a block $A$ lying on a flat and frictionless table, and a block $B$ lying on top of block $A$. A horizontal force $F$ is applied to block $A$. If there is no friction between blocks $A$ and ...
1answer
144 views
### Antimatter collision - Energy Released
I think it's about time for me to ask this question, as I've been contemplating this for a while. $$E = mc^2.$$ This is Einstein's most famous equation. But what does it mean? On my own, I had to ...
1answer
906 views
4answers
660 views
### Isn't gravity non-local and non-causal?
The way I think of this is that, I can ask physical questions about a space-time which are impossible to answer unless one knows the full space-time, and hence I am inclined to believe that gravity is ...
1answer
774 views
### Calculating the frictional force
Here's my problem and the work I've done. The time is already past for me to submit the answer, but I want to know where I went wrong and why I was wrong. The 2-kg box slides down a vertical wall ...
2answers
3k views
### Electric Field Between Two Parallel Infinite Plates of Positive Charge and a Gaussian Cylinder
Is the electric field between two positively charged parallel infinite plates one with a charge density twice the other effect the electric field on the outside of the plates? I am thinking no, ...
0answers
65 views
### Can I transfer electricity through induction?
If I place a solenoid connected to a bulb inside a bigger solenoid which is connected to a power source, will the bulb glow?
0answers
356 views
### projectile that splits into two fragments of equal mass
I am studying for an exam, and this is part of a problem in my book. A projectile is launch from level ground and is intended to hit a target 100m away. Instead, it explodes into two fragments of ...
2answers
199 views
### A problematic integral in calculating the entanglement entropy in 1+1 D free massive bosonic field theory
I encountered a curious integration identity when I was reading the paper by Pasquale Calabrese and John Cardy on the entanglement entropy of 1+1D quantum field theory (arXiv). The identity is given ...
2answers
254 views
### Which cyan colored line is produced in the Thomson e/m apparatus?
Related: Which green spectral line(s) are emitted in a Thomson tube? After reading Lisa Lee’s OP on an electron deflection tube, although she had some misunderstandings on its operation, I still ...
1answer
82 views
### How DC and AC relays works?
I was told long time ago that DC relay had a coil. There was a switch (2 wires, one is stable, the other one is flexible) inside the coil. The switch was parallel to the axial direction of the coil. ...
2answers
352 views
### Why moving charges causes Magnetic Field (module and direction)?
Why an constant electric current in a wire produces a magnetic field, that circles that wire? I know that this question was posted before. However, all answers talk about Maxwell equations, axioms, ...
15 30 50 per page
|
2014-08-23 11:27:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849529027938843, "perplexity": 724.3881251633026}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826016.5/warc/CC-MAIN-20140820021346-00448-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://lqp2.org/node/1400
|
# Free products in AQFT
Roberto Longo, Yoh Tanimoto, Yoshimichi Ueda
June 19, 2017
We apply the free product construction to various local algebras in algebraic quantum field theory. If we take the free product of infinitely many identical half-sided modular inclusions with ergodic canonical endomorphism, we obtain a half-sided modular inclusion with ergodic canonical endomorphism and trivial relative commutant. On the other hand, if we take M\"obius covariant nets with trace class property, we are able to construct an inclusion of free product von Neumann algebras with large relative commutant, by considering either a finite family of identical inclusions or an infinite family of inequivalent inclusions. In two dimensional spacetime, we construct Borchers triples with trivial relative commutant by taking free products of infinitely many, identical Borchers triples. Free products of finitely many Borchers triples are possibly associated with Haag-Kastler net having S-matrix which is nontrivial and non asymptotically complete, yet the nontriviality of double cone algebras remains open.
Keywords:
none
|
2021-04-17 05:37:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9044298529624939, "perplexity": 1339.2041899582907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00398.warc.gz"}
|
https://github.com/simongog/sdsl
|
# simongog / sdsl
Succinct Data Structure Library
## Files
Failed to load latest commit information.
Type
Name
Commit time
# SDSL: Succinct Data Structure Library
This is a C++ template library for succinct data structures called sdsl.
Succinct data structures are fascinating: They represent an object (like a bitvector, a tree, suffix array,...) in space close the information-theoretic lower bound of the object but the defined operations can still be performed efficiently. Hmmm, at least in theory ;) Actually there is still a big gap between theory and practice. Why? The time complexity of an operations performed on the classical fat data structure and the slim succinct data structure are the same most time in theory. However, in practice succinct structures are slow since the operations require often memory accesses with bad locality of references. Moreover, often the in theory small sub-linear space data structures account for a large amount of memory, since they are only asymptotic sub-linear and the input size for which they are negligible in practice is galactic.
The aim of the library is to provide basic and complex succinct data structure which are
• easy to use (the library is structure like the STL, which provides classical data structures),
• capable of handling large inputs (yes, we support 64-bit),
• provide excellent performance in construction, and
• provide excellent operation performance
A lot of engineering tricks had to be applied to reach the performance goal, for instance the use a semi-external algorithm, bit-parallelism on 64-bit words, and cache-friendly algorithms.
## List of implemented data structures
• Bitvectors
• An uncompressed mutual bitvector (bit_vector)
• An uncompressed immutable bitvector (bit_vector_interleaved)
• A -compressed immutable bitvector (rrr_vector<>)
• A bitvector for sparse populated arrays (sd_vector<>)
• Rank and Select Support Structures
• Several rank and select implementations with different time-space trade-offs for the uncompressed bitvectors (rank_support_v,rank_support_v5,select_support_mcl,...)
• Rank and select for compressed bitvectors (rrr_rank_support<>, sd_rank_support<>,...)
• Variable-length Coders
• Elias- coder (coder::elias_delta)
• Fibonacci-coder (coder::fibonacci)
• Integer Vectors
• Mutable vectors for (compile-time) fixed w-bit integers (int_vector<w>)
• Mutable vector for (run-time) fixed w-bit integers (int_vector<0>, w passed to the constructor)
• Immutable compressed integer vector using a variable-length coder coder (enc_vector<coder>)
• Wavelet Trees (all immutable)
• Balanced wavelet tree for a byte-alphabet (wt)
• Balanced wavelet tree for a integer-alphabet (wt_int)
• Huffman-shaped wavelet tree for a byte-alphabet (wt_huff)
• Run-length compressed wavelet trees for a byte-alphabet (wt_rlmn, wt_rlg, and wt_rlg8)
• Compressed Suffix Arrays (CSA) (all immutable)
• CSA based on a wavelet tree (csa_wt)
• CSA based on the compressed -function csa_sada
• Balanced Parentheses Support Structures (all immutable)
• A range-min-max-tree implementation (bp_support_sada) to find_open, find_close, enclose, double_enclose,...
• Hierarchical solution with pioneer parentheses (bp_support_g, bp_support_gg)
• Range Minimum Support (RMQ) Structures (all immutable)
• Self-contained RMQ structure using 2n+o(n) bits or 4n+o(n) bits (rmq_succinct_sct, rmq_succinct_sada)
• Non-succinct support structure for RMQ (rmq_support_sparse_table)
• Longest Common Prefix (LCP) Arrays (all immutable)
• LCP-array based on direct accessible codes (lcp_dac)
• LCP-array encodes small values with a byte and large values with a word (lcp_kurtz)
• LCP-array encodes all values in a wavelet tree (lcp_wt)
• Compressed LCP-array dependent on the corresponding CSA (lcp_support_sada)
• Compressed LCP-array dependent on the corresponding CST (lcp_support_tree)
• Compressed LCP-array dependent on the corresponding CSA and CST (lcp_support_tree2)
• Compressed Suffix Trees(CSTs) (all immutable)
• CST providing very fast navigation operations (cst_sada)
• CST representing nodes as intervals in the suffix array (cst_sct3)
## Example of a complex data structure
Let us now show how you can assemble even a very complex data structure very easily. Lets begin with the most complex one, a CST! It basically consists of a CSA, an compressed LCP-array, and a succinct representation of the tree topology; each part can be specified by a template parameter. Say, we want fast navigation operations, so we take the class cst_sada<cst_type, lcp_type, bp_support_type> for our CST. Now we can specify the type of CSA. Lets take a CSA based on wavelet tree: csa_wt<wt_type, SA_sample_dens, inv_SA_sample_dens>. We can recursively specify the used types. So now we can specify the used wavelet tree, say a run-length compressed wavelet tree (wt_rlmn<>). We could recurse again and specify, each detail of the wavelet tree (e.g. which rank support structure should be used) but we stick now with the default configuration which uses an sd_vector for the marking of the heads of the runs in the wavelet tree. Lets choose at last a LCP array which uses the topology of the CST and the CSA to compress the LCP values (lcp_support_tree2) and stick with default template parameters for all types. So the final type looks like this: cst_sada<cst_wt<wt_rlmn<> >, lcp_support_tree2<> >.
Now, lets explore the data structure a little bit. We take the english.100MB input from the Pizza&Chili-corpus, construct the CST-object, output its structure, and visualise it using the d3js-library. Have fun with the result.
## Types of data structures
The data structures in the library can be divided into several classes:
• Objects of mutable classes can be changed after construction (e.g. we can assign new values to the elements of an int_vector)
• Objects of immutable classes can not be changed after construction (e.g. you can not assign a new value to an element of a compressed suffix array, say csa_wt)
• Objects of support classes add functionality to objects of self-contained classes. For example an object of type rank_support_v addes constant time rank(i)-functionality to an object of type bit_vector, or an object of of type bp_support_sada adds find_open(i)-functionality to a bit_vector object, which represents a balanced parentheses sequence.
Each sdsl-class X has to implement the following methods:
• The standard constructor X()
• The copy constructor X(const &X)
• Swap operator swap(const &X)
• serialize operator serialize(std::ostream &out, structure_tree_node* v, std::string name)
• load operator load(std::istream &in)
We provide many handy methods for sdsl objects in the util namespace:
• util::store_to_file(const X &x, const char* file_name) stores the object x to the file
• util::clear(X &x) deletes the object and frees the space
• util::load_from_file(X &x, const char* file_name) loads the object x from the file
• util::assign(X &x, Y &y) if the type of X equals Y, then x and y are swapped, otherwise y is assigned to x by x = T(y)
• util::get_size_in_bytes(const X &x) returns the number of bytes needed to represent object x in memory.
• util::write_structure<FORMAT>(const X &x, std::ostream &out) writes the structure of the data structure in JSON or R format (FORMAT=JSON_FORMAT or R_FORMAT)
## Supported platforms
The library was successfully tested on the following configurations
• Mac OS X 10.7.3 on a MacBookPro equipped with a Intel Core i5 CPU
• Ubuntu Linux 12.04 running on a server equipped with Intel Xeon (E5640) CPUs
We plan to support Windows in the near future.
## Installation
The installation requires that the cmake tool and a C++ compiler (e.g. from the GNU Compiler Collection) is installed. You can than install the library into a directory SDSL_INSTALL_DIR by calling ./install SDSL_INSTALL_DIR If SDSL_INSTALL_DIR is not specified your home directory is used. Please use an absolute path name for SDSL_INSTALL_DIR. The library header files will be located in the directory SDSL_INSTALL_DIR/include and the library in the directory SDSL_INSTALL_DIR/lib. After the installation you can execute the tests in the test directory or start with some code examples in the examples folder.
## Tests
We have used the gtest framework for the tests. Compile with make and run tests with make test. We have another target vtest which runs the test with the valgrind tool. make test will try to download some texts from a gutenberg.org mirror. See the README file in the directory for details.
## Examples
Compile the examples with make and experience how esay it is to use succinct data structures.
## Construction of Suffix Arrays
The current version includes Yuta Mori's incredible fast suffix array construction library libdivsufsort version 2.0.1.
## Contributors
Here is a list of contributes:
Code:
• Stefan Arnold
• Timo Beller
• Simon Gog
• Shanika Kuruppu
• Matthias Petri
• Jani Rahkola
Bug reports:
• Kalle Karhu
• Dominik Kempa
New contributors are welcome any time!
Have fun with the library!
Succinct Data Structure Library
|
2020-08-05 09:24:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2600538730621338, "perplexity": 7797.509349242801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00087.warc.gz"}
|
http://gooddownloadsoftwarefast.us/2018/01/13/adelle-font-download-zip_so/
|
Posted on
And what I assume you shall assumeLinks to Japanese typography compiled by adelle font download zip Luc Devroye 1 I celebrate myself. For every atom belonging to me as good belongs to you. .I loafe and invite my soul
Links to Japanese typography compiled by Luc Devroye 1 I celebrate myself. And what I assume you shall assume For every adelle font download zip atom belonging to me as good belongs to you. .I loafe and invite my soul and sing myself
|
2018-01-17 23:37:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531278014183044, "perplexity": 13162.766490099853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887024.1/warc/CC-MAIN-20180117232418-20180118012418-00529.warc.gz"}
|
https://www.physicsforums.com/threads/determining-if-the-functions-cosx-e-x-x-are-linearly-independent.473363/
|
# Determining if the functions {cosx , e^-x , x} are linearly independent
## Homework Statement
Basically, the title says it all, I need to figure out whether these functions are linearly independtend on (-infinity, infinity)
## Homework Equations
Wronskian (the determinant of the matrix composed of the functions in the first row, first derivative in the second row and second derivatives in the third row)
## The Attempt at a Solution
After computing the Wronskian this is what I got:
[(-e^-x)(-cosx)] + [(xe^-x)(-sinx)] - [(x)(-e^-x)(cosx)] - [(e^-x)(cosx)]
however, I cannot seem to simply this. If anyone can help me simplify this further that would be great. Also if you help me determine if whether they are linearly independent.
Thanks
HallsofIvy
Homework Helper
Okay, you have calculated the Wronskian. Why? What does the Wronskian tell you? Would it help you to observe that, if $x= \pi/2$, that reduces to $-(\pi/2)e^{-\pi/2}$?
basically the wronskian tells us that if it is not equal to zero the specified functions are linearly independent.
After evaluating what you told me to substitute in, I get
(-1/2)(e^(-pi/2))(pi)
With this, the wronskian can never equal to zero due to the function of e.
Is this correct?
|
2021-07-28 11:07:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7634785771369934, "perplexity": 540.8278341371796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00454.warc.gz"}
|
https://www.physicsforums.com/threads/10-3-determine-if-a-is-in-the-span-b.1041422/
|
# 10.3 Determine if A is in the span B
• MHB
Gold Member
MHB
Determine if $A=\begin{bmatrix} 1\\3\\2 \end{bmatrix}$ is in the span $B=\left\{\begin{bmatrix} 2\\1\\0 \end{bmatrix} \cdot \begin{bmatrix} 1\\1\\1 \end{bmatrix}\right\}$
ok I added A and B to this for the OP
but from examples it looks like this can be answered by scalors so if
$c_1\begin{bmatrix} 2\\1\\0 \end{bmatrix} + c_2\begin{bmatrix} 1\\1\\1 \end{bmatrix}=\begin{bmatrix} 1\\3\\2 \end{bmatrix}$
Olinguito
Hi karush.
So you have
$$\begin{eqnarray}2c_1 &+& c_2 &=& 1 \\ c_1 &+& c_2 &=& 3 \\ {} & {} & c_2 &=& 2.\end{eqnarray}$$
If you substitute $c_2=2$ from the last equation into the first two equations, you get two different values for $c_1$. Hence the above set of equations is inconsistent (has no solutions) showing that $\mathbf A\notin\mathrm{span}B$.
Gold Member
MHB
Lets try this one... if $A= \begin{bmatrix} 1\\3\\2 \end{bmatrix}$ is in the span $B=\left\{\begin{bmatrix} 2\\1\\0 \end{bmatrix} \cdot \begin{bmatrix} 1\\1\\1 \end{bmatrix} \cdot \begin{bmatrix} 0\\1\\1 \end{bmatrix}\right\}$
then
$\begin{array}{rrrrr} 2c_1 &+ c_2 & & =1 \\ c_1 &+ c_2 & +c_3 & =3 \\ & c_2 & +c_3 & =2 \end{array}$
Solving $c_1=1, c_2=−1, c_3=3$
so $A\in\mathrm{span}B$
Last edited:
$2c_1+ c_2= 1$
$c_1+ c_2+ c_3= 3$ and
$c_2+ c_3= 2$
|
2023-03-20 19:18:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9300416111946106, "perplexity": 630.6064651111714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00315.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/write-five-rational-numbers-which-are-smaller-2-representation-rational-numbers-number-line_14984
|
Share
# Write Five Rational Numbers Which Are Smaller than 2. - CBSE Class 8 - Mathematics
ConceptRepresentation of Rational Numbers on the Number Line
#### Question
Write five rational numbers which are smaller than 2.
#### Solution
2 can be represented as 14/7
Therefore, five rational numbers smaller than 2 are 13/7, 12/7,11/7,10/7,9/7
Is there an error in this question or solution?
#### APPEARS IN
NCERT Solution for Mathematics Textbook for Class 8 (2018 to Current)
Chapter 1: Rational Numbers
Ex. 2.10 | Q: 3 | Page no. 20
#### Video TutorialsVIEW ALL [1]
Solution Write Five Rational Numbers Which Are Smaller than 2. Concept: Representation of Rational Numbers on the Number Line.
S
|
2019-12-15 12:36:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1983194649219513, "perplexity": 3335.755992645474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308149.76/warc/CC-MAIN-20191215122056-20191215150056-00444.warc.gz"}
|
https://infoscience.epfl.ch/record/170751
|
Formats
Format
BibTeX
MARC
MARCXML
DublinCore
EndNote
NLM
RefWorks
RIS
Abstract
We study B- meson decays to (p) over bar Lambda D-(*)0 final states using a sample of 657 x 10(6) B (B) over bar events collected at the Gamma(4S) resonance with the Belle detector at the KEKB asymmetric-energy e(+)e(-) collider. The observed branching fraction for B- -> (p) over bar Lambda D-0 is (1.43(-0.25)(+0.28) +/-0.18) x 10(-5) with a significance of 8.1 standard deviations, where the uncertainties are statistical and systematic, respectively. Most of the signal events have the (p) over bar Lambda mass peaking near threshold. No significant signal is observed for B- -> (p) over bar Lambda D-0 and the corresponding upper limit on the branching fraction is 4.8 x 10(-5) at the 90% confidence level.
|
2021-04-22 10:27:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718951344490051, "perplexity": 6337.804652454211}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00306.warc.gz"}
|
https://gre.kmf.com/question/all/151?keyword=&page=18
|
#### 题目列表
Which of the following, if true, suggests that the epidemiologists plan for eliminating malaria is not viable?
Even though physiological and behavioral processes are maximized within relatively narrow ranges of temperatures in amphibians and reptiles, individuals may not maintain activity at the optimum temperatures for performance because of the costs associated with doing so. Alternatively, activity can occur at suboptimal temperatures even when the costs are great. Theoretically, costs of activity at suboptimal temperatures must be balanced by gains of being active. For instance, the leatherback sea turtle will hunt during the time of day in which krill are abundant, even though the water is cooler and thus the turtles body temperature requires greater metabolic activity. In general, however, the cost of keeping a suboptimal body temperature, for reptiles and amphibians, is varied and not well understood; they include risk of predation, reduced performance, and reduced foraging success.
One reptile that scientists understand better is the desert lizard, which is active during the morning at relatively low body temperatures (usually 33.0 C), inactive during midday when external temperatures are extreme, and active in the evening at body temperatures of 37.0 C. Although the lizards engage in similar behavior (e.g., in morning and afternoon, social displays, movements, and feeding), metabolic rates and water loss are great and sprint speed is lower in the evening when body temperatures are high. Thus, the highest metabolic and performance costs of activity occur in the evening when lizards have high body temperatures. However, males that are active late in the day apparently have a higher mating success resulting from their prolonged social encounters. The costs of activity at temperatures beyond those optimal for performance are offset by the advantages gained by maximizing social interactions that ultimately impact individual fitness.
The passage suggests that reptiles and amphibians are able to
Even though physiological and behavioral processes are maximized within relatively narrow ranges of temperatures in amphibians and reptiles, individuals may not maintain activity at the optimum temperatures for performance because of the costs associated with doing so. Alternatively, activity can occur at suboptimal temperatures even when the costs are great. Theoretically, costs of activity at suboptimal temperatures must be balanced by gains of being active. For instance, the leatherback sea turtle will hunt during the time of day in which krill are abundant, even though the water is cooler and thus the turtles body temperature requires greater metabolic activity. In general, however, the cost of keeping a suboptimal body temperature, for reptiles and amphibians, is varied and not well understood; they include risk of predation, reduced performance, and reduced foraging success.
One reptile that scientists understand better is the desert lizard, which is active during the morning at relatively low body temperatures (usually 33.0 C), inactive during midday when external temperatures are extreme, and active in the evening at body temperatures of 37.0 C. Although the lizards engage in similar behavior (e.g., in morning and afternoon, social displays, movements, and feeding), metabolic rates and water loss are great and sprint speed is lower in the evening when body temperatures are high. Thus, the highest metabolic and performance costs of activity occur in the evening when lizards have high body temperatures. However, males that are active late in the day apparently have a higher mating success resulting from their prolonged social encounters. The costs of activity at temperatures beyond those optimal for performance are offset by the advantages gained by maximizing social interactions that ultimately impact individual fitness.
The author implies that, in the desert lizard, the advantages in some forms of social interaction
In his magnificent biography of Keats, Nicholas Roe chronicles a forward-looking spirit, whose poetry offered a strikingly modern amalgam of the arts and sciences.?Medical allusions to nerves, arteries, bone and blood developed in tandem with deepening thoughts on human pain and suffering, says Roe. Keatss vaunted "negative capability" allowed him to engage imaginatively with lifes transience and his own consumptive state (he suffered from tuberculosis and was not expected to live for long).?The rueful melancholy of "To Autumn" and "Ode to a Nightingale" speaks of a courageous reckoning with mortality.
Lord Byron, with customary disdain, regarded Keats as a mere dilettante of sensation and "his imagination". Roe will have little of this. The imagination at work in a poem such as "Isabella, or, the Pot of Basil" derived from Keatss professional exposure to dissecting-room corpses. As the son of a Moorfields livery stables manager, Keats knew how the poor could serve as fodder for scalpels. Hospitals were complicit in the body-snatching trade, as the science of anatomy was in its infancy and trainee surgeons were required to practice their skills.
Select the part of the passage that mentions the poems that were informed by Keats's illness.
The price of the SuperPixel high definition television, by Lux Electronics, has typically been out of the range of most consumers, a few of whom nonetheless save up for the television. This past July, the SuperPixel reduced its price by 40%, and sales during that month nearly tripled. TechWare, a popular electronics magazine, claims that the SuperPixel television should continue to see sales grow at this rate till the end of August.
Which of the following suggests that TechWares forecast is misguided?
A senator, near the end of his first six-year term and running for reelection, made the claim: "Citizens of our state are thriving. While national unemployment levels have remained high, our state unemployment rate has been at astonishingly low levels for eleven years running. Clearly, everyone in our state has benefited from the economical packages I have introduced during my time in the Senate. Therefore, grateful citizens of our state ought to vote for my second term."
This argument is most vulnerable to what criticism?
In order to combat Carvilles rampant homeless problem, Mayor Bloomfield recently proposed a ban on sleeping outdoors in the citys many parks. He claims that such a measure will force the homeless to either leave Carville or to find means other than sleeping in public parks.
Which of the following, if true, suggests that Mayor Bloomfields plan will be successful?
The main goal of "risk communication" is for experts to inform laypeople of the potential dangers of new technologies and ecological phenomenon. In order for experts to effectively communicate risk, they must understand the extent of the knowledge base of those whom they hope to guide. Research has found that canned messages, which make no concessions to a person`s knowledge, are likely to leave those who hear or read the message feeling befuddled or indifferent.
Which of the following scenarios best captures the principle regarding effective "risk communication" elucidated in the passage?
Dolphins can swim at high speeds and achieve high acceleration in the water. In 1936, Sir James Gray calculated the force dolphins should be able to exert based on their physiology. He concluded that the propulsive force they were able to exert was not enough to explain how fast they swim and accelerate.?In the 2000s, experimenters used special computer-enhanced measurements of the water in which dolphins were swimming. Through mathematical modeling, they were able to measure the force dolphins exert with their tails. As it turns out, dolphins exert considerably more force with their tails than Sir James Gray or anybody else ever expected.?Therefore, the force exerted by their tails easily explains how fast they swim and accelerate.
In the argument, the two portions (the third and last sentences) play which of the following roles?
The element ytterbium increases its electrical resistance when subject to high mechanical stresses. This property has made it an indispensable component in a medical tool designed to measure the stress on bones, which can guide physicians in setting broken bones. Unfortunately, ytterbium is rare, found in only a few meager sources around the world. A steep market demand will cause the price to skyrocket, and this technology so helpful to physicians will become unaffordable.
Which of the following, if true, most seriously weakens the argument above?
Megalimpet is a nationwide owner of office space. ?They have major office buildings in the downtowns of several cities in the 48 lower states, and rent this space to individual companies. ?Megalimpet office spaces vary from small office to large suites, and every space has custom-designed wall-to-wall carpeting. ?The carpet in several Megalimpet facilities needed replacing. ?The winning bid for the nationwide carpet replacement was submitted by Bathyderm Carpet Company (BCC). ?The bid contract involves all delivery costs, all installation, and any ongoing maintenance and upkeep while the carpet is under the three-year warranty. ?Both BCC executives and independent consultants they hired felt BCC would be able to perform all these services for far less than their bid price; these circumstances would allow BCC to reap a considerable profit.
25000 +道题目
161本备考书籍
|
2022-08-11 17:41:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36528706550598145, "perplexity": 4956.904789575609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00144.warc.gz"}
|
https://en.m.wikisource.org/wiki/Page:Carroll_-_Game_of_Logic.djvu/37
|
# Page:Carroll - Game of Logic.djvu/37
Ch. I. § 2.]
21
NEW LAMPS FOR OLD.
In the first case (when, for example, the Premisses are "some ${\displaystyle m}$ are ${\displaystyle x}$" and "no ${\displaystyle m}$ are ${\displaystyle y^{\prime }}$") the Term, which occurs twice, is called 'the Middle Term', because it serves as a sort of link between the other two Terms.
In the second case (when, for example, the Premisses are "no ${\displaystyle m}$ are ${\displaystyle x^{\prime }}$" and "all ${\displaystyle m^{\prime }}$ are ${\displaystyle y}$") the two Terms, which contain these contradictory Attributes, may be called 'the Middle Terms'.
Thus, in the first case, the class of "${\displaystyle m}$-Things" is the Middle Term; and, in the second case, the two classes of "${\displaystyle m}$-Things" and "${\displaystyle m^{\prime }}$-Things" are the Middle Terms.
The Attribute, which occurs in the Middle Term or Terms, disappears in the Conclusion, and is said to be "eliminated", which literally means "turned out of doors".
Now let us try to draw a Conclusion from the two Premisses—
"Some new Cakes are unwholesome; ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ No nice Cakes are unwholesome."
In order to express them with counters, we need to divide Cakes in three different ways, with regard to newness, to niceness, and to wholesomeness. For this we must use the larger Diagram, making ${\displaystyle x}$ mean "new", ${\displaystyle y}$ "nice", and ${\displaystyle m}$ "wholesome". (Everything
|
2019-07-19 02:54:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7551011443138123, "perplexity": 1631.378116936782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00191.warc.gz"}
|
https://www.groundai.com/project/differential-equations-for-real-structured-and-unstructured-defectivity-measures/
|
Differential equations for defectivity measures
Differential equations for real-structured (and unstructured) defectivity measures
P. Buttà Dipartimento di Matematica, SAPIENZA Università di Roma, P.le Aldo Moro 5, 00185 Roma, Italy N. Guglielmi Dipartimento di Ingegneria Scienze Informatiche e Matematica, Università degli studi di L’Aquila, Via Vetoio - Loc. Coppito, I-67100 L’Aquila, Italy M. Manetta Dipartimento di Ingegneria Scienze Informatiche e Matematica, Università degli studi di L’Aquila, Via Vetoio - Loc. Coppito, I-67100 L’Aquila, Italy and S. Noschese Dipartimento di Matematica, SAPIENZA Università di Roma, P.le Aldo Moro 5, 00185 Roma, Italy
Abstract.
Let be either a complex or real matrix with all distinct eigenvalues. We propose a new method for the computation of both the unstructured and the real-structured (if the matrix is real) distance (where if general complex matrices are considered and if only real matrices are allowed) of the matrix from the set of defective matrices, that is the set of those matrices with at least a multiple eigenvalue with algebraic multiplicity larger than its geometric multiplicity. For , this problem is closely related to the computation of the most ill-conditioned -pseudoeigenvalues of , that is points in the -pseudospectrum of characterized by the highest condition number. The method we propose couples a system of differential equations on a low rank (possibly structured) manifold which computes the -pseudoeigenvalue of which is closest to coalesce, with a fast Newton-like iteration aiming to determine the minimal value such that such an -pseudoeigenvalue becomes defective. The method has a local behaviour; this means that in general we find upper bounds for . However, they usually provide good approximations, in those (simple) cases where we can check this. The methodology can be extended to a structured matrix where it is required that the distance is computed within some manifold defining the structure of the matrix. In this paper we extensively examine the case of real matrices but we also consider pattern structures. As far as we know there do not exist methods in the literature able to compute such distance.
Key words and phrases:
Pseudospectrum, structured pseudospectrum, low rank differential equations, defective eigenvalue, distance to defectivity, Wilkinson problem.
15A18, 65K05
1. Introduction
Let be a complex () or real () matrix with all distinct eigenvalues. We are interested to compute the following distance,
wK(A) = inf{∥A−B∥F:B∈Kn,n is defective}, (1.1)
where is the Frobenius norm (when this turns out to be equivalent to consider the -norm). We recall that a matrix is defective if its Jordan canonical form has at least a non diagonal block associated to an eigenvalue . Let us introduce the -pseudospectrum of ,
ΛKε(A) = {λ∈C:λ∈Λ(A+E) for some E∈Kn×n with ∥E∥F≤ε}, (1.2)
where we refer to the classical monograph by Trefethen and Embree [TE05] for an extensive treatise. In his seminal paper [Wil65], Wilkinson defined the condition number of a simple eigenvalue as
k(λ)=1|yHx|,y and x left and right % eigenvectors with ∥x∥2=∥y∥2=1.
Observing that for a defective eigenvalue since , the search of the closest defective matrix can be pursued by looking for the minimal value such that there exists with , where for some of norm , being and the normalized left and right eigenvectors associated to .
The distance was introduced by Demmel [Dem83] in his well-known PhD thesis and has been studied by several authors, not only for its theoretical interest but also for its practical one (see, e.g., [Ala06] for the bounds on presented in the literature, [AFS13] and references therein for physical applications). An interesting formula for computing this distance in the -norm has been given by Malyshev [M99].
In the recent and very interesting paper by Alam, Bora, Byers and Overton [ABBO11], which provides an extensive historical analysis of the problem, the authors have shown that when the infimum in (1.1) is actually a minimum. Furthermore, in the same paper, the authors have proposed a computational approach to approximate the nearest defective matrix by a variant of Newton’s method. Such method is well suited to dense problems of moderate size and - even for a real matrix - computes a nearby defective complex matrix, that is a minimizer for (1.1) in . A recent fast algorithm has been given in [AFS13], which is based on an extension of the implicit determinant method proposed in [PS05]. This method also deals with the unstructured and provides a nearby defective complex matrix.
The aim of this paper is that of providing a different approach to the approximation of for both and , which may be extended to the approximation of more general structured distances, that is, for example, when restricting the admissible perturbations of to the set of matrices with a prescribed nonzero pattern. However, a rigorous analysis of general structures is beyond the scope of this paper and we limit the discussion to complex and real perturbations.
The methodology we propose is splitted in two parts. First, for a given we are interested to compute the following quantity,
r(ε)=min{|yHx|:y and x left/right % eigenvectors associated to z∈ΛKε(A)}. (1.3)
Secondly, we are interested to find the smallest solution to the equation , or more in general, possibly introducing a small threshold , to the equation
r(ε)=δ. (1.4)
Note that - in this second case - we obtain anyway, as a byproduct of our method, an estimate for a solution of . We remark that any solution to gives in general an upper bound to .
By the results of Alam and Bora [AB05], we deduce that the distance we compute is the same one obtains by replacing the Frobenius norm by the -norm. Instead, for the real case, the distance we compute is in general larger than the corresponding distance in the -norm.
The paper is organized as follows. In Section 2 we analyze the case of general complex perturbations and in Section 3 we derive a system of differential equations which form the basis of our computational framework. By a low rank property of the stationary points of the system of ODEs, which identify the matrices which allow to compute approximations of (1.1), we consider the projected system of ODEs on the corresponding low rank manifold in Section 4 and prove some peculiar results of the corresponding flow. In Section 5 we pass to consider the case of real matrices with real perturbations and obtain a new system of ODEs for the computation of (1.3), which are discussed in Section 6. In Section 7 we present some theoretical results which allow us to obtain a fast method to solve (1.4) and compute approximations to . In the same section we present the complete algorithm. Afterwards, in Section 8 we focus our attention on a few implementation issues and in Section 9 show some numerical examples. Finally in Section 10 we conclude the paper by providing an extension of the method to matrices with a prescribed sparsity pattern.
2. The complex case
We denote by the Frobenius norm of the matrix , where, for any given , .
We also need the following definition.
Definition 2.1.
Let be a singular matrix with a simple zero eigenvalue. The group inverse (reduced resolvent) of , denoted , is the unique matrix satisfying , and .
Given a matrix function , smoothly depending on the real parameter , we recall results concerning derivatives of right and left eigenvector and , respectively, associated to a simple eigenvalue of . We denote by the group-inverse of and assume smoothly depending on and such that . In the sequel, we shall often omit the explicit dependence on in the notation. The following expressions for the derivatives can be found in [MS88, Theorem 2],
˙x=xHG˙Mxx−G˙Mx,˙yH=yH˙MGyyH−yH˙MG. (2.1)
Given let be a simple eigenvalue of . We denote by the unitary hyper-sphere in . By continuity, there exists such that for any and the matrix has a simple eigenvalue close to .
Lemma 2.2.
Given let
S=yyHGH+GHxxH, (2.2)
being the group-inverse of , whose left and right null vectors are and (recall we assume , and note that as is simple). Then, for any smooth path we have,
(2.3)
In particular, the steepest descent direction is given by
(2.4)
where is such that the right-hand side has unit Frobenius norm.
Proof.
By (2.1) and using that , , we get,
ddt|yHx|2=2Re{¯¯¯¯¯¯¯¯¯¯yHx(˙yHx+yH˙x)}=2εRe{xHy(yH˙EGyyHx+yHxHG˙Exx)}=2ε|yHx|2Re{yH˙EGy+xHG˙Ex}=2ε|yHx|2Retrace(˙EHyyHGH+˙EHGHxxH),
that is, recalling the definition (2.2), , from which (2.3) follows using that . The steepest descent direction is then given by the solution of the variational problem (2.4). ∎
Remark 2.3.
A consequence of Lemma 2.2 is that the gradient of the function is proportional to .
Let be the manifold of matrices of rank-. Any can be (non uniquely) represented in the form , where have orthonormal columns and is nonsingular. We will use instead a unique decomposition in the tangent space: every tangent matrix is of the form,
δE=δUTVH+UδTVH+UTδVH, (2.5)
where , and are such that
UHδU=0,VHδV=0. (2.6)
This representation is discussed in [KL07] for the case of real matrices, but the extension to the case of complex matrices is straightforward.
Under the assumptions above, the orthogonal projection of a matrix onto the tangent space is given by
PE(Z)=Z−(I−UUH)Z(I−VVH). (2.7)
3. System of ODEs
Let , , and as before. The following theorem characterizes an evolution onto governed by the steepest descent direction of , where and are unit-norm right and left eigenvectors, respectively, associated to the simple eigenvalue .
Theorem 3.1.
Given , consider the differential system,
˙E=−S+Re⟨E,S⟩E,E∈S1, (3.1)
where is defined in (2.2).
1. The right-hand side of (3.1) is antiparallel to the projection onto the tangent space of the gradient of . More precisely,
ddt|yHx|=−ε|yHx|∥S−Re⟨E,S⟩E∥2F. (3.2)
In particular, if and only if .
2. The matrix defined in (2.2) has rank if , whereas (the zero matrix) if . As a consequence, the equilibria of system (3.1) for which (in particular, the minimizers of ) are rank- matrices.
Proof.
1) The assertion is an immediate consequence of Lemma 2.2. In particular, the derivative (3.2) is obtained by plugging the right-hand side of (3.1) in (2.3).
2) By (2.2) the matrix has rank not greater than . Moreover, it has rank less than if and only if or for some . We claim the constant must vanish in both cases. Indeed, since and , we have or , whence as . Therefore, the rank of is less than if and only if or . As has rank , both conditions are equivalent to have for some , that is to say . Finally, the assertion concerning the equilibria comes from the fact that at a stationary point. ∎
Theorem 3.2.
Let and be unit-norm right and left eigenvectors, respectively, associated to the simple eigenvalue of the matrix . If then, for small enough, the system (3.1) has only two stationary points, that correspond to the minimum and the maximum of on , respectively.
Proof.
As , by item 2) of Theorem 3.1 we have that with a non zero constant matrix and
maxE∈S1∥Q(E,ε)∥F=O(ε). (3.3)
The equation for the equilibria reads , where
F(E,ε)=−S0+Re⟨E,S0⟩E−Q(E,ε)+Re⟨E,Q(E,ε)⟩E.
It is readily seen that if and only if . Moreover, the Jacobian matrix of with respect to at the point is given by the linear operator , such that
L±B=Re⟨B,S0⟩E0±+Re⟨E0±,S0⟩B,B∈Cn×n.
We shall prove below that is invertible. By the Implicit Function Theorem this implies that there are , , and such that for and if and only if . On the other hand, by (3.3), if then or . We conclude that are the unique equilibria on the whole hyper-sphere for small enough. Clearly, [resp. ] is the maximizer [resp. minimizer] of .
To prove the invertibility of we observe that, recalling , the equation reads, , which gives and therefore . ∎
4. Projected system of ODEs
By Theorem 3.1, the minimizers of onto are rank- matrices. This suggests to use a rank- dynamics, obtained as a suitable projection of (3.1) onto .
Theorem 4.1 (The projected system).
Given , consider the differential system,
˙E=−PE(S)+Re⟨E,S⟩E,E∈S1∩M2, (4.1)
where the orthogonal projection is defined in (2.7). Then, the right-hand side of (4.1) is antiparallel to the projection onto the tangent space of the gradient of . More precisely,
ddt|yHx|=−ε|yHx|∥PE(S)−Re⟨E,S⟩E∥2F. (4.2)
In particular, if and only if .
Proof.
We remark that the definition is well posed, i.e., . Indeed, as , and
Re⟨˙E,E⟩=−Re⟨E,PE(S)⟩+Re⟨E,S⟩=−Re⟨E,PE(S)⟩+Re⟨PE(E),PE(S)⟩=0,
as is an orthogonal projection. We next observe that, by (2.3), the steepest descent direction is given by the variational problem,
argmin∥D∥F=1D∈TES1∩TEM2Re⟨D,S⟩.
Since for any , the solution to this problem is
D=−PE(S)−Re⟨E,S⟩E∥PE(S)−Re⟨E,S⟩E∥F.
This proves the first assertion, while the derivative (4.2) is obtained by plugging the right-hand side of (4.1) in (2.3) and using . ∎
Remark 4.2.
It is worthwhile to notice that along the solutions to (3.1) or (4.1) we have , so that the condition is preserved by the dynamics.
In the sequel, it will be useful the following rewriting of the projected system, in terms of the representations of and discussed at the end of Section 2. By introducing the notation,
p=UHy,q=VHx,r=UHGHx,s=VHGy, (4.3)
we can write (4.1) as
⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩˙T=−(psH+rqH)+(sHTHp+qHTHr)T,˙U=−((y−Up)sH+(GHx−Ur)qH)T−1,˙V=−((Gy−Vs)pH+(x−Vq)rH)T−H. (4.4)
4.1. Stationary points of the projected ODEs
We start by providing a characterizing result for stationary points of system (4.1).
Lemma 4.3.
Given , assume is a stationary point of (4.1) (or equivalently of (4.4)) such that is not an eigenvalue of and . Then .
Proof.
Assume by contradiction that ; then we would get
S=(I−UUH)S(I−VVH).
The previous implies
UHS = 0⟹UHSy=r(xHy)=0, SV = 0⟹VHSHx=s(yHx)=0,
whence, as onto , , . Inserting these formulæ into (4.4) we would obtain
⎧⎪ ⎪⎨⎪ ⎪⎩˙T=0,˙U=−(GHxxHV)T−1,˙V=−(GyyHU)T−H. (4.5)
In order that it has to hold necessarily and . Since implies and , the previous relations imply and , so that and . As a consequence, would be an eigenvalue of which contradicts the assumptions. This means that . ∎
At a stationary point the equation reads,
E=μPE(S), (4.6)
for some nonzero . In this case, as
Re(yHEGy+xHGEx)=Re⟨S,E⟩=Re⟨μPE(S),PE(S)⟩=1μ⟨E,E⟩≠0,
assuming , we get
Re(yHUTVHGy+xHGUTVHx)≠0. (4.7)
Recalling (2.7), we are interested in studying
B=(I−UUH)S(I−VVH)=(I−UUH)(yyHGH+GHxxH)(I−VVH). (4.8)
Theorem 4.4.
Given , assume is a stationary point of (4.1) (or equivalently of (4.4)) such that is not an eigenvalue of and . Then it holds for some real .
Proof.
In order to prove the theorem we show that . From the nonsingularity of we get at a stationary point,
(y−Up)sH+(GHx−Ur)qH = 0, (4.9) (Gy−Vs)pH+(x−Vq)rH = 0. (4.10)
The assumption that is not an eigenvalue of implies and , that means and . Moreover, by Lemma 4.3 the condition (4.7) is satisfied in our case. In order that (4.9) and (4.10) are fulfilled we have several possible cases.
Consider (4.9). The following are the possible cases.
• , .
• , . This would imply
y=UUHyandGHx=UUHGHx⟹S=UUHS,
and thus .
• . This would imply .
Now consider (4.10). The following are the possible cases.
• , .
• , . This would imply
x=VVHxandGy=VVHGy⟹S=SVVH,
and thus .
• . This would imply .
Assume that , which excludes (1-ii) and (2-ii). Assume (1-i) and (2-i) hold. This would imply
s=VHGy=0andr=UHGHx=0,
Assume (1-iii) and (2-iii) hold. This would imply , and from the first of (4.4), that has rank- which contradicts the invertibility of . The same conclusion holds if (1-i) and (2-iii) hold, since would also imply that has rank- and if (1-iii) and (2-i) hold, because in this case we would have and still of rank-. ∎
To summarize, if is not an eigenvalue of the stationary points of the projected and the unprojected ODEs coincide (recall that if this is obvious as in this case, see Theorem 3.1). Moreover, since is proportional to at such points, by the same arguments leading to Theorem 3.2 we obtain the following result.
Corollary 4.5.
Under the same assumptions of Theorem 4.4, for sufficiently small the projected ODE (4.1) has only two stationary points.
5. The real structured case
We now assume that the matrix is real and we restrict the perturbations to be real as well. To our knowledge there are no methods to compute the most ill conditioned eigenvalue in the real -pseudospectrum,
ΛRε(A)={λ∈C:λ∈Λ(A+E) for some E∈Rn×n with ∥E∥F≤ε}. (5.1)
that is the eigenvalue of to which corresponds .
We denote by the unitary hyper-sphere in and fix such that for any and the matrix has a simple eigenvalue close to . The same reasoning as in the proof of Lemma 2.2 gives, for a smooth path ,
ddt|yHx|=ε|yHx|⟨˙E,Re(S)⟩, (5.2)
from which the steepest descent direction is given by the variational problem,
argminD∈R1⟨D,E⟩=0=−μ(Re(S)−⟨E,Re(S)⟩E),
where is the normalization constant. Note that the matrix has rank not greater than . Clearly, for a real eigenvalue is real and the situation is identical to the one considered in the unstructured case ; so the peculiar difference arises when we consider non-real eigenvalues.
Let be the manifold of the real matrices of rank-. The matrix representations both in and in the tangent space are analogous to (2.5), (2.6), provided that have orthonormal columns and is nonsingular, see [KL07, Sect. 2.1]. More precisely, any rank- matrix of order can be written in the form,
E=UTVT, (5.3)
with, now, and such that and , where is the identity matrix of order 4, and is nonsingular. As before, since this decomposition is not unique, we use a unique decomposition on the tangent space. For a given choice of any matrix can be uniquely written as
δE=δUTVT+UδTVT+UTδVT,
with , . Accordingly, the orthogonal projection of a matrix onto the tangent space is defined by
˜PE(Z)=Z−PUZPV, (5.4)
where and .
6. System of ODEs in the real case
Given the role of the differential system in (3.1) is now played by
˙E=−Re(S)+⟨E,Re(S)⟩E,E∈R1. (6.1)
More precisely, the right-hand side of (6.1) is antiparallel to the projection onto the tangent space of the gradient of and
ddt|yHx|=−ε|yHx|∥Re(S)−⟨E,Re(S)⟩E∥2F. (6.2)
The proof of Theorem 3.2 is easily adapted to the real case. Therefore, under the same hypothesis, the system (6.1) has only two stationary points, that correspond to the minimum and the maximum of on , respectively.
A natural question is concerned with the possibility of to vanish. We have the following result concerning the matrix .
Theorem 6.1.
Assume that the matrix is real and has a pair of simple complex conjugate eigenvalues and . Let and be its left and right eigenvectors associated to such that . Let be the G-inverse of and be given by (2.2). Then is different from zero.
Proof.
First observe that the eigenvectors and are necessarily genuinely complex vectors, that is , , , and . Let us denote the range of a matrix as . By definition of (see (2.2)) we have .
We prove the result by contradiction. Assume that is purely imaginary, that is . Under this assumption, recalling that is a rank- matrix, we have that implies , and therefore
R(S)=span(y,¯¯¯y,GHx,¯¯¯¯¯¯¯¯¯¯¯GHx)
has dimension . Being and linearly independent, we get . A left premultiplication by gives
xH¯¯¯y = αxHy+βxHGHx⟹α=0,
which is due to the fact that (i) , by well-known bi-orthogonality of left and right eigenvectors, (ii) , a property of the group inverse , and (iii) by simplicity of . This implies . In a specular way, we have that
R(ST)=span(x,¯¯¯x,Gy,¯¯¯¯¯¯¯Gy)
has dimension . Proceeding in the same way we obtain that .
Now recall that (see [MS88]) a vector is an eigenvector of corresponding to the eigenvalue if and only if is an eigenvector of corresponding to the eigenvalue , where if and if . To recap we have
Gy=γ¯¯¯x,G¯¯¯x=i12Im(λ)¯¯¯x, (6.3) GHx=η¯¯¯y,GH¯¯¯y=−i12Im(λ)¯¯¯y, (6.4)
with and .
Since and are the only vectors in the kernels of and , respectively, we deduce by (6.3) and (6.4),
x∝1γy+2iIm(λ)¯¯¯x,y∝1ηx−2iIm(λ)¯¯¯y,
which imply . The previous implies, by bi-orthogonality of left and right eigenvectors, that the set is orthogonal to the set consisting of the remaining right eigenvectors of and similarly the set is orthogonal to the set of the remaining left eigenvectors of . Note that and are right-invariant subspaces of both and .
Denote by a real orthonormal basis for the range of and define (as determined by the procedure to get the Schur canonical form of ) and . Set , which implies is an orthogonal matrix. Now consider the similarity transformation associated to ,
˜B=QTBQ=(B1OTOB2),
where stands for the -dimensional zero matrix, and . This means that is block-diagonal and the matrix
B1=(ϱσ−τϱ) (6.5)
is such that and with so that has eigenvalues and . If then is normal, which implies that the pair of right and left eigenvectors associated to , say (scaled to have unit -norm and real and positive Hermitian scalar product) is such that . Since and , the orthogonality of implies , which gives a contradiction. As a consequence we can assume .
By the properties of the G-inverse we have that
˜G=QTGQ=(G1OTOG2),
where is the group inverse of and is the inverse of , which is nonsingular. It is direct to verify the following formula for the G-inverse, by simply checking the three conditions in Definition 2.1,
G1=⎛⎜⎝i4√στ−14τ14σi4√στ⎞⎟⎠.
It follows that also is block triangular so that we write
˜S=QTSQ=(S1OTOS2), (6.6)
with
S1 = ˜y1˜yH1GH1+GH1˜x1˜xH1, (6.7)
where and are the projections on (the subspace spanned by the first two vectors of the canonical basis) of the eigenvectors of associated to , that is and ,
˜x=ν−1x(i√σ√τ10…0)T,˜y=ν−1y(−i√τ√σ10…0)T,
where and are such that and . Finally, we obtain
S1 = ⎛⎜⎝0τ−σ2σ(σ+τ)τ−σ2τ(σ+τ)0⎞⎟⎠,
which is real and cannot vanish due to the fact that .
Recalling that is real, if were purely imaginary then would be purely imaginary as well, which gives a contradiction.
We remark that it can also be shown that in (6.6). ∎
Remark 6.2.
Note that when is real and we compute for a complex eigenvalue, it can occur that is real. The simplest example is given by the matrix (6.5).
According to Theorem 6.1 we have that for every path of genuinely complex eigenvalues. Based on this result we can characterize stationary points of (6.1) to have rank (at most) .
This suggests to project the ODE on the rank- manifold of real matrices.
6.1. Projected system of ODEs in the real case
The following theorem characterizes the projected system onto the tangent space .
Theorem 6.3 (The real projected system).
Given , consider the differential system,
˙E=−˜PE(Re(S))+⟨E,Re(S)⟩E,E∈R1∩M4, (6.8)
where the orthogonal projection is defined in (5.4). Then, the right-hand side of (6.8) is antiparallel to the projection onto the tangent space of the gradient of . More precisely,
ddt|yHx|=
|
2020-08-05 07:24:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9607051014900208, "perplexity": 451.08254914255167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00125.warc.gz"}
|