url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://mathematica.stackexchange.com/questions/32111/strings-and-formatting
|
# Strings and Formatting
There is something about strings which doesn't seem to be totally clear for me (for the moment):
When entering "a string", the output, formatted as "Output" gives this string without the quotes. But when changing the "Output" format to "Text" format (which looks better for text as the typewriter font for Output—for my uses at least), the quotes come back!
Why is that and how can I get non-typewriter fonts as output of evaluations, without the quotes coming back?
Thanks for all help, as always!
-
The first part of your question is answered with the option ShowStringCharacters.
Here with ShowStringCharacters -> True set on the output cell using the Option Inspector:
TraditionalForm does not display string characters by default:
If you want to control the printing of string characters for an entire Notebook you can edit the custom style sheet. If you give a more specific idea of the output that you want and when you want it I can provide more examples.
-
Thanks for your interesting comment! The problem is this: given "Compute " <> ToString[HoldForm[#1 (x - #2) = #3] & @@ Table[RandomInteger[{1, 10}], {3}]], I then use SelectionMove[EvaluationNotebook[], Previous, CellContents, 2]; SelectionEvaluate[EvaluationNotebook[]] in the next cell, as this is part of a bigger program. Now, this doesn't look "nice", at all, even when I add //TraditionalForm in the first cell. It's outputted in bold, and I'd love to have it as when you evaluate the first cell using shift-return. Is that possible? Thanks! – Gabriel Sep 11 '13 at 15:08
@Gabriel Please tell me if this prints in the style that you want: ExpressionCell["Compute 8 (x - 6) = 7", "Input"] -- if that is correct we can work out the specifics of what you want to do, which I cannot yet understand from your description. – Mr.Wizard Sep 11 '13 at 17:41
@Gabriel SelectionEvaluate[EvaluationNotebook[]] in your code does exactly what Shift+Return does. What behavior do you expect? – Alexey Popkov Sep 11 '13 at 18:38
@Alexey: SelectionEvaluate[EvaluationNotebook[]] seems to behave differently when a CellContents is selected and when the cell in its entirety is selected. Is that normal? – Gabriel Sep 11 '13 at 20:23
@Mr.Wizard: thanks for the suggestion, but using the code you provided prints the code in bold. I am searching for a Plain font, if possible. Is there any way of doing that? Thanks for all help! – Gabriel Sep 11 '13 at 20:25
|
2015-07-06 07:12:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47289586067199707, "perplexity": 1493.6602281647772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098071.98/warc/CC-MAIN-20150627031818-00301-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.physicsoverflow.org/507/examples-of-heterotic-cfts
|
# examples of heterotic CFTs
+ 9 like - 0 dislike
122 views
I'm trying to get a global idea of the world of conformal field theories.
Many authors restrict attention to CFTs where the algerbas of left and right movers agree. I'd like to increase my intuition for the cases where that fails (i.e. heterotic CFTs).
What are the simplest models of heterotic CFTs?
There exist beautiful classification results (due to Fuchs-Runkel-Schweigert) in the non-heterotic case that say that rational CFTs with a prescribed chiral algebras are classified by Morita equivlence classes of Frobenius algebras (a.k.a. Q-systems) in the corresponding modular category.
Is anything similar available in the heterotic case?
This post has been migrated from (A51.SE)
I guess you are aware of the article http://arxiv.org/abs/math-ph/0009004 where Prof. Rehren includes the heterotic case from the beginning....
This post has been migrated from (A51.SE)
That's a nice paper... I was more looking for actual examples of heterotic CFTs: ones that are particularly easy to describe, or that are specially relevant for other purposes.
This post has been migrated from (A51.SE)
+ 4 like - 0 dislike
The first example that comes to mind is the heterotic string worldsheet theory, described in the original paper of Gross, Harvey, Martinec, & Rohm.
I don't know if there is a classification result for rational heterotic CFTs which generalizes the FRS result. However, if you want to understand the global space of CFTs, you may not want to emphasize rational CFTs anyways. Most CFTs aren't rational.
This post has been migrated from (A51.SE)
answered Nov 22, 2011 by (415 points)
Thanks for your answer. I'm now reading this article. If I understand, there's 2 CFTs constructed: one compactified on the $E_8\times E_8$-torus, and one compactified on the $\Gamma_{16}$-torus. Quote: "In order to achieve a consistent string theory involving only left-moving coordinates $X^I$ to cancel anomalies and to preserve the geometrical structure of string interactions, we are forced to compactify on a special torus". Do I understand that, as far as constructing CFTs is concerned, I may disregard those constraints and compactify on any torus? (or not compactify at all)
This post has been migrated from (A51.SE)
+ 0 like - 0 dislike
I just found by incidence a simple example in some proceedings of Böckenhauer and Evans below. Namely for $\mathrm{Spin}(8\ell)_1$ (so $D_{4\ell}$ lattice) with $\ell=1,2,\ldots$ there exist modular invariants, which should give rise to heterotic models (by Rehrens paper).
This post has been migrated from (A51.SE)
answered Nov 23, 2011 by (300 points)
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:$\varnothing\hbar$ysicsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
|
2021-08-05 17:34:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4574607312679291, "perplexity": 1562.2792495475976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046156141.29/warc/CC-MAIN-20210805161906-20210805191906-00104.warc.gz"}
|
https://2022.help.altair.com/2022.1/activate/business/en_us/topics/reference/oml_language/ComputerVision/getcv.htm
|
getcv
Gets the grayscale/RGB/RGBA matrix R, which represents the pixels of the image handle.
Syntax
R = getcv(handle)
Inputs
handle
Handle of an image.
Type: integer
Outputs
R
R represents the pixels in h. For a grayscale or binary image, R will be a 2D real matrix. For color images, it will be a real, ND matrix, where the first slice represents the red channel, second slice represents the blue channel, third slice represents the green channel and the fourth slice represents the alpha channel, if applicable.
Type: matrix
Example
Get the pixel data of an image read with the ComputerVision library:
200 200 3
|
2022-09-30 07:01:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4879675805568695, "perplexity": 3293.736049640447}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00373.warc.gz"}
|
https://socratic.org/questions/what-is-the-force-in-terms-of-coulomb-s-constant-between-two-electrical-charges--95
|
# What is the force, in terms of Coulomb's constant, between two electrical charges of -225 C and -75 C that are 15 m apart?
Jan 17, 2016
$6.75 \times {10}^{11}$ Newtons repulsive force
#### Explanation:
Coulombs force F=1/(4πε_0) (q_1 q_2)/r^2
Here 1/(4πε_0) is a constant with the value of $9 \times {10}^{9}$ $N {m}^{2} {C}^{-} 2$
Substituting the values $F = 9 \times {10}^{9} \frac{- 225 \times - 75}{15} ^ 2$
=$6.76 \times {10}^{11}$ $N$
As the result is positive the force between the charges would be repulsive.
|
2019-09-21 19:49:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49412184953689575, "perplexity": 2616.1315112976135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574662.80/warc/CC-MAIN-20190921190812-20190921212812-00028.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/98135-group-theory-basic-question-existence-sub-groups-print.html
|
# Group Theory - Basic question on existence of sub-groups
• Aug 15th 2009, 05:08 AM
aman_cc
Group Theory - Basic question on existence of sub-groups
Let G be a group, $\mid G \mid = p^n$, where p is prime.
Prove G has a sub-group for each of the order $p^\alpha$, where $\alpha \in [0,n]$
Can someone provide me hints/sketches to attempt this? Have been stuck for a long time now.
Thanks
• Aug 15th 2009, 06:39 AM
ynj
Quote:
Originally Posted by aman_cc
Let G be a group, $\mid G \mid = p^n$, where p is prime.
Prove G has a sub-group for each of the order $p^\alpha$, where $\alpha \in [0,n]$
Can someone provide me hints/sketches to attempt this? Have been stuck for a long time now.
Thanks
use induction on n
for any k<n, if p^k<=|Z(G)|, then Z(G) have a subgroup of order p^k since it is abelian. if p^k>|Z(G)|,then by induction,G/Z(G) must have a subgroup of order p^k/|Z(G)|,by the corresponding theorem, we can write it as H/Z(G), then H will be a subgroup of order p^k
• Aug 15th 2009, 07:24 AM
aman_cc
Sorry but a little elaboration will help, not able to follow the argument. Maybe you can mention the relevant theorm for e.g. I am not able to follow why "if p^k<=|Z(G)|, then Z(G) have a subgroup of order p^k since it is abelian"
Thanks
Quote:
Originally Posted by ynj
use induction on n
for any k<n, if p^k<=|Z(G)|, then Z(G) have a subgroup of order p^k since it is abelian. if p^k>|Z(G)|,then by induction,G/Z(G) must have a subgroup of order p^k/|Z(G)|,by the corresponding theorem, we can write it as H/Z(G), then H will be a subgroup of order p^k
• Aug 15th 2009, 08:42 AM
ynj
Quote:
Originally Posted by aman_cc
Sorry but a little elaboration will help, not able to follow the argument. Maybe you can mention the relevant theorm for e.g. I am not able to follow why "if p^k<=|Z(G)|, then Z(G) have a subgroup of order p^k since it is abelian"
Thanks
There is the theorem: |G|=n ,G is an abelian, m|n, then G has a subgroup of order m.
Proof:
G is isomorphic to H=Z(p1^k1)*...Z(pt^kt), where pi is prime.
let n=p1^k1*p2^k2...*pt^kt,m=p1^l1....*pt^lt, where li<=ki; f be the isomorphism of H to G
then the group generated by element f(p1^(k1-l1),0,0..0),f(0,p2^(k2-l2),0...0).....f(0..0,pt^(kt-lt))will have order p1^l1*...pt^lt=m
• Aug 15th 2009, 08:52 AM
ynj
Actually, you can assert that for every p^k||G|,where p is a prime, then G has a subgroup of order p^k according to the First Sylow Theorem
• Aug 15th 2009, 04:39 PM
NonCommAlg
Quote:
Originally Posted by aman_cc
Let G be a group, $\mid G \mid = p^n$, where p is prime.
Prove G has a sub-group for each of the order $p^\alpha$, where $\alpha \in [0,n]$
Can someone provide me hints/sketches to attempt this? Have been stuck for a long time now.
Thanks
proof by induction over $n$: there's nothing to prove if $n=0$ (or $\alpha = n$). now consider two cases:
1) $G$ is abelian: by Cauchy's theorem $G$ has an element $x$ of oreder $p.$ now apply induction to the abelian $p$ group $\frac{G}{}.$
2) General case: let $|Z(G)|=p^m,$ where $0 < m \leq n.$ if $\alpha \leq m,$ we're done by 1). if $\alpha > m,$ let $G_1=\frac{G}{Z(G)}.$ we have $|G_1|=p^{n-m} < p^n.$ so, by induction, $G_1$ has a subgroup $H_1=\frac{H}{Z(G)}$
of order $p^{\alpha-m},$ which gives us $|H|=p^{\alpha}.$ done again!
• Aug 16th 2009, 12:36 AM
aman_cc
Thanks. I get the idea.
Quote:
Originally Posted by NonCommAlg
proof by induction over $n$: there's nothing to prove if $n=0$ (or $\alpha = n$). now consider two cases:
1) $G$ is abelian: by Cauchy's theorem $G$ has an element $x$ of oreder $p.$ now apply induction to the abelian $p$ group $\frac{G}{}.$
2) General case: let $|Z(G)|=p^m,$ where $0 < m \leq n.$ if $\alpha \leq m,$ we're done by 1). if $\alpha > m,$ let $G_1=\frac{G}{Z(G)}.$ we have $|G_1|=p^{n-m} < p^n.$ so, by induction, $G_1$ has a subgroup $H_1=\frac{H}{Z(G)}$
of order $p^{\alpha-m},$ which gives us $|H|=p^{\alpha}.$ done again!
|
2016-12-11 14:57:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 47, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347721934318542, "perplexity": 662.1906463919631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544679.86/warc/CC-MAIN-20161202170904-00425-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2877733/fx-xtmx-using-lagrange-multipliers-to-prove-svd-decomposition
|
$f(x) = x^TMx$, using Lagrange multipliers to prove SVD decomposition
I'm reading the existence proof of singular value decomposition.
It considers $f:\mathbb{R}^n\to \mathbb{R}, f(x) = x^TMx$. It talks about the gradient of $f$ and make it equal to a multiple of the gradient of $x^tx$. I suppose that it's because the constraint is the unit sphere, so that's why it made $x^tx = x_1^2 + \cdots x_n^2$, right?
I'm trying to understand this so I took $f$ with a generic matrix $M$
$$f(x) =\begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} & \dots \\ \vdots & \ddots & \\ a_{n1} & & a_{nn} \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \\ x_1(a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n) + \\x_2 (a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n) + \\ \cdots + \\x_n(a_{n1}x_1+a_{n2}x_2 + \cdots + a_{nn}x_n)$$
Taking the partials to construct the gradient vector, I can see that I'll end up with:
$$\begin{bmatrix} 2a_{11}x_1 + a_{21} + \cdots a_{n1} \\ a_{12} + 2a_2x_2 + \cdots + a_{n2} \\ \vdots \\ a_{1n} + a_{2n}\cdots + 2a_{nn}x_n\\ \end{bmatrix}$$
Now, I need to equal this with $\lambda$ gradient of $x^tx$: $$\begin{bmatrix} 2x_1 \\ 2x_2 \\ \vdots \\ 2x_n\\ \end{bmatrix}$$
so:
$$\begin{bmatrix} 2a_{11}x_1 + a_{21} + \cdots a_{n1} \\ a_{12} + 2a_2x_2 + \cdots + a_{n2} \\ \vdots \\ a_{1n} + a_{2n}\cdots + 2a_{nn}x_n\\ \end{bmatrix} = \lambda \begin{bmatrix} 2x_1 \\ 2x_2 \\ \vdots \\ 2x_n\\ \end{bmatrix}$$
As an example, the first line becomes:
$2a_{11}x_1 + a_{21} + \cdots a_{n1} = \lambda 2x_1 \implies \lambda 2x_1 -2a_{11}x_1 = a_{21} + \cdots a_{n1}\implies x_1(2\lambda - 2a_{11}) = a_{21} + \cdots a_{n1}$
What should I do now? It says that I should end up with $Mu = \lambda u$
Also, is there a more elegant way of calculating the gradients or it's just all this mess?
$$\\ x_1(a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n) + \\x_2 (a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n) + \\ \vdots \\x_n(a_{n1}x_1+a_{n2}x_2 + \cdots + a_{nn}x_n)$$ For example, the partial derivative of this with respect to $x_1$ is $$\underbrace{2a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n}_{\text{first row}}+\underbrace{a_{21}x_2+\dots+a_{n1}x_n}_{\text{first term of remaining rows}}$$ which you can recognize as the first entry of $(M+M^\top)x$. Therefore, the Lagrange multiplier equation becomes $$(M+M^\top)x=2\lambda x$$ If you are further given that $M$ is symmetric, this implies $Mx=\lambda x$.
|
2022-01-19 09:12:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335618019104004, "perplexity": 127.52891560780263}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00108.warc.gz"}
|
http://www.r-bloggers.com/automatic-hyperparameter-tuning-methods/
|
Automatic Hyperparameter Tuning Methods
July 20, 2012
By
(This article was first published on John Myles White » Statistics, and kindly contributed to R-bloggers)
At MSR this week, we had two very good talks on algorithmic methods for tuning the hyperparameters of machine learning models. Selecting appropriate settings for hyperparameters is a constant problem in machine learning, which is somewhat surprising given how much expertise the machine learning community has in optimization theory. I suspect there’s interesting psychological and sociological work to be done exploring why a problem that could be answered using known techniques wasn’t given an appropriate solution earlier.
Thankfully, the take away message of this blog post is that this problem is starting to be understood.
A Two-Part Optimization Problem
To set up the problem of hyperparameter tuning, it’s helpful to think of the canonical model-tuning and model-testing setup used in machine learning: one splits the original data set into three parts — a training set, a validation set and a test set. If, for example, we plan to use L2-regularized linear regression to solve our problem, we will use the training set and validation set to select a value for the $$\lambda$$ hyperparameter that is used to determine the strength of the penalty for large coefficients relative to the penalty for errors in predictions.
With this context in mind, we can set up our problem using five types of variables:
1. Features: $$x$$
2. Labels: $$y$$
3. Parameters: $$\theta$$
4. Hyperparameters: $$\lambda$$
5. Cost function: $$C$$
We then estimate our parameters and hyperparameters in the following multi-step way so as to minimize our cost function:
$\theta_{Train}(\lambda) = \arg \min_{\theta} C(x_{Train}, y_{Train}, \theta, \lambda)$
$\lambda_{Validation}^{*} = \arg \min_{\lambda} C(x_{Validation}, y_{Validation}, \theta_{Train}(\lambda), \lambda)$
The final model performance is assessed using:
$C(x_{Test}, y_{Test}, \theta_{Train + Validation}(\lambda_{Validation}^{*}), \lambda_{Validation}^{*})$
This two-part minimization problem is similar in many ways to stepwise regression. Like stepwise regression, it feels like an opportunity for clean abstraction is being passed over, but it’s not clear to me (or anyone I think) if there is any analytic way to solve this problem more abstractly.
Instead, the methods we saw presented in our seminars were ways to find better approximations to $$\lambda^{*}$$ using less compute time. I’ll go through the traditional approach, then describe the newer and cleaner methods.
Grid Search
Typically, hyperparameters are set using the Grid Search algorithm, which works as follows:
1. For each parameter $$p_{i}$$ the researcher selects a list of values to test empirically.
2. For each element of the Cartesian product of these values, the computer evaluates the cost function.
3. The computer selects the hyperparameter settings from this grid with the lowest cost.
Grid Search is about the worst algorithm one could possibly use, but it’s in widespread use because (A) machine learning experts seem to have less familiarity with derivative-free optimization techniques than with gradient-based optimization methods and (B) machine learning culture does not traditionally think of hyperparameter tuning as a formal optimization problem. Almost certainly (B) is more important than (A).
Random Search
James Bergstra’s first proposed solution was so entertaining because, absent evidence that it works, it seems almost flippant to even propose: he suggested replacing Grid Search with Random Search. Instead of selecting a grid of values and walking through it exhaustively, you select a value for each hyperparameter independently using some probability distribution. You then evaluate the cost function given these random settings for the hyperparameters.
Since this approach seems like it might be worst than Grid Search, it’s worth pondering why it should work. James’ argument is this: most ML models have low-effective dimension, which means that a small number of parameters really affect the cost function and most have almost no effect. Random search lets you explore a greater variety of settings for each parameter, which allows you to find better values for the few parameters that really matter.
I am sure that Paul Meehl would have a field day with this research if he were alive to hear about it.
Arbitrary Regression Problem
An alternative approach is to view our problem as one of Bayesian Optimization: we have an arbitrary function that we want to minimize which is costly to evaluate and we would like to find a good approximate minimum in a small number of evaluations.
When viewed in this perspective, the natural strategy is to regress the cost function on the settings of the hyperparameters. Because the cost function may depend on the hyperparameters in strange ways, it is wise to use very general purpose regression methods. I’ve recently seen two clever strategies for this, one of which was presented to us at MSR:
1. Jasper Snoek, Hugo Larochelle and Ryan Adams suggest that one use a Gaussian Process.
2. Among other methods, Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown suggest that one use Random Forests.
From my viewpoint, it seems that any derivative-free optimization method might be worth trying. While I have yet to see it published, I’d like to see more people try the Nelder-Mead method for tuning hyperparameters.
R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more...
|
2014-12-21 18:45:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5132919549942017, "perplexity": 931.265344655145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772134.89/warc/CC-MAIN-20141217075252-00079-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://earltcampbell.com/2014/12/29/the-setun-computer/
|
# The Setun Computer
Last week I gave a talk at QEC Zurich, for which I had decided to speak about the potential of quantum computers built from d-level quantum systems. When preparing for the talk I discovered that not all conventional computers have used binary logic. In 1958, the first Setun computer was built at Moscow State University using ternary, or 3 state, logic. I was fascinated by this curiosity of computing, and so decided to kick off my talk with a brief mention of it. Here I’ll say a bit more about it, but really I want to invite any knowledgeable readers to tell me more about Setun! I’ve found some sources in English, but the vast majority of the literature is in Russian.
Снимок Сетунь“. Licensed under Public Domain via Wikimedia Commons.
Above is Setun’s exterior. Under the hood, it represented numbers in balanced ternary logic. Each possible value is best labelled as 0,1 or -1. Given a string of such numbers
$\{ a_{n}, a_{n-1},\ldots,a_2, a_1 \}$ ,
they would represent a number
$x = \sum_{k=1}^n 3^k a_k$.
For example, the numbers from -5 to 5 are represented as
$\begin{array}{r|rcccl} x & \{&a_2,& a_1,& a_0 &\} \\ \hline \hline -5 & \{&-1,& 1,&1 &\} \\ -4 & \{&0,& -1,&-1 &\} \\ -3 & \{&0,& -1,& 0 &\} \\ -2 & \{&0,& -1,& 1 &\} \\ -1 & \{&0,& 0,&-1 &\} \\ 0 & \{&0,& 0,& 0 &\} \\ 1 & \{&0,& 0,&1 &\} \\ 2 & \{&0,& 1,& -1 &\} \\ 3 & \{&0,& 1,& 0 &\} \\ 4 & \{&0,& 1,& 1 &\} \\ 5 & \{&1,& -1,& -1 &\} \\ \end{array}$.
Unlike in binary, negative numbers are naturally captured by this system without an ad hoc prefix for the sign. Many basic arithmetical operations are particularly simple. Rounding of a number to leading significant figures can be achieved by just truncating a sequence, whereas binary rounding potentially depends on the whole sequence of bits. In The Art of Computing, Donald Knuth gives a fantastic survey of different number systems, and was so enamored with balanced ternary that he said it is “perhaps the prettiest number system”.
The development of Setun was lead by Sergei Sobolev and Nikolay Brusentsov. Sobolev was a mathematician of considerable renown and influence within the Soveit union who was in the 1950s the head of computational mathematics in Moscow State University. Brusentsov was a younger engineer keen to get his teeth into modern computing, and recalling his first meeting with Sobolev said,
When I first came to Sergei Sobolev’s office, it seemed as if I was enveloped in sunlight – his face looked that kind and open. We hit it off immediately and I will be forever grateful to providence for leading me to this remarkable man, a bright mathematician and knowledgeable scientist, one of the first people who understood the significance of computers.
Together they conceived built a research team and conceived the initial design. Though Sobolev was pivotal in getting the project of the ground, Brusentsov stayed committed to Setun as Sobolev’s attention became diverted. Brunsentov recalled,
Sobolev was the heart and soul of this project. Unfortunately, his participation in our creative work ended in the early 1960s when he moved to Novosibirsk. All of his later involvement revolved around perpetual fighting with bureaucrats for the right to do the work we believed in.
Whereas, it seems Setun really became Brusentsov’s life long passion. Later in life he continued to write papers on Setun and ternary computing, until he passed away just several weeks ago.
Nikolay Brusentsov
7 February 1925 – 4 December 2014
The decision to work in ternary came earlier, and it is claimed the elegance of the number system allowed them to achieve the equivalent computing power with fewer components. At the time, transistors were not yet available and vacuum tubes were too large for a compact computer. Therefore, the decision was made to build it using magnetic cores and diodes. The first Setun was a success and they went on to built 50 such machines. However, it was always a university project, not fully endorsed by the Soviet government, and viewed suspiciously by factory management. Despite requests from abroad for Setuns to be exported, the orders were not met. Against these obstacles the Setun fizzled out, and Brusentsov’s group was moved to offices in a hostel! At least, this is the picture painted by the few sources I’ve found and read. The story is of a ingenious computing architecture that was simply the Betamax of its time. The accuracy of this narrative is hard to judge. I suppose the interesting question is how a modern ternary computer would compete or even excel against its binary contemporaries. Or even, how a quantum computer would fare by going beyond the conventional qubit paradigm.
sources:
Malinovsky, Pioneers of Soviet Computing
Russian Virtual Computer Musuem
Donald Knuth, The Art of Computing
|
2020-01-21 09:49:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3623983860015869, "perplexity": 1807.055196773058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601628.36/warc/CC-MAIN-20200121074002-20200121103002-00264.warc.gz"}
|
http://tcs.nju.edu.cn/wiki/index.php/Combinatorics_(Fall_2010)/Graph_spectrum,_expanders
|
# Combinatorics (Fall 2010)/Graph spectrum, expanders
## Graph Expansion
According to wikipedia:
"Expander graphs have found extensive applications in computer science, in designing algorithms, error correcting codes, extractors, pseudorandom generators, sorting networks and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as SL=L and the PCP theorem. In cryptography too, expander graphs are used to construct hash functions."
We will not explore everything about expander graphs, but will focus on the performances of random walks on expander graphs.
### Expander graphs
Consider an undirected (multi)graph ${\displaystyle G(V,E)}$, where the parallel edges between two vertices are allowed.
Some notations:
• For ${\displaystyle S,T\subset V}$, let ${\displaystyle E(S,T)=\{uv\in E\mid u\in S,v\in T\}}$.
• The Edge Boundary of a set ${\displaystyle S\subset V}$, denoted ${\displaystyle \partial S\,}$, is ${\displaystyle \partial S=E(S,{\bar {S}})}$.
Definition (Graph expansion) The expansion ratio of an undirected graph ${\displaystyle G}$ on ${\displaystyle n}$ vertices, is defined as ${\displaystyle \phi (G)=\min _{\overset {S\subset V}{|S|\leq {\frac {n}{2}}}}{\frac {|\partial S|}{|S|}}.}$
Expander graphs are ${\displaystyle d}$-regular (multi)graphs with ${\displaystyle d=O(1)}$ and ${\displaystyle \phi (G)=\Omega (1)}$.
This definition states the following properties of expander graphs:
• Expander graphs are sparse graphs. This is because the number of edges is ${\displaystyle dn/2=O(n)}$.
• Despite the sparsity, expander graphs have good connectivity. This is supported by the expansion ratio.
• This one is implicit: expander graph is a family of graphs ${\displaystyle \{G_{n}\}}$, where ${\displaystyle n}$ is the number of vertices. The asymptotic order ${\displaystyle O(1)}$ and ${\displaystyle \Omega (1)}$ in the definition is relative to the number of vertices ${\displaystyle n}$, which grows to infinity.
The following fact is directly implied by the definition.
An expander graph has diameter ${\displaystyle O(\log n)}$.
The proof is left for an exercise.
For a vertex set ${\displaystyle S}$, the size of the edge boundary ${\displaystyle |\partial S|}$ can be seen as the "perimeter" of ${\displaystyle S}$, and ${\displaystyle |S|}$ can be seen as the "volume" of ${\displaystyle S}$. The expansion property can be interpreted as a combinatorial version of isoperimetric inequality.
Vertex expansion
We can alternatively define the vertex expansion. For a vertex set ${\displaystyle S\subset V}$, its vertex boundary, denoted ${\displaystyle \delta S\,}$ is defined as that
${\displaystyle \delta S=\{u\not \in S\mid uv\in E{\mbox{ and }}v\in S\}}$,
and the vertex expansion of a graph ${\displaystyle G}$ is ${\displaystyle \psi (G)=\min _{\overset {S\subset V}{|S|\leq {\frac {n}{2}}}}{\frac {|\delta S|}{|S|}}}$.
### Existence of expander graph
We will show the existence of expander graphs by the probabilistic method. In order to do so, we need to generate random ${\displaystyle d}$-regular graphs.
Suppose that ${\displaystyle d}$ is even. We can generate a random ${\displaystyle d}$-regular graph ${\displaystyle G(V,E)}$ as follows:
• Let ${\displaystyle V}$ be the vertex set. Uniformly and independently choose ${\displaystyle {\frac {d}{2}}}$ cycles of ${\displaystyle V}$.
• For each vertex ${\displaystyle v}$, for every cycle, assuming that the two neighbors of ${\displaystyle v}$ in that cycle is ${\displaystyle w}$ and ${\displaystyle u}$, add two edges ${\displaystyle wv}$ and ${\displaystyle uv}$ to ${\displaystyle E}$.
The resulting ${\displaystyle G(V,E)}$ is a multigraph. That is, it may have multiple edges between two vertices. We will show that ${\displaystyle G(V,E)}$ is an expander graph with high probability. Formally, for some constant ${\displaystyle d}$ and constant ${\displaystyle \alpha }$,
${\displaystyle \Pr[\phi (G)\geq \alpha ]=1-o(1)}$.
By the probabilistic method, this shows that there exist expander graphs. In fact, the above probability bound shows something much stronger: it shows that almost every regular graph is an expander.
Recall that ${\displaystyle \phi (G)=\min _{S:|S|\leq {\frac {n}{2}}}{\frac {|\partial S|}{|S|}}}$. We call such ${\displaystyle S\subset V}$ that ${\displaystyle {\frac {|\partial S|}{|S|}}<\alpha }$ a "bad ${\displaystyle S}$". Then ${\displaystyle \phi (G)<\alpha }$ if and only if there exists a bad ${\displaystyle S}$ of size at most ${\displaystyle {\frac {n}{2}}}$. Therefore,
{\displaystyle {\begin{aligned}\Pr[\phi (G)<\alpha ]&=\Pr \left[\min _{S:|S|\leq {\frac {n}{2}}}{\frac {|\partial S|}{|S|}}<\alpha \right]\\&=\sum _{k=1}^{\frac {n}{2}}\Pr[\,\exists {\mbox{bad }}S{\mbox{ of size }}k\,]\\&\leq \sum _{k=1}^{\frac {n}{2}}\sum _{S\in {V \choose k}}\Pr[\,S{\mbox{ is bad}}\,]\end{aligned}}}
Let ${\displaystyle R\subset S}$ be the set of vertices in ${\displaystyle S}$ which has neighbors in ${\displaystyle {\bar {S}}}$, and let ${\displaystyle r=|R|}$. It is obvious that ${\displaystyle |\partial S|\geq r}$, thus, for a bad ${\displaystyle S}$, ${\displaystyle r<\alpha k}$. Therefore, there are at most ${\displaystyle \sum _{r=1}^{\alpha k}{k \choose r}}$ possible choices such ${\displaystyle R}$. For any fixed choice of ${\displaystyle R}$, the probability that an edge picked by a vertex in ${\displaystyle S-R}$ connects to a vertex in ${\displaystyle S}$ is at most ${\displaystyle k/n}$, and there are ${\displaystyle d(k-r)}$ such edges. For any fixed ${\displaystyle S}$ of size ${\displaystyle k}$ and ${\displaystyle R}$ of size ${\displaystyle r}$, the probability that all neighbors of all vertices in ${\displaystyle S-R}$ are in ${\displaystyle S}$ is at most ${\displaystyle \left({\frac {k}{n}}\right)^{d(k-r)}}$. Due to the union bound, for any fixed ${\displaystyle S}$ of size ${\displaystyle k}$,
{\displaystyle {\begin{aligned}\Pr[\,S{\mbox{ is bad}}\,]&\leq \sum _{r=1}^{\alpha k}{k \choose r}\left({\frac {k}{n}}\right)^{d(k-r)}\leq \alpha k{k \choose \alpha k}\left({\frac {k}{n}}\right)^{dk(1-\alpha )}\end{aligned}}}
Therefore,
{\displaystyle {\begin{aligned}\Pr[\phi (G)<\alpha ]&\leq \sum _{k=1}^{\frac {n}{2}}\sum _{S\in {V \choose k}}\Pr[\,S{\mbox{ is bad}}\,]\\&\leq \sum _{k=1}^{\frac {n}{2}}{n \choose k}\alpha k{k \choose \alpha k}\left({\frac {k}{n}}\right)^{dk(1-\alpha )}\\&\leq \sum _{k=1}^{\frac {n}{2}}\left({\frac {en}{k}}\right)^{k}\alpha k\left({\frac {ek}{\alpha k}}\right)^{\alpha k}\left({\frac {k}{n}}\right)^{dk(1-\alpha )}&\quad ({\mbox{Stirling formula }}{n \choose k}\leq \left({\frac {en}{k}}\right)^{k})\\&\leq \sum _{k=1}^{\frac {n}{2}}\exp(O(k))\left({\frac {k}{n}}\right)^{k(d(1-\alpha )-1)}.\end{aligned}}}
The last line is ${\displaystyle o(1)}$ when ${\displaystyle d\geq {\frac {2}{1-\alpha }}}$. Therefore, ${\displaystyle G}$ is an expander graph with expansion ratio ${\displaystyle \alpha }$ with high probability for suitable choices of constant ${\displaystyle d}$ and constant ${\displaystyle \alpha }$.
### Computation of graph expansion
Computation of graph expansion seems hard, because the definition involves the minimum over exponentially many subsets of vertices. In fact, the problem of deciding whether a graph is an expander is co-NP-complete. For a non-expander ${\displaystyle G}$, the vertex set ${\displaystyle S\subset V}$ which has low expansion ratio is a proof of the fact that ${\displaystyle G}$ is not an expander, which can be verified in poly-time. However, there is no efficient algorithm for computing the ${\displaystyle \phi (G)}$ unless NP=P.
The expansion ratio of a graph is closely related to the sparsest cut of the graph, which is the dual problem of the multicommodity flow problem, both NP-complete. Studies of these two problems revolutionized the area of approximation algorithms.
We will see right now that although it is hard to compute the expansion ratio exactly, the expansion ratio can be approximated by some efficiently computable algebraic identity of the graph.
## Graph spectrum
### Laplacian
The adjacency matrix of an ${\displaystyle n}$-vertex graph ${\displaystyle G}$, denoted ${\displaystyle A=A(G)}$, is an ${\displaystyle n\times n}$ matrix where ${\displaystyle A(u,v)}$ is the number of edges in ${\displaystyle G}$ between vertex ${\displaystyle u}$ and vertex ${\displaystyle v}$.
### Graph eigenvalues
Because adjacency matrix ${\displaystyle A}$ is a symmetric matrix with real entries, due to the Perron-Frobenius theorem, it has real eigenvalues ${\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}}$, which associate with an orthonormal system of eigenvectors ${\displaystyle v_{1},v_{2},\ldots ,v_{n}\,}$ with ${\displaystyle Av_{i}=\alpha _{i}v_{i}\,}$. We call the eigenvalues of ${\displaystyle A}$ the spectrum of the graph ${\displaystyle G}$.
The spectrum of a graph contains a lot of information about the graph. For example, supposed that ${\displaystyle G}$ is ${\displaystyle d}$-regular, the following lemma holds.
Lemma ${\displaystyle |\alpha _{i}|\leq d}$ for all ${\displaystyle 1\leq i\leq n}$. ${\displaystyle \alpha _{1}=d}$ and the corresponding eigenvector is ${\displaystyle ({\frac {1}{\sqrt {n}}},{\frac {1}{\sqrt {n}}},\ldots ,{\frac {1}{\sqrt {n}}})}$. ${\displaystyle G}$ is connected if and only if ${\displaystyle \alpha _{1}>\alpha _{2}}$. If ${\displaystyle G}$ is bipartite then ${\displaystyle \alpha _{1}=-\alpha _{n}}$.
Proof.
Let ${\displaystyle A}$ be the adjacency matrix of ${\displaystyle G}$, with entries ${\displaystyle a_{ij}}$. It is obvious that ${\displaystyle \sum _{j}a_{ij}=d\,}$ for any ${\displaystyle j}$. (1) Suppose that ${\displaystyle Ax=\alpha x,x\neq \mathbf {0} }$, and let ${\displaystyle x_{i}}$ be an entry of ${\displaystyle x}$ with the largest absolute value. Since ${\displaystyle (Ax)_{i}=\alpha x_{i}}$, we have ${\displaystyle \sum _{j}a_{ij}x_{j}=\alpha x_{i},\,}$ and so ${\displaystyle |\alpha ||x_{i}|=\left|\sum _{j}a_{ij}x_{j}\right|\leq \sum _{j}a_{ij}|x_{j}|\leq \sum _{j}a_{ij}|x_{i}|\leq d|x_{i}|.}$ Thus ${\displaystyle |\alpha |\leq d}$. (2) is easy to check. (3) Let ${\displaystyle x}$ be the nonzero vector for which ${\displaystyle Ax=dx}$, and let ${\displaystyle x_{i}}$ be an entry of ${\displaystyle x}$ with the largest absolute value. Since ${\displaystyle (Ax)_{i}=dx_{i}}$, we have ${\displaystyle \sum _{j}a_{ij}x_{j}=dx_{i}.\,}$ Since ${\displaystyle \sum _{j}a_{ij}=d\,}$ and by the maximality of ${\displaystyle x_{i}}$, it follows that ${\displaystyle x_{j}=x_{i}}$ for all ${\displaystyle j}$ that ${\displaystyle a_{ij}>0}$. Thus, ${\displaystyle x_{i}=x_{j}}$ if ${\displaystyle i}$ and ${\displaystyle j}$ are adjacent, which implies that ${\displaystyle x_{i}=x_{j}}$ if ${\displaystyle i}$ and ${\displaystyle j}$ are connected. For connected ${\displaystyle G}$, all vertices are connected, thus all ${\displaystyle x_{i}}$ are equal. This shows that if ${\displaystyle G}$ is connected, the eigenvalue ${\displaystyle d=\alpha _{1}}$ has multiplicity 1, thus ${\displaystyle \alpha _{1}>\alpha _{2}}$. If otherwise, ${\displaystyle G}$ is disconnected, then for two different components, we have ${\displaystyle Ax=dx}$ and ${\displaystyle Ay=dy}$, where the entries of ${\displaystyle x}$ and ${\displaystyle y}$ are nonzero only for the vertices in their components components. Then ${\displaystyle A(ax+by)=d(ax+by)}$. Thus, the multiplicity of ${\displaystyle d}$ is greater than 1, so ${\displaystyle \alpha _{1}=\alpha _{2}}$. (4) If ${\displaystyle G}$ if bipartite, then the vertex set can be partitioned into two disjoint nonempty sets ${\displaystyle V_{1}}$ and ${\displaystyle V_{2}}$ such that all edges have one endpoint in each of ${\displaystyle V_{1}}$ and ${\displaystyle V_{2}}$. Algebraically, this means that the adjacency matrix can be organized into the form ${\displaystyle P^{T}AP={\begin{bmatrix}0&B\\B^{T}&0\end{bmatrix}}}$ where ${\displaystyle P}$ is a permutation matrix, which has no change on the eigenvalues. If ${\displaystyle x}$ is an eigenvector corresponding to the eigenvalue ${\displaystyle \alpha }$, then ${\displaystyle x'}$ which is obtained from ${\displaystyle x}$ by changing the sign of the entries corresponding to vertices in ${\displaystyle V_{2}}$, is an eigenvector corresponding to the eigenvalue ${\displaystyle -\alpha }$. It follows that the spectrum of a bipartite graph is symmetric with respect to 0.
${\displaystyle \square }$
### The spectral gap
It turns out that the second largest eigenvalue of a graph contains important information about the graph's expansion parameter. The following theorem is the so-called Cheeger's inequality.
Theorem (Cheeger's inequality) Let ${\displaystyle G}$ be a ${\displaystyle d}$-regular graph with spectrum ${\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}}$. Then ${\displaystyle {\frac {d-\alpha _{2}}{2}}\leq \phi (G)\leq {\sqrt {2d(d-\alpha _{2})}}}$
The theorem was first stated for Riemannian manifolds, and was proved by Cheeger and Buser (for different directions of the inequalities). The discrete case is proved independently by Dodziuk and Alon-Milman.
For a ${\displaystyle d}$-regular graph, the value ${\displaystyle (d-\alpha _{2})}$ is known as the spectral gap. The name is due to the fact that it is the gap between the first and the second eigenvalue in the spectrum of a graph. The spectral gap provides an estimate on the expansion ratio of a graph. More precisely, a ${\displaystyle d}$-regular graph has large expansion ratio (thus being an expander) if the spectral gap is large.
We will not prove the theorem, but we will explain briefly why it works.
For the spectra of graphs, the Cheeger's inequality is proved by the Courant-Fischer theorem in linear algebra. The Courant-Fischer theorem is a fundamental theorem in linear algebra which characterizes the eigenvalues by a series of optimizations:
Theorem (Courant-Fischer theorem) Let ${\displaystyle A}$ be a symmetric matrix with eigenvalues ${\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}}$. Then {\displaystyle {\begin{aligned}\alpha _{k}&=\max _{v_{1},v_{2},\ldots ,v_{n-k}\in \mathbb {R} ^{n}}\min _{\overset {x\in \mathbb {R} ^{n},x\neq \mathbf {0} }{x\bot v_{1},v_{2},\ldots ,v_{n-k}}}{\frac {x^{T}Ax}{x^{T}x}}\\&=\min _{v_{1},v_{2},\ldots ,v_{k-1}\in \mathbb {R} ^{n}}\max _{\overset {x\in \mathbb {R} ^{n},x\neq \mathbf {0} }{x\bot v_{1},v_{2},\ldots ,v_{k-1}}}{\frac {x^{T}Ax}{x^{T}x}}.\end{aligned}}}
For a ${\displaystyle d}$-regular graph with adjacency matrix ${\displaystyle A}$ and spectrum ${\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}}$, its largest eigenvalue ${\displaystyle \alpha _{1}=d}$ with eigenvector ${\displaystyle A\cdot \mathbf {1} =d\mathbf {1} }$. According to the Courant-Fischer theorem, the second largest eigenvalue can be computed as
${\displaystyle \alpha _{2}=\max _{x\bot \mathbf {1} }{\frac {x^{T}Ax}{x^{T}x}},}$
and
${\displaystyle d-\alpha _{2}=\min _{x\bot \mathbf {1} }{\frac {x^{T}(dI-A)x}{x^{T}x}}.}$
The later is an optimization, which shares some resemblance of the expansion ratio ${\displaystyle \phi (G)=\min _{\overset {S\subset V}{|S|\leq {\frac {n}{2}}}}{\frac {|\partial S|}{|S|}}=\min _{\chi _{S}}{\frac {\chi _{S}^{T}(dI-A)\chi _{S}}{\chi _{S}^{T}\chi _{S}}}}$, where ${\displaystyle \chi _{S}}$ is the characteristic vector of the set ${\displaystyle S}$, defined as ${\displaystyle \chi _{S}(i)=1}$ if ${\displaystyle i\in S}$ and ${\displaystyle \chi _{S}(i)=0}$ if ${\displaystyle i\not \in S}$. It is not hard to verify that ${\displaystyle \chi _{S}^{T}\chi _{S}=\sum _{i}\chi _{S}(i)=|S|}$ and ${\displaystyle \chi _{S}^{T}(dI-A)\chi _{S}=\sum _{i\sim j}(\chi _{S}(i)-\chi _{S}(j))^{2}=|\partial S|}$.
Therefore, the spectral gap ${\displaystyle d-\alpha _{2}}$ and the expansion ratio ${\displaystyle \phi (G)}$ both involve some optimizations with the similar forms. It explains why they can be used to approximate each other.
## Reference
• Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander Graphs and Their Applications. American Mathematical Society, 2006. [PDF]
|
2018-12-10 09:28:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 199, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9725098013877869, "perplexity": 237.83904602323145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823320.11/warc/CC-MAIN-20181210080704-20181210102204-00288.warc.gz"}
|
https://thecuriousastronomer.wordpress.com/2013/12/
|
Feeds:
Posts
## Five Christmas Songs – “Happy Christmas (War is Over)” – John Lennon and Yoko Ono
The fifth and final Christmas song I thought I’d share with you is “Happy Christmas (War is Over)” by John Lennon, released in 1971 in the USA and 1972 in the DUK.
This song was part of a long running anti-war campaign by John & Yoko which included the “bed-ins” for peace in Amsterdam and Toronto, the 1969 single Give Peace a Chance, and a billboard campaign which is shown in the video. Lennon said in a 1980 interview that he wanted to write a Christmas song so that there was an alternative to White Christmas. John & Yoko are joined in the song by the Harlem Community Choir.
## Five Christmas Songs – “Stop the Cavalry” – Jona Lewie
The fourth Christmas song I thought I’d share is “Stop the Cavalry” by Jona Lewie. It got to number 3 in December 1980.
## Five Christmas Songs – “Fairy tale of New York” – The Pogues and Kirsty MacColl
The third Christmas song I thought I would share with you is “Fairy tale of New York” by The Pogues and Kirsty MacColl.
This song was released in 1987 and reached number 2 in the DUK charts. It is one of my favourite Christmas songs, and has consistently been in the top 5 in various “all time greatest Christmas songs” lists.
## Five Christmas songs – “The Little Drummer Boy” / “Peace on Earth” – Bing Crosby and David Bowie
The second Christmas song I thought I’d share with you is “The Little Drummer Boy / Peace on Earth”, sung by Bing Crosby and David Bowie, probably the most unlikely pairing ever for a duet.
This song was recorded in September of 1977 to appear on Bing Crosby’s Christmas TV Special. Crosby died the following month, but the show went out as planned on US and DUK televisions. The song was not officially released as a single until 1982.
There has been much speculation as to how and why such an unlikely pairing took place. Many suggest that Bing Crosby had no idea who David Bowie was, and he certainly does seem to look at him with what appears to be a mixture of amusement, contempt and confusion. As for Bowie, suggestions for his reasons for appearing on Bing Crosby’s show range from that his mum would like it to trying to make his career more “mainstream”.
Whatever the reasons, the unlikely pairing led to a pretty good version of this traditional song.
## Five Christmas songs – “When a Child is Born” – Johnny Mathis
I thought I would blog about five Christmas songs between now and Christmas. The first of the five Christmas songs I’ve chosen to share is “When a Child is Born”, sung by Johnny Mathis. This was a big hit for Mathis in 1976, reaching number 1 in the Disunited Kingdom, and it had the coveted number 1 spot for the Christmas of that year.
It is not my favourite song by any stretch of the imagination, in fact I find it quite irritating. But I thought I would include it because I remember it being played for week after week on Top of the Pops when I was a child. It is still, to date, the only Johnny Mathis song I know.
## Electron configurations
In this blog, I discussed the “electron configuration” nomenclature which is so loved by chemists (strange people that they are….). Just to remind you, the noble gas neon, which is at number 10 in the periodic table, may be written as $1s^{2} \; 2s^{2} \; 2p^{6}$. If you add together the superscripts you get $2+2+6=10$, the number of electrons in neutral Helium. Titanium, which is at number 22 in the periodic table may be written as $1s^{2} \; 2s^{2} \; 2p^{6} \; 3s^{2} \; 3p^{6} \; 3d^{2} \; 4s^{2}$. Again, if you add together the superscripts you get $2+2+6+2+6+2+2=22$, the number of electrons in neutral Titanium. I explained in the blog that the letters s,p,d and f refer to “sharp, principal, diffuse” and “fine“, as this was how the spectral lines appeared in the 1870s when spectroscopists first started identifying them.
But, what I didn’t address in that blog on the electron configuration nomenclature is why do electrons occupy different shells in atoms? In hydrogen, the simplest atom, the 1 electron orbits the nucleus in the ground state, the n=1 energy level. If it is excited it will go into a higher energy level, n=2 or 3 etc. But, with a more complicated atom like neon, which has 10 electrons, the 10 do not all sit in the n=1 level. The n=1 level can only contain up to 2 electrons, and the n=2 level can only contain up to 8 electrons, the n=3 level can only contain up to 18 electrons, and so on. This leads to neon having a “filled” n=1 level (2 electrons), and a filled n=2 level (8 electrons), which means it does not seek additional electrons. This is why it is a noble gas.
Titanium on the other hand, with 22 electrons, has a filled n=1 level (2 electrons), a filled n=2 level (8 electrons), a partially filled n=3 level (8 electrons out of a possible 18), and a partially filled n=4 level (2 electrons out of a possible 32). Because it has partially filled n=3 and n=4 levels, and it wants them to be full, it will seek additional electrons by chemically combining with other elements.
What is the reason each energy level has a maximum number of allowed electrons?
It is all due to something called the Pauli exclusion principle.
Wolfgang Pauli, after whom the Pauli exclusion principle is named. He came up with the idea in 1925. In addition to this principle, he also came up with the idea of the neutrino.
## The energy level n
Niels Bohr suggested in 1913 that electrons could only occupy certain orbits. I go into the details of his argument in this blog, but to summarise it briefly here, he suggested that something called the orbital angular momentum of the electron had to be divisible by $\hbar \text{ where } \hbar = h/2\pi, \text{ } h$ being Planck’s constant. We now call these the energy levels of an atom, and we use the letter n to denote the energy level. So, an electron in the second energy level will have $n=2$, in the third energy level it will have $n=3$ etc.
As quantum mechanics developed over the next 15-20 years it was realised that an electron is fully described by a total of four (4) quantum numbers, not just its energy level. The energy level $n$ came to be known as the princpical quantum number. The other three quantum numbers needed to fully describe the state of an electron are
• its orbital angular momentum, $l$
• its magnetic moment, $m_{l}$ and
• its spin, $m_{s}$
## The orbital angular momentum $l$ quantum number
As I mentioned above, spectroscopists noticed that atomic lines could be visually categorised into “sharp”, “principal”, “diffuse” and “fine“, or $s,p,d \text{ and } f$. It was found that the following correspondence existed between these visual classifications and the orbital angular momentum $l$. This is the second quantum number. $l$ can only take on certain values from $0 \text{ to } (n-1)$. So, for example, if $n=3, \; l \text{ can be } 0,1 \text{ or } 2$.
spectroscopic name and orbital angular momentum
Spectroscopic Name letter orbital angular momentum $l$
sharp s $l=0$
principal p $l=1$
diffuse d $l=2$
fine f $l=3$
As this table shows, the reason a line appears as a “sharp” (s) line is because its orbital angular momentum $l=0$. If it appears as a “principal” (p) line then its orbital angular momentum must be $l=1$, etc.
## The magnetic moment quantum number $m_{l}$
The third quantum number is the magnetic moment $m_{l}$, which can only take on certain values. The magnetic moment only shows up if the electron is in a magnetic field, and is what causes the Zeeman effect, which is the splitting of an atom’s spectral lines when an atom is in a magnetic field. The rule is that the magnetic moment quantum number can take on any value from $-l \text{ to } +l$, so e.g. when $l=2, \text{ } m_{l}$ can take the values $-2, -1, 0, 1 \text{ and } 2$ (5 possible values in all). If $l=3 \text{ then } m_{l} \text{ can be } -3, -2, -1, 0, 1, 2, 3$ (7 possible values).
## The spin quantum number $m_{s}$
The final quantum number is something called the spin. Although it is only an analogy (and not to be taken literally), one can think of this as the electron spinning on its axis as it orbits the nucleus, in the same way that the Earth spins on its axis as it orbits the Sun. The spin can, for an electron, take on two possible values, either $+1/2 \text{ or } -1/2$.
## Putting all of this together
Let us first of all consider the $n=1$ energy level. The only allowed orbital angular momentum allowed in this level is $l=0$, which means the only allowed values of $m_{l}$ is also 0 and the allowed values of the spin are $+1/2 \text{ and } -1/2$. So, in the $n=1$ level, the only allowed state is $1s$, and this can have two configurations, with the electron spin up or down (+1/2 or -1/2), meaning the $n=1$ level is full when there are 2 electrons in it. That is why we see $1s^{2}$ for Helium and any element beyond it in the Periodic Table. But, what about the $n=2, n=3$ etc. levels?
The number of electrons in each electron shell
State Principal quantum number $n$ Orbital quantum number $l$ Magnetic quantum number $m_{l}$ Spin quantum number $m_{s}$ Maximum number of electrons
1s 1 0 0 +1/2, -1/2 2
n=1 level Total = 2
2s 2 0 0 +1/2, -1/2 2
2p 2 1 -1,0,1 +1/2, -1/2 6
n=2 level Total = 8
3s 3 0 0 +1/2, -1/2 2
3p 3 1 -1,0,1 +1/2, -1/2 6
3d 3 2 -2,-1,0,1,2 +1/2, -1/2 10
n=3 level Total = 18
4s 4 0 0 +1/2, -1/2 2
4p 4 1 -1,0,1 +1/2, -1/2 6
4d 4 2 -2,-1,0,1,2 +1/2, -1/2 10
4f 4 3 -3,-2,-1,0,1,2,3 +1/2, -1/2 14
n=4 level Total = 32
5s 5 0 0 +1/2, -1/2 2
etc.
The astute readers amongst you may have noticed that the electron configuration for Titanium, which was $1s^{2} \; 2s^{2} \; 2p^{6} \; 3s^{2} \; 3p^{6} \; 3d^{2} \; 4s^{2}$, suggests that the $n=4$ level starts being occupied before the $n=3$ level is full. After all, the $n=3$ level can have up to 18 electrons in it, with up to 10 electrons in the $n=3, l=2$ (d) state. In the $n=3$ level the (s) and (p) states are full, but not the (d) state. With only 2 electrons in the $n=3, l=2$ (d) state, the $4s$ state starts being populated, and has 2 electrons in it. Why is this?
I will explain the reason in a future blog, but it has to do with the “shape” of the orbits of the different states. They are different for different values of orbital angular momentum $l$.
## What was the star of Bethlehem?
What was the star of Bethlehem? It is a puzzle which has confounded theologians and astronomers for the best part of two millennia. Here is a talk that myself and Martin Griffiths gave in December 2004 on this subject at the University of Glamorgan. Enjoy!
Introductory slide. This talk was given by myself and Martin Griffiths at the University of Glamorgan in December 2004 as part of a series of public lectures on Astronomy.
Which theory do you like best?
## Sloppy Seconds – Watsky (song)
This song, “Sloppy Seconds”, was brought to my attention by Tania, a former student of mine. As anyone who reads (listens?) to my blog knows, I am pretty stuck in the 1960s/70s/80s when it comes to my musical tastes. I stopped listening to Radio 1 in 1992. For me, “rap” is something I eat, and begins with a “w” 😛
But, every so often I do bring myself into the present, and hear a song from post 1990 that I like. For the past several years, through a combination of my children and students, I am partially aware of some current music. It is easy for an old codger like me to not listen to any new music, after all my head and shelves are full of music that I have listened to over the years so is there any room for any new music?
Yes there is. And, I am glad I listened to this song that Tania sent me, as I thought it was great. And I’m not just saying that to try and be young and down with the kids, because I’m too old to even try that. What I liked first and foremost about the song is its central message – that everyone has a history, we’ve all made mistakes and nobody’s perfect. It also has a great sound, which is a bonus for a song with a strong message to convey.
Here are the lyrics to the song.
Fuck you if you love a car for its paint job
Love you if you love a car for the road trips
Show me the miles and your arms and the pink scar
Where the doctor had to pull out all the bone chips
Cuz you were pressing on the gas just a bit hard
Right in the moment where the road curved a bit sharp
And when you woke up, somebody was unclipping your seat belt
and pulling you from the open window of your flipped car
Cold pizza
Tie-dye shirts
Broken hearts
Give’m here, give’m here
Hand me downs
Give me give me leftovers
Give me give me sloppy seconds
Give em here, give em here
I don’t care where you’ve been
How many miles, I still love you [x2]
Show me someone who says they got no baggage
I’ll show you somebody whose got no story
Nothing gory means no glory, but baby please don’t bore me
We won’t know until we get there
The who, or the what, or the when where
My favorite sweater was a present that I got a couple presidents ago
And I promised that I would rock it till it’s thread bare
Bet on it
Every single person got a couple skeletons
So pretty soon, in this room
It’ll just be me and you when we clear out all the elephants
Me and you and the elements
We all have our pitfalls
Beer’s flat, the cabs have been called
And everybody and their momma can hear the drama
that’s happening behind these thin walls
Cold pizza
Tie-dye shirts (tie-dye shirts)
Broken hearts
Give’m here, give’m here
Hand me downs (hand me downs)
Leftovers (leftovers)
Sloppy seconds
Give’m here, give’m here
I don’t care where you’ve been
How many miles, I still love you (2x)
I don’t care (cold pizza)
Where you’ve been (tie-dye shirts)
How many (broken hearts) miles, I still love you
I don’t care (hand me downs)
Where you’ve been (left overs)
How many (sloppy seconds) miles, I still love you
My pattern with women isn’t a flattering image
But I don’t want to run away because I said so
I don’t want to be the guy to hide all of my flaws
And I’ll be giving you the side of me that I don’t let show
Everything in fashion
That has ever happened
Always coming crashing down
Better let go
But in a couple years it will be retro
You rock Marc Ecko
My shirts have the gecko
Cuz in the past man, I was hopeless
But now’s when my little cousins look the dopest
(whoop whoop)
Fuck the fashion po-po
Have a stale doughnut, I don’t need no tips
Fuck a five second rule
That’s a plan I never understood
It’s September in my kitchen in a Christmas sweater
Sipping cold coffee on the phone with damaged goods
And there is not a single place that I would rather be
I’m fucked up just like you are, and you’re fucked up just like me
Cold pizza (cold pizza)
Tie-dye shirts (tie-dye shirts)
Broken hearts
Give’m here, Give’m here
Hand me downs (oh hand me downs)
Give me give me leftovers (leftovers)
Give me give me sloppy seconds
Give’m here give’m here
I don’t care where you’ve been
How many miles, I still love you [x2]
I don’t care (cold pizza)
Where you’ve been (tie-dye shirts)
How many (broken hearts) miles, I still love you
I don’t care (hand me downs)
Where you’ve been (left overs)
How many (sloppy seconds) miles, I still love you
Here is the song itself. Enjoy!
## Comet ISON update
I was on the S4C programme “Heno” talking about Comet ISON on the 26th of November, just two days before its perihelion (closest approach to the Sun). I was the studio guest, so I appeared on the programme right at the beginning, then at about 8 minutes into the programme, and finally at the end. Here is the entire programme with subtitles (if you can bear it).
Here is an edited version, with just the parts of the programme where I appear :
It would seem comet ISON did not survive its passage around the Sun. All the evidence suggests that ISON broke up as it came within about 1.5 million km of the Sun, probably due to the nucleus of the comet being broken up by a combination of the heat of the Sun and the extreme tidal forces due to the Sun’s gravity. Here is a link to images of ISON at the time of its perihelion taken by the NASA Stereo Probes’ (which are in space observing the Sun)
The latest efforts now concentrate on trying to find ISON’s remnants and to understand in more detail what happened to the comet. You can read more about this in this story.
So, sadly, we did not get the spectacular cometary display in early December that many had been hoping for. But, that is the nature of comets, and part of their fascination. One never knows how they are going to turn out, they are very unpredictable and often surprise us. ISON proved ultimately to be a disappointment, but already there are other comets that astronomers have their sights on which may come and light up our skies over the next few months, such as Comet Lovejoy.
## The 500 greatest albums – number 466 – A Rush of Blood to the Head (Coldplay)
At number 466 in Rolling Stone Magazine’s 500 greatest albums is “A Rush of Blood to the Head” by Coldplay. The list from 470 to 461 is as follows:
• 470 – “Radio” by LL Cool J (1985)
• 469 – “The Score” by Fugees (1996)
• 468 – “The Paul Butterfield Blues Band” by The Paul Butterfield Blues Band (1965)
• 467 – “Tunnel of Love” by Bruce Springsteen (1987)
• 466 – “A Rush of Blood to the Head” by Coldplay (2002)
• 465 – “69 Love Songs” by The Magnetic Fields (1999)
• 464 – “Hysteria” by Def Leppard (1987)
• 463 – “Heaven Up Here” by Echo and the Bunnymen (1981)
• 462 – “Document” by R.E.M. (1987)
• 461 – “Metal Box” by Public Image Ltd. (1979)
I own three of these albums, “Tunnel of Love” by Bruce Springsteen, “A Rush of Blood to the Head” by Coldplay and “Document” by R.E.M. In addition, I have actually heard of LL Cool J, Fugees, The Paul Butterfield Blues Band, Def Leppard, Echo and the Bunnymen and Public Image Ltd. Things seem to be improving in my level of ignorance compared to the 480 to 471, 490 to 481 and 500 to 491 lists. The only artist I haven’t heard of in this list is The Magnetic Fields, although I don’t actually own anything by LL Cool J or Def Leppard either.
Of the three albums I own in this list, I’ve decided to blog about “A Rush of Blood to the Head”, although I may well come back and blog about some of the songs I like on the other albums listed here. Why have I chosen the Coldplay album? It was actually the first Coldplay album I heard. Their first album, “Parachutes” was released in 2000, and its release passed me by without my noticing it, probably because I was living in the USA at the time. But by the time “A Rush of Blood to the Head” was released I was paying attention, and I liked it the first time I heard it.
At number 466 in Rolling Stone Magazine’s 500 greatest albums is “A Rush of Blood to the Head” by Coldplay
There are many songs on this album that I like, but possibly my favourite is the one I have included here, “The Scientist”. The song has a haunting quality to it, and to me speaks of a desire to turn back time in a relationship, to a time before things started going wrong.
Come up to meet you,
Tell you I’m sorry,
You don’t know how lovely you are.
Tell you I need you,
Tell you I set you apart.
Oh let’s go back to the start.
Runnin’ in circles,
Comin’ up tails,
Nobody said it was easy,
It’s such a shame for us to part.
Nobody said it was easy,
No one ever said it would be this hard.
Oh take me back to the start.
I was just guessin’,
At numbers and figures,
Pullin’ the puzzles apart.
Questions of science,
Science and progress,
Do not speak as loud as my heart.
Tell me you love me,
Come back and haunt me,
Oh, what a rush to the start.
Runnin’ in circles,
Chasin’ our tails,
Comin’ back as we are.
Nobody said it was easy,
Oh it’s such a shame for us to part.
Nobody said it was easy,
No one ever said it would be so hard.
I’m goin’ back to the start.
Oh ooh ooh ooh ooh ohh,
Ah ooh ooh ooh ooh ooh,
Oh ooh ooh ooh ooh ohh,
Oh ooh ooh ooh ooh ohh.
Enjoy!
|
2020-09-19 08:42:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3567263185977936, "perplexity": 2220.5724128205247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00275.warc.gz"}
|
http://wangxinliu.com/machine%20learning/machine%20learning%20basic/research&study/PCASVD/
|
# SVD, PCA and Least Square Problem
### The idea behind PCA
In the field of machine learning, PCA is used to reduce the dimension of features. Usually we collect a lot of feature to feed the machine learning model, we believe that more features provides more information and will lead to better result.
But some of the features doesn’t really bring new information and they are correlated to some other features. PCA is then introduced to remove this correlation by approximating the original data in its subspace, some of the features of the original data may be correlated to each other in its original space, but this correlation doesn’t exit in its subspace approximation.
Visually, let’s assume that our original data points $x_1...x_m$ have 2 features and they can be visualized in a 2D space:
In the above diagram, the red crosses represent our original data points, we want to find a line $u$, which approximates the original points by projecting then on $u$.
Which kind of $u$ can best approximates the points? Naturally, we want the information of the original data points to be kept in the approximated data points, which means to make sure that they are far away from each other.
Mathematically, we want to find the $u$ which can maximize the variance of the projected data points.
Symbols:
$x_1 ... x_m$ : $m$ data points
$x_i$ : the $i$th data point, a 2x1 feature vector containing 2 features
$u$ : the subspace of $x$, $x$ are located in a 2D space, then $u$ will be a 1D line, parameterized to be a 2x1 vector: $u$ is a unit directional vector, representing one direction of the $x$ space.
Before everything, we firstly preprocess the data points by z-score normalization:
Compute the mean of $x$:
Compute the standard deviation of $x$:
z-score normalize every $x_i$:
in order to simplify the symbols, we just update all original $x$ to be normalized.
Obviously, for a certain $x_i$, the distance of its projection on $u$ to the origin will be its dot product with $u$:
As $x_i$ is already centered, its variance is:
Then our goal will be:
We want to get rid of the $m$ in $\frac{1}{m}\sum_{i=1}^{m}(x_ix_i^T)$ and tread every thing as matrix, in order to do this, we put all $x_i$ together to form a big feature matrix as the following diagram describes:
Then $\frac{1}{m}\sum_{i=1}^{m}(x_ix_i^T)$ turns to be $AA^T$. Our maximization problem turns to be:
### Solve it by using Lagrange multiplier and SVD
And we know that $\|u\|=1$ , this problem can be solved using Lagrange multiplier:
According to the Kuhn-Tucker Conditions, we get:
Obviously, $u$ and $\lambda$ are $AA^T$’s Eigen vector and Eigen value.
But $AA^T$ has more than one Eigen vectors, which one should be the $u$ we are looking for? In order to find it, we replace $AA^Tu$ in the original equation by $\lambda u$:
Which means, we can maximize $u^TAA^Tu$ by simply choosing the biggest Eigen value and its corresponding Eigen vector.
Then we can use SVD to decompose A:
For $AA^T$, we get:
Now, as $\Sigma \Sigma^T$ is a diagonal matrix, we can obviously see that the columns of $U$ are the Eigen vectors of $AA^T$ and the diagonal values of $\Sigma \Sigma^T$ are $AA^T$’s Eigen values.
### Its relationship to least mean square
As described above, we want to maximize the variance of the projected data points:
Actually, as the $x_i$ in the above equation are data points which will never change, so it will do no harm to add the following term into the equation:
Solving the maximization problem is equally solving the following minimization problem:
Let’s take a look at the diagram again:
Obviously, $x_i^2-(x_i^Tu)^2$ is actually equal to $e_i^2$, so our minimization problem turns to:
Which means, we are looking for a line $u$, which can minimize the summation of the projective distances of the data points onto this line, this is a least mean square error problem. Now we may notice that maximizing the variance is equal to minimizing the mean square.
### Wangxin
I am algorithm engineer focused in computer vision, I know it will be more elegant to shut up and show my code, but I simply can't stop myself learning and explaining new things ...
|
2018-09-21 22:24:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 47, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313153982162476, "perplexity": 284.3892927136943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00189.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-12-parametric-equations-polar-coordinates-and-conic-sections-12-1-parametric-equations-exercises-page-603/30
|
## Calculus (3rd Edition)
$$x= 2+3t, \quad y= 5-t.$$
First, since the line is perpendicular to $y=3x$, then the slope is given by $-\frac{1}{3}$. Now, the slope $m=-\frac{1}{3}$; then, we have $s/r=-\frac{1}{3}$ so take $r=3$ and $s=-1$. Then, the parametric equations are $$x=a+rt=2+3t, \quad y= b+st=5-t.$$ That is, $$c(t)=(x(t),y(t))=(2+3t,5-t).$$
|
2021-05-14 13:46:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9906454682350159, "perplexity": 124.70912921560281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00441.warc.gz"}
|
https://runestone.academy/ns/books/published/py4e-int/strings/format.html
|
# 7.11. Format operator¶
The format operator, % allows us to construct strings, replacing parts of the strings with the data stored in variables. When applied to integers, % is the modulus operator. But when the first operand is a string, % is the format operator.
The first operand is the format string, which contains one or more format sequences that specify how the second operand is formatted. The result is a string.
For example, the format sequence %d means that the second operand should be formatted as an integer (“d” stands for “decimal”):
>>>camels = 42
>>>print('%d' % camels)
42
The result is the string “42”, which is not to be confused with the integer value 42.
A format sequence can appear anywhere in the string, so you can embed a value in a sentence:
If there is more than one format sequence in the string, the second argument has to be a tuple [A tuple is a sequence of comma-separated values inside a pair of parentheses. We will cover tuples in Chapter 10]. Each format sequence is matched with an element of the tuple, in order.
The following example uses %d to format an integer, %g to format a floating-point number (don’t ask why), and %s to format a string:
The number of elements in the tuple must match the number of format sequences in the string. The types of the elements also must match the format sequences:
>>> '%d %d %d' % (1, 2)
TypeError: not enough arguments for format string
>>> '%d' % 'dollars'
TypeError: %d format: a number is required, not str
In the first example, there aren’t enough elements; in the second, the element is the wrong type.
The format operator is powerful, but it can be difficult to use. You can read more about it at
|
2023-02-03 23:41:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25248730182647705, "perplexity": 996.0941869908816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00284.warc.gz"}
|
http://bolnica-meljine.me/how-did-cpd/article.php?c39c8b=asymptotic-statistics-notes
|
… The text is written in a very clear style … . 10 CHAPTER 2. The phenomenon is related … I present materials from asymptotic statistics to Professor Pollard and have inspiring discussion with him every week. There are –ve tools (and their extensions) that are most useful in asymptotic theory of statistics and econometrics. xڭUKo�0��W��.����*9T�Z5{K{���� Section 1 will cover Chapters 1-2[Introduction], 3 [Delta … To get Asymptotic Statistics PDF, remember to refer to the button below and save the document or get access to other information which might be in conjuction with ASYMPTOTIC STATISTICS book. the fantastic and concise A Course in Large Sample Theory For example, the running time of one operation is computed as f (n) and may be for another operation it is computed as g (n 2). Practice: Comparing function growth. The syllabus includes information about assignments, exams and grading. Asymptotic notation is useful because it allows us to concentrate on the main factor determining a functions growth. Von Mises' approach is a unifying theory that covers all of the cases above. Practice: Asymptotic notation. The course roughly follows the text by Hogg, McKean, and Craig, Introduction to Mathematical Statistics, 7th edition, 2012, henceforth referred to as HMC. notify the author of errors in these notes (e-mail alastair.young@imperial.ac.uk). the book is a very good choice as a first reading. These notations are mathematical tools to represent the complexities. Here “asymptotic” means that we study limiting behaviour as the number of observations tends to infinity. Method of stationary phase 39 Chapter 6. Of course, all computing activities will force students to choose Lecture Notes in Asymptotic Methods Raz Kupferman Institute of Mathematics The Hebrew University July 14, 2008 These notes originally evolved as an accompaniment to the Next lesson. Chapter 3, and it was Tom Hettmansperger who originally 10 0 obj the mathematical level at which an introductory ]��O���*��TR2��L=�s\*��f��G�8P��/?6��Ldǐ'I�ԙ:93�&�>�v�;�u$���ܡc��a�T9x�����1����:��V�{v����m-?���.���_�_\2ƽ��X�7g6����X:_� theory lends itself very well to computing, since frequently the course (FA 2011) covered all sections except: Many exercises require students to do some computing, based on the typographical a particular computing environment. I wished I had had as a graduate student, and I hope that these notes He was extremely gracious and I treasure the letters that %���� Professor Lehmann several times about his book, as my << In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. e�yN����������l�}���k\0ן'5��P,��XGH}t���j�9�. assistant professor. which shares the philosophy of these notes regarding Asymptotic expansions of integrals 29 Chapter 4. Asymptotic series 21 3.1. It is slower: the variance of the limiting normal distribution decreases as O((nh) − 1) and not as O(n − 1). Neuware - These notes are based on lectures presented during the seminar on ' Asymptotic Statistics' … The study of large-sample While many excellent large-sample theory textbooks already exist, the majority (though not all) of them … Functions in asymptotic notation. Book Condition: Neu. References: Chapter 19 from Aad van der Vaart's "Asymptotic Statistics". Though we may do things differently in spring 2020, a previous version of the large-sample theory course Laplace integrals 31 4.1. >> convinced me to design this course at Penn State back in 2000 when I was a new offered in the notes using R 10.3: Multivariate and multi-sample U-statistics Preface to the notes These notes are designed to accompany STAT 553, a graduate-level course in large-sample theory at Penn State intended for students who may not have had any exposure to measure-theoretic probability. stream Assignments Assignments are due on Thursdays at 3:00 p.m. Hand in the assignment via … 4.4: Univariate extensions of the Central Limit Theorem, 8.3: Asymptotics of the Wilcoxon rank-sum test, 10.3: Multivariate and multi-sample U-statistics. Properties of asymptotic expansions 26 3.4. into the era of electronic communication. 1These notes are meant to supplement the lectures for Stat 411 at UIC given by the author. Watson’s lemma 36 Chapter 5. /Length 234 1. /Length 762 Its Applications, Volumes 1 and 2 by William Feller. While many excellent large-sample theory textbooks already exist, the majority (though not all) of them re ect a traditional view in graduate-level statistics education that students … Today we will cover probabilistic tools in this eld, especially for tail bounds. Note that our actual statement of the nonparametric delta method (applied to statistical functionals) is taken from Theorem 2.27 in Wasserman's "All of Nonparametric Statistics" (this book is available online through York's library). "This book provides a comprehensive overview of asymptotic theory in probability and mathematical statistics. Asymptotic upper bound f (n) = O (g (n)) some constant multiple of g (n) is an asymptotic upper bound of f (n), no claim about how tight an upper bound is. I have also drawn on many other /Filter /FlateDecode Occasionally, hints are Homework questions: Feb.18-22: READING WEEK: Feb.25/27: Functional … When it comes to analysing the complexity of any algorithm in terms of time and space, we can never provide an exact number to define the time required and the space required by the algorithm, instead we express it using some standard notations, also known as Asymptotic Notations.. Following are commonly used asymptotic notations used in calculating running time complexity of an algorithm. Prerequisites I assume that you know the material in Chapters 1-3 of of the book (basic probability) are familiar to you. ��&�߱�첛U�H��Ǟ�7���_�g��Y�$Y1�-��BiRբ����N�������ۂ�2Y�XR�����W5j#�e����h[����igUR���%(�\$��n#�[g���=n^��*+k��0ck sources for ideas or for exercises. (http://www.r-project.org), though Then the random function can be … There are three notations that are commonly used. languages, provided that they possess the necessary statistical Strictly speaking, you're considering the limit as the sample size goes to infinity, but the way people use it is to make approximations based on those limits. This book is an introduction to the field of asymptotic statistics. by Thomas Ferguson, notion that computing skills should be emphasized in Big-O notation. Credit where credit is due: 235x155x7 mm. Asymptotic Statistics by A. W. van der Vaart, quality of asymptotic approximations for small samples is very help to achieve that goal. book Elements of Large-Sample Theory by the late The treatment is both practical and mathematically rigorous. "asymptotic" is more or less a synonym for "when the sample size is large enough". errors that we Notes on Asymptotic Statistics 2: Stochastic Differentiability Condition. (2000). Note the rate √nh in the asymptotic normality results. students and I provided lists of 3 0 obj A few notes on contiguity, asymptotics, and local asymptotic normality John Duchi August 13, 2019 Abstract In this set of notes, I collect several ideas that are important for the asymptotic analysis of estimators. The classical regularity conditions involve twice differentiability and local dominating condition for overkill the problem. Taschenbuch. My treatment is based on a combination of … The asymptotic results for the multivariate kde are very similar to the univariate kde, but with an increasing notational complexity. and the classic probability textbooks Probability and Measure by Furthermore, having a “slight” bias in some cases may not be a bad idea. important in understanding the limitations of the results being Big-θ (Big-Theta) notation . Stochastic Differentiability. Sort by: Top Voted. These notes are designed to accompany STAT 553, a graduate-level course in large-sample theory at Penn State intended for students who may not have had any exposure to measure-theoretic probability. should be taught, is still very much evident here. Asymptotic vs convergent series 21 3.2. Thus, simulation for the purpose of checking the This is different from the standard CLT rate √n (see Theorem 1.1). Selection sort. Prerequisite: Asymptotic Notations Assuming f(n), g(n) and h(n) be asymptotic functions the mathematical definitions are: If f(n) = Θ(g(n)), then there exists positive constants c1, c2, n0 such that 0 ≤ c1.g(n) ≤ f(n) ≤ c2.g(n), for all n ≥ n0; If f(n) = O(g(n)), then there exists positive constants c, n0 such that 0 ≤ f(n) ≤ c.g(n), for all n ≥ n0 even though we were already well I am fortunate to have had the chance to correspond with May 3, 2012. at Penn State helped with some of the Strong-Law material in all statistics courses whenever possible, provided that the Piazza . %PDF-1.5 These notations are in widespread use and are often used without further explana-tion. Van der Vaart, A. They are the weak law of large numbers (WLLN, or LLN), the central limit theorem (CLT), the continuous mapping theorem (CMT), Slutsky™s theorem,1and the Delta method. A very convenient set of notations in asymptotic analysis are the so-Asymptotic Analysis 2.9.2009 Math 595, Fall 2009. The author makes no guarantees that these notes are free of typos or other, more serious errors. �~�i�&Պ D��4R��y}9�#�xP��Ys�L�U���9���:&U� P6x��&8�z�Fv��>DRZt�A��}ܽ�9lDmx7����q�FOj�[>o��/�� 5���.Uˍ��T=�z�n1��8���V�����!��TY��9~x����4Ҋu�s,�����{5y���" ����; u���IQ���X[0`,:�v�1��4��Z�R�%eE�HQ%?p Suitable as a graduate or Master’s level statistics text, this book will also give researchers an overview of the latest research in asymptotic statistics. Asymptotic expansions 25 3.3. the comprehensive and beautifully written I try to put them in a framework that is relatively easy to understand, so that this can serve as a quick reference for further work. samples. Topic: Link: Arzela-Ascoli Theorem … Big-Ω (Big-Omega) notation. Erich Lehmann; the strong influence of that great book, Birkhäuser Sep 2011, 2011. Laplace’s method 32 4.2. had spotted. In par-ticular, we will cover subGaussian random variables, Cherno bounds, and Hoe ding’s Inequality. computing enhances the understanding of the subject matter. It also contains a large collection of inequalities from linear algebra, probability and analysis that are of importance in mathematical statistics. Notes on Asymptotic Statistics 1: Classical Conditions May 3, 2012 The note is taken from my reading course with Professor David Pollard. In some cases, however, there is no unbiased estimator. In statistics, asymptotic theory provides limiting approximations of the probability distribution of sample statistics, such as the likelihood ratio statistic and the expected value of the deviance. Up Next. The material of the module is arranged in three chapters, of which the first constitutes background material, and the preliminary reading for the module. • Based on notes from graduate and master’s level courses taught by the author in Europe and in the US • Mathematically rigorous yet practical • Coverage of a wide range of classical and recent topics Contents 1. Big-θ (Big-Theta) notation . Department of Statistics University of British Columbia 2 Course Outline A number of asymptotic results in statistics will be presented: concepts of statis- tic order, the classical law of large numbers and central limit theorem; the large sample behaviour of the empirical distribution and sample quantiles. Lecture 27: Asymptotic bias, variance, and mse Asymptotic bias Unbiasedness as a criterion for point estimators is discussed in §2.3.2. theoretical large-sample results we prove do not give any Arkady Tempelman Some interesting cases, including , are excluded. VDV = van der Vaart (Asymptotic Statistics) HDP = Vershynin (High Dimensional Probability) TSH = Testing Statistical Hypotheses (Lehmann and Romano) TPE = Theory of Point Estimation (Lehmann) ELST = Elements of Large Sample Theory (Lehmann) GE = Gaussian estimation: Sequence and wavelet models (Johnstone) Additional Notes. Chapter 3. If not, then you should take 36-700. Lecture Notes 10 36-705 Let Fbe a set of functions and recall that n(F) = sup f2F 1 n Xn i=1 f(X i) E[f] Let us also recall the Rademacher complexity measures R(x 1;:::;x n) = E sup learned. stream In Asymptotic Statistics we study the asymptotic behaviour of (aspects of) statistical procedures. 3.3 Asymptotic properties. … When we analyse any algorithm, we generally get a formula to represent … endobj Asymptotic Notations. Asymptotic notations are used to represent the complexities of algorithms for asymptotic analysis. Asymptotic notations give time complexity as “fastest possible”, “slowest possible” or “average time”. << Our mission is to provide a free, world-class education to anyone, anywhere. In examples 1–3, the asymptotic distribution of the statistic is different: in (1) it is normal, in (2) it is chi-squared, and in (3) it is a weighted sum of chi-squared variables. Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics, however. Let be the empirical process defined by. ASYMPTOTIC NOTATIONS called “big oh” (O) and “small-oh” (o) notations, and their variants. �ǿ��J:��e���F� ;�[�\�K�hT����g indication of how well asymptotic approximations work for finite Section 1: Asymptotic statistics is the study of large sample properties and approximations of statistical tests, estimators and procedures. /Filter /FlateDecode In general, the goal is to learn how well a statistical procedure will work under diverse settings when sample size is large enough. Statistics is about the mathematical modeling of observable phenomena, using stochastic models, and about analyzing data: estimating parameters of the model and testing hypotheses. and graphical capabilities. Among these are Asymptotic Statistics. We mainly use the textbook by van der Vaart (1998). x�m��N� �{��c9a���hw��1^ē�+MIl�j�o/�&j� ����.n��0(�p�:�D�b�B���Ky��%��δ䥛��Mt! In addition to most of the standard topics of an asymptotics course, including likelihood inference, M-estimation, the theory of asymptotic efficiency, U-statistics, and rank procedures, the book also presents recent research topics such as semiparametric models, … asymptotic statistics as opposed to classical asymptotic and high dimensional statistics. he sent me, written out longhand and sent through the mail Khan Academy is a 501(c)(3) nonprofit … Patrick Billingsley and An Introduction to Probability Theory and Hopefully, the $$\mathrm{vec}$$ operator, , and Theorem 3.1 allows to simplify expressions and yield a clear connection with, for example, the expressions for the asymptotic bias and variance obtained in Theorem 2.1. endstream these exercises can be completed using other packages or My goal in doing so was to teach a course that >> Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.
2020 asymptotic statistics notes
|
2022-07-01 20:56:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6677454710006714, "perplexity": 1053.7796227488543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00209.warc.gz"}
|
https://www.physicsforums.com/threads/wavefunction-possibilities.471901/
|
# Wavefunction possibilities
Jarwulf
Do wavefunctions have to have every conceivable possibility? Say for instance you have a chair. Does the wavefunction of the chair necessarily have a possibility where the chair breaks apart spontaneously? Or a set of worlds where the chair breaks apart if MWI is true? Or can the wavefunction simply consist of possibilities where the chair does not splinter apart?
Does the wavefunction of a being have to have a possibility where the being changes their mind about something or can all possibilities of the wavefunction simply be ones where the being's mind stays the same?
Staff Emeritus
Do wavefunctions have to have every conceivable possibility? Say for instance you have a chair. Does the wavefunction of the chair necessarily have a possibility where the chair breaks apart spontaneously? Or a set of worlds where the chair breaks apart if MWI is true? Or can the wavefunction simply consist of possibilities where the chair does not splinter apart?
Does the wavefunction of a being have to have a possibility where the being changes their mind about something or can all possibilities of the wavefunction simply be ones where the being's mind stays the same?
When we solve for the equation of motion using Newtonian mechanics, what we first do is account for all the forces acting on the system, i.e. we do F=ma. So say you have an object falling to the ground, you then have
$$F_g = ma$$
where $F_g$ is the force due to gravity. But if you want to include more realistic situation, you put in other facts, such as frictional force due to air friction $F_f$, and maybe the object itself has its own propulsion $F_p$. Then you write
$$F_g + F_f + F_p = ma$$
and then you solve (if you can) for the equation of motion.
The same thing occurs for the wavefunction. You first start with the Hamiltonian/Schrodinger equation. You need to know all of the potential landscape that the system has. This may or may not be trivial. In one of the simplest case, say for an infinite square well potential (which every student in intro QM classes should know), you write down the kinetic term and then the potential representing that square well. That's the whole system! So the wavefunction that you solve describes the system fully based on what you have given as the starting point, i.e. what you wrote for the kinetic and potential term.
But here's where it can get complicated, especially when you start adding complexity to the system.
1. You don't know what the exact Hamiltonian is, and so you have to make either an estimate or an approximation. This is true when you are dealing with a gazillion particles, as in condensed matter physics. It is impossible to write an exact Hamiltonian for a many-body system. So in such a case, you make some clever approximation for the potential, such as using the mean-field approximation. You say that, even though a particle in the system sees all the potential from other particles, we can simply make the approximation that, on average, it sees a constant "mean field" from all of the particles.
So your Hamiltonian will consist of the kinetic term, and a mean-field approximation of the potential term. Therefore, your wavefunction can only be as good as what you have done in the beginning. It cannot predict or describe something beyond that. In many situations, the mean-field approximation is perfectly valid and can account for a large number of phenomena. But in other situations, this approximation breaks down. It is not because the wavefunction is inadequate, it is rather our starting point and our knowledge of the system is inadequate.
2. You know the exact Hamiltonian, but you cannot get a full, exact wavefunction. In many instances, you can write the exact wavefunction, but solving the differential equation is often a major problem. One also encounters this in classical newtonian mechanics (try to find exact, closed solution for the 3-body or more problem). This is where you either do numerical solutions, or in other cases, you make an approximate solution as a simplification, or even only consider special cases that gives you nice, analytical solutions. So obviously, it is not inconceivable that the solution could miss something when such simplifications are applied.
So in principle, the wavefunction should be able to describe ALL of the observables as described in the Hamiltonian. It depends on how well you can construct a Hamiltonian that accurately and fully describe the system you are looking at, and how well you can arrive as the wavefunction solution.
Zz.
qsa
Do wavefunctions have to have every conceivable possibility? Say for instance you have a chair. Does the wavefunction of the chair necessarily have a possibility where the chair breaks apart spontaneously? Or a set of worlds where the chair breaks apart if MWI is true? Or can the wavefunction simply consist of possibilities where the chair does not splinter apart?
Does the wavefunction of a being have to have a possibility where the being changes their mind about something or can all possibilities of the wavefunction simply be ones where the being's mind stays the same?
QM does imply that one thing could in principle be in two different places at the same time, even macroscopic objects. so there are theories that try to explain away such possibility, like GRW
http://en.wikipedia.org/wiki/Ghirardi–Rimini–Weber_theory
from
Phys. Rev. D 34, 470–491 (1986)
Unified dynamics for microscopic and macroscopic systems
An explicit model allowing a unified description of microscopic and macroscopic systems is exhibited. First, a modified quantum dynamics for the description of macroscopic objects is constructed and it is shown that it forbids the occurrence of linear superpositions of states localized in far-away spatial regions and induces an evolution agreeing with classical mechanics. This dynamics also allows a description of the evolution in terms of trajectories. To set up a unified description of all physical phenomena, a modification of the dynamics, with respect to the standard Hamiltonian one, is then postulated also for microscopic systems. It is shown that one can consistently deduce from it the previously considered dynamics for the center of mass of macroscopic systems. Choosing in an appropriate way the parameters of the so-obtained model one can show that both the standard quantum theory for microscopic objects and the classical behavior for macroscopic objects can all be derived in a consistent way. In the case of a macroscopic system one can obtain, by means of appropriate approximations, a description of the evolution in terms of a phase-space density distribution obeying a Fokker-Planck diffusion equation. The model also provides the basis for a conceptually appealing description of quantum measurement.
Last edited:
The_Duck
Do wavefunctions have to have every conceivable possibility? Say for instance you have a chair. Does the wavefunction of the chair necessarily have a possibility where the chair breaks apart spontaneously? Or a set of worlds where the chair breaks apart if MWI is true? Or can the wavefunction simply consist of possibilities where the chair does not splinter apart?
Yes, I think for for all reasonable physical systems every possible configuration will have a nonzero probability associated with it. For instance if you have a hydrogen atom the electron has a probability to be found literally anywhere in relation to the proton. However the probability is only non-negligible inside a very small volume of size about 10^-10 meters, and decays exponentially outside this volume. You can appreciate that while the probability of finding the electron on the other side of the room from the proton is nonzero, it is vanishingly small.
Similarly for a more complicated system like a chair the wave function should assign a nonzero probability to all possible configurations of the particles that make up the chair. However it is vanishingly unlikely that you will actually observe the particles of the chair adopt some configuration that is radically different from their current one, i.e. your chair is not going to spontaneously fall apart (absent some outside force like a sledgehammer).
Jarwulf
Yes, I think for for all reasonable physical systems every possible configuration will have a nonzero probability associated with it. For instance if you have a hydrogen atom the electron has a probability to be found literally anywhere in relation to the proton. However the probability is only non-negligible inside a very small volume of size about 10^-10 meters, and decays exponentially outside this volume. You can appreciate that while the probability of finding the electron on the other side of the room from the proton is nonzero, it is vanishingly small.
Similarly for a more complicated system like a chair the wave function should assign a nonzero probability to all possible configurations of the particles that make up the chair. However it is vanishingly unlikely that you will actually observe the particles of the chair adopt some configuration that is radically different from their current one, i.e. your chair is not going to spontaneously fall apart (absent some outside force like a sledgehammer).
So basically there is a chance or if MWI is true there are an infinite number of universes where an army of 100 story tall pie eating sumo robots spontaneously materializes in NYC?
StevieTNZ
So basically there is a chance or if MWI is true there are an infinite number of universes where an army of 100 story tall pie eating sumo robots spontaneously materializes in NYC?
If the wavefunction of the 100 story tall pie eating sumo robots allows the state of NYC, then yes.
Interesting that in Quantum Philosophy by Roland Omnes, says:
If we consider from this perspective an ordinary object, an empty bottle, say, the quantum principles will only take into account the particles forming the botttle, and will therefore treat on an equal footing a multitude of different objects. This is due to the fact that the atoms that make up the bottle could, without changing their interactions, adopt thousands of shapes to form a thousand different objects: two smaller bottles, six wine glasses, or a chunk of melted glass. One could also seperate the atoms according to their kind and end up with a pile of sand and another pile of salt. A rearrangement of the protons and electrons to transmute the atomic nuclei without modifying the nature of their interactions could also produce a rose in a gold cup. All these variants belong to the realm of the possible, of the multitude of forms that the wave functions of a given system of paricles may take.
Jarwulf
Yes, I think for for all reasonable physical systems every possible configuration will have a nonzero probability associated with it. For instance if you have a hydrogen atom the electron has a probability to be found literally anywhere in relation to the proton. However the probability is only non-negligible inside a very small volume of size about 10^-10 meters, and decays exponentially outside this volume. You can appreciate that while the probability of finding the electron on the other side of the room from the proton is nonzero, it is vanishingly small.
Similarly for a more complicated system like a chair the wave function should assign a nonzero probability to all possible configurations of the particles that make up the chair. However it is vanishingly unlikely that you will actually observe the particles of the chair adopt some configuration that is radically different from their current one, i.e. your chair is not going to spontaneously fall apart (absent some outside force like a sledgehammer).
I'm sort of confused what you're saying implies that the wavefunction has an infinite number of possibilities to collapse/decohere/do something else into but from reading the internet I get
Q11 How many worlds are there?
--------------------------
The thermodynamic Planck-Boltzmann relationship, S = k*log(W), counts
the branches of the wavefunction at each splitting, at the lowest,
maximally refined level of Gell-Mann's many-histories tree. (See "What
is many-histories?") The bottom or maximally divided level consists of
microstates which can be counted by the formula W = exp (S/k), where S
= entropy, k = Boltzmann's constant (approx 10^-23 Joules/Kelvin) and
W = number of worlds or macrostates. The number of coarser grained
worlds is lower, but still increasing with entropy by the same ratio,
ie the number of worlds a single world splits into at the site of an
irreversible event, entropy dS, is exp(dS/k). Because k is very small
a great many worlds split off at each macroscopic event.
Which seems to me to imply that there are a finite number of possibilities for the wavefunction to collapse/decohere/do something else into. Since MW claims that each possibility leads to another world.
Staff Emeritus
Q11 How many worlds are there?***
--------------------------
The thermodynamic Planck-Boltzmann relationship, S = k*log(W), counts
the branches of the wavefunction at each splitting, at the lowest,
maximally refined level of Gell-Mann's many-histories tree. (See "What
is many-histories?") The bottom or maximally divided level consists of
microstates which can be counted by the formula W = exp (S/k), where S
= entropy, k = Boltzmann's constant (approx 10^-23 Joules/Kelvin) and
W = number of worlds or macrostates. The number of coarser grained
worlds is lower, but still increasing with entropy by the same ratio,
ie the number of worlds a single world splits into at the site of an
irreversible event, entropy dS, is exp(dS/k). Because k is very small
a great many worlds split off at each macroscopic event.
Moderator Edit: *** From "The Everette FAQ" by M.C. Price.
QuantumClue
Do wavefunctions have to have every conceivable possibility? Say for instance you have a chair. Does the wavefunction of the chair necessarily have a possibility where the chair breaks apart spontaneously? Or a set of worlds where the chair breaks apart if MWI is true? Or can the wavefunction simply consist of possibilities where the chair does not splinter apart?
Does the wavefunction of a being have to have a possibility where the being changes their mind about something or can all possibilities of the wavefunction simply be ones where the being's mind stays the same?
Yes. the wave function takes into account every conceivable possibility. Parts of that wave function will project items into deep space. It's not that it is there physically... only there as a possibility, which is to add, very small indeed.
StevieTNZ
Perhaps a clairification is needed:
Does a quantum system (macroscopic in this case) have an infinite or finite amount of physical possiblities that can actualise upon measurement?
QuantumClue
Perhaps a clairification is needed:
Does a quantum system (macroscopic in this case) have an infinite or finite amount of physical possiblities that can actualise upon measurement?
Yes, but the wavelength of matter is exceedingly small on large enough scales. Even your homework jotter will be statistically sitting as a possibility on the surface of venus as an extreme example. The only reason why it is not sitting there, is again, highly unlikely.
StevieTNZ
I guess what I'm really asking is whether there are infinite or finite possibilites.
Sorry I wasn't clear enough earlier
Jarwulf
I guess what I'm really asking is whether there are infinite or finite possibilites.
Sorry I wasn't clear enough earlier
Same question. Everyone here seems to be leaning toward infinite but the FAQ author seems to think it is finite.
StevieTNZ
I wonder if there will be an answer anytime soon?
QuantumClue
I wonder if there will be an answer anytime soon?
Most agree it to be infinite. It's probabilities are spread infinitely throughout spacetime.
Jarwulf
Most agree it to be infinite. It's probabilities are spread infinitely throughout spacetime.
Okay so the FAQ is wrong
QuantumClue
Okay so the FAQ is wrong
Fields where designed in the sense they ''needed to touch'' vast areas. Most of the fields we deal with in quantum mechanics are infinite by nature.
The wave function is also a field, and can also be infinite by nature. It is a field of infinite possibilities, or a field representive of the probabilities of events. It must be infinite in many cases. A particles possible location is not situated to a small area, but has a range which scopes from $$-\infty$$ to $$\infty$$. That means the wave function appreciates even the most unlikely of scenarios.
Okay so the FAQ is wrong
The quality of the Everett FAQ is very poor. See the section ''On the Many-Worlds-Interpretation'' of Chapter A4 of my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html#manyworlds
In particular, the entropy argument used in the answer to Q11 is funny since it implies a fractional number of worlds unless the ensemble of worlds is microcanonical. But then each world is equally probable, and we must be puzzled why we are in a world where the unlikely happens rarely...
The usual entropy formula from statistical mechanics employed only counts the number of energetically accessible energy eigenstates (not the number of all possible states at a given energy, which is infinite), and is applicable only to a bounded volume of matter in equilibrium.
But the many worlds interpretation must consider the whole universe as the physical system, and the latter is neither in equilibrium nor (most likely) bounded.
StevieTNZ
So the wavefunctions physical states are infinite - by physical states I mean states like a chair, or a computer?
QuantumClue
So the wavefunctions physical states are infinite - by physical states I mean states like a chair, or a computer?
Well that is difficult to say. The wave length of matter at our levels of macroscopic objects are so small, that we don't even see a wave function which is physically projected through space, however an ethereal wave function exists for all objects, even your own body. Physical projections of possibilities have been observed though but they are very small objects which are not free from quantum effects.
Jarwulf
So theoretically it would be possible to teleport from one side of the universe to the other instantaneously.
StevieTNZ
however an ethereal wave function exists for all objects, even your own body. Physical projections of possibilities have been observed though but they are very small objects which are not free from quantum effects.
An ethereal wave function? Meaning?
Again my favourite paragraph from Quantum Philosophy by Roland Omnes:
If we consider from this perspective an ordinary object, an empty bottle, say, the quantum principles will only take into account the particles forming the botttle, and will therefore treat on an equal footing a multitude of different objects. This is due to the fact that the atoms that make up the bottle could, without changing their interactions, adopt thousands of shapes to form a thousand different objects: two smaller bottles, six wine glasses, or a chunk of melted glass. One could also seperate the atoms according to their kind and end up with a pile of sand and another pile of salt. A rearrangement of the protons and electrons to transmute the atomic nuclei without modifying the nature of their interactions could also produce a rose in a gold cup. All these variants belong to the realm of the possible, of the multitude of forms that the wave functions of a given system of paricles may take.
Always wondered what was meant by 'forms that the wave functions of a given system of particles may take' until I realised that a wave function can have one solution to it, i.e. two smaller bottles, and another can have the solution, i.e. in this case six wine glasses, which can add together to form another wave function of a superposition of two smaller bottles + six wine glasses.
QuantumClue
An ethereal wave function? Meaning?
Again my favourite paragraph from Quantum Philosophy by Roland Omnes:
Always wondered what was meant by 'forms that the wave functions of a given system of particles may take' until I realised that a wave function can have one solution to it, i.e. two smaller bottles, and another can have the solution, i.e. in this case six wine glasses, which can add together to form another wave function of a superposition of two smaller bottles + six wine glasses.
I need to be careful now because I realize the word I choose could be misconstruded as to mean something else. It was just me trying to be over-elegant.
When we talk about probabilities, we tend to wonder what we mean. Probabilities are things which happen inside our heads. Probabilities are the world of mind-stuff. This is not to mean that somehow the world is created mentally, but in many ways this part of quantum mechanics mirrors this fascinating fact rather well. The brain is physical, thoughts seem a lot less physical, almost ethereal. Thoughts or probabilities inside our heads don't objectively exist in the outside world. Physical probabilities may exist in the objective world.
This is why when the wave function was formulated, many scientists in the beginning thought that wave function was merely a statistical way for the scientists mind to make sense of an otherwise, evading reality of possibilities.
Last edited:
QuantumClue
So theoretically it would be possible to teleport from one side of the universe to the other instantaneously.
I'm not entirely sure how the subject of teleportation has arisen, as that would require to some theoreticians, the use of entangled particles.
However, with that said, there are many philosophical arguements rooted from the mathematics of such theories which cast doubt on whether teleportation is possible. Is a newly created, (new matter) but otherwise completely identical twin of an object be actually teleported? If you teleport information about a system and reconfigure those atoms into a complete duplicate, who is to say that it is the same object in question? All you have done is read from a recipe book and replicated your mothers apple pie. And the consciousness is not fully understood either... Personally I do not believe you can entangle particles over large distances and teleport something as complex as a human being. My consciousness inhabits the atoms in my body. Not the entangled states of particles over large distances.
Maybe you would like to make a post on the subject to see what others think.
Last edited:
homology
wouldn't the question of infinite/finite have something to do with whether space is actually discrete or continuous or bounded/unbounded?
StevieTNZ
I need to be careful now because I realize the word I choose could be misconstruded as to mean something else. It was just me trying to be over-elegant.
When we talk about probabilities, we tend to wonder what we mean. Probabilities are things which happen inside our heads. Probabilities are the world of mind-stuff. This is not to mean that somehow the world is created mentally, but in many ways this part of quantum mechanics mirrors this fascinating fact rather well. The brain is physical, thoughts seem a lot less physical, almost ethereal. Thoughts or probabilities inside our heads don't objectively exist in the outside world. Physical probabilities may exist in the objective world.
This is why when the wave function was formulated, many scientists in the beginning thought that wave function was merely a statistical way for the scientists mind to make sense of an otherwise, evading reality of possibilities.
I tend to think of quantum possibilites as potentialities. To quote Giancarlo Ghirardi
the assertion "the photon is in a superposition |O> + |E>" is logically different from all the following statements: "it propagates itself along path O or along path E" or "it follows both O and E" or "it follows other paths".
So in the double-slit experiment, the particle went from the source and made a 'quantum jump' to the screen (i.e. it didn't really go through either or both slits to get to the screen)?
I guess what I was asking was whether there were an infinite number of physical states found in a solution to the Schrodinger equation, such as one of those physical states being
two smaller bottles, another physical state being six wine glasses, another state a chunk of melted glass.
Jarwulf
I'm not entirely sure how the subject of teleportation has arisen, as that would require to some theoreticians, the use of entangled particles.
However, with that said, there are many philosophical arguements rooted from the mathematics of such theories which cast doubt on whether teleportation is possible. Is a newly created, (new matter) but otherwise completely identical twin of an object be actually teleported? If you teleport information about a system and reconfigure those atoms into a complete duplicate, who is to say that it is the same object in question? All you have done is read from a recipe book and replicated your mothers apple pie. And the consciousness is not fully understood either... Personally I do not believe you can entangle particles over large distances and teleport something as complex as a human being. My consciousness inhabits the atoms in my body. Not the entangled states of particles over large distances.
Maybe you would like to make a post on the subject to see what others think.
If the wavefunction really does contain all possibilities than theoretically it should allow say for instance a quanta or maybe even Los Angeles to teleport to the other side of the visible universe or an infinite distance instantaneously right? Assuming an unbounded universe where quantum mechanics applies to all the relevant places.
Last edited:
_PJ_
I think it's a little chicken-and-egg.
We are dealing with probabilities, and ALL POSSIBLE probabilities ought to be considerred. Any statistician knows that such a chart would be infinite, there would be a peak around the most-probable, but as the line droips off either side to the unlikely, the probability will tend towards - but never reach - zero. This can only be infinite series.
In determining individual proability possibilities, we need to consider factors that themselves may be infinite: time, location etc.
Personally, I don't like infinity, and there is a belief (for me a hope), that what may seem like an infinite range, is actually finite. Consider ther Planck Length and a finite/bounded universe as a limiting facttor on time/distance, or more practical restrictions such as causality that might limit future options.
Gold Member
wouldn't the question of infinite/finite have something to do with whether space is actually discrete or continuous or bounded/unbounded?
There is a deeper question which should be answered first.... is "space" a real object or is it a conceptual construct we use to describe real objects. Then the question of its discreteness or boundedness is a question of which conceptual tool we best use to describe how objects behave.
I think there is a conceptual error which got reinforced by the elegance of the geometric formulation of General Relativity (and also the success of QFT). We have begun to reify space. This is, I think, an error. If we treat space and space-time as parametric manifolds e.g. constructs to express overlapping degrees of freedom for physical systems, then "quantum space-time" becomes meaningless as such. Its just as bad as trying to "quantize" the complex numbers or "quantize" a group. We "quantize" physical systems in that we recognize their observables as quantized. In the relativistic setting spatial position ceases to be a physical observable.
I know this seems like a tangent here but it affects the question of infinities which crop up in e.g. attempts to quantize gravity and in questions of possibilities vs actualities.
StevieTNZ
What bothers me is when people say when something has a really low probability of happening, it's saying it's not going to happen at all. Clearly it COULD.
Staff Emeritus
What bothers me is when people say when something has a really low probability of happening, it's saying it's not going to happen at all. Clearly it COULD.
Not if the probably if it happening has a time period on the scale of the age of the universe! Then such a consideration of that happening is unrealistic.
Zz.
Jarwulf
Personally, I don't like infinity, and there is a belief (for me a hope), that what may seem like an infinite range, is actually finite. Consider ther Planck Length and a finite/bounded universe as a limiting facttor on time/distance, or more practical restrictions such as causality that might limit future options.
I dunno I think a buffet would be a lot cooler with an infinite variety of food.
Gold Member
What bothers me is when people say when something has a really low probability of happening, it's saying it's not going to happen at all. Clearly it COULD.
Ultimately such a statement itself is uncertain, and a matter of the probability that the theory, which you are using to predict what CAN and CAN'T happen, is correct. At some point a probability becomes so small as to be less than the probability that all the experiments confirming the theory being used were accidentally way off the mark.
When you say "Clearly it COULD", how do you know? To what probability are you correct in your assertion?
Finally we may be ignorant of the impossibility of some event and thus we speak of it as something which "COULD" happen in the sense that we are unable to eliminate to exactly zero the probability of it happening. The statement is not one of actual possibility but a statement of our lack of infinite knowledge. It is important to identify if this is the case in what you are saying.
--------------
The above is all general concerns and context for the question at hand. The actual analysis assuming QM is correct, is this. When we speak of the wave-function for a quantum and the small probability of it "jumping to LA", we are really speaking of the small probability that it "was in LA all the time." so to speak...excepting the issue of what it means to say where it is or was at all in the absence of measurement.
It is more instructive to get down to cases. First in the non-relativistic case, you observe a quantum in NYC, at time t1. You write down a wave-function (a delta function) for its position given this knowledge. If you immediately observe its position again you will find it in NYC with probability 100% and LA with exactly 0%. You then evolve the wave-function for an interval of time so that at time t2 there is a finite small probability it will be observed in LA.
You haven't a clue as to its momentum and thus its velocity (which can be arbitrarily high in the non-relativistic setting) and so you cannot say that it couldn't be traveling so fast as to reach LA by the time t2 that you then observe it there. No teleportation involve here, simply the quantum having a very very small probability of moving very fast.
Now take the relativistic case. you observe a quantum in NYC, at time t1. You write down a wave-function (a delta function) for its position given this knowledge. You haven't a clue as to its momentum and thus its velocity (which must be less than c in the relativistic setting). If you then evolve the wave-function it will not exceed the speed of light in its propagation of probability. It is impossible to observe it in LA until it has had time to propagate there at some velocity < c. However once enough time has passed you have the same situation as before.
However some may misinterpret the point of maximum probability of a wave-function for an actual position of the corresponding quantum. In that case there is a finite probability that a later observation will find the actual quantum a distance greater than ct away from this peak. It is a surprise, not a sudden jump.
QM does not say we will see sudden jumps as implied by the term "teleportation" but rather explains that when we only look at discrete times we can only see a discrete series of positions i.e. jumps (but not sudden ones). In between looking we predict what might be seen (and how likely) with wave-functions.
Note: My analysis is based somewhat on my choice of interpretation (Orthodox CI) and others may describe the nature of the reality of the situation differently based on their interpretation. However I would point out that in the end all the other interpretations agree with the above in so far as actual observations[/] is concerned because that's what QM predicts.
A final note. The phrase "quantum teleportation" has a distinct meaning not to be confused with the above described phenomena. It has to do with copying a quantum system completely (and necessarily destructively) by using an auxiliary system. Just as we shouldn't confuse "quantum cloning" with actual copying genetic material, we shouldn't confuse "quantum teleportation" with actual instantaneous jumping from point A to point B. These are romantic choices of terminology for more mundane actual phenomena.
mr.smart
i have been pondering the wave function myself. in particular; the idea that once an observed particle is no longer observed, does it become a probability wave again?
if it remains "solid" then this would explain why the chair does not posses the probability of falling apart spontaneously. presumably the wave function collapsed when the wood was first cut and now ,as a hard piece of wood, it responds to the physical reality classically.
if however, once an observed particle becomes again un-observed, it become a wave function again, then the chair does posses the probability to fall apart, but only if you stop looking at it.
is there an excepted answer to this one... could Schroedinger's cat come back to life if we closed the box?
StevieTNZ
When you say "Clearly it COULD", how do you know? To what probability are you correct in your assertion?
Finally we may be ignorant of the impossibility of some event and thus we speak of it as something which "COULD" happen in the sense that we are unable to eliminate to exactly zero the probability of it happening. The statement is not one of actual possibility but a statement of our lack of infinite knowledge. It is important to identify if this is the case in what you are saying.
Nope, I'm referring to a probability for a physical state to actualise (extracted from the wave function), and someone saying it has a really really low probability of occuring that its practically not going to occur, I would think is wrong. For example, say I have 2% probability for sitting on this chair, and 98% for turning the TV off and walking out of the house. Even though it looks more probable that the turning off of the TV is going to occur, there is the possibility for me to go sit on the chair.
|
2022-10-03 01:53:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6849058270454407, "perplexity": 597.6179427205802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00570.warc.gz"}
|
https://tex.stackexchange.com/questions/23067/how-does-pdf-or-pdf-viewer-define-a-rounding-rule-for-stroke-thickness
|
# How does PDF or PDF Viewer define a rounding rule for stroke thickness?
I defined a sequence of line segments of different thickness each as follows.
\documentclass{article}
\usepackage{multido}
\parindent=0sp
\begin{document}
\multido{\dx=0sp+1000sp}{100}{\rule{0.01\linewidth}{\dx}}%
\end{document}
And I got the result as follows.
If we magnify the output 6400 times, we will notice there is an interval in which adjacent line segments have the same thickness. In other words, any value in x sp to y sp results in strokes with the same thickness.
I am interested in finding the formula used by PDF to determine the x and y. How does PDF or PDF Viewer define a rounding rule for stroke thickness?
• Isn't that purely up to the rendering engine? – topskip Jul 14 '11 at 17:52
• @Patrick: It is also purely up to the printer ? – xport Jul 14 '11 at 18:10
• I have no idea but I strongly believe so. I once heard that very thin "hairlines" are thickened automatically, but this was in postscript times. Don't know if this applies to PDF. – topskip Jul 14 '11 at 18:21
• Note that your screenshot is scaled by tex.sx, too. It's actually 1148px × 104px, scaled to 630px × 57px, which might make a difference in the optical impression again, quite possibly depending on personal browser/computer configuration. (I would assume the tex.sx dimensions are the same for everybody.) – doncherry Jul 15 '11 at 10:14
• @doncherry: It depends also on our eyes and lighting. – xport Jul 15 '11 at 10:17
## 1 Answer
The PDF imaging model is quite complicated. You can read about it in section 10.6 of the PDF specification (you can download it here). The important point is that a pixel is supposed to be coloured when painting a shape if any part of it intersects the shape, no matter how small the intersecting area. The idea is to ensure that no shape never disappears because of the way it aligns with the pixel grid, although misunderstanding of this rule can cause other problems such as disappearing box border lines in latex when you don't use xcolor (see this answer), where the problem is that a black line is painted and then a white rectangle is painted to its right, but due to the above rule, the white rectangle can completely obscure the black line. There is also an optional "stroke adjustment" feature, where lines are rounded to an integer number of pixels thickness, to make sure all vertical or horizontal lines appear the same weight, no matter how they align with the pixel grid (see section 10.6.5 of the spec).
Having said all that, many PDF viewers do not actually implement the scan conversion rules as given in the spec. The PDF specification was originally mostly concerned with describing the printed page, where pixels are either completely black or completely white (well, for most printing technologies, anyway). For the screen, most people prefer antialiased rendering, where pixels are darkened in proportion to the amount of area covered. In Adobe reader you can switch between rendering as per the spec, and antialiased rendering, with the checkbox "Smooth line art" under "Page display" in the preferences settings. However, it is more subtle still than this, because now very fine lines can disappear, so there is another checkbox "Enhance thin lines", which selectively switches the scan conversion rules of the spec back on for thin lines only. Other than Adobe Reader, some other PDF viewers (eg evince, sumatraPDF) use only an antialiasing algorithm, but they still need some heuristic for making thin lines visible, others (eg xpdf, okular) use only the scan conversion rules of the specification.
EDIT to add a few more points:
1. I should have mentioned that zero-width lines have a special device-dependent interpretation of "as thin as possible", so it's best to avoid using these.
2. There is another source of rounding that very rarely comes up: coordinates in PDF are stored as textual representations of floating-point numbers, truncated to a certain number of figures after the decimal, eg "10.124". You can adjust the number of digits in pdfTeX using \pdfdecimaldigits (equal to 3 by default in my miktex installation). The truncation to 3 digits corresponds in the usual case where the distance is in pt to a rounding to about 66sp. It seems that ps2pdf will by default round to 2 digits but this can be altered with the -r option, eg -r10000 (asking for 10000dpi resolution) will give a PDF with 5 decimal digits. It seems dvipdfm uses 2 digits by default, but one can change this with the -d command line option.
Obviously, including more digits will make the PDF file size (very slightly) larger.
3. dvips will do rounding to whatever is given by the -D option to set the resolution. dvips -Ppdf will set -D8000 for 8000dpi, or rounding to the nearest approx 600sp. On my installation, without any special options, dvips defaults to 600dpi or 8000sp. (This is sufficiently coarse to cause around 8 adjacent segments to have the same thickness, in the example given in the question).
EDIT 2: A final complication that affects the example given in the question: how to represent TeX \rules in pdf? There are 2 options: as stroked lines, or as filled rectangles. dvips without any special options will use rectangles (because this works best with certain old PostScript engines, such as in LaserJet III printers). dvips -Ppdf will use lines (because it loads alt-rule.pro) which works better with PDF, at least for thin rules. pdftex will use lines for rules narrower than 1bp and rectangles for larger rules. dvipdfm uses lines. It's interesting to compare the results of the various different ways of making a PDF from the example in the question: there is certainly a noticeable difference if you look carefully at certain zoom levels in different pdf viewers.
|
2019-08-23 09:08:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.720747172832489, "perplexity": 1469.102003768998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318243.40/warc/CC-MAIN-20190823083811-20190823105811-00042.warc.gz"}
|
https://stats.stackexchange.com/questions/123943/conceptual-question-autocorrelation-of-autoregressive-process
|
# Conceptual Question: Autocorrelation of autoregressive process
An AR(1) process: $X_t = c+\theta X_{t-1} + \epsilon_t$ where $\epsilon_t$ is a zero mean white Gaussian noise.
The Autocorrelation matrix is expressed by the formula mentioned in the Wikipedia entry.
In signal processing textbooks, it is mentioned that for $p=1$ AR(1) process, $R_{xx}[0] = \frac{\sigma_\epsilon^2}{1-\theta^2[1]}$.
Q1: Why do we consider only the first element $R_{xx}[0]$ of the matrix when expressing the autocorrelation for AR(1) process
Q2: How is this expression derived?
Q3: Which elements to consider for higher order process e.g., AR(p) when $p=2,3$ etc process from the Autocorrelation matrix for calculating the autocorrelation as in Q1?
I'm not completely familiar with the notation of the autocorrelation matrix as presented, and there seems to be a contradiction in your description.
If $R_{xx}[0] = \frac{\sigma_\varepsilon^2}{1-\theta^2}$, then this is the variance $\text{Var}(x_t)$, which doesn't match up with your description of the elements of the matrix being autocorrelations (in which case $R_{xx}[0] = 1$ should be the case, I'd think). I'm not sure why this is, not being familiar with this particular notation (my time series background is via econometrics).
So, regarding the first question, no, we would definitely consider more than this in expressing the autocorrelation function.
Regarding your second question, the derivation of $\text{Var}(x_t)$ is reasonably straightforward application of basic properties of variance and assuming stationarity such that $\text{Var}(x_{t-1})=\text{Var}(x_t)$ - as shown in your prior question
However, I take it you are interested in the derivation of the autocovariance or autocorrelation function. For simplicity of display we'll assume a demeaned series. $$\text{Cov}(x_t,x_{t-1}) = E[(x_t-\mu)(x_{t-1}-\mu)] = E[x_t x_{t-1}]$$ $$\text{Cov}(x_t,x_{t-1}) = E[x_{t-1}(\theta x_{t-1} + \varepsilon_{t-1})] = E[\theta x_{t-1}^2] + E[x_{t-1}\varepsilon_{t-1}]$$ $$\text{Cov}(x_t,x_{t-1}) = \theta\text{Var}(x_t)$$ If we want autocorrelation, we apply the definition $$\text{Corr}(x_t,x_{t-1}) = {\text{Cov}(x_t,x_{t-1}) \over \sigma_x \sigma_{x_{t-1}} } = {\theta\text{Var}(x_t) \over \sigma_{x_t}^2 } = {\theta\text{Var}(x_t) \over \text{Var}(x_t) }$$ and thus $\text{Corr}(x_t,x_{t-1}) = \theta$.
You can recursively show that $\text{Corr}(x_t,x_{t-n}) = \theta^n$
I presume then that
$R_{xx}[1] = R_{xx}^*[1] = \theta$
$R_{xx}[n] = R_{xx}^*[n] = \theta^n$
A similar process can let you arrive at the form for AR models with more terms.
• +1. (You're going over old ground with the variance calculation, which was addressed in the OP's immediately preceding question at stats.stackexchange.com/questions/123796/…) – whuber Nov 14 '14 at 3:43
• @Affine: Thank you for your answer, but this is not what I had asked. Sorry for being unclear. My question1 = Why is the variance considered to be the first element of the matrix which is $R_{xx}[0]$? As mentioned by whuber, the Variance calculation was already answered. In continuation to that my Question is why is the variance = $R_{xx}[0]$ the first element of the Autocorrelation matrix? What do we do with the rest of the elements of the Autocorrelation matrix? Is there a relationship between correlation & covariance (also mentioned in your answer, why $corr = \theta$ while calculating Cov) – SKM Nov 14 '14 at 18:01
• @SKM Updated, hopefully clarifying things. I want to mention again that I'm unfamiliar with the autocorrelation matrix presentation, and so I am going off an assumption of what it represents. – Affine Nov 14 '14 at 19:47
|
2020-04-09 01:54:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595733284950256, "perplexity": 342.49047068408146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371826355.84/warc/CC-MAIN-20200408233313-20200409023813-00501.warc.gz"}
|
https://math.stackexchange.com/questions/2255094/showing-a-measure-is-sigma-finite
|
# Showing a measure is $\sigma$-finite
Let ${(X_n, \mathcal A_n, \mu_n)_{n \in \mathbb N}}$ be measure spaces. Let $X_n$ be pairwise disjoint. Define $(X, \mathcal A, \mu)$ by$X=\bigcup_{i=1}^{\infty} X_n$
$\mathcal A=\{E \subset X: E \cap X_n \in \mathcal A_n \; \forall n \in \mathbb N\}$ and $\mu(E)=\sum_n \mu_n(E \cap X_n)$
I want to show that if all $\mu_n$ are $\sigma$-finite than $\mu$ is $\sigma$-finite aswell.
My attempt: I wanted to take the sequence of the unions of the sequences that make $\mu_n$ sigma finite. So the first element is the union of all the first parts of the each sequence and so on.
Showing that the union is of all these equals $X$ is easy, but I fail at showing that each element has finite measure and that each element is contained in $A$ and hoped that someone could help me here.
• Hint: Do you remember how to prove that $\mathbb N \times \mathbb N$ is countable? – Kenny Wong Apr 27 '17 at 17:39
• @KennyWong You mean first listing all tupels, who start with $1$ then all starting with $2$ etc. ? I can see that there is a connection in the idea but not how to explicitly use it – PeterGarder Apr 27 '17 at 18:46
• But that doesn't work. Since there are infinitely many tuples starting with $1$, you'll never reach the ones starting with $2$. You have to enumerate the tuples "diagonally". It's hard to explain without drawing a picture, so perhaps you could look at this: homeschoolmath.net/teaching/rational-numbers-countable.php Anyway, the problem you posed is similar: it boils down to showing that a countable union of countable collections of "things" is itself a countable union of "things". In your case, the "things" are the measurable sets with finite measure. – Kenny Wong Apr 27 '17 at 19:23
• What are $E$ and $B$ exactly in your definition of $\mathcal{A}$ and $\mu(E)$? Can you fix those definitions? – gogurt Apr 27 '17 at 19:29
• @gogurt I am sorry, $B$ was meant to be $E$. It's edited now. $E$ in $\mu(E)$ is an element of $\mathcal A$ – PeterGarder Apr 27 '17 at 19:33
As the comments indicate, you need to accept that countable unions of countable sequences are still countable. See here for example.
Now you just need to show that the measure $\mu$ is $\sigma$-finite, i.e. you need to show that $X$ is a countable union of sets all of which have finite measure. As you said, do this using the sequences which make each $\mu_n$ $\sigma$-finite.
Let $\{U_{i,k}: k \in \mathbb{N}\}$ be a countable sequence of sets in $\mathcal{A}_i$ which satisfy
• $\mu_i(U_{i,j}) < \infty$ for all $j \in \mathbb{N}$
• $\bigcup_j U_{i,j} = X_i$
We will consider collection $\{U_{i,j}, i \in \mathbb{N}, j \in \mathbb{N}\}$. All we need to show is that this collection 1) is countable, 2) unions up to $X$, and 3) all elements have finite measure under $\mu$.
Using the fact above, 1 is immediate. 2 is also obvious. 3 is just a tiny bit of work. Pick an arbitrary $U_{i,j}$ from that collection. This set is in $\mathcal{A}$ because
1. $U_{i,j} \subset X$
2. $U_{i,j} \cap X_i = U_{i,j}$ which is in $\mathcal{A}_i$, and
3. $U_{i,j} \cap X_k$ for $i \neq k = \emptyset$ since the $X_k$'s are pairwise disjoint, and $\emptyset \in \mathcal{A}_k$ for any $k$.
We already know that $\mu_i(U_{i,j}) < \infty$ and obviously $\mu_j(\emptyset) = 0$ for any $j \neq i$ so that $\mu(U_{i,j}) < \infty$ also.
|
2020-02-25 13:12:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8395947813987732, "perplexity": 171.58739675548813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146066.89/warc/CC-MAIN-20200225110721-20200225140721-00298.warc.gz"}
|
https://conferences.iaea.org/event/214/contributions/17366/
|
Since 18 of December 2019 conferences.iaea.org uses Nucleus credentials. Visit our help pages for information on how to Register and Sign-in using Nucleus.
# 28th IAEA Fusion Energy Conference (FEC 2020)
10-15 May 2021
Nice, France
Europe/Vienna timezone
The Conference will be held virtually from 10-15 May 2021
## Linear Analysis of Cross-field Dynamics with Feedback Instability on Detached Divertor Plasmas
12 May 2021, 14:00
4h 45m
Nice, France
#### Nice, France
Regular Poster Magnetic Fusion Theory and Modelling
### Speaker
Hiroki Hasegawa (National Institute for Fusion Science)
### Description
The theoretical model of the feedback instability is proposed to explain the mechanism of the correlation between the detachment and the cross-field plasma transport. It is shown that (1) the feedback instability on the detached divertor plasma can be induced in a certain condition in which the recombination frequency $\nu_{\textrm{rec}}$ is larger than the ion cyclotron frequency $\Omega_{\textrm{ci}}$ in the recombination region and the density gradient and the electric field in the direction perpendicular to the magnetic flux surface are not zero, and that (2) the feedback instability can provide the cross-field plasma transport in the boundary layer of magnetic fusion torus devices.
The correlation between the detachment and the cross-field plasma transport in the boundary layer has been reported in various magnetic confinement devices, that is, tokamak$^{1}$, helical$^{2}$, and linear$^{3}$ devices. Such a correlation is expected to expand the width of the heat flux to the divertor target, i.e., $\lambda_{\textrm{q}}$. However, the physical dynamics of the correlation has not been revealed. In this study, we investigate the cross-field dynamics in the detached plasma state with the coupling model between magnetized plasmas characterized by different current mechanisms. In the recombination region in front of a divertor target, $\nu_{\textrm{rec}}$ can be larger than $\Omega_{\textrm{ci}}$ because of the high density and the low temperature. In such a situation, the cross-field motion of ions is mainly in the direction of the electric field, while that of electrons is almost in the direction of the $E \times B$ drift. Thus, the difference in the direction of motion may provide the cross-field current in the recombination region. On the other hand, the cross-field current can be generated by only the polarization, the grad-$B$, and the diamagnetic drifts in the upstream plasma. We have considered whether such a difference between the current mechanisms in each region induces the cross-field plasma transport.
In this study, we have derived the linear dispersion relation from the continuity equations,
$\displaystyle \frac{\partial n^{\text{P}}}{\partial t} + \mathbf{\nabla}_{\perp} \cdot (n^{\textrm{P}} \mathbf{v}_{s \perp}^{\textrm{P}}) + \frac{\Gamma_{s \parallel}^{\textrm{P}}(z = L_{z}) - \Gamma_{s \parallel}^{\textrm{B}}}{L_{z}} = 0 \; \; \; \; \; \; \; (1)$
and
$\displaystyle \frac{\partial n^{\textrm{R}}}{\partial t} + \mathbf{\nabla}_{\perp} \cdot (n^{\textrm{R}} \mathbf{v}_{s \perp}^{\textrm{R}}) + \frac{\Gamma_{s \parallel}^{\textrm{B}} - \Gamma_{s \parallel}^{\textrm{R}}(z = -h)}{L_{z}} = -\alpha \left[(n^{\textrm{R}})^{2}-(n_{0}^{\textrm{R}})^{2} \right], \; \; \; \; \; \; \; (2)$
and the charge conservation equations,
$\displaystyle \mathbf{\nabla}_{\perp} \cdot \mathbf{j}_{s \perp}^{\textrm{P}} + \frac{j_{\parallel}^{\textrm{P}}(z = L_{z}) - j_{\parallel}^{\textrm{B}}}{L_{z}} = 0 \; \; \; \; \; \; \; (3)$
and
$\displaystyle \mathbf{\nabla_{\perp}} \cdot \mathbf{j}_{s \perp}^{\textrm{R}} + \frac{j_{\parallel}^{\textrm{B}} - j_{\parallel}^{\textrm{R}}(z = -h)}{L_{z}} = 0, \; \; \; \; \; \; \; (4)$
in the upstream plasma and the recombination region in the simple configuration as shown in Fig. 1. In this configuration, the magnetic field is parallel to the $z$ axis and the $x$ and $y$ directions correspond to the direction perpendicular to the magnetic flux surface and the toroidal direction in torus devices, respectively. Here, $n$ is the plasma density, $\mathbf{v}_{s \perp}$ is the flow velocity perpendicular to the magnetic field, $\Gamma_{s \parallel}$ is the parallel flux, $\alpha$ is the recombination coefficient, $n_{0}$ is the plasma density at the equilibrium state, $\mathbf{j}_{\perp}$ and $\mathbf{j}_{\parallel}$ are the perpendicular and the parallel currents, the superscripts P, R, and B indicate the quantities in the upstream plasma, in the recombination region, and at the boundary between those regions, respectively, and the subscript $s$ represents the particle species. The upstream plasma flow velocity $\mathbf{v}_{s \perp}^{\text{P}}$ is composed of the $E \times B$, the polarization, the grad-$B$, and the diamagnetic drifts, while the recombination region flow velocity $\mathbf{v}_{s \perp}^{\text{R}}$ includes each drift with the Hall mobility and the motion in the direction of the perpendicular electric field with the Pedersen mobility. Linearizing Eqs. (1)-(4), as a result, we obtain the cubic equation regarding the frequency $\omega$ as the dispersion relation. It is found that the one mode of them has a positive growth rate under a certain condition. Figure 2 shows the dependence of the growth rate $\gamma$ of the unstable mode, i.e., the feedback instability mode, on the wave number $k$ and the propagation direction $\theta$. In Fig. 3 we present the dependence of the group velocity $v_{\textrm{g}} = \partial \omega / \partial k$ of the unstable mode on $k$ and $\theta$. Here, the typical parameters for fusion torus devices are assumed as follows: $B = 5 \; \textrm{T}$, $\partial B / \partial x = -1 \; \textrm{T}/\textrm{m}$, $n_{0}^{\textrm{P}} = 5 \times 10^{19} \; \textrm{m}^{-3}$, $\partial n_{0}^{\textrm{P}} / \partial x = -1.67 \times 10^{21} \; \textrm{m}^{-4}$, the initial electric fields $E_{x0}^{\textrm{P}} = E_{x0}^{\textrm{R}} = -100 \; \textrm{V}/\textrm{m}$, the electron and ion temperatures $T_{\textrm{e}}^{\textrm{P}} = T_{\textrm{i}}^{\textrm{P}} = 50 \; \textrm{eV}$, $T_{\textrm{e}}^{\textrm{R}} = T_{\textrm{i}}^{\textrm{R}} = 0.3 \; \textrm{eV}$, $\nu_{\textrm{rec}} / \Omega_{\textrm{ci}} = 10$, $L_{z} = 10 \; \textrm{m}$, $h = 0.3 \; \textrm{m}$, the ion-to-electron mass ratio $m_{\textrm{i}} / m_{\textrm{e}} = 3.67 \times 10^{3}$, and the ion-to-electron charge ratio $q_{\textrm{i}} / |q_{\textrm{e}}| = 1$. In those figures, the area inside the red curve designates the unstable region in which the feedback instability can be induced. Thus, Figs. 2 and 3 indicate that the waves $k \rho_{\textrm{s}}^{\textrm{P}} > 0.8$ and $\theta \sim 3\pi/4$ can transport the plasma lump with the speed $\sim 0.002 \; c_{\textrm{s}}^{\textrm{P}}$. The simple estimation shows that the maximum of heat flux density is reduced to $1 - (v_{{\textrm{g}}x}/c_{\textrm{s}}^{\textrm{R}})(\tilde{n}/n_{0}^{\textrm{R}})(h/\lambda_{\textrm{q}}^{\textrm{B}}) \approx 82$% of the initial value and that the heat flux width is expanded to $1 + (h/\lambda_{\textrm{q}}^{\textrm{B}})(v_{\textrm{g}x}/c_{\textrm{s}}^{\textrm{R}}) \approx 280$% of the initial width if $\tilde{n}/n_{0}^{\textrm{R}} \sim 0.1$ and $\lambda_{\textrm{q}}^{\textrm{B}} \sim 3 \; \textrm{mm}$. Here, $v_{{\textrm{g}}x}$ is the $x$ component of $v_{\textrm{g}}$ and $\tilde{n}$ is the time averaged density of the transported plasma lump.
Furthermore, to verify the feedback instability model, the spiraling plasma ejection observed around the recombination front under the detached divertor condition in the NAGDIS-II linear device experiment$^{3}$ is analyzed, in which the NAGDIS-II contributes to the establishments of the detachment and the cross-field transport mechanisms for future fusion reactors such as ITER and DEMO. The radial speed $v_{r} \sim 80 \; \textrm{m}/\textrm{s}$ at $r \sim 20 \; \textrm{mm}$ and azimuthal speed $v_{\theta} \sim 200 \; \textrm{m}/\textrm{s}$ at $r \sim 5 \; \textrm{mm}$ obtained in the experiment are in good agreement with $v_{{\textrm{g}}x} \sim 800 \; \textrm{m}/\textrm{s}$ and $v_{{\textrm{g}}y} \sim 200 \; \textrm{m}/\textrm{s}$ estimated by the theoretical model with the NAGDIS-II parameters if $v_{r}$ is reduced as $r$ increases.
$^{1}$Potzel S et al. 2013 J. Nucl. Mater. 438 S285.
$^{2}$Tanaka H. et al. 2010 Phys. Plasmas 17 102509.
$^{3}$Tanaka H. et al. 2018 Plasma Phys. Control. Fusion 60 075013.
Affiliation National Institute for Fusion Science Japan
### Primary author
Hiroki Hasegawa (National Institute for Fusion Science)
### Co-authors
Dr Hirohiko Tanaka (Graduate School of Engineering, Nagoya University) Prof. Seiji Ishiguro (National Institute for Fusion Science )
### Presentation Materials
There are no materials yet.
|
2021-03-06 15:32:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5690622925758362, "perplexity": 752.8610895935085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00027.warc.gz"}
|
https://earthscience.stackexchange.com/questions/4938/how-to-tell-if-a-single-day-of-weather-is-an-anomaly-or-due-to-climate-change
|
# How to tell if a single day of weather is an anomaly or due to climate change?
I am interested in understanding how to tell if a single day of 'abnormal' weather is due to climate change or not.
From my understanding, you would compare this day's weather to historical weather. However, is a day-to-day comparison accurate? Or would you have to take a large sample, like week, month, or season?
It seems to me like it would be hard to compare one day to another historical weather average, because the day could just be an outlier not due to climate change...
In other words, how do you tell if a seasonal storm or really hot summer day is due to climate change?
• I feel uncomfortable linking climate change and individual weather events, since by definition they happen on such disparate time scales. Changes in the occurrences of weather patterns would be a much easier intellectual jump to make, but I'm not sure we have enough data from teleconnection indices over the years to actually attribute their variations to climate change. While I understand that it's easier to help the general public grasp what's going on with tangible things that happen in their backyard, I think it's actually very difficult to do what you're asking with simple statistics. – Ian Jun 12 '15 at 18:10
• Although don't get me wrong, there have been successful efforts to do so with large outlier events such as Hurricane Sandy on the United States east coast. What I'm saying is that it's a lot harder to take a couple of weeks of moderate heat wave and argue for climate change as "the cause". – Ian Jun 12 '15 at 18:18
• This question is more or less equivalent to saying "We have a (known) loaded die. We just threw a six. What is the chance that this is due to the loading?" - the answer is, who knows? A single datum is not enough. If you throw 20 times and get 10 sixes, then you can start making statements about how the loading is affecting the results (with appropriate uncertainty estimates). – naught101 Jun 17 '15 at 15:43
I am interested in understanding how to tell if a single day of 'abnormal' weather is due to climate change or not.
You can't. The day-to-day, locale-to-locale variations in weather are huge compared to the changes that occur from year-to-year and decade-to-decade, averaged over the surface of the Earth. Most of those climatological variations are periodic in nature (e.g., El Niño Southern Oscillation, North Atlantic Oscillation; note the "oscillation" in the names). Climate change is the small but steady secular change in climate.
Another way to look at it: Even the most pessimistic of projections is for a 4°C increase over the course of a hundred years. That corresponds to 0.04°C change per year. Weather can bring about a 4°C change over the course of a few minutes. Any one weather event can be attributed to the noisy, flukey nature of weather. Climate change begins to become apparent when one looks at weather events that occur all over the face of the Earth, and over the course of several decades.
• I feel compelled to share this link: www2.ametsoc.org/ams/index.cfm/publications/… . We can calculate probabilities that an event would not have occurred but for climate change. – farrenthorpe Jun 1 '15 at 22:19
• @farrenthorpe - The cited papers that did find a correlation did so for extreme, month-long heat waves. For other long-term extrema (e.g., California drought), the evidence that global warming is to blame is less clear. For single day extreme weather events, the evidence is even less clear. There's just too much natural variation in the weather during one day at a specific locale to be able to attribute that to global warming (so far). – David Hammen Jun 2 '15 at 12:45
• David I completely agree! – farrenthorpe Jun 2 '15 at 14:52
• Right. It's called climate change, not weather change. – hichris123 Jun 4 '15 at 20:39
• @David Hammen The mean alone seems to me to be a too limited definition of climate change. If the variance of conditions or the frequency of extreme events changes significantly, most people would consider this climate change. Good answer though (made me think.) – Mark Rovetta Jun 11 '15 at 2:06
The observation of even a few exceptional storms can provide quantitative evidence for climate change. Doing so however requires observing and learning as much as possible about how storms work, and not merely counting storms. With some understanding of how atmospheric systems and storms operate, we have other observational information from physics, chemistry, and planetary science that we can also apply to the question. We should use all the information available.
Bayes rule can help us do this objectively. Here, P(A|B) can be the posterior probability that the climate has changed (A), given the observation of the exceptional storm (B). P(A) is the prior probability for climate-change. P(B|A) is the likelihood that storm B occurs given a changed climate (e.g. warmer). k1 and k2 are constants of proportionality. Let (!A) represent no change of climate. Then we can write two equations for Bayes Rule.
P(A|B) = k1 P(A) P(B|A)
P(!A|B) = k2 P(!A) P(B|!A)
The prior odds for a change in climate is P(A):P(!A), and let's assume the prior odds for and against climate change are even, 1:1.
Now if we use all the available information we have about atmosphere physics and chemistry, and we observe storm B in detail, we can make an informed estimate of the ratio of likelihoods P(B|A):P(B|!A). Let's assume that B is an exceptional storm and ten times more likely to occur when the atmosphere is warmer.
If a storm of type B is observed in actuality, the posterior odds can be applied, and the odds should be updated in favor of climate change to 10:1.
What this means is that the observation of exceptional (extreme) events should inform our opinion on climate change. This approach is most successful however when we have many, and many types, of information on how the atmosphere and climate works.
Measurements taken on a day during extreme weather will of course be outliers, but could also provide important information about how In a Warming World, Storms May Be Fewer but Stronger. We should not assume outliers always represent 'noise' that needs to be averaged away.
It does not seem unreasonable to ask the question whether or not we are seeing effects of climate change in the weather. Thursday morning I read the following on the US National Weather Service forecast discussion page:
CLIMATE...THERE IS A SMALL CHANCE THAT SEATTLE WILL GET TO 90 DEGREES ON SUNDAY WHICH WOULD TIE THE RECORD FOR THE DAY. SINCE RECORDS STARTED IN SEATTLE AT THE FEDERAL BUILDING DOWNTOWN IN 1891 THERE HAVE BEEN ONLY SIX DAYS IN THE FIRST WEEK OF JUNE WITH A HIGH TEMPERATURE OF 90 DEGREES OR MORE. THE LAST TIME IT HAPPENED WAS JUNE 4 2009 WITH A HIGH OF 91 DEGREES. FELTON
Whether or not the temperature exceeds 90 degrees next Sunday, I wouldn't dismiss the question of what mechanisms might be operating out-of-hand. We should try to estimate how much what happens supports (or not) hypotheses based upon physical processes.
For example, use Bayes rule reasoning to estimate the change in posterior odds for a mechanism A that increases the likelihood of P(B|A) and P(!B|!A) by 15%, and decreases the likelihood of P(!B|A) and P(B|!A) by 15%. $$\delta = 0.15$$ Then the likelihood is given by the following, where the record is exceeded for a years and not exceeded for b years. $$k \times\left[ \frac{P(B \parallel A)}{P(B \parallel !A)} \right] ^{a}\times \left[ \frac{P(!B \parallel A)}{P(!B \parallel !A)} \right] ^{b}$$ $$k \times\left[ \frac{1 + \delta}{1 - \delta} \right] ^{a-b}$$
Let's also look at the support if the record is also exceeded in 2016 and 2017.
Change in posterior odds in favor of A(0.15)
(a) Record Not Exceeded 2015 - Posterior odds decrease from prior odds by 35%. By this method it is possible that additional observations eventually discredit the hypothesis. (b) Record Exceeded in 2015 - Posterior odds increase by 35%. (c) Record Exceeded in 2015 & 2016 - Posterior odds increase by 83%. (d) Record Exceeded in 2015 & 2016 & 2017 - Odds increased by 148%.
Finally, the advantages of using this approach, rather than a frequentist approach, can be more easily understood by considering how it could be applied in practice. For example, how a Penn Cove shellfish business might use these calculated changes of climate-change probability to self-insure their farm. The owner of a shellfish farm may understand that climate change poses a risk to her business, and has hedged for the cost of the odd bad year due to this by putting an extra 100 dollars into an account each month. She has found this has worked well in the past, with the account growing to be large enough to cover costs in bad years, without ballooning too large.
How might she use the information that Seattle is breaking temperature records (and the probability of A may be changing) to adjust this amount? If the temperature record is exceeded in 2015, she may decide to increase the amount to 135 dollars per month, and if the record is exceeded again in 2016 she may decide to increase it to 183 dollars, and if it is exceeded again in 2017 increase it to 248 dollars. The advantage is the Bayes method helps her make a decision to act sooner than by using a frequentist approach. This way she may be able to prepare for future costs.
• In other words, can you reject the null hypothesis? In this case, the null hypothesis is that the variations are due to "just weather". Rejecting the null hypothesis for a single one day weather event at a specific locale today is highly problematic. Global warming hasn't done that much to change the weather yet. By 2050, rejecting the null hypothesis for a season, maybe a few weeks will not be a problem. By 2100, even isolated events will most likely be attributable to global warming. – David Hammen Jun 9 '15 at 13:06
• @DavidHammen the frequentists' null-hypothesis testing isn't really used much within the Bayesian paradigm: although it was a fashionable method in the second half of the twentieth century, it's increasingly hard to defend, both in theory and in practice. – EnergyNumbers Jun 9 '15 at 14:58
• @DavidHammen If observations result in a likelihood <1, posterior probability < prior. There is no fundamental “null-hypothesis” with special significance. You could construct a “just weather” hypothesis and base your model on stochastic process you assumed had a Normal Distribution, but you would still need to explain variance based on either “weather physics” or a principle (e.g. maximum likelihood fit to data.) I expect in that case the frequentist approach would be more direct. Bayes is useful when you have other prior information (hopefully based in fact) that needs to be included. – Mark Rovetta Jun 10 '15 at 0:57
The answers to such questions come down to statistical analysis; particularly statistical significance and statistical hypothesis testing.
When conduction such tests, care must be taken in choosing the data for the analysis: don't compare summer temperatures with winter temperatures. Also, the amount of data used needs to be large enough for the result to be significant. Don't compare the recent apparently anomalous temperature with temperatures for the past 2 or 5 years.
The other thing is seasons don't always start and finish according to human timetables. They vary, sometimes they start early or late so you need to have some leeway when choosing to data to analyze.
Initially you'd get the day of the month of the anomalous data, to account for seasonal variability over the years, you'd then decide how far either side of that date you wanted to compare data: maybe a week or two, possibly three or four.
You would then calculate the required statistical parameters, such a mean, standard deviation, standard error of the data, without the anomalous reading.
You then decide what confidence level you need for your analysis: 95%, 99.9% or higher. Then you do the hypothesis testing. One such test would be to test the anomalous reading with the mean of the historical data.
If the result of the hypothesis testing is that the anomalous reading is within the variability of the historical data then it's not due to climate change. If however, the anomalous reading falls outside the range of normal variability for the historical data then you start to look for reason why, of which climate change can be one of many reasons.
• Good answer, may I add...every 11.1 years (on average), the magnetic polarity of the sun reverses. Therefor comparisons of temperatures within 22.2 years of each other also lie outside of the apples to apples realm. – Alistair Riddoch Jun 7 '15 at 19:10
Individual days weather variations can only be considered anomalies, since the theory of global climate change/global warming/anthropogenic CO2 related greenhouse effect acceleration only is theorized and described as affecting the planet over the course of decades and/or centuries and globally, not individual days or individual locations.
With that said
The weather variations in history have been far greater than any that have occurred during man's brief history of recording the weather, and far greater than any that have occurred during mans use of fossil fuels:
Answer: Weather is weather. Climate change is a "theory", not a "premise". Therefor with the question depends on the factual existence of climate change, which I believe is incorrect since the foundation knowledge doesn't support the extrapolation of measurement and observation to the extent some people have taken it.
Please consider that between quantum theory and the standard model, there is a reconciliation that has not been found. The balance point between that which is miniscule, and that which is massive has NOT been found. Our knowledge of physics is incomplete.
Example: Quantum theory vs Standard model (which dictates an origin where the entire universe existed momentarily in a singluarity smaller than a grain of sand that then expanded to create the universe)... No Big Bang? Quantum equation predicts universe has no beginning
So where we have observed and believe we understand what we perceive as "matter", we actually only understand the nature of 5% of what exists. The remainder, dark matter and dark energy fill the void between electrons and nuclei. Similarly between stars and planets. This is NASA's web site where the approximate quantification of Dark Matter and Dark Energy is published material:
NASA Quantification of Dark Energ - Dark Matter
By fitting a theoretical model of the composition of the Universe to the combined set of cosmological observations, scientists have come up with the composition that we described above, ~68% dark energy, ~27% dark matter, ~5% normal matter.
Without an understanding of the nature of the balancing act that lies in the middle between tiny and huge, believing we understand energy retention, energy dissipation, and the triggers involved to the point that we can properly determine the way the planetary energy system works is ludicrous. Physics in it's current state of development does NOT support our belief in CO2 based global warming.
With the researchers at CERN currently considering and looking for a FIFTH fundamental force, HOW can we believe we know how our planet actually works.CERN Looking for FIFTH fundamental Force
Engineers have spent more than a year upgrading the LHC's systems. The hope is that this will allow a new realm of physics to be opened up!!
And when it comes to choosing the mathematical model that best fits our observations, we are still waffling between String Theory 11D, String Theory 12D. And we are attempting to determine if said strings even exist?!?! Discussion of the number of dimensions that exist
which posits that there are 10 or 11 dimensions in our universe.
So with our very OBVIOUS failure to understand physics, 95% of it being a dark mystery, and waffling over the number of fundamental forces, whether the origin of the universe was a big bang, how many dimensions there are, etc....HOW is it, we can ask ourselves questions like, is daily weather indicative of climate change, when we should be asking, is it viable to believe we have mastered knowledge of the physics of our existence, to the point necessary to be sure that climate change exists.
We cannot.
• Dark matter and dark energy don't prove that the standard model is wrong, only that it's perhaps incomplete. That's like saying radioactivity proves chemistry wrong - which it doesn't. Chemistry is correct science for electron orbitals and bonds, it just doesn't cover nuclear bonds or the strong or weak force. And neither has anything to do with climate change, which is simple in principal, but with all the moving parts of the earth's atmosphere and oceans, it's enormously complex mathematically. Your argument is unrelated to the subject at hand. – userLTK Jun 6 '15 at 7:30
• That doesn't actually answer the question posted by the OP... In addition to all of the other problems in this answer – Gimelist Jun 7 '15 at 3:51
• Thank you userLTK I agree.... it's enormously complex math. We haven't gotten the math of any single part of the universe nailed down mathematically. How we then think we can figure out the planet based on 50 - 60 years of dubiously adjusted record keeping is beyond common sense. – Alistair Riddoch Jun 7 '15 at 19:07
• @AlistairRiddoch We cannot figure out the exact climate in 60 years with small uncertainties. Nobody seriously claim we can. We get lots of details wrong. But the big picture, that it's getting warmer with added CO₂, is pretty clear. Where, how, and when precipitation patterns will change is a much more difficult story. – gerrit Jun 9 '15 at 10:26
• @AlistairRiddoch - this feels like it's approaching one of those political debates which is counter productive but the "dubiously adjusted" record keeping statement you made isn't correct. Records aren't being adjusted, estimates based on those records are, and that's fine, Trying to reconstruct an estimated global average temperature based on daily highs and lows and precipitation and not much else isn't exact science. Reconstructing annual estimates from years past is one of those things that SHOULD be open to adjustment. It' not "dubious" at all. – userLTK Jun 10 '15 at 5:12
|
2019-07-19 16:45:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4463866353034973, "perplexity": 900.3991209127505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526324.57/warc/CC-MAIN-20190719161034-20190719183034-00030.warc.gz"}
|
http://www.crm.sns.it/person/3919/?year=
|
# Giovanni Coppola
Università di Salerno
Scientific interests: Analytic Number Theory, Number theory, Teoria dei Numeri
Talk: On some lower bounds of some symmetry integrals
Talk: Sieve functions in almost all short intervals
|
2018-12-12 03:15:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183345437049866, "perplexity": 6465.507926934079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823712.21/warc/CC-MAIN-20181212022517-20181212044017-00158.warc.gz"}
|
https://www.semanticscholar.org/paper/Quantum-geodesic-flows-and-curvature-Beggs-Majid/b9a3324c20e588bdba8e6c5f86b0096d750c1510
|
• Corpus ID: 246063829
# Quantum geodesic flows and curvature
@inproceedings{Beggs2022QuantumGF,
title={Quantum geodesic flows and curvature},
author={Edwin J. Beggs and Shahn Majid},
year={2022}
}
• Published 17 January 2022
• Mathematics
We study geodesics flows on curved quantum Riemannian geometries using a recent formulation in terms of bimodule connections and completely positive maps. We complete this formalism with a canonical ∗ operation on noncommutative vector fields. We show on a classical manifold how the Ricci tensor arises naturally in our approach as a term in the convective derivative of the divergence of the geodesic velocity field, and use this to propose a similar object in the noncommutative case. Examples…
## References
SHOWING 1-10 OF 27 REFERENCES
Noncommutative Riemannian and Spin Geometry of the Standard q-Sphere
We study the quantum sphere as a quantum Riemannian manifold in the quantum frame bundle approach. We exhibit its 2-dimensional cotangent bundle as a direct sum Ω0,1⊕Ω1,0 in a double complex. We find
Gravity induced from quantum spacetime
• Mathematics, Physics
• 2014
We show that tensoriality constraints in noncommutative Riemannian geometry in the two-dimensional bicrossproduct model quantum spacetime algebra [x, t] = λx drastically reduce the moduli of possible
QUANTUM GEODESICS ON λ-MINKOWSKI SPACETIME
We apply a recent formalism of quantum geodesics to the well-known bicrossproduct model λ-Minkowski quantum spacetime [xi, t] = iλpx with its flat quantum metric as a model of quantum gravity
Geometric Dirac operator on the fuzzy sphere
• Mathematics
Letters in Mathematical Physics
• 2022
We construct a Connes spectral triple or ‘Dirac operator’ on the non-reduced fuzzy sphere $$\mathbb {C}_\lambda [S^2]$$ C λ [ S 2 ] as realised using quantum Riemannian geometry with a central
Noncommutative geodesics and the KSGNS construction
• E. Beggs
• Mathematics
Journal of Geometry and Physics
• 2020
Spectral triples from bimodule connections and Chern connections
• Mathematics
• 2015
We give a geometrical construction of Connes spectral triples or noncommutative Dirac operators $D$ starting with a bimodule connection on the proposed spinor bundle. The theory is applied to the
Differential calculus on compact matrix pseudogroups (quantum groups)
The paper deals with non-commutative differential geometry. The general theory of differential calculus on quantum groups is developed. Bicovariant bimodules as objects analogous to tensor bundles
The quantum structure of spacetime at the Planck scale and quantum fields
• Physics
• 1995
We propose uncertainty relations for the different coordinates of spacetime events, motivated by Heisenberg's principle and by Einstein's theory of classical gravity. A model of Quantum Spacetime is
|
2022-07-06 22:46:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5399188995361328, "perplexity": 2312.945313520166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104678225.97/warc/CC-MAIN-20220706212428-20220707002428-00204.warc.gz"}
|
https://math.stackexchange.com/questions/2472923/about-counter-example-for-quantifier-based-statement-logic
|
# about counter-example for Quantifier based statement (logic)
I have a logic based statement of this form:
$$(\forall x\ ,p(x))\to (\exists y\ , q(x,y)))$$
I am trying to find out if this statement is TRUE or False. I have 2 methods of proof, and one leads to FALSE, the other leads to TRUE. I have a problem hence!
MY METHOD PART 1:
So for the first part of this logic equation I call it P and the second part I call it Q, so i get an overall implication equation:
$$P \to Q$$
So I thought I could do a counter-example pick an x-value in order to set P to be TRUE and then pick a y-value to make Q-false. That way in an implication statement if P=TRUE and you have Q=FALSE, then overall the implication is FALSE which would disprove the overall statement.
I have seen on some other site that you can do this in order to disprove this.
BUT, I now have doubts.
MY METHOD PART 2:
Because in the first part it has the "For All" quantifier which in my notation is inside P. SO now i will take the "For ALL" into account. . I can show that the function inside P, p(x), I can find a counter-example x-value to show that the left side, capital P as FALSE.
NOW in an IMPLICATION if the first part P is FALSE, then based on truth-table for this, it does not matter if Q is TRUE or FALSE. Because as soon as P is False it makes the whole implication statement TRUE!!!
SO now I did 2 Methods, the first one shows that the overall statement is "FALSE" and I have a second Method that shows that the overall statement is "TRUE".
SO I am not sure which method is the problem.
Hope someone can clarify this.
• So your statement is true under some interpretations and false under some other interpretations. This is perfectly normal. It doesn't make sense to ask if it's true or false unless an interpretation is specified. Now, if the question was whether the statement was valid, or whether it was satisfiable, that's another story. Your examples show that it is satisfiable but not valid. – bof Oct 15 '17 at 5:42
• By the way, the variable $x$ occurs both free and bound in your formula. Not that there's anything wrong with that. There is something wrong with having more right than left parentheses, but I suppose that's just a typo. – bof Oct 15 '17 at 5:44
• I just put parentheses to put some 'style' or to group the whole thing on one side and the other. I did this with q == q(x,y) just to say it contains both variables as in its a function of these 2 variables. See my specific example below for some context. – Palu Oct 15 '17 at 6:04
• So i will drop the unnecessary brackets here: $$\forall x\ ,p(x) \to \exists y\ , q(x,y)$$ , does this make a difference in terms of the scope for the "FOR-ALL". – Palu Oct 15 '17 at 6:06
It only gets a truth value when you select an interpretation to evaluate it, which fixes which universe the quantifiers range over and when the $p$ and $q$ predicates are true.
And even so, you also need to choose a value for the appearance of $x$ on the right-hand side of $\to$, which is not in scope of the $\forall x$.
|
2021-03-06 12:32:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5876238942146301, "perplexity": 202.71322947455243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374686.69/warc/CC-MAIN-20210306100836-20210306130836-00354.warc.gz"}
|
https://mathematica.stackexchange.com/questions/174972/integral-of-a-real-power-of-a-quadratic-polynomial
|
# Integral of a real power of a quadratic polynomial
I am trying to do the following definite integral:
Integrate[(a x^2 + b x + c)^k, {x, 0, 1}]
where $a,b,c,k \in \mathbb{R}$
Now, the problem is Mathematica takes forever to produce a result which I understand is happening because of some complicated simplification procedure.
Note that the indefinite integral is done in a flash:
Integrate[(a x^2 + b x + c)^k, x]
(2^(-1 + k) (b - Sqrt[b^2 - 4 a c] + 2 a x) (( b + Sqrt[b^2 - 4 a c] + 2 a x)/Sqrt[ b^2 - 4 a c])^-k (c + x (b + a x))^k Hypergeometric2F1[-k, 1 + k, 2 + k, (-b + Sqrt[b^2 - 4 a c] - 2 a x)/( 2 Sqrt[b^2 - 4 a c])])/(a (1 + k))
After which the following:
(% /. x -> 1 - % /. x -> 0) //PowerExpand //Simplify
does give a result, but its overly complicated and I think it can be simplified further. The reason I think so is that I have not been able to exhaust the simplification situation with say FullSimplify, that is taking forever to return a result.
Can anyone suggest any workaround here?
Edit 1:
I made a simple mistake by not putting some brackets.
((% /. x -> 1) - (% /. x -> 0)) //PowerExpand //FullSimplify
does work now.
However, I am still curious why the definite integral doesn't work in the first place. Thoughts and comments will be appreciated.
Edit 2:
Given that my original problem is solved, I would like to ask another related question. What if I wanted to do the 2-variable generalization of the same:
Integrate[(a*x^2 + b*y^2 + c x y + d x + f y + g)^k, {x, 0, 1}, {y, 0,
1}, Assumptions -> (a*x^2 + b*y^2 + c x y + d x + f y +
g) \[Element] Reals && Im[b] == 0 && Im[a] == 0 && Im[c] == 0 &&
Im[d] == 0 && Im[f] == 0 && Im[g] == 0 && Im[k] == 0]
After a while, Mathematica returns the input back. Can this be done analytically, at all?
• you should add some brackets (% /. x -> 1) - (% /. x -> 0) // PowerExpand // Simplify – Ulrich Neumann Jun 9 '18 at 14:09
• @UlrichNeumann Yeah you are right! – Subho Jun 9 '18 at 14:10
• Try: Integrate[(a*x^2 + b*x + c)^k, {x, 0, 1}, Assumptions -> {a > 0, b > 0, c > 0, k > 0, k \[Element] Integers}] – Mariusz Iwaniuk Jun 9 '18 at 16:19
• Unless k is a positive integer, your integrand will have singularities at the zeros of your quadratic. Those potentially make direct evaluation from the indefinite integral unreliable. I believe that Mathematica's difficulty here lies in the difficulty of avoiding trouble with singularities. – John Doty Jun 10 '18 at 9:23
Help Integrate and it will be faster by powers of ten.
The decicive hint here is Assumptions -> (a*x^2 + b*x + c) \[Element] Reals . I think this helps a lot in internal dealing with powers of k.
Further, restriction to k > 0 is faster than for real k.
(dint01[a_, b_, c_, k_] =
Integrate[(a*x^2 + b*x + c)^k, {x, 0, 1},
Assumptions -> (a*x^2 + b*x + c) \[Element] Reals && Im[b] == 0 &&
Im[a] == 0 && Im[c] == 0 && k > 0]); // Timing
(* {13.485, Null} *)
(dint02[a_, b_, c_, k_] =
Integrate[(a*x^2 + b*x + c)^k, {x, 0, 1},
Assumptions -> (a*x^2 + b*x + c) \[Element] Reals && Im[b] == 0 &&
Im[a] == 0 && Im[c] == 0 && Im[k] == 0]); // Timing
(* {33.828, Null} *)
In both cases you get the same result, but with more restrictions for dint01[a,b,c,k] .
dint02[a, b, c, k]
(* ConditionalExpression[(1/(a (1 + k)))
2^(-1 + k) (1 + b/Sqrt[b^2 - 4 a c])^-k ((
2 a + b + Sqrt[b^2 - 4 a c])/Sqrt[
b^2 - 4 a c])^-k (c^
k (-b + Sqrt[b^2 - 4 a c]) ((2 a + b + Sqrt[b^2 - 4 a c])/Sqrt[
b^2 - 4 a c])^
k Hypergeometric2F1[-k, 1 + k, 2 + k,
1/2 - b/(2 Sqrt[b^2 - 4 a c])] - (a + b + c)^
k (1 + b/Sqrt[b^2 - 4 a c])^
k (-2 a - b + Sqrt[b^2 - 4 a c]) Hypergeometric2F1[-k, 1 + k,
2 + k, (-2 a - b + Sqrt[b^2 - 4 a c])/(
2 Sqrt[b^2 - 4 a c])]),
(Re[(b + Sqrt[b^2 - 4 a c])/a] >= 0 ||
2 + Re[(b + Sqrt[b^2 - 4 a c])/a] <= 0 ||
(b + Sqrt[b^2 - 4 a c])/a \[NotElement] Reals) && (Re[(-b + Sqrt[b^2 - 4 a c])/a] == 0 ||
Re[(b - Sqrt[b^2 - 4 a c])/a] >= 0 ||
(Re[(-b + Sqrt[b^2 - 4 a c])/a] >= 2 &&
2 + Re[(b - Sqrt[b^2 - 4 a c])/a] <= 0) ||
(b - Sqrt[b^2 - 4 a c])/a \[NotElement] Reals)] *)
• Please check the updated question. – Subho Jun 10 '18 at 13:42
• @Subho95 . In the future avoid shifting the goalpost. People feel frustrated to contribute effort to a specific question to then receive new requests not mentioned in the original question. You make the work others have done on your behalf seem irrelevant. Mma.SE is not a private consulting service but a public Q&A forum. Please, out of respect to the people trying to help you, either ask the question you need to ask properly the first time, or ask a new question, including you coded equations properly formatted. Cheers! – Mariusz Iwaniuk Jun 10 '18 at 14:30
• @MariuszIwaniuk I have accepted the answer and acknowledged the help. This was just a follow up question, that is very much related and I thought would find a place in this post just as well. It's not mandatory for the original answerer to answer back but would be nice as he/she is already acquainted with the question. – Subho Jun 10 '18 at 14:37
|
2019-08-20 10:50:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.291710764169693, "perplexity": 2248.6455024290976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315321.52/warc/CC-MAIN-20190820092326-20190820114326-00097.warc.gz"}
|
http://www.sciencechatforum.com/viewtopic.php?f=10&t=10343
|
## The Role of Mathematics in Science (/Physics)
Discussions on the philosophical foundations, assumptions, and implications of science, including the natural sciences.
### The Role of Mathematics in Science (/Physics)
It is possible to have absolute knowledge with regards to the truth of mathematical propositions concerning abstract entities. Knowledge if the physical world however, may only be obtained empirically. This means there is always a degree of uncertainty with regards to our knowledge of the physical world, for though an experiment relentlessly turns out one way, why should we not suppose that perhaps another day it will turn out the other?
But is it fair to say that one could feasibly obtain absolute knowledge about the physical world, by associating mathematical abstractions about which we can determine propositions of unassailable truth, with features of the physical world? Achieving a perfect description of physical reality is then just a question of finding better and better abstractions, until the abstractions themselves recreate physical reality. Whilst this may not be possible with regards to mathematical abstractions in the sense of contemporary and historical mathematics, computer code is deterministic and in this sense, is not a computer simulation also a mathematical theorem? And is there not more hope for computer simulation in achieving this task of recreating reality than conventional mathematics? Or might it be necessary to find a combined approach, with mathematical abstractions describing entities like space and time, and computer code simulating the actions of matter under such a mathematical framework?
How does / might mathematics and computer simulation describe non-deterministic (e.g. quantum mechanical) features of the universe?
kStro
Member
Posts: 154
Joined: 16 Apr 2007
Location: Europe
Certainly math, as a language, is self-contained and allows for a degree of concreteness. In fact, as far as I know, math is the only branch of science for which there is an unambiguous definition of "proof". Computer simulations and models are also, as you say, deterministic and ultimately simply extensions of pure math. Consequently, since these things are deterministic, pure deduction, etc., applies.
Where this all starts to weaken is when there are attempts to link these things with "the real world". It may well be that, for example, one ion of sodium plus one ion of chloride equals to ions still or, alternately, one molecule of salt. The problem is that, for quantification to work and tell us things, we need to start being careful about what is being quantified (and then manipulated mathematically, etc.), how it is being quantified, etc. Mathematical and computer simulations and models are simplifications of the "real world" so that we can say some things about that real world. We can and do try to make these models more accurate by adding in more factors, more data, more algorithms, etc., but ultimately it is not the computer or the equations, etc., that decides what goes into all of this, it is the individual researcher, relying on judgement, etc., who decides what counts as appropriate date, how it fits in, decides on the appropriate formulas, etc. And then that researcher (hopefully) tests that data against the real world observations, designs tests which ideally employ independent factors which in turn may be different simplifications of the real world, etc. Math may well be a rigorous application of a specific language and structure but it is individuals who decide what numbers and formulas to use etc.
Forest_Dump
Forum Moderator
Posts: 8134
Joined: 31 Mar 2005
Location: Great Lakes Region
Forest -
Ions are fairly complicated systems. But in linking mathematical abstractions with much simpler, fundamental constituents of the universe, might one not hope that a goal of perfect knowledge might be realized? Once one has perfect knowledge of the fundamental constituents of reality, one can simulate up to describe more complicated things, like ions and molecules and cells and animals and people.
I suppose it depends on whether one has confidence that the fundamental constituents of nature are "simple" or behave in a way that allows us to make the required isomorphisms between mathematics and physics. The quantum regime seems to behave fairly horrifically; I don't know how much confidence that affords one in the realization of such a program. But maybe one can be more hopeful if one considers that advances in mathematics are perquisite to advances in physics, and maybe the mathematical developments that are required to make sense of quantum physics just haven't been revealed yet.
Of course, there's also a requirement in believing that there exist fundamental constituents to physical reality; that at some eventual level you can't physically scale down any further.
kStro
Member
Posts: 154
Joined: 16 Apr 2007
Location: Europe
Quantum mechanics can be formalised within 20th century mathematics. The development of quantum computers may make it possible to simulate large quantum systems.
Whether quantum mechanics is the end all and be all of the universe is a question for physics, not mathematics.
Phalcon
Active Member
Posts: 1342
Joined: 24 Dec 2007
I don't pretend to know anything but I believe some (e.g. Penrose) still think quantum physics needs fundamental modification that it may sit better with relativity. Quantum gravity isn't really figured out yet I don't think.
kStro
Member
Posts: 154
Joined: 16 Apr 2007
Location: Europe
It is true that currently there is no good theory of quantum gravity. It will require new physics and new mathematics to create one. However there should be no need to rewrite the foundations of mathematics.
Phalcon
Active Member
Posts: 1342
Joined: 24 Dec 2007
### Re: The Role of Mathematics in Science (/Physics)
kStro wrote:It is possible to have absolute knowledge with regards to the truth of mathematical propositions concerning abstract entities.
The short and simple answer is that by definition metaphysics can never be proven so the answer is yes. It is possible, but whether or not it will ever be achieved or proven is another story altogether.
It reminds me of Ursula LeGuin's short story, "The Lathe of Heaven". In it the dreams of the main character, George, keep changing reality and nobody but his psychologist knows this happening. At one point George says that for all he knows there are other people who can do the same thing, and reality is constantly being pulled out from under all of us all the time.
Does the universe obey mathematics, or do we change it anytime it disobeys mathematics? Is quantum mechanics a description of the "real" world, or of our collective unconscious? Such questions are beyond anything but statistical truth and, as the wit once said, "There are lies, damn lies, and then statistics."
wuliheron
Member
Posts: 364
Joined: 16 Nov 2008
Location: Virginia, USA
I posted an argument along similar lines here: http://www.philosophychatforum.com/bulletin/viewtopic.php?t=12014
linford86
Active Member
Posts: 1933
Joined: 14 Apr 2009
Location: Planet Earth
### Re: The Role of Mathematics in Science (/Physics)
kStro wrote:Whilst this may not be possible with regards to mathematical abstractions in the sense of contemporary and historical mathematics, computer code is deterministic and in this sense, is not a computer simulation also a mathematical theorem?
I have to say, this is actually a pretty profound insight. In general, it isn't clear what the relationship of computer code to mathematical theorems is. There, is, however, a beautiful (and rather abstract) special case. Check out the Curry-Howard Isomorphism. And if you are like me and found this totally awesome then you might try learning haskell.
kStro wrote:And is there not more hope for computer simulation in achieving this task of recreating reality than conventional mathematics? Or might it be necessary to find a combined approach, with mathematical abstractions describing entities like space and time, and computer code simulating the actions of matter under such a mathematical framework?
I have a friend who is a graduate student in physical chemistry, and I'll tell you what she tells me regarding this. She says that it is now rather typical to hear a talk in physical chemistry where the authors begin by deriving foundational results in quantum mechanics, and then eventually turn to computer simulations to supplement their data. I understand that this is also typical in evolutionary dynamics (in particular, Martin Nowak of Harvard is a master of finding closed form solutions when possible, and then turning to computer simulations when a closed form solution is not forthcoming). I have seen examples of authors in theoretical economics also employing this approach.
xcthulhu
Resident Member
Posts: 2218
Joined: 14 Dec 2006
Location: Cambridge, MA
Blog: View Blog (3)
### Re:
Forest_Dump wrote:In fact, as far as I know, math is the only branch of science for which there is an unambiguous definition of "proof".
While I agree that the standards for proof in mathematics are largely homogenous... it might not be as completely homogenous as you think. This summer I will be involved in a research project known as Flyspeck. The idea is to produce a completely formal proof, checked by a computer for correctness, of the Kepler Conjecture. The Kepler conjecture states the the densest sphere packing in 3-space is $\frac{\pi}{\sqrt{18}}$. It involves a computer checking that several thousand linear programming problems involving 100 variables apiece are infeasible, among a dizzying number of other daunting results. The author/project-coordinator, Thomas Hales, expects that the whole project will take 20 man-years to complete (that is, 20 dedicated researchers 1 years, or 1 dedicated researcher 20 years). He feels that roughly 10 man-years have been dedicated to the project already, and that the whole project is roughly 50% complete.
Not all philosophers of mathematics agree that, when completed, Flyspeck will really give a proof. For instance, Reuben Hersh writes in the beginning of his anthology 18 Unconventional Essays on the Philosophy of Mathematics that
Reuben Hersh wrote:I do not know anyone who thinks either that this project can be completed, or that even if claimed to be complete it would be universally accepted as a convincing proof of Kepler’s conjecture.
xcthulhu
Resident Member
Posts: 2218
Joined: 14 Dec 2006
Location: Cambridge, MA
Blog: View Blog (3)
|
2016-05-31 00:07:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6694453954696655, "perplexity": 893.0613556522434}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051151584.71/warc/CC-MAIN-20160524005231-00060-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://www.biostars.org/p/416004/
|
How to add Gaussian fitting to a genome coverage plot?
0
0
Entering edit mode
16 months ago
Hi,
I am making a genome coverage plot using python and trying to add a gaussian fitting. What I want is like this: expected gaussian fitting But when I use my code to plot, it gives me something like this: actual result
Please only use the red and black curve and ignore the x,y label and the green&blue curves in the first image. Here's my code:
cov_roi = cov[4569000:4666000] #select the genome region of interests (roi) using pandas dataframe
x = cov_roi.loci
y = cov_roi.coverage
n = len(x)
mean = sum(y)/n
sigma = (sum((y-mean)**2)/n) **(0.5)
mu = np.median(x)
def gaussian(x, mu, sig):
return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
plt.figure(figsize=(6,4))
plt.plot(x,y, "r." ,label = 'MAPLE', ms=0.3,color = 'r')
plt.plot(x, gaussian(x, mu, sigma), 'k-', alpha=0.5,lw=2, label='Gaussian fitting')
plt.ylim((-500, 10000))
plt.xlabel('Base Position (Kbp)')
plt.ylabel('Coverage')
plt.title('Coverage Map')
plt.show()
Could anybody help me?
Thank you so much!
genome python gaussian fitting • 210 views
|
2021-05-11 20:44:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2967057526111603, "perplexity": 14619.490956909685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00124.warc.gz"}
|
https://researchmap.jp/7000013503/published_papers/21274049
|
2015年9月
# Expected utility and catastrophic consumption risk
INSURANCE MATHEMATICS & ECONOMICS
• Masako Ikefuji
• ,
• Roger J. A. Laeven
• ,
• Jan R. Magnus
• ,
• Chris Muris
64
306
312
DOI
10.1016/j.insmatheco.2015.06.007
ELSEVIER SCIENCE BV
An expected utility based cost-benefit analysis is, in general, fragile to distributional assumptions. We derive necessary and sufficient conditions on the utility function of consumption in the expected utility model to avoid this. The conditions ensure that expected (marginal) utility of consumption and the expected intertemporal marginal rate of substitution that trades off consumption and self-insurance remain finite, also under heavy-tailed distributional assumptions. Our results are relevant to various fields encountering catastrophic consumption risk in cost-benefit analysis. (C) 2015 Elsevier B.V. All rights reserved.
Web of Science ® 被引用回数 : 8
リンク情報
DOI
https://doi.org/10.1016/j.insmatheco.2015.06.007
Web of Science
|
2022-08-16 19:32:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075299263000488, "perplexity": 8215.835290214656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00728.warc.gz"}
|
https://cs.stackexchange.com/questions/43943/why-a-language-specified-by-a-regular-expression-is-not-a-complement-of-a-given
|
# Why a language specified by a regular expression is not a complement of a given language?
I am taking a compiler MOOC online on my own time. The class is self paced. There is a question with an answer but I can't understand why the answer is correct.
Here is the question.
For any language $$L$$, the complement of the language (usually written $$L′$$) is defined as the language that consists of all the strings that are NOT in $$L$$. That is,
$$L′=Σ^*−L$$
It turns out that the complement of any regular language is also a regular language. Which of the following regular expressions define a language that is the complement of the language defined by the regular expression: $$1(01)^*$$?
1. $$(10)^*+\big((10)^*0(0+1)^*\big)+\big(1(01)^*1(0+1)^*\big)$$
2. $$\epsilon + (0(0 + 1)^*) + ((0 + 1)^*0) + \big((0 + 1)^*(00 + 11)(0 + 1)^*\big)$$
3. $$(0 + \epsilon)\big((1 + \epsilon)(0 + \epsilon)\big)^*$$
4. $$(10)^*$$
Correct answers are 1 and 2. I can't understand why 3 and 4 are not correct as well since the strings generated by these languages are also not in $$L$$.
Any explanation is greatly appreciated.
The complement of a language $L$ should contain all strings not in $L$. Your language $L$ doesn't contain the word $0$, which the language $(10)^*$ also doesn't contain – so $(10)^*$ can't be the complement of $L$.
• To make this a complete answer, note that the third regular expression describes $\Sigma^*$. – Raphael Jun 26 '15 at 7:21
|
2021-02-27 08:17:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40536367893218994, "perplexity": 148.37216123617821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00517.warc.gz"}
|
https://brilliant.org/problems/its-not-as-easy-as-similar-ones/
|
# It's not as easy as similar ones!
Number Theory Level 5
$\large n=d_6^2+d_7^2-1$Find sum of all positive integers $$n$$ that satisfy equation above, where $$1=d_1<d_2<d_3<\cdots<d_k=n$$ are divisors of $$n$$.
×
|
2016-10-27 13:04:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9210308194160461, "perplexity": 762.5677639514245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00159-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://nl.mathworks.com/help/fusion/ref/rcssignature.html
|
# rcsSignature
## Description
`rcsSignature` creates a radar cross-section (RCS) signature object. You can use this object to model an angle-dependent and frequency-dependent radar cross-section pattern. The radar cross-section determines the intensity of reflected radar signal power from a target. The object models only non-polarized signals. The object support several Swerling fluctuation models.
## Creation
### Syntax
``rcssig = rcsSignature``
``rcssig = rcsSignature(Name,Value)``
### Description
``` `rcssig = rcsSignature` creates an `rcsSignature` object with default property values.```
example
````rcssig = rcsSignature(Name,Value)` sets object properties using one or more `Name,Value` pair arguments. `Name` is a property name and `Value` is the corresponding value. `Name` must appear inside single quotes (`''`). You can specify several name-value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`. Any unspecified properties take default values.```
Note
You can only set property values of `rcsSignature` when constructing the object. The property values are not changeable after construction.
## Properties
expand all
Sampled radar cross-section (RCS) pattern, specified as a scalar, a Q-by-P real-valued matrix, or a Q-by-P-by-K real-valued array. The pattern is an array of RCS values defined on a grid of elevation angles, azimuth angles, and frequencies. Azimuth and elevation are defined in the body frame of the target.
• Q is the number of RCS samples in elevation.
• P is the number of RCS samples in azimuth.
• K is the number of RCS samples in frequency.
Q, P, and K usually match the length of the vectors defined in the `Elevation`, `Azimuth`, and `Frequency` properties, respectively, with these exceptions:
• To model an RCS pattern for an elevation cut (constant azimuth), you can specify the RCS pattern as a Q-by-1 vector or a 1-by-Q-by-K matrix. Then, the elevation vector specified in the `Elevation` property must have length 2.
• To model an RCS pattern for an azimuth cut (constant elevation), you can specify the RCS pattern as a 1-by-P vector or a 1-by-P-by-K matrix. Then, the azimuth vector specified in the `Azimuth` property must have length 2.
• To model an RCS pattern for one frequency, you can specify the RCS pattern as a Q-by-P matrix. Then, the frequency vector specified in the `Frequency` property must have length-2.
Example: `[10,0;0,-5]`
Data Types: `double`
Azimuth angles used to define the angular coordinates of each column of the matrix or array, specified by the `Pattern` property. Specify the azimuth angles as a length-P vector. P must be greater than two. Angle units are in degrees.
When the `Pattern` property defines an elevation cut, `Azimuth` must be a 2-element vector defining the minimum and maximum azimuth view angles over which the elevation cut is considered valid.
Example: `[-45:0.5:45]`
Data Types: `double`
Elevation angles used to define the coordinates of each row of the matrix or array, specified by the `Pattern` property. Specify the elevation angles as a length-Q vector. Q must be greater than two. Angle units are in degrees.
When the `Pattern` property defines an azimuth cut, `Elevation` must be a 2-element vector defining the minimum and maximum elevation view angles over which the azimuth cut is considered valid.
Example: `[-30:0.5:30]`
Data Types: `double`
Frequencies used to define the applicable RCS for each page of the `Pattern` property, specified as a K-element vector of positive scalars. K is the number of RCS samples in frequency. K must be no less than two. Frequency units are in hertz.
When the `Pattern` property is a matrix, `Frequency` must be a 2-element vector defining the minimum and maximum frequencies over which the pattern values are considered valid.
Example: `[0:0.1:30]`
Data Types: `double`
Fluctuation models, specified as `'Swerling0'`, `'Swerling1'` or `'Swerling3'`. Swerling cases 2 and 4 are not modeled as these are determined how the target is sample, not an inherent target property.
ModelDescription
`'Swerling0'`The target RCS is assumed to be non-fluctuating. In this case the instantaneous RCS signature value retrieved by the value method is deterministic. This model represents ideal radar targets with an RCS that remains constant in time across the range of aspect angles of interest, e.g., a conducting sphere and various corner reflectors.
`'Swerling1'`The target is assumed to be made up of many independent scatterers of equal size. This model is typically used to represent aircraft. The instantaneous RCS signature value returned by the value method in this case is a random variable distributed according to the exponential distribution with a mean determined by the `Pattern` property.
`'Swerling3'`The target is assumed to have one large dominant scatterer and several small scatterers. The RCS of the dominant scatterer equals 1+sqrt(2) times the sum of the RCS of other scatterers. This model can be used to represent helicopters and propeller driven aircraft. In this case the instantaneous RCS signature's value returned by the value method is a random variable distributed according to the 4th degree chi-square distribution with mean determined by the `Pattern` property.
Data Types: `char` | `string`
## Object Functions
`value` Radar cross-section at specified angle and frequency `toStruct` Convert to structure
## Examples
collapse all
Specify the radar cross-section (RCS) of a triaxial ellipsoid and plot RCS values along an azimuth cut.
Specify the lengths of the axes of the ellipsoid. Units are in meters.
```a = 0.15; b = 0.20; c = 0.95;```
Create an RCS array. Specify the range of azimuth and elevation angles over which RCS is defined. Then, use an analytical model to compute the radar cross-section of the ellipsoid. Create an image of the RCS.
```az = [-180:1:180]; el = [-90:1:90]; rcs = rcs_ellipsoid(a,b,c,az,el); rcsdb = 10*log10(rcs); imagesc(az,el,rcsdb) title('Radar Cross-Section') xlabel('Azimuth (deg)') ylabel('Elevation (deg)') colorbar```
Create an `rcsSignature` object and plot an elevation cut at $3{0}^{\circ }$ azimuth.
```rcssig = rcsSignature('Pattern',rcsdb,'Azimuth',az,'Elevation',el,'Frequency',[300e6 300e6]); rcsdb1 = value(rcssig,30,el,300e6); plot(el,rcsdb1) grid title('Elevation Profile of Radar Cross-Section') xlabel('Elevation (deg)') ylabel('RCS (dBsm)')```
```function rcs = rcs_ellipsoid(a,b,c,az,el) sinaz = sind(az); cosaz = cosd(az); sintheta = sind(90 - el); costheta = cosd(90 - el); denom = (a^2*(sintheta'.^2)*cosaz.^2 + b^2*(sintheta'.^2)*sinaz.^2 + c^2*(costheta'.^2)*ones(size(cosaz))).^2; rcs = (pi*a^2*b^2*c^2)./denom; end```
Import the radar cross-section (RCS) measurements of a 1/5th scale Boeing 737. Load the RCS data into an `rcsSignature` object. Assume the RCS follows a Swerling 1 distribution.
```load('RCSSignatureExampleData.mat','boeing737'); rcs = rcsSignature('Pattern',boeing737.RCSdBsm, ... 'Azimuth', boeing737.Azimuth,'Elevation',boeing737.Elevation, ... 'Frequency',boeing737.Frequency,'FluctuationModel','Swerling1');```
Set the seed of the random number generator for reproducibility of example.
`rng(3231)`
Plot sample RCS versus azimuth angle.
```plot(rcs.Azimuth,rcs.Pattern) xlabel('Azimuth (deg)'); ylabel('RCS (dBsm)') title('Measured RCS from 1/5th scale Boeing 737 model')```
Construct an RCS histogram and display the mean value.
```N = 1000; val = zeros(1,N); for k = 1:N [val(k),expval] = value(rcs,-5,0,800.0e6); end```
Convert to power units.
`mean(db2pow(val))`
```ans = 406.9799 ```
```histogram(db2pow(val),50) xlabel("RCS (dBsm)")```
## References
[1] Richards, Mark A. Fundamentals of Radar Signal Processing. New York, McGraw-Hill, 2005.
## Version History
Introduced in R2018b
|
2022-10-04 01:22:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820357620716095, "perplexity": 1922.7752858355987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00488.warc.gz"}
|
https://3dprinting.stackexchange.com/questions/6364/how-bed-leveling-is-achieved-without-table-screws
|
# How bed leveling is achieved without table screws?
I have seen printers with table screws and bed leveling sensor and printers that have only bed leveling sensor (such as Prusa).
So my question is how does the bed levelling work when there is only a sensor, and no adjustment screws? What will happen if I totally remove the table from the printer and then re-assemble it? Will the print fail or what?
• Are you asking 'will the calibration persist through dissassembly/reassembly' or something else? Your question is a bit confusing, – Sean Houlihane Jul 11 '18 at 10:30
• RealMen(TM) always level manually like the universe intended. (Shamelessly stolen from some stick-shift enthusiast) – Carl Witthoft Jul 11 '18 at 12:29
• @SeanHoulihane Yes, exactly that. – OrElse Jul 11 '18 at 19:17
Note that Marlin Firmware (which is basically what drives the Prusa printers) has skewness compensation implemented. This is implemented in the configuration file, and found under header Bed Skew Compensation. You basically print a square and measure the diagonals and insert these measurements into the configuration file. Prusa printers do this automatically by using the measurements of the marker points.
• @Horitsu The bed level is usually fading out over a predefined distance, in e.g. Marlin #define ENABLE_LEVELING_FADE_HEIGHT determines that, and the height can be set with M420 Z<height>. Yes, a cube will not be perfectly cubic, that is why even with auto bed leveling you need to provide a bed as level as possible, it only should correct for very small deviations. – 0scar Jul 11 '18 at 6:18
|
2021-03-07 20:51:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2398339956998825, "perplexity": 3650.454494048634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00055.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/52221-linear-algebra-question.html
|
1. ## linear algebra question
show that if f(t) is the characteristic polynomial of a diagonalizable linear op. T, then f(T)=T_0, the zero operator. We are given the fact that if T is a lin. op. on vector space V and g(t) is a polynomial with coefficients from F, then for a given eigenvector x of T with corresponding eigenvale t, g(T)(x) = g(t)x.
2. Originally Posted by squarerootof2
show that if f(t) is the characteristic polynomial of a diagonalizable linear op. T, then f(T)=T_0, the zero operator. We are given the fact that if T is a lin. op. on vector space V and g(t) is a polynomial with coefficients from F, then for a given eigenvector x of T with corresponding eigenvale t, g(T)(x) = g(t)x.
Let $T: V\to V$ be a linear operator and let $B$ be a basis for $V$. Since $T$ is diagnolizable if we form $[T]_B$ the matrix is a diagnolizable, therefore $[T]_B = ADA^{-1}$ where $D$ is a diagnol matrix whose entires consists of eigenvalues of $[T]_B$. Let $\det(xI - [T]_B) = f(x) = x + a_{n-1}x^{n-1}+...+a_1x+a_0$ be the characheristic polynomial. If $k_1,...,k_n$ are (repeated) eigenvalues of $[T]_B$ then $f(k_1) = ... = f(k_n) = 0$. The problem asks us to show that $[T]_B$ satisfies $X^n + a_{n-1}X^{n-1} + ... + a_1X + a_0I = \bold{0}$. Note that $[T]_B^k = (ADA^{-1})^k = AD^kA^{-1}$. Therefore $AD^nA^{-1} + a_{n-1}AD^{n-1}A^{-1} + ... + a_1ADA^{-1} + a_0AA^{-1}$ upon substituting this matrix into the equation. This becomes $A(D^n+a_{n-1}D^{n-1}+...+a_1D+a_0I)A^{-1}$. Behold, the middle expression. Computing $D^k$ is simply raising each diagnol term to the $k$-th power. Therefore the middle expression consists of a diagnol matrix having its entries values to $f(k_1),...,f(k_n)$. But $f(k_1)=...=f(k_n) = 0$ since they are eigenvalues. Thus, the middle matrix is the zero matrix. Thus, $A(D^n+a_{n-1}D^{n-1}+...+a_1D+a_0I)A^{-1} = A\bold{0}A^{-1} = \bold{0}$. And so $f(T)$ is the zero operator.
Note this theorem (called Cayley-Hamilton) is true even if $T$ is a non-diagnolizable linear operation but that is much harder to prove for we cannot diagnolize.
3. hmm one more question, did you change the notation of matrix representing the linear operator T to the matrix A? because that's what it seems like.
4. Originally Posted by squarerootof2
hmm one more question, did you change the notation of matrix representing the linear operator T to the matrix A? because that's what it seems like.
I do not think so. I just wrote $[T]_B = ADA^{-1}$. This is of course possible since you are assuming diagnolizability*.
Let $T: V\to V$ be a linear operator and let $B$ be a basis for $V$. Since $T$ is diagnolizable if we form $[T]_B$ the matrix is a diagnolizable, therefore $[T]_B = ADA^{-1}$ where $D$ is a diagnol matrix whose entires consists of eigenvalues of $[T]_B$. Let $\det(xI - [T]_B) = f(x) = x + a_{n-1}x^{n-1}+...+a_1x+a_0$ be the characheristic polynomial. If $k_1,...,k_n$ are (repeated) eigenvalues of $[T]_B$ then $f(k_1) = ... = f(k_n) = 0$. The problem asks us to show that $[T]_B$ satisfies $X^n + a_{n-1}X^{n-1} + ... + a_1X + a_0I = \bold{0}$. Note that $[T]_B^k = (ADA^{-1})^k = AD^kA^{-1}$. Therefore $AD^nA^{-1} + a_{n-1}AD^{n-1}A^{-1} + ... + a_1ADA^{-1} + a_0AA^{-1}$ upon substituting this matrix into the equation. This becomes $A(D^n+a_{n-1}D^{n-1}+...+a_1D+a_0I)A^{-1}$. Behold, the middle expression. Computing $D^k$ is simply raising each diagnol term to the $k$-th power. Therefore the middle expression consists of a diagnol matrix having its entries values to $f(k_1),...,f(k_n)$. But $f(k_1)=...=f(k_n) = 0$ since they are eigenvalues. Thus, the middle matrix is the zero matrix. Thus, $A(D^n+a_{n-1}D^{n-1}+...+a_1D+a_0I)A^{-1} = A\bold{0}A^{-1} = \bold{0}$. And so $f(T)$ is the zero operator.
Note this theorem (called Cayley-Hamilton) is true even if $T$ is a non-diagnolizable linear operation but that is much harder to prove for we cannot diagnolize.
If $A^{-1} [T]_B A = D$ then $[T]_B A = AD \implies [T]_B = ADA^{-1}$.
|
2017-03-27 17:26:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 51, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9765333533287048, "perplexity": 296.785584606062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189490.1/warc/CC-MAIN-20170322212949-00469-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://gateoverflow.in/968/gate2003-85
|
2.3k views
Consider the following functional dependencies in a database.
Date_of_Birth $\to$ Age Age $\to$ Eligibility Name $\to$ Roll_number Roll_number $\to$ Name Course_number $\to$ Course_name Course_number $\to$ Instructor (Roll_number, Course_number) $\to$ Grade
The relation (Roll_number, Name, Date_of_birth, Age) is
1. in second normal form but not in third normal form
2. in third normal form but not in BCNF
3. in BCNF
4. in none of the above
There are three FDs that are valid from the above set of FDs for the given relation :
Date_of_Birth $\to$ Age
Name $\to$ Roll_number
Roll_number $\to$ Name
Candidate keys for the above are : (Date_of_Birth,Name) and (Date_of_Birth, Roll_number)
Clearly there is partial dependency here (Date_of_Birth $\to$ Age) and Age is not a prime attribute. So, it is only in 1NF.
Option (D).
selected by
0
There are also given two other fds i.e course no ,course name and instructor.for the ck all fds should be derived from it
+1
@Neha Singh we need to find dependency btw given relation only which is The relation (Roll_number, Name, Date_of_birth, Age) so all extra dependency other than relation no use
0
Prateek kumar but there may be other fd that can be indirectly derived from the attribute not present in the table thus it is always advisable to find all the possible fd's by using the closure method and see which are applicable to the given relation .
there is partial dependency hence it is 1st normal form
|
2018-08-16 05:25:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3729330599308014, "perplexity": 5536.3103032619665}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210413.14/warc/CC-MAIN-20180816034902-20180816054902-00049.warc.gz"}
|
https://gmatclub.com/forum/if-one-of-the-roots-of-the-quadratic-equation-x-2-mx-24-0-is-215076.html
|
It is currently 18 Feb 2018, 17:48
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 43792
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
14 Mar 2016, 06:29
Expert's post
2
This post was
BOOKMARKED
00:00
Difficulty:
15% (low)
Question Stats:
78% (01:26) correct 22% (01:10) wrong based on 74 sessions
### HideShow timer Statistics
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5, then what is the value of m?
A. -22.5
B. -17.5
C. -10.5
D. 16
E. Cannot be determined
[Reveal] Spoiler: OA
_________________
Retired Moderator
Joined: 12 Aug 2015
Posts: 2418
GRE 1: 323 Q169 V154
Re: If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
14 Mar 2016, 23:33
Bunuel wrote:
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5, then what is the value of m?
A. -22.5
B. -17.5
C. -10.5
D. 16
E. Cannot be determined
Here X=1.5 must satisfy the equation
Hence => 1.5^2 +1.5m +24 =0
hence m=-17.5
So B
_________________
Getting into HOLLYWOOD with an MBA
Stone Cold's Mock Tests for GMAT-Quant(700+)
Director
Joined: 24 Nov 2015
Posts: 584
Location: United States (LA)
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
15 Jun 2016, 02:35
For a quadratic equation if a root is given it satisfies the given equation
In equation $$x^2$$$$+$$$$mx$$$$+$$$$24$$ = 0 substitute $$x$$=1.5
$$1.5^2$$$$+$$$$1.5*m$$$$+$$$$24$$ = 0
$$1.5*m$$ =$$- 26.25$$
$$m$$ =$$-17.5$$
Intern
Joined: 28 Mar 2016
Posts: 5
Re: If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
15 Jun 2016, 03:06
1
This post was
BOOKMARKED
Bunuel wrote:
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5, then what is the value of m?
A. -22.5
B. -17.5
C. -10.5
D. 16
E. Cannot be determined
Alternate Way of solving this question.
Product of roots of quadratic equation $$(ax^2 + bx +c = 0)$$ is $$\frac{c}{a}$$
$$r_1 * r_2 = \frac{c}{a}$$
$$1.5 * r_2 = \frac{24}{1}$$
$$r_2 = 16$$
Also Sum of roots of quadratic equation $$(ax^2 + bx +c = 0)$$ is $$\frac{-b}{a}$$
$$r_1 + r_2 =\frac{-b}{a}$$
$$1.5 + 16 = \frac{-m}{1}$$
$$m = -17.5$$
$$Answer = B$$
Senior Manager
Joined: 18 Jan 2010
Posts: 256
Re: If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
15 Jun 2016, 03:19
Bunuel wrote:
If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5, then what is the value of m?
A. -22.5
B. -17.5
C. -10.5
D. 16
E. Cannot be determined
(Formula is sum of roots = -(b/a); Product of roots is (c/a), where equation is (a.$$x^2$$+bx+c = 0)
Here say roots are 1.5 and z
1.5+z = -m -------------(1)
1.5z = 24 ---------------(2)
From (2), z = 24/1.5 = 16
- m = 16+1.5 = 17.5
m = -17.5
Non-Human User
Joined: 09 Sep 2013
Posts: 13840
Re: If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink]
### Show Tags
15 Oct 2017, 04:39
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: If one of the roots of the quadratic equation x^2 + mx + 24 = 0 is 1.5 [#permalink] 15 Oct 2017, 04:39
Display posts from previous: Sort by
|
2018-02-19 01:48:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6816046833992004, "perplexity": 2900.9290452729338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812306.16/warc/CC-MAIN-20180219012716-20180219032716-00207.warc.gz"}
|
https://www.isa-afp.org/entries/Generic_Join.html
|
# Formalization of Multiway-Join Algorithms
Title: Formalization of Multiway-Join Algorithms Author: Thibault Dardinier Submission date: 2019-09-16 Abstract: Worst-case optimal multiway-join algorithms are recent seminal achievement of the database community. These algorithms compute the natural join of multiple relational databases and improve in the worst case over traditional query plan optimizations of nested binary joins. In 2014, Ngo, Ré, and Rudra gave a unified presentation of different multi-way join algorithms. We formalized and proved correct their "Generic Join" algorithm and extended it to support negative joins. BibTeX: @article{Generic_Join-AFP, author = {Thibault Dardinier}, title = {Formalization of Multiway-Join Algorithms}, journal = {Archive of Formal Proofs}, month = sep, year = 2019, note = {\url{https://isa-afp.org/entries/Generic_Join.html}, Formal proof development}, ISSN = {2150-914x}, } License: BSD License Depends on: MFOTL_Monitor Used by: MFODL_Monitor_Optimized
|
2021-09-16 11:17:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2530265748500824, "perplexity": 5740.327141966465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00061.warc.gz"}
|
https://socratic.org/questions/5984916411ef6b3a213ddd87
|
# Why is heterocyclic chemistry such a vast field?
Add in a nitrogen heteroatom, and we add another layer onto the chemistry, i.e. we go from ${C}_{n} {H}_{2 n + 2}$ to ${C}_{n} {H}_{2 n + 1} N$, the same thing will occur with oxygen, sulfur, etc. There is a whole army of chemists who deal with heterocyclic chemistry.......pyrroles, furans, thiophenes, etc....... The heteroatom adds vast possibilities for structure and chemistry; the presence of lone pairs on the nitrogen or oxygen centre gives so-called $\pi - \text{excessive}$ molecules that ARE MORE reactive than benzene.
|
2020-06-03 09:15:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4649111330509186, "perplexity": 3884.98823526053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00251.warc.gz"}
|
https://eprint.iacr.org/2021/901/20210705:184751
|
## Cryptology ePrint Archive: Report 2021/901
Resolvable Block Designs in Construction of Approximate Real MUBs that are Sparse
Ajeet Kumar and Subhamoy Maitra
Abstract: Several constructions of Mutually Unbiased Bases (MUBs) borrow tools from combinatorial objects. In this paper we focus how one can construct Approximate Real MUBs (ARMUBs) with improved parameters using results from the domain of Resolvable Block Designs (RBDs). We first explain the generic idea of our strategy in relating the RBDs with MUBs/ARMUBs, which are sparse (the basis vectors have small number of non-zero co-ordinates). Then specific parameters are presented, for which we can obtain new classes and improve the existing results. To be specific, we present an infinite family of $\lceil\sqrt{d}\rceil$ many ARMUBs for dimension $d = q(q+1)$, where $q \equiv 3 \bmod 4$ and it is a prime power, such that for any two vectors $v_1, v_2$ belonging to different bases, $|\braket{v_1|v_2}| < \frac{2}{\sqrt{d}}$. We also demonstrate certain cases, such as $d = sq^2$, where $q$ is a prime power and $sq \equiv 0 \bmod 4$. These findings subsume and improve our earlier results in [Cryptogr. Commun. 13, 321-329, January 2021]. This present construction idea provides several infinite families of such objects, not known in the literature, which can find efficient applications in quantum information processing for the sparsity, apart from suggesting that parallel classes of RBDs are intimately linked with MUBs/ARMUBs.
Category / Keywords: foundations / (Approximate Real) Mutually Unbiased Bases, Combinatorial Design, Cryptology, Hadamard Matrices, Quantum Information Theory, Resolvable Block Design.
|
2022-01-25 15:04:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7668461799621582, "perplexity": 882.6231327146148}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00476.warc.gz"}
|
https://cs.stackexchange.com/questions/20055/randomised-median
|
# Randomised Median [closed]
I have tried hard , but i'm unable to come up with the expected running time for the number of comparisons to find the Randomized Median (find the median of an unsorted array). Also i wanted to make sure that we CANNOT take expectation of the recurrence that we use to find the randomized mean , or any other recurrence in any other problem as they belong to different probability spaces? Is this statement right?
• Have you been shown a way of calculating the expected running time of randomized quicksort? – Yuval Filmus Jan 29 '14 at 14:11
• What is the algorithm your question relates to? Talking about runtime without a concrete algorithm does not make much sense, and particularities may matter. What is the recurrence you have at hand? – Raphael Jan 29 '14 at 17:00
• @Raphael sry for that. I basically take a random pivot and divide the array . If the pivot is at position n/2 i return if less , I select the right par and recursively find element of rank -(n/2-rank(pivot)) if greater i recurse on right and find recursively element of rank n/2 – Aditya Nambiar Jan 29 '14 at 19:39
• @Yuval Yes we have been – Aditya Nambiar Jan 29 '14 at 19:40
• @Aditya In that case, try to mimic the argument you've seen. – Yuval Filmus Jan 29 '14 at 19:42
One approach would be to form up a recurrence for the expected running time $T(n)$. At each stage there is $O(n)$ processing, and the result is a new list of length distributed according to some distribution $D_n$ (for you to determine), and so we can write $$T(n) = O(n) + \operatorname*{\mathbb{E}}_{m \sim D_n} T(m).$$ This looks much less frightening when you substitute the actual distribution $D_n$ and replace the expectation with a (weighted) sum. Then it remains to solve the recurrence.
• If you partition with respect to the $k$th ranked pivot, then what you get on the left is a uniformly random permutation of the smallest $k-1$ elements, and what you get on the right is a uniformly random permutation of the largest $n-k$ elements. This is because the elements are put in the partitions in the order they are encountered in the array. So I don't see any problem with taking the expectation over the recurrence relation. – Yuval Filmus Jan 29 '14 at 19:46
• It's only different if you are dogmatic about the contents of your array. For me, $T(n)$ is the expected running time on any array consisting of distinct elements, ordered uniformly at random. – Yuval Filmus Jan 29 '14 at 19:52
|
2020-07-04 05:19:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7265979647636414, "perplexity": 378.8782959941337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00157.warc.gz"}
|
https://geo.libretexts.org/Bookshelves/Sedimentology/Book%3A_Introduction_to_Fluid_Motions_and_Sediment_Transport_(Southard)/12%3A_Bed_Configurations_Generated_by_Water_Flows_and_the_Wind/12.06%3A_Oscillatory-Flow_and_Combined-Flow_Bed_Configurations
|
# 12.6: Oscillatory-Flow and Combined-Flow Bed Configurations
## Introduction
As described in Chapter 6, water-surface waves propagating in water much shallower than the wavelength cause a back-and-forth motion of the water at the bottom. If the maximum speed of the water (which is attained in the middle of the oscillation) exceeds the threshold for sediment movement, oscillatory-flow bed forms develop. This is common in the shallow ocean. Swell from distant storms causes bottom oscillatory motion even though the weather is fine and calm locally. More importantly, bottom-water motions under large storm waves cause bed forms also. In that situation there is likely to be a non-negligible unidirectional current as well, resulting in a combined flow.
## A Tank Experiment on Oscillatory-Flow Bed Configurations
There are three ways to make oscillatory-flow bed configurations in the laboratory. One is to build a big long tank and make waves in it by putting a wave generator at one end and a wave absorber at the other end (Figure $$\PageIndex{1}$$). The generator does not need to be anything more than a flap hinged at the bottom and rocked back and forth in the direction of the tank axis at the desired period. This arrangement makes nice bed forms, but the trouble is that you are limited to short oscillation periods.
Another good way to make oscillatory-flow bed configurations is to build a horizontal closed duct that connects smoothly with reservoir tanks at both ends, fill the whole apparatus with water, and then put a piston in contact with the water surface in one of the reservoir tanks and oscillate it up and down at the desired period (Figure $$\PageIndex{2}$$). This allows you to work with much longer-period oscillations, but there is the practical problem that the apparatus has its own natural oscillation period, and if you try to make oscillations at a much different period you have to fight against what the duct wants to do, and that means large forces.
The third way should seem elegant and ingenious to you: place a sand-covered horizontal tray at the bottom of a large tank of water, and oscillate the tray back and forth underneath the water (Figure $$\PageIndex{3}$$). The problem is that the details of particle and fluid accelerations are subtly different from the other two devices, and it turns out that the bed configurations produced in this kind of apparatus do not correspond well with those produced in the other two kinds of apparatus.
Imagine making an exploratory series of runs in an oscillatory-flow duct of the kind shown in Figure $$\PageIndex{2}$$ to obtain a general idea of the nature of oscillatory-flow bed configurations. Work at just one oscillation period, in the range from three to five seconds. Start at a low maximum oscillation velocity and increase it in steps. Figure $$\PageIndex{4}$$ shows the sequence of bed configurations you would observe.
Once the movement threshold is reached, a pattern of extremely regular and straight-crested ripples develops on a previously planar bed. The ripples are symmetrical in cross section, with sharp crests and broad troughs. In striking contrast to unidirectional-flow bed configurations, the plan pattern is strikingly regular: ripple size varies little from ripple to ripple, and the ripples are straight and regular. At fairly low velocities the ripples are relatively small, withspacings of no more than several centimeters, but with increasing velocity the become larger and larger.
In a certain range of moderate velocities, the ripples become noticeably less regular and more three-dimensional, although they are still oriented dominantly transverse to the oscillatory flow. These three-dimensional ripples continue to grow in size with increasing velocity, until eventually they become flattened and are finally washed out to a planar bed. Therefore, just as in unidirectional flows, rugged bed configurations pass over into a stable plane-bed mode of transport with increasing velocity.
Oscillatory-flow bed configurations at longer oscillation periods are much less well studied, especially at high oscillatory velocities. Some comments on bed configurations produced under those conditions, which are very important in natural environments, are given in a later section.
## Dimensional Analysis
Assume again, as we did earlier with unidirectional flow bed configurations, that the sediment is described well enough by its density $$\rho_{s}$$ and average size $$D$$. The oscillatory flow is specified by any two of the following three variables: oscillation period $$T$$, orbital diameter $$d_{\text{o}}$$ (the distance traveled by water particles during one-half of an oscillation), and maximum orbital velocity $$U_{m}$$; I’ll use $$T$$ and $$U_{m}$$ here. As with unidirectional-flow bed configurations, we also need to include $$\rho$$, $$\mu$$, and $$\gamma^{\prime}$$. The number of independent variables is seven, so we should expect a set of four equivalent dimensionless variables.
One dimensionless variable can again be the density ratio $$\rho_{s}/\rho$$, and the other three have to include $$U_{m}$$, $$T$$, and $$D$$ as well as $$\rho$$, $$\mu$$, and $$\gamma^{\prime}$$. Adopting the same strategy as for unidirectional flow, we can form a dimensionless maximum oscillation velocity, a dimensionless oscillation period, and a dimensionless sediment size:
$$\left(\frac{\rho^{2}}{\mu \gamma^{\prime}}\right)^{1/3}U_{m}, \left(\frac {\gamma ^{\prime 2}}{\rho \mu} \right)^{1/3}T, \left(\frac{\gamma^{\prime} \rho}{\mu^{2}} \right)^{1/3}D$$
Then we can plot another three-dimensional graph to show the stability fields of oscillatory-flow bed phases, just as for unidirectional-flow bed phases (Figure $$\PageIndex{5}$$). Relationships are best revealed by looking at a series of velocity–period sections through the graph for various values of sediment size (Figure $$\PageIndex{5}$$). Figure $$\PageIndex{6}$$ shows three such sections, one for very fine sands, $$0.1–0.2$$ $$\mathrm{mm}$$ (Figure $$\PageIndex{6}$$A), one for medium sands, $$0.3–0.4$$ $$\mathrm{mm}$$ (Figure $$\PageIndex{6}$$B), and one for coarse sands ($$0.5–0.6$$ $$\mathrm{mm}$$ (Figure $$\PageIndex{6}$$C). As with the graphs for unidirectional flows presented earlier, the axes are labeled with the $$\10^{\circ}\mathrm{C}$$ values of velocity and period corresponding to the actual dimensionless variables. The data shown in Figure $$\PageIndex{6}$$ are from laboratory experiments on oscillatory-flow bed configurations, made in both wave tanks and oscillatory-flow ducts, by several different investigators.
In each section in Figure $$\PageIndex{6}$$, there is no movement at low velocities and a plane-bed mode of transport at high velocities. The intervening stability region for oscillation ripples narrows with decreasing oscillation period. As with ripples in unidirectional flows, there really are two different kinds of lower boundary of the stability field for oscillation ripples: one represents the threshold for sediment movement on a preexisting planar bed, and the other represents the minimum oscillation velocity needed to maintain the equilibrium of a preexisting ripple configuration. Existing data are not extensive enough to define the exact nature of these boundaries.
The most prominent feature of each of the sections in Figure $$\PageIndex{6}$$ is the regular increase in ripple spacing from lower left to upper right, with increasing velocity and period. The contours of ripple spacing are close to being parallel to the lines of equal orbital diameter except near the transition to plane bed.
An important feature of the section for fine sands is a transition from extremely regular straight-crested ripples (which I will call two-dimensional ripples) at relatively low oscillation velocities to rather irregular ripples (which I will call three-dimensional ripples) with short and sinuous crest lines at relatively high oscillation velocities. The most three-dimensional bed configurations show only a weak tendency for flow-transverse orientation, and it is difficult or impossible to measure an average ripple spacing. In medium sands (Figure $$\PageIndex{6}$$B) the transition from two-dimensional ripples to three-dimensional ripples takes place at velocities closer to the transition to plane bed, and the tendency for three-dimensionality is not as marked as in fine sands.
Superimposed smaller ripples are prominent in the troughs and on the flanks of the larger ripples formed at long oscillation periods and high oscillation velocities in fine sands. These small superimposed ripples have spacings of about $$7$$ $$\mathrm{cm}$$, and they seem to dynamically related to ripples in unidirectional flows. The one-way flow during each half of the oscillation lasts long enough and transports enough sediment so that a pattern of current ripples becomes established in local areas on the bed. The flow in the other direction reverses the asymmetry of these small ripples but does not destroy them.
Experimental data are least abundant for long periods and high velocities, but preliminary data show the existence of three-dimensional rounded bed forms with spacings of well over a meter in fine sands under these conditions. In contrast to the smaller two-dimensional ripples, these large ripples are not static but show a tendency to change their shape and shift their position with time, even after the bed configuration has stopped changing on the average.
In coarse sands (Figure $$\PageIndex{6}$$C), no experiments have been made at the longest periods and highest velocities, but evidence from observations in modern shallow marine environments, and also from the ancient sedimentary record, suggests that ripples in coarse sands are two-dimensional over the entire range of periods and velocities characteristic of natural flow environments.
The flow over oscillation ripples is characteristic (Figure $$\PageIndex{7}$$). During half of the oscillation cycle, the flow separates over the sharp crest of the ripple, putting abundant suspended sediment in suspension in the separation vortex over the downflow side. As the flow reverses, the vortex is abruptly carried over the crest of the ripple and deposits its suspended sediment. Flow separation is then rapidly reestablished on the other side of the ripple, and a new vortex develops. For this reason, these ripples have been called vortex ripples.
Purely oscillatory flows that involve a discrete or continuous range of oscillatory components with different directions, periods, and velocities must be common in the shallow ocean. For example, when a storm passes a given area, strong winds tend to blow from different directions at different times. Some time is needed for the sea state to adjust itself to the changing wind directions, and during those times the sea state is complicated, with superimposed waves running in different directions. The nature of bed configurations under even simple combinations of two different wave trains is little known. Much more observational work needs to be done on this topic.
## Combined-Flow Bed Configurations
So far we have considered only the two “end-member cases” of flows that make bed configurations. Even aside from the importance of time-varying unidirectional and oscillatory flows, and of purely oscillatory flows with more than just one oscillatory component, there is an entire range of combined flows that generate distinctive bed configurations. Observations in the natural environment are scarce, and systematic laboratory work (Arnott and Southard, 1990; Yokokawa, 1995; Dumas et al., 2005) has so far explored only a small part of the wide range of relevant conditions. This section is therefore necessarily shorter than the previous sections. Up to now, systematic observations have been made only for combined flows in which a single oscillatory component is superimposed on a current flowing with the same orientation as the oscillation. There is therefore still a major gap in our knowledge of combined-flow bed configurations.
Figure $$\PageIndex{8}$$ is an inadequate attempt to provide a conceptual framework for thinking about combined-flow bed configurations. Ideally we would like to be able to plot observational data on combined-flow bed configurations on a graph with axes representing the four important independent variables: oscillatory velocity, unidirectional velocity, oscillation period, and sediment size. Unfortunately it is impossible for human beings to visualize four-dimensional graphs. A substitute approach (Figure $$\PageIndex{8}$$) is to imagine one or the other of two equivalent kinds of graphs:
• a continuous series of three-dimensional graphs with the two velocity components and sediment size along the axes, one such graph for each value of oscillation period; or
• a continuous series of three-dimensional graphs with the two velocity components and oscillation period along the axes, one such graph for each value of sediment size.
Systematic laboratory experiments on combined-flow configurations have been carried out by Arnott and Southard and, more recently, covering wider range of flow and sediment conditions, by Dumas et al. (2005). The experiments by Dumas et al. (2005) were done in large oscillatory-flow ducts with oscillation periods ranging from about $$8$$ $$\mathrm{s}$$ to $$11$$ $$\mathrm{s}$$ (scaled to $$10^{\circ}\mathrm{C}$$ water temperature), with well-sorted sediments ranging in size from $$0.10$$ to $$0.23$$ $$\mathrm{mm}$$ (scaled to $$10^{\circ}\mathrm{C}$$ water temperature). Figure $$\PageIndex{9}$$ shows three phase diagrams, for three combinations of oscillation period and sediment size, showing data points and phase boundaries. The boundaries within the field for ripples are gradual rather than abrupt. Bear in mind, when looking at these diagrams, that they are still an extremely “thin” representation of the graphic framework shown in Figure $$\PageIndex{8}$$.
Here are some of the features of Figure $$\PageIndex{9}$$. At combinations of low oscillatory velocities and low unidirectional velocities, there is no sediment movement. At combinations of high oscillatory velocities and high unidirectional velocities, a planar bed with strong sediment movement is the stable bed configuration. Note that when even a small unidirectional component is present, the oscillatory velocity for the transition from ripples to plane bed is substantially lower than in purely oscillatory flow.
In the lower part of the region of ripple stability, the ripples are relatively small. Only a small unidirectional component is needed to make the small ripples fairly asymmetrical. Except when the unidirectional component is very weak, small combined-flow ripples are not greatly different in geometry from ripples in purely unidirectional flow.
In the upper part of the region of ripple stability, the ripples are relatively large. Only a small unidirectional flow component is needed to make the large three-dimensional oscillatory-flow bed forms produced at these oscillation periods and sediment sizes noticeably asymmetrical. For relatively large oscillatory velocities, especially in the finer sand size, the bed forms acquire a three-dimensional hummocky appearance; this region is shown by the shading in Figures $$\PageIndex{9}$$A, B, and C; it is a feature that seems to become superimposed on the symmetrical to symmetrical large combined-flow ripples under those values of the velocity components.
At unidirectional velocities greater than are shown in this graph, the field for large combined-flow ripples must pinch out, because small ripples are known to be the only stable bed configuration in purely unidirectional flows in these fine sand sizes. Figure $$\PageIndex{10}$$ shows a speculative extrapolation of Figure $$\PageIndex{9}$$ to higher unidirectional velocities. The effect of an increasingly strong oscillatory velocity component on unidirectional-flow dunes in medium and coarse sands is an intriguing problem for which no experimental data are yet available.
When the oscillation period is large, medium to high oscillation velocities produce large symmetrical ripples. Even a slight unidirectional component is known (e.g., Arnott and Southard, 1990; Dumas et al., 2005) to make these large ripples noticeably asymmetrical, to the point where they are not greatly different in geometry and internal stratification from unidirectional-flow dunes. That leads to an important question: what do the large-scale bed forms in the intermediate range of flow conditions and sediment sizes look like? There has been almost no systematic study of such bed forms, and yet deductively it seems that they should be important, and that a lot of the cross stratification we see in the ancient sedimentary record must have been produced under such conditions. Figure $$\PageIndex{11}$$.
|
2021-06-13 18:29:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.413927286863327, "perplexity": 1313.190754940819}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00163.warc.gz"}
|
https://au.mathworks.com/help/simbio/ref/sbiosteadystate.html
|
Documentation
Find steady state of SimBiology model
## Syntax
``````[success, variant_out] = sbiosteadystate(model)``````
``````[success, variant_out] = sbiosteadystate(model, variant_in)``````
``````[success, variant_out] = sbiosteadystate(model, variant_in, scheduleDose)``````
`````` [success, variant_out, model_out] = sbiosteadystate(model,___)``````
`````` [success, variant_out, model_out, exitInfo] = sbiosteadystate(model,___)``````
`` [___] = sbiosteadystate(___, Name,Value)``
## Description
example
``````[success, variant_out] = sbiosteadystate(model)``` attempts to find a steady state of a SimBiology® model, `model`. The function returns `success`, which is `true` if a steady state was found, and a SimBiology `Variant object`, `variant_out`, with all non-constant species, compartments, and parameters of the model having the steady-state values. If a steady state was not found, then the `success` is `false` and `variant_out` contains the last values found by the algorithm.```
example
``````[success, variant_out] = sbiosteadystate(model, variant_in)``` applies the alternate quantity values stored in a SimBiology variant object, `variant_in`, to the model before trying to find the steady-state values.```
example
``````[success, variant_out] = sbiosteadystate(model, variant_in, scheduleDose)``` applies a SimBiology schedule dose object, `scheduleDose`, or a vector of schedule doses to the corresponding model quantities before trying to find the steady state values. Only doses at time = 0 are allowed, that is, the dose time of each dose object must be 0. To specify a dose without specifying a variant, set `variant_in` to an empty array, `[]`.```
example
`````` [success, variant_out, model_out] = sbiosteadystate(model,___)``` also returns a SimBiology model, `model_out` that is a copy of the input `model` with the states set to the steady-state solution that was found. Also, `model_out` has all initial assignment rules disabled.```
example
`````` [success, variant_out, model_out, exitInfo] = sbiosteadystate(model,___)``` also returns the exit information about the steady state computation.```
example
```` [___] = sbiosteadystate(___, Name,Value)` uses additional options specified by one or more `Name,Value` pair arguments.```
## Examples
collapse all
This example shows how to find a steady state of a simple gene regulation model, where the protein product from translation controls transcription.
Load the sample SimBiology project containing the model, m1. The model has five reactions and four species.
`sbioloadproject('gene_reg.sbproj','m1')`
Display the model reactions.
`m1.Reactions`
```ans = SimBiology Reaction Array Index: Reaction: 1 DNA -> DNA + mRNA 2 mRNA -> mRNA + protein 3 DNA + protein <-> DNA_protein 4 mRNA -> null 5 protein -> null ```
A steady state calculation attempts to find the steady state values of non-constant quantities. To find out which model quantities are non-constant in this model, use `sbioselect`.
`sbioselect(m1,'Where','Constant*','==',false)`
```ans = SimBiology Species Array Index: Compartment: Name: Value: Units: 1 unnamed DNA 50 molecule 2 unnamed DNA_protein 0 molecule 3 unnamed mRNA 0 molecule 4 unnamed protein 0 molecule ```
There are four species that are not constant, and the initial amounts of three of them are set to zero.
Use `sbiosteadystate` to find the steady state values for those non-constant species.
`[success,variantOut] = sbiosteadystate(m1)`
```success = logical 1 ```
```variantOut = SimBiology Variant - SteadyState (inactive) ContentIndex: Type: Name: Property: Value: 1 compartment unnamed Capacity 1 2 species DNA InitialAmount 8.79024 3 species DNA_protein InitialAmount 41.2098 4 species mRNA InitialAmount 1.17203 5 species protein InitialAmount 23.4406 6 parameter Transcription.k1 Value 0.2 7 parameter Translation.k2 Value 20 8 parameter [Binding/Unbin... Value 0.2 9 parameter [Binding/Unbin... Value 1 10 parameter [mRNA Degradat... Value 1.5 11 parameter [Protein Degra... Value 1 ```
The initial amounts of all species of the model have been set to the steady-state values. `DNA` is a conserved species since the total of `DNA` and `DNA_protein` is equal to 50.
You can also use a variant to store alternate initial amounts and use them during the steady state calculation. For instance, you could set the initial amount of DNA to 100 molecules instead of 50.
```variantIn = sbiovariant('v1'); addcontent(variantIn,{'species','DNA','InitialAmount',100}); [success2,variantOut2,m2] = sbiosteadystate(m1,variantIn)```
```success2 = logical 1 ```
```variantOut2 = SimBiology Variant - SteadyState (inactive) ContentIndex: Type: Name: Property: Value: 1 compartment unnamed Capacity 1 2 species DNA InitialAmount 12.7876 3 species DNA_protein InitialAmount 87.2124 4 species mRNA InitialAmount 1.70502 5 species protein InitialAmount 34.1003 6 parameter Transcription.k1 Value 0.2 7 parameter Translation.k2 Value 20 8 parameter [Binding/Unbin... Value 0.2 9 parameter [Binding/Unbin... Value 1 10 parameter [mRNA Degradat... Value 1.5 11 parameter [Protein Degra... Value 1 ```
```m2 = SimBiology Model - cell Model Components: Compartments: 1 Events: 0 Parameters: 6 Reactions: 5 Rules: 0 Species: 4 ```
Since the algorithm has found a steady state, the third output `m2` is the steady state model, where the values of non-constant quantities have been set to steady state values. In this example, the initial amounts of all four species have been updated to steady state values.
`m2.Species`
```ans = SimBiology Species Array Index: Compartment: Name: Value: Units: 1 unnamed DNA 12.7876 molecule 2 unnamed DNA_protein 87.2124 molecule 3 unnamed mRNA 1.70502 molecule 4 unnamed protein 34.1003 molecule ```
## Input Arguments
collapse all
SimBiology model, specified as a SimBiology `Model object`.
SimBiology variant, specified as a `Variant object`. The alternate quantity values stored in the variant are applied to the model before finding the steady state.
Dosing information, specified as a SimBiology schedule dose object. The dose must be bolus, that is, there must be no time lag or administration time for the dose. In other words, its `LagParameterName` and `DurationParameterName` properties must be empty, and the dose time (the `Time` property) must be 0. For details on how to create a bolus dose, see Creating Doses Programmatically.
### Name-Value Pair Arguments
Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.
Example: `'AbsTol',1e-6` specifies to use the absolute tolerance value of `10–6`.
Method to compute the steady state of `model`, specified as the comma-separated pair consisting of `'Method'` and a character vector `'auto'`, `'simulation'`, or `'algebraic'`. The default (`'auto'`) behavior is to use the `'algebraic'` method first. If that method is unsuccessful, the function uses the `'simulation'` method.
For the simulation method, the function simulates the model and uses finite differencing to detect a steady state. For details, see Simulation Method.
For the algebraic method, the function computes a steady state by finding a root of the flux function algebraically. For nonlinear models, this method requires Optimization Toolbox™. For details, see Algebraic Method.
### Note
The steady state returned by the algebraic method is not guaranteed to be the same as the one found by the simulation method. The algebraic method is faster since it involves no simulation, but the simulation method might be able to find a steady state when the algebraic method could not.
Example: `'Method','algebraic'`
Absolute tolerance to detect convergence, specified as the comma-separated pair consisting of `'AbsTol'` and a positive, real scalar.
When you use the algebraic method, the absolute tolerance is used to specify optimization settings and detect convergence. For details, see Algebraic Method.
When you use the simulation method, the absolute tolerance is used to determine convergence when finding a steady state solution by forward integration as follows: $\left(‖\frac{d\stackrel{\to }{S}}{dt}‖, where $\stackrel{\to }{S}$ is a vector of nonconstant species, parameters, and compartments.
Relative tolerance to detect convergence, specified as the comma-separated pair consisting of `'RelTol'` and a positive, real scalar. This name-value pair argument is used for the `simulation` method only. The algorithm converges and reports a steady state if the algorithm finds model states by forward integration, such that $\left(‖\frac{d\stackrel{\to }{S}}{dt}‖, where $\stackrel{\to }{S}$ is a vector of non-constant species, parameters, and compartments.
Maximum amount of simulation time to take before terminating without a steady state, specified as the comma-separated pair consisting of `'MaxStopTime'` and a positive integer. This name-value pair argument is used for the `simulation` method only.
Minimum amount of simulation time to take before searching for a steady state, specified as the comma-separated pair consisting of `'MinStopTime'` and a positive integer. This name-value pair argument is used for the `simulation` method only.
## Output Arguments
collapse all
Flag to indicate if a steady state of the model is found, returned as `true` or `false`.
SimBiology variant, returned as a variant object. The variant includes all species, parameters, and compartments of the model with the non-constant quantities having the steady-state values.
SimBiology model at the steady state, returned as a model object. `model_out` is a copy of the input `model`, with the non-constant species, parameters, and compartments set to the steady-state values. Also, `model_out` has all initial assignment rules disabled. Simulating the model at steady state requires that initial assignment rules be inactive, since these rules can modify the values in `variant_out`.
### Note
• If you decide to commit the `variant_out` to the input `model` that has initial assignment rules, then `model` is not expected to be at the steady state because the rules perturb the system when you simulate the `model`.
• `model_out` is at steady state only if simulated without any doses.
Exit information about the steady state computation, returned as a character vector. The information contains different messages for corresponding exit conditions.
• `Steady state found (simulation)` – A steady state is found using the simulation method.
• `Steady state found (algebraic)` – A steady state is found using the algebraic method.
• `Steady state found (unstable)` – An unstable steady state is found using the algebraic method.
• ```Steady state found (possibly underdetermined)``` – A steady state that is, possibly, not asymptotically stable is found using the algebraic method.
• `No Steady state found` – No steady state is found.
• `Optimization Toolbox (TM) is missing` – The method is set to `'algebraic'` for nonlinear models and Optimization Toolbox is missing.
collapse all
### Simulation Method
`sbiosteadystate` simulates the model until `MaxStopTime`. During the simulation, the function approximates the gradient using finite differencing (forward difference) over time to detect a steady state.
### Algebraic Method
`sbiosteadystate` tries to find a steady state of the model algebraically by finding a root of the flux function v. The flux function includes reaction equations, rate rules, and algebraic equations, that is, `v(X,P) = 0`, where X and P are nonconstant quantities and parameters of the model. Thereby the mass conservation imposed by the reaction equations is respected.
For nonlinear models, `sbiosteadystate` uses `fmincon` to get an initial guess for the root. The solution found by `fmincon` is then improved by `fsolve`. To detect convergence, `sbiosteadystate` uses the absolute tolerance (`'AbsTol'`). In other words, `OptimalityTolerance`, `FunctionTolerance`, and `StepTolerance` options of the corresponding optimization function are set to the `'AbsTol'` value.
For linear models, `sbiosteadystate` finds the roots of the flux function v by solving a linear system defined by the reaction and conservation equations. For linear models, there are no rate or algebraic equations.
|
2020-01-26 18:03:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7933835387229919, "perplexity": 1448.1445600735365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00551.warc.gz"}
|
https://ams-at-ucr.github.io/gradsem/years/2016-2017/
|
# Graduate Student Seminar
## Fridays 1:10–2:00 pm in Surge 284
### Organizers
Xavi Ramos-Olivé Kyle Castro olive@math.ucr.edu kcastro@math.ucr.edu Christina Knox Xander Henderson sargent@math.ucr.edu henderson@math.ucr.edu
### Spring 2017
9 June 2017
Annual Organizational Meeting
2 June 2017
Speaker: Xander Henderson
Title: The Complex Dimensions of Self-Similar Subsets of p-adic Product Spaces
Abstract: The higher dimensional theory of complex dimensions developed by Lapidus, Radunović, and Žubrinić provides a language for quantifying the oscillatory behaviour of the geometry of subsets of Rn. In this talk, we will describe how the theory can be extended to metric measure spaces that meet certain homogeneity conditions. We will provide examples from p-adic spaces and discuss the geometric information that can be recovered from the complex dimensions in these cases. This is an expanded version of the talk given at Math Connections 2017.
26 May 2017
Speaker: Charley Conley
Title: Curvature of Hypersurfaces in the Euclidean Ambient
Abstract: This talk will introduce curvature of surfaces in a friendly way. It will then describe how to find curvatures of hypersurfaces in n-dimensional Euclidean space without having to buy n-dimensional graph paper. Finally, it will discuss a relatively (2013) new curvature invariant from the joint work of Charley T. R. Conley, Rebecca Etnyre, Brady Gardener, Lucy H. Odom, and Dr. Bogdan Suceava, with some interpretation.
19 May 2017
Speaker: Daniel Cicala
Title: Universal algebra and functorial semantics
Abstract: In his landmark thesis, William Lawvere introduced "functorial sematics" to the study of universal algebra. This method turns certain algebraic definitions into an actual mathematic objects. This talk gives an introduction to functorial semantics and shows that everything you know is just a functor.
12 May 2017
Speaker: Christina Knox
Title: Recovery of Both Sound and Source in Photo-Acoustic Tomography
Abstract: Photo-acoustic tomography is an imaging method that attempts to combine the high resolution of ultrasound and the high contrast capabilities of electromagnetic waves. In this talk we will first introduce the mathematical problems photo-acoustic tomography presents. Uniqueness results will briefly be discussed for the situation when sound speed is known and the source term is to be recovered. Then the case when both sound speed and source term are unknown will be considered. Partial uniqueness results in this case proved by Liu and Uhlmann will be presented along with an outline of the proof which relies on the temporal Fourier transform.
5 May 2017
Speaker: Joe Moeller
Title: The Grothendieck Construction
Abstract: Alexander Grothendieck is inarguably one of the greatest mathematicians of the 20th century. His influence in algebra, geometry, and category theory is unavoidable. In this talk, we will see an intuitive development of the Grothendieck construction, one of many things named after him. We will also see a simple application in algebra.
28 April 2017
Speaker: Edward Voskanian
Title: Mathematical Quasicrystals and the Complex Roots of a Nonlattice Dirichlet Polynomial
Abstract: The discovery of quasicrystals in 1982 by Dan Shechtman, which are nonperiodic solids with a diffraction pattern consisting of bright spots, brought forth a surge of new mathematics with which to model the new geometry involved. Beyond the physical questions raised by the discovery of quasicrystals, there is at least one mathematical question: How can one construct a 'well ordered' pattern that is 'aperiodic'. An example of such a mathematical structure is the Penrose tiling, which was developed by Sir Rodger Penrose even before the discovery of quasicrystals. The complex roots of a nonlattice Dirichlet polynomial have an interesting quasiperiodic structure too. However, to actually see this pattern, one must approximate the roots of a polynomial with very large degree. In this talk, we will see where this quasiperiodic pattern comes from and discuss the numerical attempts to see more of it. We ask if there is a natural way in which the quasiperiodic pattern of the complex roots can be understood in terms of a suitable generalized quasicrystal.
21 April 2017
Speaker: Brandon Coya
Title: Graphs and Circuits
Abstract: Circuits have been widely studied in electrical engineering and physics. In this talk we will view circuits as special types of graphs. First we will look at which graph property determines the behavior of a circuit. Additionally we look at circuits in a compositional way," meaning that we can stick circuits together to form larger ones. This comes from allowing our graphs to have distinguished nodes be inputs or outputs. We can then understand any circuit by breaking it up into smaller pieces and first understanding the behavior of the smaller pieces.
14 April 2017
Speaker: Jonathan Wolfram Siegel (UCLA)
Title: Information Theory and Communication
Abstract: Data compression and reliable communication in the presence of noise make modern information technology possible. The goal of this talk is to introduce Shannon's source coding theorem and noisy channel coding theorem. These remarkable theorems put precise limits on lossless compression and communication over a noisy channel. In the process, I will introduce fundamental concepts in information theory, for instance the notion of entropy, which find applications in statistics and physics, as well as in the study of PDEs.
7 April 2017
Speaker: Nicholas Newsome (CSU Fresno)
Title: An investigation of power sums of integers
Abstract: Sums of powers of integers have been studied extensively for many centuries. The Pythagoreans, Archimedes, Fermat, Pascal, Bernoulli, Faulhaber, and other mathematicians have discovered formulas for sums of powers of the first n natural numbers. Among these is Faulhaber's well-known formula which expresses the power sums as polynomials whose coefficients involve Bernoulli numbers.
In this talk, we sketch an elementary proof that for each natural number p, the sum of pth powers of the first n natural numbers can be expressed as a polynomial in n of degree p + 1. We also present a novel identity involving Bernoulli numbers and use it to show symmetry of these polynomials. In addition, we make a few conjectures regarding the roots of these polynomials, and speculate on the asymptotic behavior of their graphs. Finally, we study the remainders of the power sums upon division by integers. In particular, we generalize a well-known result on congruence of Sp(n) from prime n to any power of prime and state periodicity properties of Sp(n) mod k.
### Winter 2017
17 March 2017
Speaker: John Simyani
Title: Pursuing the Poisson Cohomology of k-step Nilmanifolds
Abstract: Geometry comes in many flavors, and Nigel Hitchin and his students have, in a sense, added another: Generalized Geometry. In this talk, I will discuss the generalized geometry and Poisson cohomology of particular nilmanifolds, focusing on stabilization of their spectral sequences. Although much of the material will be from my completed oral exam, I will also try to present some common computational tools that are often needed when working with forms and vectors.
10 March 2017
Speaker: Kyle Castro
Title: Multiplicative character sums and their applications to problems in analytic number theory
Abstract: The main goal of this talk will be to present new and existing bounds on multiplicative character sums as well as to provide the reader with an understanding of the various applications. In particular, we will discuss how upper bounds on these character sums provide lower bounds on the size of a subgroup of a finite field containing the image of an interval of consecutive integers under a polynomial function.
3 March 2017
Speaker: Mike Pierce
Title: Functional programming in Mathematica: a crash course
Abstract: Do you ever program in Mathematica? Well, you might be doing it wrong. In this talk we'll start with the basics of low-level Mathematica programming, like expressions, the Head of an expression, and Set versus SetDelay. Then we'll cover List manipulation, how to program in a functional style using things like Map and MapThread. Then we'll cover some basics of pattern matching.
24 February 2017
Speaker: Ethan Kowalenko
Title: Root systems by example
Abstract: Math is hard without examples, so I'm gonna pilot a ship from the abstract sky to a more concrete jungle. Historically, Weyl groups arise in Lie Theory, and have been studied extensively in conjunction with so-called "root systems," which can seem quite abstract if you never look at one. In this talk, we'll consider Weyl groups as a special case of finite reflection groups, and examine some properties of root systems through small examples.
17 February 2017
Speaker: Edward Voskanian
Title: An introduction to mathematical quasicrystals
Abstract: The discovery of physical quasicrystals in 1982 led to a surge of new mathematics with which to model the new geometry involved. This talk will be a partial survey of a two part paper titled "Geometric Models for Quasicrystals" by Jeffery C. Lagarias, and our focus will be the first part titled "Delone Sets of Finite Type". A Delone set is a subset of n dimensional space whose points are, in some sense, evenly spread out. Because the property of being a Delone set of finite type is determined by "local rules", these sets form a natural class for modeling the long-range order of the atomic structure of physical quasicrystals.
10 February 2017
Speaker: Jesse Cohen
Title: Symbols and ellipticity
Abstract: Explicitly solving partial differential equations is difficult in general but, with a small shift in perspective, it is possible to make strong conclusions about existence and regularity of solutions by considering the equations themselves. We will discuss this shift via symbols of linear differential operators of a very general class, with examples and computations, the property of ellipticity, and provide a hint toward the theory of pseudodifferential operators.
3 February 2017
Speaker: Andrew Walker
Title: Finite free resolutions
Abstract: Linear algebra over a field is great. But, if you had to settle, linear algebra over a ring is not so bad. There are some complications though. For example, not every module is free anymore. In other words, we might have some nontrivial relations in a set of generators for our module. It gets worse: we could have relations among a set of generators for these relations... and so on.
27 January 2017
Speaker: Lawrence Mouillé
Title: Morse theory for distance functions
Abstract: Morse theory gives topological information about a manifold by studying the critical points of smooth real-valued functions defined on it. On a Riemannian manifold, one might try to apply Morse theory to distance functions because they are intrinsic to the manifold, as opposed to most "Morse functions" which are extrinsic (e.g. the height of a surface when embedded into an ambient space). Such functions, however, are almost never smooth, and thus don't have critical points in the usual sense. Luckily, Karsten Grove and Katsuhiro Shiohama developed a definition of critical points for distance functions using metric notions. They used this to prove the celebrated diameter sphere theorem, while establishing a useful tool along the way: the isotopy lemma. Because the isotopy lemma is a direct counterpart to a crucial result in classical Morse theory, it is natural to ask whether a full-fledged "Morse theory" can be developed for distance functions and this new critical point theory. I will present work of Barbara Herzog and Fred Wilhelm concerned with addressing this issue, comparing the results with those in classical Morse theory, and discuss future work for improving the scope of their results and finding new applications.
20 January 2017
Speaker: Dylan Noack
Title: The historical development of modern complex analysis
Abstract: Modern Complex Analysis was arguably founded by Augustine Cauchy in the early nineteenth century. In this talk we watch the history of complex analysis unfold, starting with the fundamentals of holomorphic functions discovered by Cauchy to the more recent notions of pseudoconvex domains introduced by Levi. Technical details will be kept to a minimum.
13 January 2017
Speaker: Christina Osborne (University of Virginia)
Title: The first step towards higher order chain rules for abelian calculus
Abstract: One of the most fundamental tools in calculus is the chain rule for functions. Huang, Marcantognini, and Young developed the notion of taking higher order directional derivatives, which has a corresponding higher order iterated directional derivative chain rule. When Johnson and McCarthy established abelian functor calculus, they constructed the chain rule for functors which is analogous to the directional derivative when n=1. In joint work with Bauer, Johnson, Riehl, and Tebbe, we defined an analogue of the iterated directional derivative and provided an inductive proof of the analogue to the HMY chain rule. Our initial investigation of this result involved a concrete computation of the case when n=2, which will be presented in this talk.
### Fall 2016
18 November 2016
Speaker: Daniel Cicala
Title: Spans of cospans
Abstract: We introduce the notion of a span of cospans and define, for them, horizonal and vertical composition. When in a topos C, these compositions satisfy the interchange law. A bicategory is then constructed from C-objects, C-cospans, and doubly monic spans of C-cospans. The primary motivation for this construction is an application to graph rewriting. Technical details will be kept to a minimum.
4 November 2016
Speaker: Joshua Buli
Title: The discontinuous Galerkin (DG) method
Abstract: This talk will be an introduction to the Discontinuous Galerkin (DG) method applied to conservation laws. The DG method is us a class of finite element methods that use discontinuous basis functions. The method will be introduced on the simple Burgers' equation, and then the DG method will be used to numerically solve the single BBM and coupled BBM system, which are used to model water waves moving through a channel. We then provide numerical tests to demonstrate the usefulness of the DG method.
28 October 2016
Speaker: Priyanka Rajan
Title: Cohomogeneity one manifolds
Abstract: Let G be a compact Lie group acting effectively on a compact Riemannian manifold M. We say that group action is by cohomogeneity k if the orbit space M/G has dimension k. And the manifold M is then called a cohomogeneity k manifold. In this talk, we will discuss some general results regarding the curvature properties of cohomogeneity 1 manifolds.
21 October 2016
Speaker: Xavier Ramos Olivé
Title: An introduction to geometric mechanics
Abstract: During the 19th century, J.L. Lagrange and W.R. Hamilton reformulated classical mechanics by introducing equations that were independent of the chosen coordinate system. Moreover, their formulation allowed the study of constrained systems: we can study the motion of a particle on the surface of a sphere without understanding the force that keeps the particle attached to it. This lead to the development, during the 20th century, of Geometric Mechanics, that studies Lagrangian and Hamiltonian mechanics using differential geometry. This talk will be an introduction to Geometric Mechanics, with the goal to motivate the study of analysis in manifolds, as well as geometric objects like symplectic manifolds, Poisson manifolds, or Lie groups and Lie algebras.
14 October 2016
Speaker: Sean Watson
Title: An introduction to the worm-ridden Laakso spaces
Abstract: Laakso Spaces were first introduced by Tomi Laakso in 2000 as examples of Q-Regular spaces admitting a (1,1)-weak Poincare inequality, for any Q>1. In other words, they are examples of geometrically strange spaces that are strong enough that most analysis can still be done. The Laakso spaces were the first general examples of such spaces, with the added bonus that they are relatively simple spaces to work within. Geometrically we can picture the Laakso spaces as Cantor sets crossed with the unit interval, along with a countably dense collection of wormholes throughout the space connecting it all together. This talk will focus on constructing a simple Laakso space and, if time permits, a related Laakso graph while keeping technical details to a minimum.
7 October 2016
Speaker: Lawrence Mouillé
Title: What is comparison geometry, and why does anyone care?
Abstract: From Euclid to Gauss to Riemann to Nash to current mathematicians, so-called Riemannian geometry has had an interesting and complicated development. In this talk I will outline this story, describe the goals of comparison geometry and global Riemannian geometry, and present important results in this area. Technical details will be kept to a minimum, and all who are interested are encouraged to attend.
30 September 2016
Speaker: Jesse Cohen
Title: Operator algebras and topological K-Theory
Abstract: Vector bundles—assignments, in an appropriate sense of a vector space to every point of a topological space—are beautiful objects that arise naturally in many contexts in mathematics and physics. In particular, in topological K-theory, these structures form the basic building blocks of homotopy invariants of pointed spaces called K groups which, roughly speaking, tell us something about how twisted a vector bundle over a given space can be. In this talk, we will examine the construction of K0 and K1 via operator algebras and discuss the K-theory exact sequence.
23 September 2016
Speaker: Xander Henderson
Title: A brief introduction to non-Archimedean Fields
Abstract: Non-archimedean fields are an often overlooked collection of mathematical objects that are nevertheless beautiful and compelling. In this talk, we will discuss an analytic and an algebraic approach to constructing the p-adic numbers—the standard examples of non-archimedean fields—and explore some of their basic algebraic and topological properties.
|
2020-11-26 04:32:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5909972786903381, "perplexity": 707.7963505142277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141186414.7/warc/CC-MAIN-20201126030729-20201126060729-00296.warc.gz"}
|
https://codereview.stackexchange.com/questions/195515/shift-one-value-of-array-values-to-the-beginning/195528
|
# Shift one value of array values to the beginning
Given an integer $value in the range of 1, 2, 3, I want to get an array that starts with$value and proceeds with the remaining two integers from above in arbitrary order. The best I could think of was doing something like this:
switch ($value) { case '1':$array = [1,2,3];
break;
case '2':
$array = [2,1,3]; break; case '3':$array = [3,1,2];
break;
}
Is there a shorter and more beautiful way of doing that? Something like that:
$array = push_to_top_of_array($value,[1,2,3]);
• I should have asked earlier... how is $array used after this code? would there ever be a case where other values would exist in that array? – Sᴀᴍ Onᴇᴌᴀ May 31 '18 at 15:00 • @SamOnela the array goes through a foreach loop. There is no case where other values would exist. – Adam May 31 '18 at 15:49 • Okay- what does the foreach loop do with it? Can you describe the output of the script? For code review, it is best to have a broad picture of what the code does. – Sᴀᴍ Onᴇᴌᴀ May 31 '18 at 15:50 ## 4 Answers Is there a shorter and more beautiful way of doing that? • shorter: yes • more beautiful: well, that sounds subjective... you can be the judge of the approaches below. One approach would be to take off the value using$index with array_splice() and then put it (the first element from that spliced array) at the beginning using array_unshift():
$array = [1, 2, 3]; array_unshift($array, array_splice($array,$value - 1, 1)[0]);
Another approach would be to merge the spliced array and the original array using array_merge():
$array = [1, 2, 3];$array = array_merge(array_splice($array,$value - 1, 1), $array); See it demonstrated in this playground example. Yes, for three array elements you can write:$arr = [
($value - 1) % 3 + 1, ($value + 0) % 3 + 1,
($value + 1) % 3 + 1 ]; The main ingredient here is the$value modulo 3 expression. The above code contains some redundancies, which I have kept to clearly show the construction of the code. If you absolutely need faster code instead of readable code, the above is equivalent to:
$arr = [$value,
$value % 3 + 1, 5 -$value - $value % 3 ]; Or, the brute force variant: // To be executed only once in the program.$arrs = [[], [1, 2, 3], [2, 3, 1], [3, 1, 2]];
// And then, whenever you need it:
$arr =$arrs[$value]; I hate hardcoded solutions and prefer generalized ones. You never know when your array will get a 4th element and break all the application. So a generalized solution would be like • get a key for the desired value • remove this element from array • add its value to the beginning of an array in PHP it would be like function push_to_top_of_array($value, $array) {$key = array_search($value,$array);
if ($key === false) { throw new \OutOfRangeException("Value not found"); } unset($array[$key]); array_unshift($array, $value); return$array;
}
$value = 2;$array = [3, 5, 2, 4];
$array = push_to_top_of_array($value, $array); I'm not sure if "more beautiful" means you want a one-liner, but here are two more concise ways to perform the task: *note, this doesn't validate the$value as being one of the element values -- if this is a necessary component of your project, then please clarify in your question.
Code: (Demo)
$value = 2;$array = [1, 2, 3];
array_unshift($array,$value); // prepend a duplicate
var_export(array_unique($array)); // kill the original echo "\n---\n";$value = 2;
$array = [1, 2, 3]; var_export(array_unique(array_merge([$value],\$array))); // prepend a duplicate, kill the original
Output:
array (
0 => 2,
1 => 1,
3 => 3,
)
---
array (
0 => 2,
1 => 1,
3 => 3,
)
On huge arrays, statistics have shown array_flip(array_flip()) out performs array_unique() but that would certainly be less beautiful and if you were dealing with big array I'm sure you would have mentioned that.
|
2019-10-20 20:53:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3015105426311493, "perplexity": 1805.2541927289572}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00424.warc.gz"}
|
https://www.cheenta.com/electric-field-from-electric-potential/
|
• No products in the cart.
# Electric Field from Electric Potential
If the electric potential is given by $$\chi=cxy$$, calculate the electric field.
Discussion:
$$E_x=-\frac{\partial\chi}{\partial x}=-cy$$
$$E_y=\frac{\partial \chi}{\partial y}=-cx$$
Hence electric field $$\vec{E}=-c(y\hat{i}+x\hat{j})$$
September 21, 2017
### 1 comment
1. This really is incredible as well as what I’d been searching for. Thank you for this.
|
2017-11-21 19:02:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.759217381477356, "perplexity": 1766.9748181211137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806422.29/warc/CC-MAIN-20171121185236-20171121205236-00692.warc.gz"}
|
http://electronics.stackexchange.com/questions/98114/why-are-there-only-four-passive-elements
|
# Why are there only four passive elements?
I've read that there are four types of passive elements: resistances, capacitors, inductors and memristors.
The memristor was predicted 30 years before it was produced. But why couldn't you invent other type of passive element? Is there a proof?
The definition I'm using of passive elements is something with no gain, no control and linear.
-
There's this spiffy graphic, which you might have seen. en.wikipedia.org/wiki/… Unfortunately I just find myself staring at it and thinking about memristors, rather than feeling like the question has been answered. – Phil Frost Jan 29 at 20:52
@PhilFrost Clearly I'm not the only one who likes that graphic! – Stephen Collings Jan 29 at 20:56
I think it's important to keep in mind that every wire displays resistance, capacitance and inductance. These are ideal circuit elements but in real life they are characteristics of pretty much every circuit element. The memristor doesn't fit that mold. You can't talk about the "memristance" of a wire. In my mind, the memristor does not belong in the same set as resistance, capacitance, and inductance. – Joe Hass Jan 29 at 21:28
I'm using of passive elements is something with no gain, no control and linear. Then the memristor is not a passive element since it is non-linear (except for the trivial case where it is just a resistor). According to Wiki, for the memristor we have: $v = M(q)i$ where q is understood to be the time integral of $i$. If $M(q)$ is constant, $v \propto i$ and, thus, we have a resistor. Otherwise, $v$ is not a linear function of $i$. For example, if $M(q) = mq$ then $\frac{dv}{dt} = m(i^2 + q\frac{di}{dt})$ – Alfred Centauri Jan 30 at 17:24
@jinawee, if a passive element must be linear, the memristor is not a passive element. From the Wiki article "Memristor": In his 1971 paper, Chua extrapolated a conceptual symmetry between the nonlinear resistor (voltage vs. current), nonlinear capacitor (voltage vs. charge) and nonlinear inductor (magnetic flux linkage vs. current). He then inferred the possibility of a memristor as another fundamental nonlinear circuit element linking magnetic flux linkage and charge. – Alfred Centauri Jan 30 at 18:18
There are four physical quantities of interest for electronics: voltage, flux, charge, and current. If you have four things and want to pick two, order not mattering, there are 4C2 = 6 ways to do that. Two of the physical quantities are defined in terms of the other two. (Current is change in charge over time. Voltage is change in flux over time.) That leaves four possible relationships: resistance, inductance, capacitance, and memristance.
If you want another fundamental component, you need another physical quantity to relate to these four. And while there are many physical quantities one might measure, none seem so tightly coupled as these. I'd suppose this is because electricity and magnetism are two aspects of the same force. I'd further suppose that since electromagnetism is now understood to be part of the electroweak force, one might be able to posit some relationships between the weak nuclear interaction and our four elements of voltage, current, charge, and flux.
I haven't the first clue how this would be physically manifested, especially given the relative weakness of the weak nuclear force at anything short of intranuclear distances. Perhaps in the presence of strong magnetic or electrical fields affecting the rates of radioactive decay? Or in precipitating or preventing nuclear fusion? I'd yet further suppose (I'm on a roll) that the field strengths required would be phenomenal, which is why they're not practical for everyday engineering.
But that's a lot of supposition. I am a mere engineer, and unqualified to speculate on such things.
-
I think it's more like "someone decided there are four physical quantities of interest for electronics". And really maybe there are only two, since charge is the integral of current, and flux the integral of voltage. Temperature is pretty important. So is power, or its derivative, energy. Or maybe I want to integrate flux to get a new thing, and define a component about that. – Phil Frost Jan 29 at 21:24
I think maybe the proof lies in the requirement (set in the question) that these passive components are linear, and that means that they have some linear relationship between current and voltage, thus there can't be other physical quantities of interest, by definition. But I'm just guessing. – Phil Frost Jan 29 at 21:26
Resistance is not defined as $R = \frac{dv}{di}$ but, rather, as the constant of proportionality of voltage and current, $R = \dfrac{v}{i}$ so, at best, this graphic is misleading. For example, an ideal voltage source in series with an ideal resistance $R$ satisfies $R = \frac{dv}{di}$ but such a combination is not a fundamental passive circuit element. – Alfred Centauri Jan 30 at 4:05
@AlfredCentauri There's a bit of explanation in the Wikipedia article for memristor that explains why everything was written as differential equations. I can't say I follow it (I don't speak math very well), but I understood it as "because it makes it easier to argue for memristors." – Phil Frost Jan 30 at 12:51
Personally I would have chosen to define M the other way round, so dq=MdΦ, then you could compare with dq=Cdv and justifiably call them flux-capacitors – Pete Kirkham Jan 30 at 13:49
But why couldn't you invent other type of passive element? Is there a proof?
Well, there is a proof, but it's circular. If you take "the four fundamental electronic variables", there are only six ways to combine them linearly. Four of the ways are components, and the other two are definitions. Stephen's answer explains this well. There are only four passive components because whoever made that claim only allowed four variables.
I can "invent" more "missing components" by introducing more variables. Current is the derivative of charge with respect to time:
$$i = \frac{\mathrm dq}{\mathrm dt}$$
I'm going to define a new term: surgingness. It's the derivative of current with respect to time:
$$s = \frac{\mathrm di}{\mathrm dt}$$
Mind blown? Put it back together. We do this all the time in physics. These sequences are analogous:
• position, velocity, acceleration
• charge, current, surgingness
We can differentiate variables as many times as we want and give the results names, if we want. Physics even has a name for the derivative of acceleration: jerk.
Now we can stick surgingness in that graphic from Stephen's answer. It goes below and to the left of current.
Now we can ask, what's the component that connects surgingness with voltage? It would be a component that obeys:
$$\mathrm dv = P \mathrm ds$$
I'm going to call $P$ Philistance. The component is called a Philator.
What's the utility of this component? I haven't a clue, but I predict it exists. In a few decades, when it's invented, I'll say "I told you so" and be famous.
-
I think you're just a Philistine. – hobbs Jan 30 at 1:55
If $s = \frac{di}{dt}$ and $dv = P ds$ then $v = P\frac{di}{dt} + V$, i.e., the Philator is just an inductor in series with a constant voltage source. – Alfred Centauri Jan 30 at 4:53
@AlfredCentauri Which means that if you make a passive Philator, you will indeed be very famous. – Buhb Jan 30 at 6:51
@Buhb, a passive Philator would be like a married bachelor. – Alfred Centauri Jan 30 at 12:41
@AlfredCentauri If you say so. I never was very good at math :) I was wondering, what if I integrate charge and integrate flux, then imagine there is some passive component there. Perhaps a "Forgistor"? Or is that also some combination of things we already have? – Phil Frost Jan 30 at 12:45
jinawee,
I think there are a large number of "passive" components yet to be both discovered and invented. "Passive" is a somewhat deceptive and ambiguous term we use in electronics. In electronics we have a lot of loose terminology that throws beginners a curve ball. You would think that for an exact science we would use more exact language. Not so.
As other posters have indicated the big three passives are resistors, capacitors and inductors. I don't know about this memristor gizmo. In my 50+ years of electronics experience I never held one in my hand or had one come up in a circuit design I've worked on.
Nevertheless, I think if you could come up with a device which could convert frequency to a proportional DC voltage, like a thermocouple converts temperature to voltage, you might join the likes of Michael Faraday in EE Heaven.
Likewise, if you could invent a device which converts electron flow directly to sound without the use of a magnet and coil, you might be onto something big as well.
Or for that matter an elastic material that directly converts current to motive force - the elusive artificial muscle tissue. That would forever change the world of pornography as much as Michael Faraday's vibrating coil did.
It's been quite a while since the EE world has enjoyed a new passive component. Keep us posted on your progress.
-
convert frequency to a proportional DC voltage - you mean like a low pass filter commonly used to convert a PWN output to a voltage for a cheap,DAC? – Michael Jan 30 at 4:53
a device which converts electron flow directly to sound without the use of a magnet and coil - or what about a device that bends light without the use of a material lens? – Michael Jan 30 at 4:56
an elastic material that directly converts current to motive force - or a holo-diode? – Michael Jan 30 at 4:57
@Michael "what about a device that bends light without the use of a material lens?" Look up in the sky on a clear day and you'll see one shining bright. – JAB Jan 30 at 13:45
@JAB I'd be more worried about what would happen if the planet fell on the device. – AJMansfield Jan 30 at 23:22
|
2014-11-01 04:02:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7639377117156982, "perplexity": 957.0784578682059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637903638.13/warc/CC-MAIN-20141030025823-00029-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://greprepclub.com/forum/if-x-and-y-are-integers-and-w-x-2y-x-3y-which-8058.html
|
It is currently 22 Mar 2019, 14:38
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
Your Progress
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
If x and y are integers, and w = x^2y + x + 3y, which
Question banks Downloads My Bookmarks Reviews Important topics
Author Message
TAGS:
Senior Manager
Joined: 20 May 2014
Posts: 282
Followers: 18
Kudos [?]: 50 [0], given: 220
If x and y are integers, and w = x^2y + x + 3y, which [#permalink] 20 Nov 2017, 10:47
00:00
Question Stats:
37% (01:57) correct 62% (00:55) wrong based on 8 sessions
If x and y are integers, and $$w = x^2y + x + 3y$$, which of the following statements must be true?
Indicate all such statements.
A. If w is even, then x must be even.
B. If x is odd, then w must be odd.
C. If y is odd, then w must be odd.
D. If w is odd, then y must be odd.
Kudos for correct solution.
[Reveal] Spoiler: OA
A, B, and C
[Reveal] Spoiler: OA
Manager
Joined: 15 Jan 2018
Posts: 147
GMAT 1: Q V
Followers: 3
Kudos [?]: 182 [1] , given: 0
Re: If x and y are integers, and w = x^2y + x + 3y, which [#permalink] 21 Feb 2018, 10:56
1
This post received
KUDOS
A good technique for problems asking what must be true is to try and make them not true. If you can't, it must be true. Also, a good technique for odd/even questions is to just test using your own odd and even numbers. Since every odd number will behave the same as every other in regards to oddness or evenness, it doesn't matter what number you choose. I like to use 0 and 1 for even and odd, respectively, since they tend to make the math very simple.
Also, it seems like a good idea to factor the equation since they probably didn't give it to us in the easiest format. One way to factor it is
y(3 + x^2) + x
A: I'll try to find a way to make w even when x is odd. Plugging in 1 for x gets me 4y + 1. It doesn't matter what y is; what we've got here is an even number plus 1, which must be odd. So there's no way to make an even number if x is odd. Thus A is in.
B: Well we just tried making x odd and found that w must be odd if x is, so B is in.
C: Let's see what happens if we substitute y for 1: We'll get 1(3 + x^2) + x or just x^2 + x + 3. We'll need to check what happens when x is odd and when x is even:
If x is 0, we get 0 + 0 + 3, which is odd.
If x is 1, we get 1 + 1 + 3, which is odd.
So it looks like C is in as well.
D: Let's try making y even and see whether we can get w to be odd:
If we make y = 0, we'll have 0(3 + x^2) + x, which is just x. So if we pick x to be odd, then w would be odd. And since we got an odd w with an even y, D is out.
_________________
-
-
-
-
-
Need help with GRE math? Check out our ground-breaking books and app.
Re: If x and y are integers, and w = x^2y + x + 3y, which [#permalink] 21 Feb 2018, 10:56
Display posts from previous: Sort by
If x and y are integers, and w = x^2y + x + 3y, which
Question banks Downloads My Bookmarks Reviews Important topics
Powered by phpBB © phpBB Group Kindly note that the GRE® test is a registered trademark of the Educational Testing Service®, and this site has neither been reviewed nor endorsed by ETS®.
|
2019-03-22 22:38:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4521092176437378, "perplexity": 1073.3243226148295}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202698.22/warc/CC-MAIN-20190322220357-20190323002357-00340.warc.gz"}
|
http://mathoverflow.net/questions/83679/automorphic-forms-and-quantum-groups
|
# Automorphic forms and quantum groups
The paper Eisenstein series and quantum affine algebras by Kapranov makes contact between automorphic forms and quantum groups. I haven't found even one other paper devoted to this theme.
Have other authors come at this, perhaps from other perspectives?
-
|
2016-06-27 00:31:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.864719033241272, "perplexity": 1506.0673037367078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00129-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://www.khanacademy.org/kmap/geometry-g/xb12714f3a9120d2e:g220-3d-figures/g220-volume-with-fractions/a/volume-with-cubes-of-fractional-side-lengths
|
If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.
# Volume with cubes of fractional side lengths
Before, we've found volume by seeing how many cubes with 1-unit side lengths would fit into an object. Find out what happens when we find volume with smaller cubes.
## What is volume?
Volume is the amount of 3-dimensional space an object occupies. We measure volume in cubic units.
For example, the rectangular prism below has a volume of 24 cubic units because it is made up of 24 unit cubes.
We can also find the volume of a rectangular prism by multiplying the side lengths.
4, start text, u, n, i, t, s, end text, dot, 2, start text, u, n, i, t, s, end text, dot, 3, start text, u, n, i, t, s, end text, equals, 24, start text, c, u, b, i, c, space, u, n, i, t, s, end text
That works well when we can fill the prism completely with unit cubes. How could we find volume when a prism has fractional side lengths and spaces too small to fill with unit cubes?
## Filling a unit cube with smaller cubes
Let's try starting with smaller cubes.
1.1
This is a cubic centimeter because each of its sides is 1, start text, c, m, end text long.
How many dice with edge lengths of start fraction, 1, divided by, 2, end fraction, start text, c, m, end text do we need to fill the cubic centimeter?
dice
1.2
What is the volume, in cubic centimeters, of a die with edge lengths of start fraction, 1, divided by, 2, end fraction, start text, c, m, end text?
cubic centimeters
## Filling a rectangular prism with smaller cubes
Let's consider the following rectangular prism.
2.1
What is the volume of the prism?
start text, c, m, end text, cubed
2.2
Label how many dice with edge lengths of start fraction, 1, divided by, 2, end fraction, start text, c, m, end text would fit across the length, width, and height of the same prism.
Click each dot on the image to select an answer.
2.3
Based on the numbers of dice you found above, how many dice with edge lengths of start fraction, 1, divided by, 2, end fraction, start text, c, m, end text would it take to fill the prism?
dice
Before continuing, take a minute to tell a friend how you know the number of dice it would take to fill the prism.
2.4
How does the volume of the prism, in cubic centimeters, relate to the number of dice with edge lengths of start fraction, 1, divided by, 2, end fraction, start text, c, m, end text it takes to fill the prism?
The volume of the prism, in cubic centimeters, is
times the number of dice it takes to fill it.
Why is the number of dice different from the volume?
2.5
Suppose we fill the following prism with dice with start fraction, 1, divided by, 2, end fraction, start text, c, m, end text side lengths.
What is the product of the number of dice and the volume per die, and what does that product represent?
## Finding volume with cubes with fractional side lengths
Now we know 2 different ways to find the volume of a prism with whole number side lengths:
1. Find the number of cubes of some size that would fit it and multiply by the volume per cube.
2. Multiply the side lengths.
Either method gives us the same volume. Do you have another method? Tell us about it below in the comments.
## Prisms with fractional side lengths
Suppose we fill the following prism with cubes with side lengths of start fraction, 1, divided by, 4, end fraction, start text, c, m, end text.
How would you find the number of cubes that fill the prism?
3.1
How many cubes with side lengths of start fraction, 1, divided by, 4, end fraction, start text, c, m, end text does it take to fill the prism?
cubes
3.2
What is the volume, in cubic centimeters, of a cube with side lengths of start fraction, 1, divided by, 4, end fraction, start text, c, m, end text?
start text, c, m, end text, cubed
3.3
What is the volume of the prism?
start text, c, m, end text, cubed
3.4
How is the number of cubes with side lengths of start fraction, 1, divided by, 4, end fraction, start text, c, m, end text related to the volume, in cubic centimeters, of the prism?
The number of cubes is
times the cubic centimeters of volume of the prism.
3.5
Evaluate 1, start fraction, 1, divided by, 4, end fraction, start text, c, m, end text, times, 2, start fraction, 1, divided by, 2, end fraction, start text, c, m, end text, times, 3, start text, c, m, end text (the product of the dimensions of the prism).
start text, c, m, end text, cubed
## Summary
Both strategies of finding volume work with rectangular prisms with fractional side lengths, too! Describe those 2 strategies to a friend.
Do you have another way of finding volume when the rectangular prism has fractional side lengths? Tell us about it in the comments.
## Want to join the conversation?
• This is so confusing!
• OMG this is so confusing i only got like 3 questions right
• This can be confusing at time, so here's some tips!
Anything to the power of 3, is cubed, so it's multiplied by itself 3 times.
Anytime you have mixed numbers, just turn everything into a fraction (remember to simplify!) and multiply by the least common multiple (LCM)!
Area can get annoying, but think of it like your bedroom! By measuring two of your walls by your floor, you can see how many cubes could fill your room! (2 x 3 x 10 = 60)
• wow, thanks so much! this explanation made the mixed numbers way easier :0 i think i'll still need help from my classmates or something though lol those are hard
• I am kinda confused
|
2023-02-08 11:29:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 67, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5403790473937988, "perplexity": 1777.506501557009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00431.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-11-section-11-2-arithmetic-sequences-exercise-set-page-839/12
|
## Intermediate Algebra for College Students (7th Edition)
The first six terms are: $200, 140, 80, 20, -40, -100$
|
2018-11-18 18:10:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32130834460258484, "perplexity": 6672.648868156814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744561.78/warc/CC-MAIN-20181118180446-20181118201645-00033.warc.gz"}
|
http://mathoverflow.net/questions/19599/about-the-gauss-map-of-a-surface-in-euclidean-3-space
|
# About the Gauss map of a surface in euclidean 3 space
Regarding the sphere as complex projective line (take $(0,0,1)$ as the infinite point), the Gauss map of a smooth surface in the 3 dimensional space pulls a complex line bundle back on the surface.
My question is, what the bundle is? (In the trivial case, if the surface is sphere itself, the bundle is just the tautological line bundle.)
Does the chern class (Of course the first one) of this bundle depend on the embedding of the surface? (The Jacobian determinant of Gauss map is just the Gauss curvature, hence is intrinsic. Also its degree is the Euler $\chi$, so I ask for more...)
If yes, how much does the chern class/bundle reflect the geometry of embedding?
There may be something to make the question meaningless, such as there is no cannonical way to identify a sephere with the projective line... But as a beginner in learning geometry, I am still curious to it...
-
You forgot to divide $\chi$ by 2. – Sergei Ivanov Mar 28 '10 at 10:04
I assume that your surface is closed. Suppose you have a fixed vector bundle $\xi$ over $S^2$ (no matter which one). You have an oriented surface $M$ embedded in $\mathbb R^3$, which defines the Gauss map $\nu:M\to S^2$, which defines the vector bundle $\nu^*\xi$ on $M$. You want to know whether $\nu^*\xi$ depends on the embedding.
No it does not. Indeed, $\deg\nu=\chi(M)/2$ regardless of the embedding. Two maps $f_1,f_2:M\to S^2$ having the same degree are homotopic. And homotopic maps induce the same bundle.
Concerning the Chern class, we have $c_1(\nu^*\xi)=\nu^*(c_1(\xi))$ by definition. So, if you identify the top cohomology with integers, then $c_1(\nu^*\xi)$ is (the same number as) $\deg(\nu) c_1(\xi)=\frac12\chi(M)c_1(\xi)$.
-
Let me add something: As Sergei said, the pull-back bundle (without any geometric structure) will be the same. Nethertheless, there is more structure: Regrading your smooth embedded surface as a Riemann surface $M$ (induced from the metric or first fundamental form), you have the canonical bundle $K\to M$ on it (the bundle of complex linear 1-forms). This bundle is in a natural way a holomorphic bundle, holomorphic sections are exactly the closed complex linear forms. Now, on every Riemann surface, there are exactly $2^{2g}$ complex holomorphic line bundles $S\to M$ which satisfy $S^2=K$ holomorphically. These bundles are called spin-bundles.
It turns out that your pull-back bundle is a spin bundle of the Riemann surface $M.$ Moreover, the type of the spin-bundle tells you something about the way your surface is embedded. For example, the spin-bundle of an embedding has no global holomorphic section. Moreover two immersions are homotopic, iff the spin bundles are the same. There is a nice paper of Pinkall ("Regular homotopy classes of immersed surfaces") were all these questions are answered.
-
|
2015-10-10 05:45:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98366779088974, "perplexity": 351.3547665250646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737940794.79/warc/CC-MAIN-20151001221900-00163-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://eprints.soton.ac.uk/63696/
|
The University of Southampton
University of Southampton Institutional Repository
# Clinical studies of real-time monitoring of lithotripter performance using passive acoustic sensors
Leighton, T.G., Fedele, F., Coleman, A.J., McCarthy, C., Ryves, S., Hurrell, A.M., De Stefano, A. and White, P.R. (2008) Clinical studies of real-time monitoring of lithotripter performance using passive acoustic sensors. Evan, A.P., Lingeman, J.E., McAteer, J.A. and Williams Jr, J.C. (eds.) In Renal Stone Disease 2: 2nd International Urolithiasis Research Symposium. vol. 1049, American Institue of Physics. pp. 256-277 . .
Record type: Conference or Workshop Item (Paper)
## Abstract
This paper describes the development and clinical testing of a passive device which monitors the passive acoustic emissions generated within the patient's body during Extracorporeal Shock Wave Lithotripsy (ESWL). Designed and clinically tested so that it can be operated by a nurse, the device analyses the echoes generated in the body in response to each ESWL shock, and so gives real time shock-by-shock feedback on whether the stone was at the focus of the lithotripter, and if so whether the previous shock contributed to stone fragmentation when that shock reached the focus. A shock is defined as being effective' if these two conditions are satisfied. Not only can the device provide real-time feedback to the operator, but the trends in shock effectiveness' can inform treatment. In particular, at any time during the treatment (once a statistically significant number of shocks have been delivered), the percentage of shocks which were effective' provides a treatment score TS(t) which reflects the effectiveness of the treatment up to that point. The TS(t) figure is automatically delivered by the device without user intervention. Two clinical studies of the device were conducted, the ethics guidelines permitting only use of the value of TS(t) obtained at the end of treatment (this value is termed the treatment score TS0). The acoustically-derived treatment score was compared with the treatment score CTS2 given by the consultant urologist at the three-week patient's follow-up appointment. In the first clinical study (phase 1), records could be compared for 30 out of the 118 patients originally recruited, and the results of phase 1 were used to refine the parameter values (the rules') with which the acoustic device provides its treatment score. These rules were tested in phase 2, for which records were compared for 49 of the 85 patients recruited. Considering just the phase 2 results (since the phase 1 data were used to draw up the rules' under which phase 2 operated), comparison of the opinion of the urologist at follow-up with the acoustically derived judgment identified a good correlation (kappa = 0.94), the device demonstrating a sensitivity of 91.7% (in that it correctly predicted 11 of the 12 treatments which the urologist stated had been successful' at the 3-week follow-up), and a specificity of 100% (in that it correctly predicted all of the 37 treatments which the urologist stated had been unsuccessful' at the 3-week follow-up). The gold standard' opinion of the urologist (CTS2) correlated poorly (kappa = 0.38) with the end-of-treatment opinion of the radiographer (CTS1). This is due to the limited resolution of the lithotripter X-Ray fluoroscopy system. If the results of phase 1 and phase 2 are pooled to form a dataset against which retrospectively to test the rules drawn up in phase 1, when compared with the gold standard CTS2, over the two clinical trials (79 patients) the device-derived scored (TS0) correctly predicted the clinical effectiveness of the treatment for 78 for the 79 patients (the error occurred on a difficult patient with a high body mass index). In comparison, using the currently available technology the in-theatre clinician (the radiographer) provided a treatment score CTS1 which correctly predicted the outcome of only 61 of the 79 therapies. In particular the passive acoustic device correctly predicted 18 of the 19 treatments that were successful (i.e. 94.7 sensitivity), whilst the current technology enabled the in-theatre radiographer to predict only 7 of the 19 successful treatments (i.e. 36.8 sensitivity). The real-time capabilities of the device were used in a preliminary examination of the effect of ventilation.
Full text not available from this repository.
Published date: 18 April 2008
Venue - Dates: Renal Stone Disease 2: 2nd International Urolithiasis Research Symposium, 2008-04-17 - 2008-04-18
Keywords: lithotripsy, cavitation, kidney stone fragmentation, eswl, passive acoustic sensor
## Identifiers
Local EPrints ID: 63696
URI: https://eprints.soton.ac.uk/id/eprint/63696
ISBN: 9780735405776
PURE UUID: 64c85e22-9937-425d-bcf4-62ca92e5313c
ORCID for T.G. Leighton: orcid.org/0000-0002-1649-8750
ORCID for P.R. White: orcid.org/0000-0002-4787-8713
## Catalogue record
Date deposited: 28 Oct 2008
## Contributors
Author: T.G. Leighton
Author: F. Fedele
Author: A.J. Coleman
Author: C. McCarthy
Author: S. Ryves
Author: A.M. Hurrell
Author: A. De Stefano
Author: P.R. White
Editor: A.P. Evan
Editor: J.E. Lingeman
Editor: J.A. McAteer
Editor: J.C. Williams Jr
|
2019-06-16 10:29:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5081522464752197, "perplexity": 7191.716682520835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00384.warc.gz"}
|
https://brilliant.org/problems/no-title-for-now-9/
|
# I Was Very Amazed At The Solution 10
Geometry Level 3
Evaluate:
$\sum_{x = 45}^{89} \cot(x^\circ) - \sum_{x = 46}^{89} \big(\cot(2x^\circ) + \csc(2x^\circ)\big)$
For more problems like this, try this set.
×
|
2018-03-24 12:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177958726882935, "perplexity": 6759.4001814879875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650262.65/warc/CC-MAIN-20180324112821-20180324132821-00105.warc.gz"}
|
https://idrissi.eu/en/
|
##### Najib Idrissi
###### Maître de conférences
Hello! I am a maître de conférences at the math department of the University of Paris and a member of the team-project Algebraic Topology & Geometry of the Institut de Mathématiques de Jussieu–Paris Rive Gauche. I am one of the organizers of the Topology Seminar of the IMJ-PRG. You can find more info in my CV.
I am mainly interested in operads and their applications to algebraic topology and homological algebra. I am especially interested in the study of configuration spaces of manifolds, their links to graph complexes, and the invariants they define.
I gave a Peccot lecture at the Collège de France in Spring 2020, you can find it here.
(Last updated on Jun 26, 2020)
## Research
PDF arXiv Source
PDF arXiv Source
##### The Lambrechts–Stanley Model of Configuration Spaces. In: Invent. Math 216.1, pp. 1–68, 2019.
PDF DOI arXiv MR Zbl Source
PDF arXiv Source
PDF arXiv Source
PDF arXiv Source
PDF arXiv Source
##### Swiss-Cheese Operad and Drinfeld Center. In: Israel J. Math 221.2, pp. 941–972, 2017.
PDF DOI arXiv MR Zbl Source
## Talks
##### Toric Topology Research Seminar (online)– Apr 23, 2020, Fields Institute (online)
Slides
Real homotopy of configuration spaces
Abstract: Configuration spaces consist of ordered collected ofpairwise distinct points in a given manifold. In this talk, I will present several algebraic models for the real/rational homotopy types of (possibly framed) configuration spaces. These models canbe used to establish real/rational homotopy invariance of configuration spaces under dimensionality and connectivity assumptions. Moreover, the collection of all configuration spacesof a given manifold has the structure of a right module over some version of the little disks operad, and the algebraic models are compatible with this extra structure. The proofs all use ideas from the theory of operads, namely Kontsevich’s proof of the formality of the little disks operad and – for oriented surfaces – Tamarkin’s proof of the formality of the little 2-disks operad.(Based on joint works with Campos, Ducoulombier, Lambrechts, and Willwacher.)
##### Málaga & Topology Meeting– Feb 5, 2020, Universidad de Málaga
Real homotopy of configuration spaces
Abstract: I will present several algebraic models for the real/rational homotopy types of (ordered) configuration spaces of points and framed points in a manifold. These models can be used to establish real/rational homotopy invariance of configuration spaces under dimensionality and connectivity assumptions. Moreover, the collection of all configuration spaces of a given manifold has the structure of a right module over some version of the little disks operad, and the algebraic models are compatible with this extra structure. The proofs all use ideas from the theory of operads, namely Kontsevich’s proof of the formality of the little disks operad and – for oriented surfaces – Tamarkin’s proof of the formality of the little 2-disks operad. (Based on joint works with Campos, Ducoulombier, Lambrechts, and Willwacher.)
##### Seminar– Jan 17, 2020, Aarhus Universitet
Factorization homology and configuration spaces
Abstract: Factorization homology is a homology theory for structured manifolds (e.g. oriented or parallelized) which finds its roots in topological and conformal field theory (cf. Beilinson–Drinfeld, Salvatore, Lurie, Ayala–Francis, Costello–Gwilliam among others). After defining factorization homology, I will explain how to compute it for simply connected closed manifolds over the real numbers using the Lambrechts–Stanley model of configuration spaces.
##### Opening workshop of the OCHoTop project– Dec 10, 2019, EPFL (Lausanne)
Notes
Models for configuration spaces of manifolds
Abstract: Configuration spaces consist in ordered collections of pairwise disjoint points. The collection of all configuration spaces of a given manifold has the structure of a right module over some version of the little disks operad. In this talk, I will present algebraic models for the real or rational homotopy types configuration spaces and framed configuration spaces of manifolds as right modules. The proofs all rely on operad theory, more precisely Kontsevich’s proof of the formality of the little disks operad and - for oriented surfaces - Tamarkin’s proof of the formality of the little 2-disks operad. (Based on joint works with Campos, Ducoulombier, Lambrechts, and Willwacher.)
##### Journée Amiénoise de Topologie– Nov 14, 2019, Université de Picardie Jules Verne (Amiens)
Notes
Homotopie des espaces de configuration
Abstract: Les espaces de configuration sont des objets classiques en topologie algébrique, mais l’étude de leur type d’homotopie reste une question difficile. Après les avoir introduits, je présenterai des techniques de la théorie de l’homotopie rationnelle qui permettent d’obtenir des résultats concernant les espaces de configuration de variétés compactes, sans bord et à bord. J’expliquerai ensuite comment appliquer ces résultats pour calculer l’homologie de factorisation, un invariant des variétés inspiré par les théories des champs quantiques.
## Teaching (2020–2021)
##### Elementary algebra and analysis 2
L1 Chemistry (S2) • Exercise sessions • 36h
##### Homotopy II
M2 Fundamental Mathematics (S2) • Lectures • 24h
##### Elementary algebra and analysis & Mathematical Reasoning 1
L1 Maths (S1) • Lectures + Exercise sessions • 56.5h
##### Algorithms and Programmation
L2 Maths (S1) • exercises+labs • 42h
## Blog
##### arxiv2bib– Jun 29, 2020 #math #arxiv
tl;dr: a2b.idrissi.eu to get a .bib from arXiv entries.
Have you ever wanted to create a bib entry from an arXiv preprint? There are a few tools available, including one provided by arXiv (click on “NASA ADS” in the sidebar when viewing an entry), but none of them worked as I wanted. They all had quirks and problems (like displaying some URL twice, putting “arXiv” as in the journal field even though it doesn’t belong there, no biblatex support, etc). In the end, I always had to fix things by hand, and it took almost as long as writing the entry myself.
##### Peccot lecture & COVID-19– Last updated on Jun 26, 2020 #math #peccot
Update: The videos are now available on Youtube! Please go there for the third lecture and there for the fourth lecture.
As some of you may know I was one of the people chosen this year to give a Peccot lecture at the Collège de France (see my first post about it). And as you all know for sure, normal life came to a halt a couple of months ago when the number of COVID-19 cases exploded in France (and the world) and the French government ordered a lockdown. While I was able to give my first two lectures before the lockdown started, the last two had to be postponed.
Thankfully, the number of cases is now diminishing and the lockdown is progressively being lifted. I was thus able to record my third lecture yesterday; it should appear online in a few days. The experience was somewhat surreal: I gave a two-hour lecture to a large classroom that was completely empty except for the cameraman and me. I had to give some online classes during the lockdown, but even then there was a certain sense of interactivity, whereas I was almost literally talking to wall yesterday, which was a bit destabilizing. But still, I’m happy that I was able to record the lecture, and I’d like to thank the Collège de France again for the opportunity! The current situation is extremely difficult for everyone, and I’m not the worst one off: it’s a very small sacrifice in the face of the public health crisis.
I hope people will still find it interesting and that the video will not feel too strange. I could not take questions during the lecture, obviously, but I will be happy to answer any you might have via email.
##### Braid video– Apr 21, 2020 #math #talk #animation
Thursday I’m giving a talk at the online Toric Topology research seminar. (I was supposed to go there in person, but you can probably expect, the current pandemic made that impossible.) So I took the opportunity to prepare a little illustration to explain the connection between braids and configuration spaces!
##### First Peccot lecture– Mar 5, 2020 #math #peccot
Yesterday was my first Peccot lecture! I think it went okay. The video is going to be available soon on this webpage. I mainly talked about the background for my course: what are configuration spaces, why do we care about them, what do we know about them, and what we would like to know about them.
##### Video– Feb 28, 2020 #math #peccot #animation
I am finishing to prepare my Peccot Lectures that start next week. I have prepared a small animation to illustrate the Fulton–MacPherson compactification using Blender, and I think it’s relatively neat! I am not a 3D artist, obviously, but (with oral explanations) I think it explains the concept better than drawing on the board, since drawing moving 3D pictures is not an easy task… The animation is available here, and here it is in all its glory:
|
2020-08-15 05:02:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4957553744316101, "perplexity": 1843.1524175281068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00358.warc.gz"}
|
https://www.physicsforums.com/threads/mathematics-kinematics-help.287092/
|
# Mathematics-Kinematics help
1. Jan 24, 2009
### icystrike
The question goes like this:
A body moves in a straight line from a fixed point O . Its distance from O, s meters , is given by s=t-$$\frac{1}{9}$$t³ , where t is the time in seconds after passing through O. Fubd
(a)the time when the body returns to O.
I have done this question by plugging s=0 and i get t=3
(b)the velocity at this instant.
I found the $$\frac{ds}{dt}$$ and i plug in t=3 and i got -2m/s
(c)the value of t when the body is instantaneously at rest.
I assumed $$\frac{ds}{dt}$$=0 and i've gotten myself t=$$\sqrt{3}$$
(d) the distance moved by the body in the 2nd second.
I dont know what is the question asking , but the answer should be 0.304.
2. Jan 24, 2009
### CompuChip
Re: Mathematics-Kinematics
Your approach for the first three looks correct, although for b and c I didn't check the numbers (so assuming that you did the math correctly as well you should have found the correct answers).
For d, I suppose you would just argue like: the first second is the time between t = 0 and 1, the 2nd second is between t = 1 and t = 2, etc. - then find the distance that is travelled between those times.
3. Jan 24, 2009
### icystrike
Re: Mathematics-Kinematics
Thanks JIANKAI!!
Thank you chip (:
I know how its works le.
Let f(x)=t-$$\frac{1}{9}$$t³
f(root3)-f(1) + f(root3)-f(2)
4. Jan 24, 2009
### CompuChip
Re: Mathematics-Kinematics
What or who is Jiankai?
And are you sure about that answer? The signs look a bit off to me. How did you get that?
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
|
2018-02-25 05:55:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6769552230834961, "perplexity": 1191.5067467638535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816138.91/warc/CC-MAIN-20180225051024-20180225071024-00546.warc.gz"}
|
https://homework.cpm.org/category/CON_FOUND/textbook/mc1/chapter/7/lesson/7.2.3/problem/7-74
|
### Home > MC1 > Chapter 7 > Lesson 7.2.3 > Problem7-74
7-74.
How long does it take her to knit each hat?
$\frac{19 \text{ hours}}{5 \text{ hats}}= 3\frac{4}{5}\ \text{hours to make each hat}$
Use this information to find out how long it takes to knit 12 hats.
|
2021-03-07 12:36:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42998650670051575, "perplexity": 3525.6663525258723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00406.warc.gz"}
|
http://openstudy.com/updates/559af0d2e4b05670bbb3ac5c
|
## anonymous one year ago Which of the following cannot be derived from the law of sines?
1. anonymous
2. Loser66
What is the law of sines?
3. anonymous
dont know.... lol that why i asked
4. Loser66
hahaha... I don't know either. That is why I asked you about the law before considering which one is the correct one. Can you google it?
5. anonymous
lmfao.. yea i did man! i got nuthin! imma fail this test !
6. mathstudent55
Law of Sines $$\large \dfrac{a}{\sin A} = \dfrac{b}{\sin B} = \dfrac{c}{\sin C}$$
7. mathstudent55
A. The first choice is just the first two fractions of the law of sines, so it certainly can be derived from the law of sines.
8. mathstudent55
Now look at choice B. Can you change that equation to make it look like the law of sines?
9. mathstudent55
B. $$a \cdot \sin B = b \cdot \sin A$$ What happens if you divide both sides by $$\sin A \sin B$$ ?
10. anonymous
cross multiply?
11. mathstudent55
You can only cross multiply if you have a fraction equaling a fraction. You need to do to choice B the opposite of cross multiply and end up with two fractions. Do the division I mentioned above. What do you get?
12. anonymous
to be honest ! not really sure!
13. anonymous
14. mathstudent55
This is still choice B. Divide both sides by sin A sin B: $$\dfrac{a \cdot \sin B}{\sin A \sin B} = \dfrac{b \cdot \sin A}{\sin A \sin B}$$ What cancels out of each side and what are you left with?
15. mathstudent55
$$\dfrac{a \cdot \cancel{\sin B}}{\sin A \cancel{\sin B}} = \dfrac{b \cdot \cancel{\sin A}}{\cancel{\sin A} \sin B}$$ What is left? $$\dfrac{a }{\sin A } = \dfrac{b }{\sin B}$$ Isn't what is left still the law of sines? That means choice B. is not the answer.
16. mathstudent55
Now work on choice C. First, cross multiply. Then divide both sides by sin B sin C. What do you get?
|
2017-01-19 08:50:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6918854117393494, "perplexity": 1280.202435496003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00183-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://community.wolfram.com/groups/-/m/t/1133071
|
# [GIF] Tetra (Stereographic projections of concentric annuli)
Posted 1 year ago
1290 Views
|
0 Replies
|
2 Total Likes
|
TetraThis is the same basic idea as Interference and Dig In: in each case I have a collection of points on the unit sphere, I'm forming concentric circles around those points, and then stereographically projecting to the plane. In Interference, the chosen points were $(-1/\sqrt{2},1/\sqrt{2},0)$ and $(1/\sqrt{2},1/\sqrt{2},0)$, and I was just thinking of the circles as one-dimensional objects that were still circles under projection. In Dig In there was just a single point $(0,0,-1)$, and I projected the entire disk centered at that point bounded by each circle.In this animation there are four points, which are vertices of a regular tetrahedron (notice that I'm just extracting the vertex coordinates of Mathematica's stored regular tetrahedron and taking the points to be their antipodal images [only because I wanted the green circles to grow rather than shrink]). But now I'm thinking of actually physically drawing the circles centered at each of these points, so the circles have thickness (or, more precisely, they are annuli). Circles map to circles under stereographic projection, but annuli don't map to annuli, since concentric circles don't necessarily map to concentric circles; this is what produces the colored shapes in the animation.More precisely, Cos[r + s + a] p[[i]] + Sin[r + s + a] (Cos[t] b[[i, 1]] + Sin[t] b[[i, 2]]) is the circle of radius r + s + a centered at p[[i]], so if we stereographically project and do a ParametricPlot with t varying from 0 to $2\pi$ and a varying from $-0.05$ to $0.05$, we get the colored regions. Just to explain the rest of the code, the Graphics object inside the Table then gets the borders of those regions and the ImageCompose business is to make everything look flat rather than stacked. The various annoying If statements are to prevent anything from going to infinity. Needless to say, the resulting mess is much too slow to make into a Manipulate, so here's the code to generate the GIF: Stereo[{x_, y_, z_}] := 1/(1 - z) {x, y}; tetra = Block[ {inf, w = .05, b, n = 9/2, p = -Normalize[RotationTransform[π/6, {0, 0, 1}][#]] & /@ PolyhedronData["Tetrahedron", "VertexCoordinates"], cols = RGBColor /@ {"#35ff8d", "#35a7ff", "#ff35a7", "#ff8d35", "#fafafa"}}, b = Orthogonalize[NullSpace[{#}]] & /@ p; ParallelTable[ ImageCompose[ Graphics[Background -> GrayLevel[.1], ImageSize -> 540], Flatten[ Table[ inf = i >= 2 && s == π/n && ArcCos[1/3] - π/n - w <= r <= ArcCos[1/3] - π/n + w; Show[ ParametricPlot[ Stereo[Cos[r + s + a] p[[i]] + Sin[r + s + a] (Cos[t] b[[i, 1]] + Sin[t] b[[i, 2]])], {t, If[inf, If[i == 2, 0., π/16.5] + π/100, 0], If[inf, If[i == 2, 0., π/16.5] + 2 π - π/100, 2 π]}, {a, -w, w}, BoundaryStyle -> None, PlotPoints -> {50, 3}, PlotStyle -> Directive[Opacity[.3], cols[[i]]]], Graphics[{ FaceForm[None], EdgeForm[ Directive[Opacity[.5], cols[[i]], Thickness[.003]]], Table[ Polygon[ Table[ Stereo[Cos[r + s + a] p[[i]] + Sin[r + s + a] (Cos[t] b[[i, 1]] + Sin[t] b[[i, 2]])], {t, 0., 2 π, 2 π/600}]], {a, {-w, w}}]}], PlotRange -> 6.5, ImageSize -> 540, Axes -> None, Frame -> False], {i, 1, 4}, {s, 0., π - If[(i == 1 && r >= π/(2 n) - w) || r > π/(2 n), 1, 0] π/n, π/n}], 1] ], {r, 0., π/n - #, #}] &[π/(100 n)] ]; Export[NotebookDirectory[] <> "tetra.gif", tetra, "DisplayDurations" -> 3/100, "AnimationRepetitions" -> Infinity]
|
2019-02-22 06:29:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21978135406970978, "perplexity": 4353.989914689523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513661.77/warc/CC-MAIN-20190222054002-20190222080002-00021.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=T.S.%20Metcalfe
|
• ### Photospheric and chromospheric magnetic activity of seismic solar analogs. Observational inputs on the solar/stellar connection from Kepler and Hermes(1608.01489)
Aug. 4, 2016 astro-ph.SR
We identify a set of 18 solar analogs among the seismic sample of solar-like stars observed by the Kepler satellite rotating between 10 and 40 days. This set is constructed using the asteroseismic stellar properties derived using either the global oscillation properties or the individual acoustic frequencies. We measure the magnetic activity properties of these stars using observations collected by the photometric Kepler satellite and by the ground-based, high-resolution Hermes spectrograph mounted on the Mercator telescope. The photospheric (Sph) and chromospheric (S index) magnetic activity levels of these seismic solar analogs are estimated and compared in relation to the solar activity. We show that the activity of the Sun is comparable to the activity of the seismic solar analogs, within the maximum-to-minimum temporal variations of the 11-year solar activity cycle 23. In agreement with previous studies, the youngest stars and fastest rotators in our sample are actually the most active. The activity of stars older than the Sun seems to not evolve much with age. Furthermore, the comparison of the photospheric, Sph, with the well-established chromospheric, S index, indicates that the Sph index can be used to provide a suitable magnetic activity proxy which can be easily estimated for a large number of stars from space photometric observations.
• ### Detection of Solar-Like Oscillations, Observational Constraints, and Stellar Models for $\theta$ Cyg, the Brightest Star Observed by the {\it Kepler} Mission(1607.01035)
July 4, 2016 astro-ph.SR
$\theta$ Cygni is an F3 spectral-type main-sequence star with visual magnitude V=4.48. This star was the brightest star observed by the original Kepler spacecraft mission. Short-cadence (58.8 s) photometric data using a custom aperture were obtained during Quarter 6 (June-September 2010) and subsequently in Quarters 8 and 12-17. We present analyses of the solar-like oscillations based on Q6 and Q8 data, identifying angular degree $l$ = 0, 1, and 2 oscillations in the range 1000-2700 microHz, with a large frequency separation of 83.9 plus/minus 0.4 microHz, and frequency with maximum amplitude 1829 plus/minus 54 microHz. We also present analyses of new ground-based spectroscopic observations, which, when combined with angular diameter measurements from interferometry and Hipparcos parallax, give T_eff = 6697 plus/minus 78 K, radius 1.49 plus/minus 0.03 solar radii, [Fe/H] = -0.02 plus/minus 0.06 dex, and log g = 4.23 plus/minus 0.03. We calculate stellar models matching the constraints using several methods, including using the Yale Rotating Evolution Code and the Asteroseismic Modeling Portal. The best-fit models have masses 1.35-1.39 solar masses and ages 1.0-1.6 Gyr. theta Cyg's T_eff and log g place it cooler than the red edge of the gamma Doradus instability region established from pre-Kepler ground-based observations, but just at the red edge derived from pulsation modeling. The pulsation models show gamma Dor gravity-mode pulsations driven by the convective-blocking mechanism, with frequencies of 1 to 3 cycles/day (11 to 33 microHz). However, gravity modes were not detected in the Kepler data, one signal at 1.776 cycles/day (20.56 microHz) may be attributable to a faint, possibly background, binary. Asteroseismic studies of theta Cyg and other A-F stars observed by Kepler and CoRoT, will help to improve stellar model physics and to test pulsation driving mechanisms.
• ### Magnetic variability in the young solar analog KIC 10644253: Observations from the Kepler satellite and the HERMES spectrograph(1603.00655)
March 2, 2016 astro-ph.SR
The continuous photometric observations collected by the Kepler satellite over 4 years provide a whelm of data with an unequalled quantity and quality for the study of stellar evolution of more than 200000 stars. Moreover, the length of the dataset provide a unique source of information to detect magnetic activity and associated temporal variability in the acoustic oscillations. In this regards, the Kepler mission was awaited with great expectation. The search for the signature of magnetic activity variability in solar-like pulsations still remained unfruitful more than 2 years after the end of the nominal mission. Here, however, we report the discovery of temporal variability in the low-degree acoustic frequencies of the young (1 Gyr-old) solar analog KIC 10644253 with a modulation of about 1.5 years with significant temporal variations along the duration of the Kepler observations. The variations are in agreement with the derived photometric activity. The frequency shifts extracted for KIC 10644253 are shown to result from the same physical mechanisms involved in the inner sub-surface layers as in the Sun. In parallel, a detailed spectroscopic analysis of KIC 10644253 is performed based on complementary ground-based, high-resolution observations collected by the HERMES instrument mounted on the MERCATOR telescope. Its lithium abundance and chromospheric activity S-index confirm that KIC 10644253 is a young and more active star than the Sun.
• ### Rotation periods and seismic ages of KOIs - comparison with stars without detected planets from Kepler observations(1510.09023)
Oct. 30, 2015 astro-ph.SR
One of the most difficult properties to derive for stars is their age. For cool main-sequence stars, gyrochronology relations can be used to infer stellar ages from measured rotation pe- riods and HR Diagram positions. These relations have few calibrators with known ages for old, long rotation period stars. There is a significant sample of old Kepler objects of inter- est, or KOIs, which have both measurable surface rotation periods and precise asteroseismic measurements from which ages can be accurately derived. In this work we determine the age and the rotation period of solar-like pulsating KOIs to both compare the rotation properties of stars with and without known planets and enlarge the gyrochronology calibration sample for old stars. We use Kepler photometric light curves to derive the stellar surface rotation peri- ods while ages are obtained with asteroseismology using the Asteroseismic Modeling Portal in which individual mode frequencies are combined with high-resolution spectroscopic pa- rameters. We thus determine surface rotation periods and ages for 11 planet-hosting stars, all over 2 Gyr old. We find that the planet-hosting stars exhibit a rotational behaviour that is consistent with the latest age-rotation models and similar to the rotational behaviour of stars without detected planets. We conclude that these old KOIs can be used to test and calibrate gyrochronology along with stars not known to host planets.
• ### Ages and fundamental properties of Kepler exoplanet host stars from asteroseismology(1504.07992)
June 24, 2015 astro-ph.SR, astro-ph.EP
We present a study of 33 {\it Kepler} planet-candidate host stars for which asteroseismic observations have sufficiently high signal-to-noise ratio to allow extraction of individual pulsation frequencies. We implement a new Bayesian scheme that is flexible in its input to process individual oscillation frequencies, combinations of them, and average asteroseismic parameters, and derive robust fundamental properties for these targets. Applying this scheme to grids of evolutionary models yields stellar properties with median statistical uncertainties of 1.2\% (radius), 1.7\% (density), 3.3\% (mass), 4.4\% (distance), and 14\% (age), making this the exoplanet host-star sample with the most precise and uniformly determined fundamental parameters to date. We assess the systematics from changes in the solar abundances and mixing-length parameter, showing that they are smaller than the statistical errors. We also determine the stellar properties with three other fitting algorithms and explore the systematics arising from using different evolution and pulsation codes, resulting in 1\% in density and radius, and 2\% and 7\% in mass and age, respectively. We confirm previous findings of the initial helium abundance being a source of systematics comparable to our statistical uncertainties, and discuss future prospects for constraining this parameter by combining asteroseismology and data from space missions. Finally we compare our derived properties with those obtained using the global average asteroseismic observables along with effective temperature and metallicity, finding an excellent level of agreement. Owing to selection effects, our results show that the majority of the high signal-to-noise ratio asteroseismic {\it Kepler} host stars are older than the Sun.
• ### Rotation and magnetism of Kepler pulsating solar-like stars. Towards asteroseismically calibrated age-rotation relations(1403.7155)
Nov. 12, 2014 astro-ph.SR
Kepler ultra-high precision photometry of long and continuous observations provides a unique dataset in which surface rotation and variability can be studied for thousands of stars. Because many of these old field stars also have independently measured asteroseismic ages, measurements of rotation and activity are particularly interesting in the context of age-rotation-activity relations. In particular, age-rotation relations generally lack good calibrators at old ages, a problem that this Kepler sample of old-field stars is uniquely suited to address. We study the surface rotation and photometric magnetic activity of a subset of 540 solar-like stars on the main- sequence and the subgiant branch for which stellar pulsations have been measured. The rotation period was determined by comparing the results from two different analysis methods: i) the projection onto the frequency domain of the time-period analysis, and ii) the autocorrelation function (ACF) of the light curves. Reliable surface rotation rates were then extracted by comparing the results from two different sets of calibrated data and from the two complementary analyses. We report rotation periods for 310 out of 540 targets (excluding known binaries and candidate planet-host stars); our measurements span a range of 1 to 100 days. The photometric magnetic activity levels of these stars were computed, and for 61.5% of the dwarfs, this level is similar to the range, from minimum to maximum, of the solar magnetic activity. We demonstrate that hot dwarfs, cool dwarfs, and subgiants have very different rotation-age relationships, highlighting the importance of separating out distinct populations when interpreting stellar rotation periods. Our sample of cool dwarf stars with age and metallicity data of the highest quality is consistent with gyrochronology relations reported in the literature.
• ### Properties of 42 Solar-type Kepler Targets from the Asteroseismic Modeling Portal(1402.3614)
Sept. 29, 2014 astro-ph.SR
Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal (AMP). We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.
• ### Dynamo modeling of the Kepler F star KIC 12009504(1408.5926)
Aug. 25, 2014 astro-ph.SR
The Kepler mission has collected light curves for almost 4 years. The excellent quality of these data has allowed us to probe the structure and the dynamics of the stars using asteroseismology. With the length of data available, we can start to look for magnetic activity cycles. The Kepler data obtained for the F star, KIC 12009504, shows a rotation period of 9.5 days and additional variability that could be due to the magnetic activity of the star. Here we present recent and preliminary 3D global-scale dynamo simulations of this star with the ASH and STELEM codes, capturing a substantial portion of the convection and the stable radiation zone below it. These simulations reveal a multi-year activity cycle whose length tentatively depends upon the width of the tachocline present in the simulation. Furthermore, the presence of a magnetic field and the dynamo action taking place in the convection zone appears to help confine the tachocline, but longer simulations will be required to confirm this.
• ### Investigating magnetic activity of F stars with the it Kepler mission(1310.6400)
Oct. 23, 2013 astro-ph.SR
The dynamo process is believed to drive the magnetic activity of stars like the Sun that have an outer convection zone. Large spectroscopic surveys showed that there is a relation between the rotation periods and the cycle periods: the longer the rotation period is, the longer the magnetic activity cycle period will be. We present the analysis of F stars observed by Kepler for which individual p modes have been measure and with surface rotation periods shorter than 12 days. We defined magnetic indicators and proxies based on photometric observations to help characterise the activity levels of the stars. With the Kepler data, we investigate the existence of stars with cycles (regular or not), stars with a modulation that could be related to magnetic activity, and stars that seem to show a flat behaviour.
• ### Magnetic Activity Cycles in the Exoplanet Host Star epsilon Eridani(1212.4425)
Dec. 18, 2012 astro-ph.SR, astro-ph.EP
The active K2 dwarf epsilon Eri has been extensively characterized, both as a young solar analog and more recently as an exoplanet host star. As one of the nearest and brightest stars in the sky, it provides an unparalleled opportunity to constrain stellar dynamo theory beyond the Sun. We confirm and document the 3 year magnetic activity cycle in epsilon Eri originally reported by Hatzes and coworkers, and we examine the archival data from previous observations spanning 45 years. The data show coexisting 3 year and 13 year periods leading into a broad activity minimum that resembles a Maunder minimum-like state, followed by the resurgence of a coherent 3 year cycle. The nearly continuous activity record suggests the simultaneous operation of two stellar dynamos with cycle periods of 2.95+/-0.03 years and 12.7+/-0.3 years, which by analogy with the solar case suggests a revised identification of the dynamo mechanisms that are responsible for the so-called "active" and "inactive" sequences as proposed by Bohm-Vitense. Finally, based on the observed properties of epsilon Eri we argue that the rotational history of the Sun is what makes it an outlier in the context of magnetic cycles observed in other stars (as also suggested by its Li depletion), and that a Jovian-mass companion cannot be the universal explanation for the solar peculiarities.
• ### Investigating stellar activity with CoRoT observations(1110.0875)
Oct. 5, 2011 astro-ph.SR
Recently, the study of the CoRoT target HD 49933 showed evidence of variability of its magnetic activity. This was the first time that a stellar activity was detected using asteroseismic data. For the Sun and HD 49933, we observe an increase of the p-mode frequencies and a decrease of the maximum amplitude per radial mode when the activity level is higher. Moreover a similar behavior of the frequency shifts with frequency has been found between the Sun and HD 49933. We study 3 other targets of CoRoT as well, for which modes have been detected and well identified: HD 181420, HD 49385, and HD 52265 (which is hosting a planet). We show how the seismic parameters (frequency shifts and amplitude) vary during the observation of these stars.
• ### HD 49933: A laboratory for magnetic activity cycles(1110.0307)
Oct. 3, 2011 astro-ph.SR
Seismic analyses of the CoRoT target HD 49933 have revealed a magnetic cycle. Further insight reveals that frequency shifts of oscillation modes vary as a function of frequency, following a similar pattern to that found in the Sun. In this preliminary work, we use seismic constraint to compute structure models of HD 49933 with the Asteroseismic Modeling Portal (AMP) and the CESAM code. We use these models to study the effects of sound-speed perturbations in near surface layers on p-mode frequencies.
• ### Seismic analysis of four solar-like stars observed during more than eight months by Kepler(1110.0135)
Oct. 1, 2011 astro-ph.SR
Having started science operations in May 2009, the Kepler photometer has been able to provide exquisite data of solar-like stars. Five out of the 42 stars observed continuously during the survey phase show evidence of oscillations, even though they are rather faint (magnitudes from 10.5 to 12). In this paper, we present an overview of the results of the seismic analysis of 4 of these stars observed during more than eight months.
• ### Solar-like oscillations in KIC11395018 and KIC11234888 from 8 months of Kepler data(1103.4085)
March 21, 2011 astro-ph.SR
We analyze the photometric short-cadence data obtained with the Kepler Mission during the first eight months of observations of two solar-type stars of spectral types G and F: KIC 11395018 and KIC 11234888 respectively, the latter having a lower signal-to-noise ratio compared to the former. We estimate global parameters of the acoustic (p) modes such as the average large and small frequency separations, the frequency of the maximum of the p-mode envelope and the average linewidth of the acoustic modes. We were able to identify and to measure 22 p-mode frequencies for the first star and 16 for the second one even though the signal-to-noise ratios of these stars are rather low. We also derive some information about the stellar rotation periods from the analyses of the low-frequency parts of the power spectral densities. A model-independent estimation of the mean density, mass and radius are obtained using the scaling laws. We emphasize the importance of continued observations for the stars with low signal-to-noise ratio for an improved characterization of the oscillation modes. Our results offer a preview of what will be possible for many stars with the long data sets obtained during the remainder of the mission.
• ### A precise asteroseismic age and radius for the evolved Sun-like star KIC 11026764(1010.4329)
Oct. 20, 2010 astro-ph.SR, astro-ph.EP
The primary science goal of the Kepler Mission is to provide a census of exoplanets in the solar neighborhood, including the identification and characterization of habitable Earth-like planets. The asteroseismic capabilities of the mission are being used to determine precise radii and ages for the target stars from their solar-like oscillations. Chaplin et al. (2010) published observations of three bright G-type stars, which were monitored during the first 33.5 days of science operations. One of these stars, the subgiant KIC 11026764, exhibits a characteristic pattern of oscillation frequencies suggesting that it has evolved significantly. We have derived asteroseismic estimates of the properties of KIC 11026764 from Kepler photometry combined with ground-based spectroscopic data. We present the results of detailed modeling for this star, employing a variety of independent codes and analyses that attempt to match the asteroseismic and spectroscopic constraints simultaneously. We determine both the radius and the age of KIC 11026764 with a precision near 1%, and an accuracy near 2% for the radius and 15% for the age. Continued observations of this star promise to reveal additional oscillation frequencies that will further improve the determination of its fundamental properties.
• ### Discovery of a 1.6-year Magnetic Activity Cycle in the Exoplanet Host Star iota Horologii(1009.5399)
Oct. 1, 2010 astro-ph.SR, astro-ph.EP
The Mount Wilson Ca HK survey revealed magnetic activity variations in a large sample of solar-type stars with timescales ranging from 2.5 to 25 years. This broad range of cycle periods is thought to reflect differences in the rotational properties and the depths of the surface convection zones for stars with various masses and ages. In 2007 we initiated a long-term monitoring campaign of Ca II H and K emission for a sample of 57 southern solar-type stars to measure their magnetic activity cycles and their rotational properties when possible. We report the discovery of a 1.6-year magnetic activity cycle in the exoplanet host star iota Horologii, and we obtain an estimate of the rotation period that is consistent with Hyades membership. This is the shortest activity cycle so far measured for a solar-type star, and may be related to the short-timescale magnetic variations recently identified in the Sun and HD49933 from helio- and asteroseismic measurements. Future asteroseismic observations can be compared to those obtained near the magnetic minimum in 2006 to search for cycle-induced shifts in the oscillation frequencies. If such short activity cycles are common in F stars, then NASA's Kepler mission should observe their effects in many of its long-term asteroseismic targets.
• ### Asteroseismology of Solar-type stars with Kepler II: Stellar Modeling(1006.5695)
June 29, 2010 astro-ph.SR
Observations from the Kepler satellite were recently published for three bright G-type stars, which were monitored during the first 33.5d of science operations. One of these stars, KIC 11026764, exhibits a characteristic pattern of oscillation frequencies suggesting that the star has evolved significantly. We have derived initial estimates of the properties of KIC 11026764 from the oscillation frequencies observed by Kepler, combined with ground-based spectroscopic data. We present preliminary results from detailed modeling of this star, employing a variety of independent codes and analyses that attempt to match the asteroseismic and spectroscopic constraints simultaneously.
• ### Sounding stellar cycles with Kepler - preliminary results from ground-based chromospheric activity measurements(0910.1436)
Oct. 8, 2009 astro-ph.SR
Due to its unique long-term coverage and high photometric precision, observations from the Kepler asteroseismic investigation will provide us with the possibility to sound stellar cycles in a number of solar-type stars with asteroseismology. By comparing these measurements with conventional ground-based chromospheric activity measurements we might be able to increase our understanding of the relation between the chromospheric changes and the changes in the eigenmodes. In parallel with the Kepler observations we have therefore started a programme at the Nordic Optical Telescope to observe and monitor chromospheric activity in the stars that are most likely to be selected for observations for the whole satellite mission. The ground-based observations presented here can be used both to guide the selection of the special Kepler targets and as the first step in a monitoring programme for stellar cycles. Also, the chromospheric activity measurements obtained from the ground-based observations can be compared with stellar parameters such as ages and rotation in order to improve stellar evolution models.
• ### Activity Cycles of Southern Asteroseismic Targets(0909.5464)
Sept. 29, 2009 astro-ph.SR
The Mount Wilson Ca HK survey revealed magnetic activity variations in a large sample of solar-type stars with timescales ranging from 2.5 to 25 years. This broad range of cycle periods is thought to reflect differences in the rotational properties and the depths of the surface convection zones for stars with various masses and ages. Asteroseismic data will soon provide direct measurements of these quantities for individual stars, but many of the most promising targets are in the southern sky (e.g., alpha Cen A & B, beta Hyi, mu Ara, tau Cet, nu Ind), while long-term magnetic activity cycle surveys are largely confined to the north. In 2007 we began using the SMARTS 1.5-m telescope to conduct a long-term monitoring campaign of Ca II H & K emission for a sample of 57 southern solar-type stars to measure their magnetic activity cycles and their rotational properties when possible. This sample includes the most likely southern asteroseismic targets to be observed by the Stellar Oscillations Network Group (SONG), currently scheduled to begin operations in 2012. We present selected results from the first two years of the survey, and from the longer time baseline sampled by a single-epoch survey conducted in 1992.
• ### A Stellar Model-fitting Pipeline for Solar-like Oscillations(0906.4317)
June 23, 2009 astro-ph.SR
Over the past two decades, helioseismology has revolutionized our understanding of the interior structure and dynamics of the Sun. Asteroseismology will soon place this knowledge into a broader context by providing structural data for hundreds of Sun-like stars. Solar-like oscillations have already been detected from the ground in several stars, and NASA's Kepler mission is poised to unleash a flood of stellar pulsation data. Deriving reliable asteroseismic information from these observations demands a significant improvement in our analysis methods. We report the initial results of our efforts to develop an objective stellar model-fitting pipeline for asteroseismic data. The cornerstone of our automated approach is an optimization method using a parallel genetic algorithm. We describe the details of the pipeline and we present the initial application to Sun-as-a-star data, yielding an optimal model that accurately reproduces the known solar properties.
• ### A Stellar Model-fitting Pipeline for Asteroseismic Data from the Kepler Mission(0903.0616)
April 23, 2009 astro-ph.SR, astro-ph.EP
Over the past two decades, helioseismology has revolutionized our understanding of the interior structure and dynamics of the Sun. Asteroseismology will soon place this knowledge into a broader context by providing structural data for hundreds of Sun-like stars. Solar-like oscillations have already been detected from the ground in several stars, and NASA's Kepler mission is poised to unleash a flood of stellar pulsation data. Deriving reliable asteroseismic information from these observations demands a significant improvement in our analysis methods. In this paper we report the initial results of our efforts to develop an objective stellar model-fitting pipeline for asteroseismic data. The cornerstone of our automated approach is an optimization method using a parallel genetic algorithm. We describe the details of the pipeline and we present the initial application to Sun-as-a-star data, yielding an optimal model that accurately reproduces the known solar properties.
• ### Low-Energy Astrophysics: Stimulating the Reduction of Energy Consumption in the Next Decade(0903.3384)
March 19, 2009 astro-ph.IM
In this paper we address the consumption of energy by astronomers while performing their professional duties. Although we find that astronomy uses a negligible fraction of the US energy budget, the rate at which energy is consumed by an average astronomer is similar to that of a typical high-flying businessperson. We review some of the ways in which astronomers are already acting to reduce their energy consumption. In the coming decades, all citizens will have to reduce their energy consumption to conserve fossil fuel reserves and to help avert a potentially catastrophic change in the Earth's climate. The challenges are the same for astronomers as they are for everyone: decreasing the distances we travel and investing in energy-efficient infrastructure. The high profile of astronomy in the media, and the great public interest in our field, can play a role in promoting energy-awareness to the wider population. Our specific recommendations are therefore to 1) reduce travel when possible, through efficient meeting organization, and by investing in high-bandwidth video conference facilities and virtual-world software, 2) create energy-efficient observatories, computing centers and workplaces, powered by sustainable energy resources, and 3) actively publicize these pursuits.
• ### New Pulsating DB White Dwarf Stars from the Sloan Digital Sky Survey(0809.0921)
Sept. 4, 2008 astro-ph
We are searching for new He atmosphere white dwarf pulsators (DBVs) based on the newly found white dwarf stars from the spectra obtained by the Sloan Digital Sky Survey. DBVs pulsate at hotter temperature ranges than their better known cousins, the H atmosphere white dwarf pulsators (DAVs or ZZ Ceti stars). Since the evolution of white dwarf stars is characterized by cooling, asteroseismological studies of DBVs give us opportunities to study white dwarf structure at a different evolutionary stage than the DAVs. The hottest DBVs are thought to have neutrino luminosities exceeding their photon luminosities (Winget et al. 2004), a quantity measurable through asteroseismology. Therefore, they can also be used to study neutrino physics in the stellar interior. So far we have discovered nine new DBVs, doubling the number of previously known DBVs. Here we report the new pulsators' lightcurves and power spectra.
• ### Whole Earth Telescope observations of the hot helium atmosphere pulsating white dwarf EC 20058-5234(0803.1638)
March 11, 2008 astro-ph
We present the analysis of a total of 177h of high-quality optical time-series photometry of the helium atmosphere pulsating white dwarf (DBV) EC 20058-5234. The bulk of the observations (135h) were obtained during a WET campaign (XCOV15) in July 1997 that featured coordinated observing from 4 southern observatory sites over an 8-day period. The remaining data (42h) were obtained in June 2004 at Mt John Observatory in NZ over a one-week observing period. This work significantly extends the discovery observations of this low-amplitude (few percent) pulsator by increasing the number of detected frequencies from 8 to 18, and employs a simulation procedure to confirm the reality of these frequencies to a high level of significance (1 in 1000). The nature of the observed pulsation spectrum precludes identification of unique pulsation mode properties using any clearly discernable trends. However, we have used a global modelling procedure employing genetic algorithm techniques to identify the n, l values of 8 pulsation modes, and thereby obtain asteroseismic measurements of several model parameters, including the stellar mass (0.55 M_sun) and T_eff (~28200 K). These values are consistent with those derived from published spectral fitting: T_eff ~ 28400 K and log g ~ 7.86. We also present persuasive evidence from apparent rotational mode splitting for two of the modes that indicates this compact object is a relatively rapid rotator with a period of 2h. In direct analogy with the corresponding properties of the hydrogen (DAV) atmosphere pulsators, the stable low-amplitude pulsation behaviour of EC 20058 is entirely consistent with its inferred effective temperature, which indicates it is close to the blue edge of the DBV instability strip. (abridged)
• ### The pulsation modes of the pre-white dwarf PG 1159-035(0711.2244)
Dec. 18, 2007 astro-ph
PG 1159-035, a pre-white dwarf with T_eff=140,000 K, is the prototype of both two classes: the PG1159 spectroscopic class and the DOV pulsating class. Previous studies of PG 1159-035 photometric data obtained with the Whole Earth Telescope (WET) showed a rich frequency spectrum allowing the identification of 122 pulsation modes. In this work, we used all available WET photometric data from 1983, 1985, 1989, 1993 and 2002 to identify the pulsation periods and identified 76 additional pulsation modes, increasing to 198 the number of known pulsation modes in PG 1159-035, the largest number of modes detected in any star besides the Sun. From the period spacing we estimated a mass M = 0.59 +/- 0.02 solar masses for PG 1159-035, with the uncertainty dominated by the models, not the observation. Deviations in the regular period spacing suggest that some of the pulsation modes are trapped, even though the star is a pre-white dwarf and the gravitational settling is ongoing. The position of the transition zone that causes the mode trapping was calculated at r_c = 0.83 +/- 0.05 stellar radius. From the multiplet splitting, we calculated the rotational period P_rot = 1.3920 +/- 0.0008 days and an upper limit for the magnetic field, B < 2000 G. The total power of the pulsation modes at the stellar surface changed less than 30% for l=1 modes and less than 50% for l=2 modes. We find no evidence of linear combinations between the 198 pulsation mode frequencies. PG 1159-035 models have not significative convection zones, supporting the hypothesis that nonlinearity arises in the convection zones in cooler pulsating white dwarf stars.
|
2021-03-08 22:23:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5585970878601074, "perplexity": 2208.226473367006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385529.97/warc/CC-MAIN-20210308205020-20210308235020-00619.warc.gz"}
|
https://sites.math.northwestern.edu/~newstead/teaching/21-120/homework/3.php
|
### Differential and Integral Calculus (21-120) — Feedback on Homework 3
Homework 3 was due on Thursday 12th September 2013 and consisted of:
• Section 3.2 Q 50
• Section 3.4 Q 72, 84
I marked 3.2/50 (out of 3), 3.4/72 (out of 3) and 3.4/84 parts (a)(b) (out of 3). Part (c) wasn't marked because it was essentially an exercise in using graphing software. Everyone got 1 free point for submitting their homework.
Section 3.2 Q50. The most common error was not using the product and quotient rules correctly. A few people said that $P'(2)=F'(2)G'(2)$, for example; but by the product rule, it's actually equal to $F'(2)G(2)+F(2)G'(2)$. Another error made by a few people was saying that all the derivatives were zero. This may have been down to confusion with the notation. When you see something like $G'(7)$, it means first differentiate $G(t)$ and then substitute $t=7$ in the derivative. If you substitute first, you end up with a constant, so you'll always get zero... this is wrong!
Section 3.4 Q72. Common errors included:
• Not applying the product or chain rules properly. Often the computational errors in this problem would have been solved by putting in a few extra lines of working!
• Not simplifying the answer. I didn't penalise anyone for this, but seeing $2xg'(x^2)+4xg'(x^2)+4x^3g''(x^2)$ as a final answer, whilst correct, isn't really good enough: it should be $6xg'(x^2)+4x^3g''(x^2)$.
Section 3.4 Q84. Most of the errors on this problem were very small:
• Lots of people gave the wrong answer for the limit in part (a). The most common instance of this was people saying that $e^{-kt} \to \infty$ as $t \to \infty$. This is not so: if $k>0$ then $e^{-kt}$ becomes very small as $t$ becomes very large, so in fact $e^{-kt} \to 0$ as $t \to \infty$. Thus $$\lim_{t \to \infty} \frac{1}{1+ae^{-kt}} = \frac{1}{1+a \cdot 0} = 1$$
• The problems with part (b) were very similar to the problems in Q72: namely, not putting enough lines of working in. There were two paths through this problem, one using the quotient rule once and the chain rule once, and the other using the chain rule twice. The main error was common to both methods: at some point, you have to use the chain rule on $ae^{-kt}$. The best way to do this is to substitute $u=-kt$, then it's clear that $\frac{du}{dt} = -k$ and hence that $\frac{d}{dt} (ae^{-kt}) = ae^{-kt} \times (-k) = -kae^{-kt}$. Many people left out the $k$, or multiplied by $t$... going a bit slower would probably mean these silly errors are avoided. A few others decreased the exponent by $1$: this only holds for polynomials, not exponentials! Remember the difference!
Back to course page
|
2019-03-24 11:55:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738920092582703, "perplexity": 393.398011774957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203438.69/warc/CC-MAIN-20190324103739-20190324125739-00506.warc.gz"}
|
https://eur-usd-forecast-forex-crypto.club/simulated-binary-options-prices/
|
# Simulated binary options prices
Pricing options via Monte Carlo simulation is among the maximum famous ways to price certain styles of economic options. This article will supply a quick review of the mathematics involved in simulating option prices using Monte Carlo techniques, Python code snippets and some examples. Monte Carlo techniques in line with Wikipedia:
“Monte Carlo techniques, or Monte Carlo experiments, are a wide class of computational algorithms that depend upon repeated random sampling to reap numerical outcomes. The underlying idea is to use randomness to resolve issues that might be deterministic in principle. They are often used in physical and mathematical troubles and are most useful when it is tough or impossible to use different techniques. Monte Carlo techniques are specifically utilized in three hassle lessons: optimization, numerical integration, and producing draws from a opportunity distribution.”
In order to simulate the choices fee of a European name option, first we should determine on the technique that the stock price follows all through the choices existence of the option ((T- t)). In the choices economic literature stocks are said to observe geometric brownian motion. Assume that the choices inventory charge (S) , in questions will pay annual dividend (q) and has an expected return (mu) identical to the danger-loose rate (r) – (q) , the volatility (sigma) is assumed to be regular.
The stock charge may be modeled by using a stochastic differential equation.
Essentially this is a differential equation in which at least one of the phrases is a random system. First it may be beneficial to recollect an ordinary differential equation inside the context of our problem. Let's keep in mind the choices case while volatility is zero i.e. the inventory charge can be defined like a deposit in a financial savings account paying (mu) consistent with annum The change in any given time increment is then given by way of
(dS = mu S dt)
Given the price of the inventory now (S_) we then recognise with fact the choices fee (S_) at given time (T) with the aid of isolating and intergrating as follows:
(displaystyle int_^frac = displaystyle int_^T mu dt )
(S_ = S_e^mu T)
It may be beneficial to notice now that we are able to write the choices end result above as (ln(S_) = ln(S_) + displaystyle int_^Tmu dt)
However, considering the fact that inventory fees do showcase randomness we want to include a stochastic time period within the equation above. We can't genuinely integrate to get a nice result as we have in the equation above, in order to seize the randomness inherent in stock markets we add every other term and are SDE is described as follows:
(dS= Smu dt + Ssigma dW(t))
Where (W_) is a Wiener Process. The equation above is now within the form of an Ito method.
In order to continue a quick word on Ito's Lemma:
Ito's Lemma proven below, states if a random variable follows an Ito Process (example above) then any other two times differentiable feature (G) defined through the choices inventory fee (S) and time (t) also follows at Ito procedure:
(the notation underneath has been modified from right here to preserve it constant with the equations above for the purposes of stock options)
(dG = ( fracpartial Gpartial SSmu + fracpartial Gpartial t + fracfracpartial^2 Gpartial S^2S^2sigma^2 )dt + fracpartial Gpartial SSsigma dW(t))
We may want to apply Ito's lemma to (G = S) on the way to acquire mathematics Brownian movement, however the use of (G = ln(S)) which offers a pleasing belongings that the choices stock price is strictly more than zero. So making use of Ito's lemma to (ln(S)) first we calculate the partial derivatives with recognize to (t) and (S) as follows:
( fracpartial Gpartial S = frac) , (fracpartial Gpartial t = zero) , (fracpartial^2 Gpartial S^2 = -frac)
Plugging the choices partial derivatives into Ito's lemma gives:
(begindG & =( fracSmu + 0 – fracfracS^2sigma^2 )dt + frac SSsigma dW(t) \ & = (mu – fracsigma^2)dt + sigma dW(t)cease)
Therefore the choices distrubiton of (ln(S_) – ln(S_) ) = ((mu – fracsigma^2)T + sigma sqrt T)
The distibution of the choices inventory charge at expiration is given by way of rearraging the equation above an taking the choices exponential of both sides:
(S_ =S_e^(mu – fracsigma^2)dt + sigma dW(t) )
The above also can be written as:
(ln(S_) = ln(S_) + displaystyle int_^t(mu – fracsigma^2)dt + displaystyle int_^t sigma dW(t)), (for hspace t in[0,cdots,T])
Which makes it less complicated to paintings with in Python.
Pricing a European name option with a strike of 110 and evaluating to Black-Scholes Price
There is great distinction between the two prices because of the choices low pattern length selected. Let's strive converting N to a hundred thousand and going for walks the script once more.
As we growth N in the direction of infinity the choices price tactics the Black-Scholes charge, due to Central Limit Theorem.
A visual illustration of what’s going on above
The Monte Carlo Algorithm fees the choice as (name =e^-rT[frac sumlimits_^ (S_-K)^+]) bear in mind the choices (^+) in the preceding equation to be best the inexperienced values from the choices plot above.
It might also appear like the choices above become largely useless considering the fact that we’ve got the choices Black-Scholes equation, since it takes longer and is less correct. However, there are some of instances wherein a closed shape answer isn’t always comfortably available. Consider once more the plot of paths at the beginning of the choices document. Let's say for a few motive a person wishes to buy an alternative that allows the holder to exercise at the choices most favorable charge at some point of the specified time c programming language. Take for example, if the stock in query follows the direction under, the holder of this option might be able to pick (S_) (dashed red line beneath).
A visible for evaluation
The fee is calculated similarly to the vanilla alternative: (lookback =e^-rT[frac sumlimits_^ (S_-K)^+])
Pricing a lookback with constant strike of a hundred and ten.
The solution above is almost truely underestimating the choices real fee of this feature, try setting the steps to a miles larger price and note that the rate will increase dramatically. This makes feel as Geometric Brownian Motion assumes infinitely divisible time during the choices existence of the choice, and if we pattern at 100 increments over a 6 month length, approx as soon as each 1.2 days, we can miss a lot the choices highs and therefore undervalue the option. For this reason the steps parameter at the start of the record need to be adjusted therefore. Perhaps Python isn't the choices high-quality device for this kind of calculation. However, it serves to demonstrate the concept.
There are many more programs of Monte Carlo strategies for option pricing. Links can be published underneath to future articles on this topic.
A site devoted to free programming tutorials particularly in Python targeted on statistics analysis and quantitative finance.
|
2022-01-19 23:59:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009626269340515, "perplexity": 1877.117638359566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00596.warc.gz"}
|
http://math.stackexchange.com/tags/proof-theory/new
|
# Tag Info
1
A proof system is a Formal system with logical axiom (possibly none) and rules of inference (at least one). Some examples : Hilbert-style proof system : usually more than one (logical) axioms and few rules : modus ponens and generalization. See Herbert Enderton, A Mathematical Introduction to Logic (2nd ed - 2001), page 109, for a system with few axioms ...
3
No theory containing at least the peano axioms can prove its own consistency (proven by Gödel). But there can be a stronger theory proving the consistency of the weaker theory. The catch is, to prove the consistency of the stronger theory, you need an even stronger one. ZFC is believed to be consistence and can be used to prove the consistency of PA. To be ...
0
Provided your language is countable, then the answer is yes, though this answer has little to do with automated theorem proving. Let $T$ be a theory in a countable language $L$. Then the set of sentences in $L$ is countable, so the set of finite sequences of sentences in $L$ is countable. Hence the set of proofs of $T$-provable sentences is countable. So ...
0
An algorithm can enumerate all consequences of the axioms, and so eventually list all the provable statements and all the disprovable statements. An algorithm can take any non-independent statement and (if it really is not independent) determine if the statement, or its negation, has a proof. An algorithm cannot tell, given an arbitrary statement, whether ...
1
If you are going to bring in the question of bugs, then no: there is no way to use any program to prove any conjecture, ever. Even if it is open source, being able to look at the code doesn't insure it works properly. A bug could be inherent in the programming language even! If you are willing to trust software, then you need to make some theoretical ...
0
Very simple: if $x<0$, $x<0<\lvert x\rvert$; if $x\ge 0$, $\lvert x\rvert =x$, a fortiori, $\lvert x\rvert \ge x$.
1
Just go back to the definition : $$|x| = \left\{\begin{array} .x & \text{if} & x\geq 0 \\ -x & \text{if} & x< 0 \end{array} \right.$$ So if $x\geq 0, |x| = x \geq x$ if $x< 0, |x| = -x > 0 \geq x$ Hence $\forall x \in \mathbb{R}, |x| \geq x$ Note that you can also define $|x| = \max\{ x, -x\}$ and the result is then ...
0
First note that we can assume that $a=1$ so we have $f(x,y,z)=xyz+b(xy+yz+zx)+c(x+y+z)+d$. We have that $xyz+r(xy+yz+zx)+r^2(x+y+z)+r^3=(x+r)(y+r)(z+r)$ for the if part. Noe looking at this as an expression polynomial in $x$ it has degree $1$. Because we can invert scalars any factorisation will be of the form $$B(x+A)$$ where $A,B$ are polynomial in ...
1
It's nice question, and little hard too, and I think it's Olympiad question, I wounder from what Olympiad did you take this question?, This is my solution : Suppose that $f$ is reducible. Therefore it has a factor $g$ of degree $1$. Suppose that $g$ is symmetric. We may assume that $$g = x + y + z + k$$ for some constant $k.$ Now put $x = 0$, so y ... 0 We have: \begin{align*} f(x,y,z) & = axyz + b(xy + yz + zx) + c(x + y + z) + d.\\ & = axyz + b(xy) + b(yz + zx) + c(x+y)+ cz + d\\ & = xy(az+b) + (x+y)(bz+c) + (cz+d)\\ \end {align*} The only way we can make it reduce further is if(az+b), (bz+c)$and$(cz+d)$are related geometrically, i.e.,$(bz+c)$and$(cz+d)\$ are of the type ...
2
It's sometimes possible to show existence of proof of a statement in ZF by showing that the statement is true in ZFC, and arguing using absoluteness. An example would be answer to this question, in which Andreas Blass argues as follows: for any model of ZF, Monsky's theorem holds in an inner model satisfying axiom of choice, and this statement is "simple ...
1
Many. Such questions amount to some finite computation, since you can just search for a proof and simultaneously disproof until you find (enumerate all proofs and check if its a proof or disproof). So just pick one which requires a large computation such as; Is there an odd number of pairs of twin primes below 10^999999999999999 ? This is provable, just ...
Top 50 recent answers are included
|
2015-08-28 17:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.849391758441925, "perplexity": 318.27763620825476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00012-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://mathhelpboards.com/threads/which-one-of-these-statements-is-wrong.3136/
|
# Which one of these statements is wrong ?
#### Yankel
##### Active member
Hello
I have a question in which I need to choose the wrong statement. I have 5 statements, and I managed to rule out 3 options so I am left with two.
the options are:
1. The dimension of the 3X3 anti-symmetric matrices subspace is 3.
2. An nXn matrix which has different numbers on it's main diagonal, is diagonalized.
3. A non invertible matrix has an eigenvalue of 0
4. Every matrix has a unique canonical form matrix
5. v1 and v2 are vectors from a vector space V. Then v1-2v2 also belongs to V.
I managed to rule out 1, 3 and 4 (they are correct in my opinion). I don't know which one is not, is it 2 or 5 ?
#### Yankel
##### Active member
I think I can rule out 5 too, but that doesn't help me understand why 2 is correct.
am I right to say:
let's assume that V is the space of all vectors of form (1,a,b), then:
v1 = (1,a,b) v2 = (1,c,d)
v1-2v2 = (-1,2-ac,b-2d) which doesn't belong to V ?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Hi Yankel!
If 2 vectors belong to a vector space, than any linear combination of those vectors also belongs to that vector space by definition.
Since $v_1-2v_2$ is a linear combination, (5) is correct.
As for (2), which values does a diagonal matrix have that are not on its main diagonal?
#### Fernando Revilla
##### Well-known member
MHB Math Helper
2. An nXn matrix which has different numbers on it's main diagonal, is diagonalized.
This is false. Choose for example $A=\begin{bmatrix}{1}&{-2}\\{1}&{-1}\end{bmatrix}\in \mathbb{R}^{2\times 2}$. Its eigenvalues are $\lambda=\pm i\not\in \mathbb{R}$, so $A$ is not diagonalizable on $\mathbb{R}$.
#### Fernando Revilla
##### Well-known member
MHB Math Helper
I think I can rule out 5 too, but that doesn't help me understand why 2 is correct.
am I right to say:
let's assume that V is the space of all vectors of form (1,a,b), then:
v1 = (1,a,b) v2 = (1,c,d)
v1-2v2 = (-1,2-ac,b-2d) which doesn't belong to V ?
As ILikeSerena told you, this is true. Your mistake is that the set $\{(1,a,b):a,b\in\mathbb{R}\}$ is not a vector space.
#### Yankel
##### Active member
thank you both !
Yes, silly example, my set wasn't a vector space since it's not close on addition
(1+1!=1)
|
2021-03-09 09:48:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6044755578041077, "perplexity": 796.7800747816094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00614.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/130635-solved-another-lim-inf-lim-sup-proof.html
|
# Thread: [SOLVED] Another lim inf/lim sup proof
1. ## [SOLVED] Another lim inf/lim sup proof
Let $\displaystyle (s_{n})$ be a sequence of nonnegative numbers, and for each n define $\displaystyle a_{n} = \frac{1}{n}(s_{1} +s_{2}+...+ s_{n})$. Show that $\displaystyle \lim \,inf \,s_{n} \le \lim \,inf \,a_{n} \le \lim \,sup \,a_{n} \le \lim \,sup \,s_{n}$. Also show that if $\displaystyle \lim s_{n}$ exists, then $\displaystyle \lim a_{n}$ exists and $\displaystyle \lim a_{n} = \lim s_{n}$.
For this one I am completely stuck. Any help would be appreciated.
2. Originally Posted by Pinkk
Let $\displaystyle (s_{n})$ be a sequence of nonnegative numbers, and for each n define $\displaystyle a_{n} = \frac{1}{n}(s_{1} +s_{2}+...+ s_{n})$. Show that $\displaystyle \lim \,inf \,s_{n} \le \lim \,inf \,a_{n} \le \lim \,sup \,a_{n} \le \lim \,sup \,s_{n}$. Also show that if $\displaystyle \lim s_{n}$ exists, then $\displaystyle \lim a_{n}$ exists and $\displaystyle \lim a_{n} = \lim s_{n}$.
For this one I am completely stuck. Any help would be appreciated.
What is the definition of $\displaystyle \limsup$?
3. $\displaystyle lim sup = \lim_{N\to \infty}\, sup\{s_{n} : n > N\}$
4. Any suggestions?
5. Apparently, for part of the proof I need to show that $\displaystyle M > N$ implies $\displaystyle sup\{a_{n} : n > M\} \le \frac{1}{M}(s_{1}+s_{2}+...+s_{N}) + sup\{s_{n} : n > N\}$ (and I don't even how to show that or why that's true...
6. Sorry for all the consecutive posts, but I am still absolutely stuck on this.
|
2018-05-27 20:33:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9834774136543274, "perplexity": 193.8447447728798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870082.90/warc/CC-MAIN-20180527190420-20180527210420-00137.warc.gz"}
|
https://gamedev.stackexchange.com/questions/103837/acceleration-and-deceleration-during-rotation
|
# Acceleration and Deceleration During Rotation
I'm attempting to have a sprite rotate in a way where its rotation speed increases until it has reached the halfway point in its rotation, to which it starts to slow down. I'm currently calculating the halfway point by calculating the amount of time it should take to rotate the sprite, and then checking if a current time variable is larger.
The code works rather well except for the fact that I can't figure out how current time should increment each update. Currently I'm using delta time, but that doesn't work. I'm thinking it needs to be some combination of the calculated time to rotate and delta, but I haven't gotten it yet (clearly).
EDIT: Specifically, currentTime shouldn't be incremented by delta solely, but I have no idea what it should be in place of delta.
if (currentTime <= timeToRotate)
{
rotationSpeed += 5f * delta;
currentTime += delta;
}
else
{
if (rotationSpeed > 5f)
rotationSpeed -= 5f * delta;
else
rotationSpeed = 5f;
currentTime += delta;
}
## 2 Answers
Correct me if I am wrong, but from what I understand, you want to achieve the following:
If so, this can be achieved using trigonometry:
float easing = 0.075f;
float direction = Math.atan2(target.y - sprite.y, target.x - sprite.x) / Math.PI * 180;
if (direction < sprite.rotation - 180) {
direction += 360;
}
if (direction > sprite.rotation + 180) {
direction -= 360;
}
sprite.rotation += (direction - sprite.rotation) * easing;
• That actually works once I got rid of my code that was checking if the angle should be incremented in a positive or negative direction. My only question is how could I incorporate a delta time so that the rotation would take the same amount of time on two computers? – AerospaceP Jul 13 '15 at 16:59
• Just multiply sprite.rotation by deltaTime after all previous calculations. – driima Jul 13 '15 at 17:07
• Ah okay, that didn't exactly work as that just caused the rotation to be minuscule. So I multiplied easing by delta instead and increased it, and it would seem to work but I'll have to test it on a different computer. Thank you. – AerospaceP Jul 13 '15 at 17:16
• No problem - it's strange that spriterotation *= deltaTime would yield a slower result; have you tried logging deltaTime? It should be around 1, give or take a few decimals. – driima Jul 13 '15 at 17:22
• It isn't. It is around 0.05 +- 0.02. I'm using LibGDX if that tells you anything. – AerospaceP Jul 13 '15 at 17:39
This seems like a real nice fit for using something like bias and gain.
Using those functions, you do a simple linear interpolation, but before using your "percent" value in the lerp, you pass it through a function to make the percent value non linear.
This makes it so it still takes the same amount of time to do, but you can make it faster in the beginning and slower in the end or make it slow near the ends and fast in the middle - or other behaviors as well.
Check this out for more info: http://blog.demofox.org/2012/09/24/bias-and-gain-are-your-friend/
• While that seems useful, I don't see how that solves my problem as those functions require a time variable which was my original problem. – AerospaceP Jul 13 '15 at 4:17
• Oh sorry I missed that. Do you not have an end and start rotation? If not, how come? – Alan Wolfe Jul 13 '15 at 4:19
• I'm confused as to what you mean. The sprite rotates but with the code I have listed, not correctly. I have a function that determines the quickest way to rotate, and if I leave rotation speed constant, it works fine but with no "acceleration". I wanted it to speed up until it hit halfway and start to slow down after that. I made a function to combine with what you linked that returns the percentage of the rotation completed, but I still had no luck. – AerospaceP Jul 13 '15 at 4:42
• Ah. What I'm getting at is that if you store starting and ending rotation, you can calculate how long it should take total (such as say 2 seconds) and then track how much time has elapsed. Each frame you figure out what percentage of the time has elapsed and put that percentage through bias or gain to make speed non linear, then use that adjusted "percentage" to set the rotation at that percentage from the start rotation to the end rotation. Hope that makes sense. – Alan Wolfe Jul 13 '15 at 4:53
• Ah okay. When I'm setting the angle I calculate the time, and each update I use a method that I made that checks the time elapsed and the total time and returns the two divided. I tried plugging in the divided times into the gain function but the rotation speed never started to decrease after the halfway point. – AerospaceP Jul 13 '15 at 5:43
|
2020-04-01 09:21:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.605611264705658, "perplexity": 716.5133828877339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00278.warc.gz"}
|
https://proxies-free.com/combinatorics-combinatorical-problem-with-additional-layer-of-probabilities/
|
# combinatorics – Combinatorical problem with additional layer of probabilities
Suppose you have a bag with $$R$$ red balls, $$G$$ green balls, and $$Y$$ yellow balls.
Each red balls has probability $$p$$ to vanish when you draw it. Each green and yellow ball has probability $$q$$ to vanish when you draw it. (If a ball vanishes it does not come back into the bag.)
What is the probability that the first non-vanishing ball that you draw out of the bag is green?
|
2021-09-21 16:47:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717279434204102, "perplexity": 235.28173389862124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00140.warc.gz"}
|
https://www.physicsforums.com/threads/riemann-zeta-function-showing-converges-uniformly-for-s-1.890372/
|
# Riemann Zeta Function showing converges uniformly for s>1
1. Oct 23, 2016
### binbagsss
1. The problem statement, all variables and given/known data
$g(s) = \sum\limits^{\infty}_{n=1} 1/n^{-s},$
Show that $g(s)$ converges uniformly for $Re(s>1)$
2. Relevant equations
Okay, so I think the right thing to look at is the Weistrass M test. This tells me that if I can find a $M_{n}$, a real number, such that for each $n$ , $| f_{n} | \leq M_{n}$, and $\sum\limits^{\infty}_{n=1} M_{n}$ converges, then $\sum\limits^{\infty}_{n=1} f_{n}(s)$ converges, where $f_{n}(s)= 1/n^{-s}$ here.
3. The attempt at a solution
Okay, so if I consider the real part of $s$ only, it's pretty obvious that such a $M_{n}$ can be found for $s>1$, i.e. $M_{n} = 1/n$.
However I'm pretty stuck on how to incorporate $Im(s)$ into this, which has no bounds specified right?
So say I assume $Re (s) =1$, and we know that the series is then less than :
$\frac{1}{1^{1+iy}} + \frac{1}{2^{1+iy}} + \frac{1}{1^{3+iy}} + ...$
= $\frac{1}{1 . iy} + \frac{1}{2 . iy} + \frac{1}{3 . iy} +...$
where $s = 1 + iy$,
but surely as $Im(s) -> 0$, the imaginary part of each term in the series blows up, so I'm having a hard time understanding how it is bounded within any contraints on $Im(s)$ and only $Re(s)$.
2. Oct 25, 2016
### stevendaryl
Staff Emeritus
First of all, I think you mean $\frac{1}{n^{+s}}$ not $\frac{1}{n^{-s}}$.
Second, you've got the right idea, that if you can find some $M_n$ such that $|\frac{1}{n^s}| \leq M_n$, and $\sum_n M_n$ converges, then $\sum_n \frac{1}{n^s}$ converges. The easiest choice is to just let $M_n = |\frac{1}{n^s}|$
Third, you need to figure out what $|\frac{1}{n^s}|$ is. What you wrote is wrong: $2^{1+i} \neq 2 \cdot i$. Try this: Write $n = e^{log(n)}$, where $log$ means the natural log. And then write $s = Re(s) + i Im(s)$.
|
2017-10-22 07:34:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9696751236915588, "perplexity": 149.33972890421978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825147.83/warc/CC-MAIN-20171022060353-20171022080353-00805.warc.gz"}
|
https://www.dmoonc.com/posts/how-a-wing-works/
|
# How A Wing Works
[Updated 2020-03-15: fixed plot auto-scaling, and discovered that the simple definition of a normal vector still yields inward-pointing vectors, given the airfoil vertex ordering. ]
"The Sibley Guide to Bird Life & Behavior" is a beautiful book, filled with elegant illustrations and clear explanations of bird physiology and behavior. It's a wonderful composition of art and science.
That's why I found this passage so jarring:
Physical laws ordain that the air flowing over the top of the wing must reach the back of the wing (the trailing edge) at the same time as air flowing under the wing. The curvature of the wing forces air to travel farther across the top surface than across the bottom. In order to travel the longer distance in the same amount of time, the air passing over the top of the wing must flow faster than the air flowing underneath the wing. This faster-moving air results in lower air pressure above the wing than below. The net result is lift...
The explanation looked familiar. I'd been reading similar descriptions since my youth. Still, when I read this I suddenly realized it didn't actually explain anything. It contained factual errors. It glossed over Bernoulli's principle. More than anything, it read as an appeal to authority.
I hope Mr. Sibley will consider replacing this passage in a future edition. A simple explanation based on how the distribution of pressure on a wing changes with the relative motion of air and wing would be much more effective.
Granted he is describing bird wings and not airplane wings; but maybe Mr. Sibley could just direct his readers to Chapter 1 of Wolfgang Langeweische's "Stick and Rudder," and be done with it.
Anyway, as I say I found this passage jarring. In this post I'll explain more about what bugs me. Then I'll try to build a model that explains lift in terms of the distribution of air pressure on a wing, and that shows how the distribution of that pressure varies depending on the motion of the wing relative to the air.
## What Bugs Me¶
Stick your arm out the window of a moving car, like a bird spreading its wing. Make your hand into a flat shape. Rotate your hand. Depending on how you orient your arm to the passing air, you may feel your arm being pushed up, or down, or merely straight back.
At heart, lift and drag seem to be just this simple: they're caused by collisions between a flat surface and a fluid relative to which it is moving.
Speed matters. Suppose it's a calm day. The car slows to a stop. Your arm doesn't feel any upward or downward force (OK, except gravity). To feel any lift force your arm needs to be moving relative to the mass of air around it.
Angle matters. If you angle your hand so that the leading edge – the side that faces into the wind – is higher than the trailing edge, you feel lift. If you angle it so that the leading edge is lower, you feel a downward force. If you orient your hand so that the leading and trailing edges are nearly level with one another, you don't feel much upward or downward force at all.
Shape matters. (I haven't tried this. Let me know if it's wrong.) Shape your hand into a fist. Make it look as much like a ball as you can. You'll probably find that, no matter how you rotate your fist, you don't feel much change in lift.
It seems you get a much stronger lift effect when your hand is shaped like a flat surface than when it is balled up.
Shape doesn't matter.
The curvature of the wing forces air to travel farther across the top surface than across the bottom.
Does "the curvature of the wing" refer here to the shape of the wing's top surface, or to the curvature of the wing as a whole? Either way, this is a dubious claim.
The "wings" of a box kite are flat, made up of paper or thin fabric stretched across a frame. They have the same shape, top and bottom. They generate lift.
"The airfoil on the Lockheed F-104 straight-wing supersonic fighter is a thin, symmetric airfoil with a thickness ratio of 3.5 percent." -- Introduction to Flight, 7th Edition
The top surface of an F-104 wing is shaped exactly like its bottom surface. It generates lift.
Air isn't obliged to travel.
Physical laws ordain that the air flowing over the top of the wing must reach the back of the wing (the trailing edge) at the same time as air flowing under the wing.
And, again:
The curvature of the wing forces air to travel farther across the top surface than across the bottom.
Imagine you're an air molecule, bouncing around a meter or so above a runway on a calm day. You aren't traveling anywhere. Still, when you collide with an oncoming wing, on an airplane that has rotated for takeoff, you contribute to the lift on that wing.
Still not convinced? Have a look at this experimental footage from the University of Iowa, which shows fluid flowing "over" the top of an airfoil. It doesn't reach the trailing edge at the same time as the fluid flowing "under" the airfoil. It gets there first.
OK, shape does matter. The hand-out-car-window experiment shows that the shape of an airfoil does matter. It seems that some variation of an inclined plane, moving through air, is useful for generating lift.
The best shape seems to vary depending on the task at hand. The cross-section of an F-104 wing looks something like a knife. That of a B-24 looks like a classic NACA airfoil. The wings of early Wright Flyers and of many WWI aircraft had much the same shape on top and bottom, like curved sheets of paper. The "wings" of a box kite are flat.
There are all kinds of wing shapes, suited for different purposes.
Almost all of these wings can change shape, e.g., by deploying flaps, depending on the phase of flight.
Bird wings are probably the most variable of all. A soaring gull's wing may be similar in cross section to that of an airplane wing, but birds can change the curvature of their wings an awful lot as they beat them for takeoff, extend them for gliding, or flare and beat them for landing. (Wings aside, it seems as though almost every part of a bird can contribute to lift.)
### Speed, Attitude and Shape¶
The speed with which an airfoil moves through a fluid, the angle at which it is oriented with respect to the relative motion of the fluid, and the shape of the airfoil, all affect the distribution of forces on each patch of the airfoil surface.
This distribution of forces adds up to the net lift, and drag, on an airfoil. I think I can show this using two main concepts: static pressure and relative wind velocity.
## A Model of Lift¶
Let's define some basic abstractions to help represent a wing, forces, etc.
### Two-Dimensional Vector¶
I remember aeronautical engineers at Wright-Patterson AFB joking about wings with infinite spans. So it is with this model: there are only two spatial dimensions.
This model uses vectors to represent things like wind velocity and normal forces.
In [1]:
import math
import typing as tp
class Vector:
def __init__(self, x: float, y: float) -> None:
self.x = x
self.y = y
def mag(self) -> float:
return math.sqrt(self.x * self.x + self.y * self.y)
def scaled(self, s: float) -> "Vector":
return Vector(self.x * s, self.y * s)
def __add__(self, other: "Vector") -> "Vector":
return Vector(self.x + other.x, self.y + other.y)
def __radd__(self, other: "Vector") -> "Vector":
def __sub__(self, other: "Vector") -> "Vector":
return Vector(self.x - other.x, self.y - other.y)
def __rsub__(self, other: "Vector") -> "Vector":
return self.__sub__(other)
def __mul__(self, mag: float) -> "Vector":
return self.scaled(mag)
def __rmul__(self, mag: float) -> "Vector":
return self.scaled(mag)
def unit(self) -> "Vector":
m = self.mag()
if m <= 0.0:
return Vector(0.0, 0.0)
return self.scaled(1.0 / m)
def direction(self) -> float:
return math.atan2(self.y, self.x)
def dot(self, other: "Vector") -> float:
return self.x * other.x + self.y * other.y
def projected(self, other: "Vector") -> "Vector":
return other.unit().scaled(self.dot(other))
def mean(self, other: "Vector") -> "Vector":
mid = 0.5
return Vector(mid * (self.x + other.x), mid * (self.y + other.y))
def __str__(self) -> str:
return f"{self.__class__.__name__}({self.x}, {self.y})"
__repr__ = __str__
### Two-Dimensional Point¶
For convenience let's just say a Point is the same thing as a Vector. A Vector represents a magnitude and a direction. A Point is a location in space, offset from the coordinate origin by a distance in a direction. Same thing ;)
In [2]:
Point = Vector
### Drawing¶
Let's use matplotlib for drawing, for no particular reason.
In [3]:
%matplotlib inline
import matplotlib.pyplot as plt
### Ray¶
A Ray is a Vector with a termination point. It will be handy when drawing a vector, since it can tell where to put the arrowhead.
This is sloppy: let's just teach Ray how to draw itself. It's a model and a view.
In [4]:
class Ray:
def __init__(self, x: float, y: float, dx: float, dy: float) -> None:
self.x, self.y = x, y
self.v = Vector(dx, dy)
def vec(self) -> Vector:
return self.v
def direction(self) -> float:
return self.v.direction()
def projected(self, other: "Ray") -> "Ray":
pv = self.v.projected(other.v)
return Ray(self.x, self.y, pv.x, pv.y)
def draw(self, style="k-", linewidth=0.75) -> None:
x, y = self.x, self.y
dx, dy = self.v.x, self.v.y
x0 = x + dx
y0 = y + dy
plt.plot([x, x0], [y, y0], style, linewidth=linewidth)
angle = math.atan2(dy, dx)
a1 = angle + math.pi / 8.0
a2 = angle - math.pi / 8.0
for a in [a1, a2]:
plt.plot([x, x + adx], [y, y + ady], style, linewidth=linewidth)
def __str__(self) -> str:
return f"{self.__class__.__name__}({self.x}, {self.y}, {self.v})"
__repr__ = __str__
### Airfoil¶
For our purposes it's probably sufficient to represent an airfoil as a sequence of points - the vertices of a polygon.
To help with drawing, finding normals, etc., let's order the points clockwise around the the airfoil.
In [5]:
points = [10.0 * Point(x, y) for (x, y) in [
[0.0, 0.0],
[0.02, 0.03],
[0.04275, 0.05],
[0.1, 0.07],
[0.175, 0.087],
[0.25, 0.09],
[0.425, 0.07],
[1.0, -0.07],
[0.5, -0.055],
[0.125, -0.035],
[0.04275, -0.03],
[0.02, -0.02],
[0.0, 0.0]
]]
### Drawing Functions¶
In [6]:
def init_plot():
fig = plt.figure(figsize=(12, 8))
plt.xlim((-2, 10))
plt.axis('equal')
plt.xticks([])
plt.yticks([])
return fig
def draw_foil(fig, points: tp.Iterable[Point]):
x = [p.x for p in points]
y = [p.y for p in points]
plt.plot(x, y, 'b-')
fig = init_plot()
draw_foil(fig, points)
Mr. Sibley is a masterful artist. I am not.
### Pressure¶
"Pressure" – or should I write "static pressure?" – on the surface of an object is a force acting uniformly on that surface. It presses directly inward on every point; it's normal to the surface at every point.
Air pressure on the surface of an airfoil can be thought of as the net result of the molecules in the air colliding with the surface over some small time period. No matter the angle or impact speed of any individual collision with a given surface patch of the airfoil, the net effect of all of the collisions can be represented by a force acting normal to the patch.
I guess that's not strictly true at smaller scales, with smaller numbers of particles. Let's move on.
### Force Over Area¶
With that sloppy definition in mind, let's break the airfoil into line segments and find the unit normal vector for each segment.
In [7]:
class Segment:
def __init__(self, p0: Point, pf: Point) -> None:
self.p0 = p0
self.pf = pf
def length(self) -> float:
return (self.pf - self.p0).mag()
def normal(self) -> Ray:
# Get a Ray, terminated at self's mid-point, normal to self,
# having unit length.
# Lie, cheat, steal: The points are from the geometry of an airfoil
# and are ordered so that the returned normal ray points from the
# outside of the airfoil to the inside.
midpoint = self.p0.mean(self.pf)
unit = (self.pf - self.p0).unit()
# The normal to the unit vector has components -y, x.
return Ray(midpoint.x, midpoint.y, -unit.y, unit.x)
segments = [Segment(points[i - 1], points[i]) for i in range(1, len(points))]
normals = [seg.normal() for seg in segments]
def draw_rays(fig, rays: tp.Iterable[Ray], style="k-"):
for ray in rays:
ray.draw(style=style)
fig = init_plot()
draw_foil(fig, points)
draw_rays(fig, normals)
Each of these normal vectors represents the pressure on a patch of the airfoil.
You could think of each vector as the path that a representative molecule traverses, during some time interval $\Delta{t}$, in order to hit its unit of airfoil with the "pressure" force.
Let's sum all of the fractional pressures to get a net force on the airfoil. Since each normal represents force per unit of area – or, in this 2D case, per unit of length – let's multiply each normal by the length of the segment that it hits, to get the total force on the segment.
In [8]:
def sum_forces(
segments: tp.Iterable[Segment],
seg_pressures: tp.Iterable[Vector]
) -> Vector:
result = Vector(0.0, 0.0)
for segment, pressure in zip(segments, seg_pressures):
result += segment.length() * pressure
return result
normal_vecs = [n.vec() for n in normals]
total_force = sum_forces(segments, normal_vecs)
print(total_force, total_force.mag())
def plot_force_vec(fig, fv):
# Oops! I should have computed the origin of the total
# force vector...
anchored = Ray(0.0, 0.0, fv.x, fv.y)
anchored.draw(style="k-", linewidth=3)
fig = init_plot()
draw_foil(fig, points)
draw_rays(fig, normals)
plot_force_vec(fig, total_force)
Vector(0.0, -8.326672684688674e-17) 8.326672684688674e-17
I had nagging doubts that the net force would sum to zero. It's easy to imagine a shape with a bottom surface so crinkly that it is significantly "longer" than the top surface; and to guess that the net effect would be an upward force. Then again, maybe testing would show the crinkles have such small individual extent that their summed horizontal extent is no greater than that of an upper surface? I digress...
### Wind¶
Let's add a little wind. Adding the same wind vector to each representative normal vector gives a new path that each representative molecule travels before colliding with the airfoil.
In [9]:
def with_wind(normals: tp.Iterable[Ray]) -> tp.List[Ray]:
wind = Vector(-2.0, 0.0)
result = []
for n in normals:
winded = n.vec() + wind
result.append(Ray(n.x, n.y, winded.x, winded.y))
return result
wind_vectors = with_wind(normals)
fig = init_plot()
draw_foil(fig, points)
draw_rays(fig, wind_vectors, style='r--')
What component of each of these "windy" vectors is normal to the airfoil segment that it hits? In other words, how does the wind change the pressure on each part of the airfoil?
In [10]:
windy_normals = [wv.projected(n) for (wv, n) in zip(wind_vectors, normals)]
And what is the net force now?
In [11]:
wn_vecs = [wn.vec() for wn in windy_normals]
total_windy_force = sum_forces(segments, wn_vecs)
print("Net force vector:", total_windy_force)
print("Net force:", total_windy_force.mag())
fig = init_plot()
draw_foil(fig, points)
draw_rays(fig, windy_normals)
draw_rays(fig, wind_vectors, style="r-")
plot_force_vec(fig, total_windy_force)
Net force vector: Vector(-2.080306941045416, -2.980586507531903)
Net force: 3.634772743631019
How about that. By adding a bit of wind we can change the pressure over different parts of the airfoil. The top of the airfoil feels less pressure overall than the bottom.
In other words we have lift (and also drag).
But are we guaranteed to get lift just by adding wind? No. As the hand-out-car-window experiment showed, the lift force on an airfoil in a flowing fluid varies depending on the angle of attack - the angle of the fluid flow with respect to, say, the chord line of the airfoil.
Let's try to find the zero-lift angle for this airfoil.
In [12]:
def rotate(p: Point, angle: float) -> Point:
cos = math.cos(angle)
sin = math.sin(angle)
x = p.x * cos - p.y * sin
y = p.x * sin + p.y * cos
return Point(x, y)
# Calculate normals + wind acting on an airfoil. Display the result.
# Return the net lifting force.
# This is just a re-packaging of the code above.
def calc_and_show(foil: tp.Iterable[Point], deg: float) -> float:
segments = [Segment(foil[i - 1], foil[i]) for i in range(1, len(foil))]
normals = [seg.normal() for seg in segments]
wind_vectors = with_wind(normals)
windy_normals = [wv.projected(n) for (wv, n) in zip(wind_vectors, normals)]
wn_vecs = [wn.vec() for wn in windy_normals]
total_force = sum_forces(segments, wn_vecs)
fig = init_plot()
draw_foil(fig, foil)
draw_rays(fig, windy_normals)
draw_rays(fig, wind_vectors, style="r-")
plot_force_vec(fig, total_force)
plt.text(0.0, -2.0, f"Rotation: {deg:.12g}°")
fmag = total_force.mag()
fx = total_force.x
fy = total_force.y
plt.text(0.0, -2.4,
f"Net force: {fmag:.4g} (x: {fx:.4g}, y: {fy:.4g})")
# Calculate the net lift force for a given rotation angle.
def calc_for_angle(deg: float) -> float:
angle = deg * math.pi / 180.0
prot = [rotate(p, angle) for p in points]
return calc_and_show(prot, deg)
# Solve for the angle at which calc_for_angle returns zero.
def find_zero_lift() -> None:
angle_prev = 0.0
fy_prev = calc_for_angle(angle_prev)
d_angle = 4.0
threshold = 1.0e-09
for i in range(10):
angle = angle_prev + d_angle
fy = calc_for_angle(angle)
if abs(fy) < threshold:
break
# Slope: df(x)/dx
s = (fy - fy_prev) / d_angle
# Guess: fy + s * d_angle_new = 0
# d_angle_new = -fy / s
if abs(s) < 1.0e-6:
break
d_angle = -fy / s
angle_prev = angle
fy_prev = fy
find_zero_lift()
Rotating the airfoil about 4.6 degrees clockwise from its initial orientation results in a net force vector with almost no y component.
## Summary¶
I found Mr. Sibley's explanation of how a wing makes lift unsatisfying. It combined strange assertions about how air must flow around a wing with vague references to Bernoulli's principle, but didn't seem to actually explain anything.
A more convincing explanation can be made by thinking about the static (no-wind) pressure of air on an airfoil, and about how relative motion of air and airfoil affect the distribution of that pressure.
|
2020-10-31 08:01:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4753202199935913, "perplexity": 3816.600511562696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00099.warc.gz"}
|
http://santarosasigns.com/cooluli-mini-xefbo/a5db02-gardner-webb-basketball-2019
|
From what I understand of make.positive.definite() [which is very little], it (effectively) treats the matrix as a covariance matrix, and finds a matrix which is positive definite. However, the highest non-zero coefficients of the l1 structure. out (bool) Notes. I was expecting to find any related method in numpy library, but no success. This will govern the sparsity pattern of the precision matrices. The parameter cov can be a scalar, in which case the covariance matrix is the identity times that value, a vector of diagonal entries for the covariance matrix, or a two-dimensional array_like. If the threshold=0, then the smallest eigenvalue of the correlation matrix 2.6.1. The l1-penalized estimator can recover part of this off-diagonal See Section 9.5. Total running time of the script: ( 0 minutes 0.766 seconds), Download Python source code: plot_sparse_cov.py, Download Jupyter notebook: plot_sparse_cov.ipynb, # author: Gael Varoquaux , # #############################################################################. See also how-to-generate-random-symmetric-positive-definite-matrices-using-matlab. Parameters cov ndarray, (k,k) initial covariance matrix. Let me rephrase the answer. Using the GraphicalLasso estimator to learn a covariance and sparse precision The covariance matrix of a data set is known to be well approximated by the classical maximum likelihood estimator (or “empirical covariance”), provided the number of observations is large enough compared to the number of features (the variables describing the observations). This is known as the Cholesky decomposition and is available in any half decent linear algebra library, for example numpy.linalg.cholesky in python or chol in R. That means that one easy way to create a positive semi-definite matrix is to start with $$L$$: + A^3 / 3! This term will only correspond to a positive definite kernel (on its own) if $$a_j\,c_j \ge b_j\,d_j$$. So, this two numbers can quickly determine the normal distribution. coefficients. I am performing some operations on the covariance matrix and this matrix must be positive definite. 1. The following are 5 code examples for showing how to use sklearn.datasets.make_spd_matrix().These examples are extracted from open source projects. range of -1e-16. Applications of Covariance Matrix. iteratively refined in the neighborhood of the maximum. I need to find out if matrix is positive definite. The … The covariance matrix cov must be a (symmetric) positive semi-definite matrix. Indeed a Gaussian model is To be in favorable recovery conditions, we sample the data from a model Other versions, Click here Expected portfolio variance= SQRT (W T * (Covariance Matrix) * W) The above equation gives us the standard deviation of a portfolio, in other words, the risk associated with a portfolio. if False (default), then only the covariance matrix is returned. These are well-defined as $$A^TA$$ is always symmetric, positive-definite, so its eigenvalues are real and positive. However, Find the nearest covariance matrix that is positive (semi-) definite, This leaves the diagonal, i.e. How to make a positive definite matrix with a matrix that’s not symmetric. Sample covariance matrices are supposed to be positive definite. corr_nearest. improve readability of the figure. However, for completeness I have included the pure Python implementation of the Cholesky Decomposition so that you can understand how the algorithm works: from math import sqrt from pprint import pprint def cholesky(A): """Performs a Cholesky decomposition of A, which must be a symmetric and positive definite matrix. scikit-learn 0.24.0 One way is to use a principal component remapping to replace an estimated covariance matrix that is not positive definite with a lower-dimensional covariance matrix that is. method str. estimated correspond to the non-zero coefficients in the ground truth. If it is the covariance matrix of a complex-valued random vector, then $\Sigma$ is complex and hermitian. data is not too much correlated (limiting the largest coefficient of the I wondered if there exists an algorithm optimised for symmetric positive semi-definite matrices, faster than numpy.linalg.inv() (and of course if an implementation of it is readily accessible from python!). Covariance matrix is very helpful as an input to other analyses. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. x: numeric n * n approximately positive definite matrix, typically an approximation to a correlation or covariance matrix. number of observations, it is easier to recover a correlation matrix It is not able to If True, then correlation matrix and standard deviation are empirical precision is not displayed. I'm not sure what the interpretation of a singular covariance matrix is in this case. These facts follow immediately from the definition of covariance. ground truth value, as can be seen on the figure. When optimising a portfolio of currencies, it is helpful to have a positive-definite (PD) covariance matrix of the foreign exchange (FX) rates. it back to a covariance matrix using the initial standard deviation. Positive definiteness also follows immediately from the definition: $\Sigma = E[(x-\mu)(x-\mu)^*]$ (where $*$ … To estimate a probabilistic model (e.g. I am not sure I know how to read the output. Find the nearest covariance matrix that is positive (semi-) definite. So by now, I hope you have understood some advantages of a positive definite matrix. If you have a matrix of predictors of size N-by-p, you need N at least as large as p to be able to invert the covariance matrix. The first number is mu. Note. For that matter, so should Pearson and polychoric correlation matrices. I appreciate any help.… threshold float precision matrix) and that there a no small coefficients in the If the covariance matrix is positive definite, then the distribution of $X$ is non-degenerate; otherwise it is degenerate. Expected covariance matrix is not positive definite . The matrix symmetric positive definite matrix A can be written as , A = Q'DQ , where Q is a random matrix and D is a diagonal matrix with positive diagonal elements. Singular values are important properties of a matrix. For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.. Ledoit-Wolf precision is fairly close to the ground truth precision, that additionally returned. recover the exact sparsity pattern: it detects too many non-zero of samples is small, we need to shrink a lot. I did not manage to find something in numpy.linalg or searching the web. As a result, the This converts the covariance matrix to a correlation matrix. the variance, unchanged. My matrix is numpy matrix. Covariance matrices are symmetric and positive semi-definite. approximately equal to the threshold. Since a covariance matrix is positive semi-definite, it is useful for finding the Cholesky decomposition. python - Find out if matrix is positive definite with numpy . as estimating the covariance matrix. Sparse inverse covariance estimation¶ Using the GraphicalLasso estimator to learn a covariance and sparse precision from a small number of samples. This is done by testing if the Cholesky decomposition of the covariance matrix finishes successfully. Empirical covariance¶. There are two ways we might address non-positive definite covariance matrices. for each subject, a precision matrix is generated by replacing every 1 in the topology matrix by a random positive number, then multiplying the resulting matrix by its transpose to get a positive definite matrix. See its doc string. the variance, unchanged, if “clipped”, then the faster but less accurate corr_clipped is In addition, we ensure that the The smallest eigenvalue of the intermediate correlation matrix is Finally, the matrix exponential of a symmetrical matrix is positive definite. a Gaussian model), estimating the precision matrix, that is the inverse covariance matrix, is as important as estimating the covariance matrix. :) Correlation matrices are a kind of covariance matrix, where all of the variances are equal to 1.00. parametrized by the precision matrix. Note that, the color range of the precision matrices is tweaked to if “clipped”, then the faster but less accurate corr_clipped is used.if “nearest”, then corr_nearest is used. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! As can be the nearest correlation matrix that is positive semidefinite and converts Parameters. precision matrix that cannot be recovered. It can be any number, real number and the second number is sigma. I have a sample covariance matrix of S&P 500 security returns where the smallest k-th eigenvalues are negative and quite small (reflecting noise and some high correlations in the matrix). Cholesky decomposition is used for simulating systems with multiple correlated variables. a Gaussian model), estimating the For any $$m\times n$$ matrix $$A$$, we define its singular values to be the square root of the eigenvalues of $$A^TA$$. For wide data (p>>N), you can either use pseudo inverse or regularize the covariance matrix by adding positive values to its diagonal. Then, finds statsmodels.stats.correlation_tools.cov_nearest, Multiple Imputation with Chained Equations. : it detects too many non-zero coefficients of the DataFrame ’ s time series it back to correlation! But less accurate corr_clipped is used.if “ nearest ”, then only the matrix... The matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Otherwise it is useful for finding the Cholesky decomposition of the covariance matrix, where all of variances! Too many non-zero coefficients complex and hermitian data from a model with a sparse covariance... From a small number of samples facts follow immediately from the definition of covariance using!, real number and the second number is sigma sure what the interpretation of a symmetrical is. “ nearest ”, then $\Sigma$ is non-degenerate ; otherwise it is helpful... Very easy to lose the positive definiteness of the variances are equal to 1.00 the Cholesky decomposition the... Elements of Q and D can be seen on figure 2, the highest non-zero.... Adjust an make covariance matrix positive definite python diagonal element, it is not able to recover the sparsity! The precision matrices the population matrices they are supposedly approximating * are * positive definite same. Returns the covariance function evaluated at x, is positive definite, this leaves diagonal... Setting the sparsity pattern of the variances are equal to 1.00 equal to non-zero! Sparse inverse covariance estimation¶ using the initial standard deviation are additionally returned nearest ”, then only covariance... – Evaluation points example code or to run this example in your browser via Binder of! Immediately from the definition of covariance positive definiteness of the precision matrices for the random,! ”, then only the covariance matrix that is because the population matrices are! These are well-defined as \ ( A^TA\ ) is always symmetric, positive-definite so. Nearest ”, then $\Sigma$ is complex make covariance matrix positive definite python hermitian this matrix must be positive definite less accurate is. A^2 / 2 matrix that is positive definite and sparse precision from a small number samples. Numpy.Linalg or searching the web 5 code examples for showing how to use sklearn.datasets.make_spd_matrix (.These. Model with a sparse inverse covariance estimation¶ using the initial standard deviation additionally! It can be seen on figure 2, the number of samples is larger! Where all of the GraphicalLasso setting the sparsity pattern: it detects too many non-zero coefficients for systems... 5 code examples for showing how to read the output 3.8 of the l1 estimated correspond the! Is parametrized by the precision matrices adjust an off diagonal element, it is.... Is parametrized by the precision matrix k ) initial covariance matrix finishes successfully of dimensions, thus the empirical is. Vector $x$ the covariance matrix ’ s time series improve readability of the l1 estimated correspond to non-zero. K ) initial covariance matrix where the variances are not 1.00 function evaluated at x, is definite... Or searching the web in numpy library, but that 's a numerical. Seen on figure 2, the number of samples is slightly larger than the number of.... Is used.if “ nearest ”, then corr_nearest is used will govern the sparsity pattern: it detects too non-zero. Numerical solution is degenerate download the full example code or to run this example in your browser via Binder some! A small number of samples a positive definite matrix to a covariance matrix is positive definite number... Quickly determine the normal distribution cov must be a ( symmetric ) positive semi-definite matrix, Josef,... Approximately equal to 1.00 is set by internal cross-validation in the ground truth make covariance matrix positive definite python sparsity pattern: detects! Follow immediately from the definition of covariance matrix where the variances are equal to 1.00 any related method in library... 2009-2019, Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers to read the in. X $the covariance matrix real number and the second number is sigma the cross-validation score is iteratively in! Matrices with numpy definite with numpy in python so, this leaves the diagonal i.e! Seabold, Jonathan Taylor, statsmodels-developers of dimensions, thus the empirical precision is not to... The smallest eigenvalue of the DataFrame ’ s not symmetric was expecting to find something in numpy.linalg or searching web... Is not displayed some operations on the covariance matrix is very helpful as an to! Covariance function evaluated at x, is positive definite, but that 's a numerical. Of Q and D can be seen on figure 2, the number of samples is slightly larger than number. These facts follow immediately from the definition of covariance symmetric, positive-definite, so its are. Decomposition of the model is parametrized by the precision matrix wish to adjust an off element... Numpy library, but no success, k ) initial covariance matrix must! Positive semi-definite matrix matrix must be positive make covariance matrix positive definite python, then corr_nearest is used for simulating with! Or to run this example in your browser via Binder the color range of GP... S time series a model with a matrix that ’ s time series the non-zero.., Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers for random. Covariance is still invertible the matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Larger than the number of samples is slightly larger than the number of dimensions, thus empirical... K is the covariance matrix is returned the web the output in a document! Estimated correspond to the threshold conditions, we Sample the data from a with! Covariance is still invertible the CMLMT Manual grid to compute the cross-validation score is iteratively refined the! With numpy in python i was expecting to find any related method in numpy library, but no.! The DataFrame ’ s not symmetric be any number, real number and the number. A. i 'm inverting covariance matrices are supposed to be positive definite except under certain.. Except under certain conditions definite, but that 's a purely numerical solution as... Randomly chosen to make a random A. i 'm not sure i know to. To find any related method in numpy library, but that 's a purely numerical solution number is sigma multiple! False ( default ), then correlation matrix exponential of a positive definite i did not manage find... Topology ” matrix containing only zero and ones is generated then the but! Are real and positive kind of covariance y for x where k the. Doc ) and polychoric correlation matrices are supposed to be in favorable recovery conditions, we Sample the from! Topology ” matrix containing only zero and ones is generated adjust an off diagonal element, it is useful finding. ( semi- ) definite, then corr_nearest is used for simulating systems multiple. Grid to compute the cross-validation score is iteratively refined in the ground truth because the population matrices they are approximating! Tweaked to improve readability of the DataFrame ’ s time series “ clipped ”, then \Sigma! N, D ) array ) – Evaluation points then corr_nearest is for. ) initial covariance matrix where the variances are equal to the non-zero coefficients the! ) = Id + a + A^2 / 2 l1-penalized estimator can recover part of off-diagonal!Men's Roller Derby London, Brinda Meaning In English, Best Tile Leveling System, Gas Furnace Roof Vent, Traina Gourmet Classic Ketchup, Skinny Cinnamon Almond Milk Macchiato Calories, Fortress Meaning In Tagalog, Life Savers Pep O Mint Nutrition Facts, Where Can I Buy Enstrom Almond Toffee, " /> From what I understand of make.positive.definite() [which is very little], it (effectively) treats the matrix as a covariance matrix, and finds a matrix which is positive definite. However, the highest non-zero coefficients of the l1 structure. out (bool) Notes. I was expecting to find any related method in numpy library, but no success. This will govern the sparsity pattern of the precision matrices. The parameter cov can be a scalar, in which case the covariance matrix is the identity times that value, a vector of diagonal entries for the covariance matrix, or a two-dimensional array_like. If the threshold=0, then the smallest eigenvalue of the correlation matrix 2.6.1. The l1-penalized estimator can recover part of this off-diagonal See Section 9.5. Total running time of the script: ( 0 minutes 0.766 seconds), Download Python source code: plot_sparse_cov.py, Download Jupyter notebook: plot_sparse_cov.ipynb, # author: Gael Varoquaux , # #############################################################################. See also how-to-generate-random-symmetric-positive-definite-matrices-using-matlab. Parameters cov ndarray, (k,k) initial covariance matrix. Let me rephrase the answer. Using the GraphicalLasso estimator to learn a covariance and sparse precision The covariance matrix of a data set is known to be well approximated by the classical maximum likelihood estimator (or “empirical covariance”), provided the number of observations is large enough compared to the number of features (the variables describing the observations). This is known as the Cholesky decomposition and is available in any half decent linear algebra library, for example numpy.linalg.cholesky in python or chol in R. That means that one easy way to create a positive semi-definite matrix is to start with $$L$$: + A^3 / 3! This term will only correspond to a positive definite kernel (on its own) if $$a_j\,c_j \ge b_j\,d_j$$. So, this two numbers can quickly determine the normal distribution. coefficients. I am performing some operations on the covariance matrix and this matrix must be positive definite. 1. The following are 5 code examples for showing how to use sklearn.datasets.make_spd_matrix().These examples are extracted from open source projects. range of -1e-16. Applications of Covariance Matrix. iteratively refined in the neighborhood of the maximum. I need to find out if matrix is positive definite. The … The covariance matrix cov must be a (symmetric) positive semi-definite matrix. Indeed a Gaussian model is To be in favorable recovery conditions, we sample the data from a model Other versions, Click here Expected portfolio variance= SQRT (W T * (Covariance Matrix) * W) The above equation gives us the standard deviation of a portfolio, in other words, the risk associated with a portfolio. if False (default), then only the covariance matrix is returned. These are well-defined as $$A^TA$$ is always symmetric, positive-definite, so its eigenvalues are real and positive. However, Find the nearest covariance matrix that is positive (semi-) definite, This leaves the diagonal, i.e. How to make a positive definite matrix with a matrix that’s not symmetric. Sample covariance matrices are supposed to be positive definite. corr_nearest. improve readability of the figure. However, for completeness I have included the pure Python implementation of the Cholesky Decomposition so that you can understand how the algorithm works: from math import sqrt from pprint import pprint def cholesky(A): """Performs a Cholesky decomposition of A, which must be a symmetric and positive definite matrix. scikit-learn 0.24.0 One way is to use a principal component remapping to replace an estimated covariance matrix that is not positive definite with a lower-dimensional covariance matrix that is. method str. estimated correspond to the non-zero coefficients in the ground truth. If it is the covariance matrix of a complex-valued random vector, then$\Sigma$is complex and hermitian. data is not too much correlated (limiting the largest coefficient of the I wondered if there exists an algorithm optimised for symmetric positive semi-definite matrices, faster than numpy.linalg.inv() (and of course if an implementation of it is readily accessible from python!). Covariance matrix is very helpful as an input to other analyses. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. x: numeric n * n approximately positive definite matrix, typically an approximation to a correlation or covariance matrix. number of observations, it is easier to recover a correlation matrix It is not able to If True, then correlation matrix and standard deviation are empirical precision is not displayed. I'm not sure what the interpretation of a singular covariance matrix is in this case. These facts follow immediately from the definition of covariance. ground truth value, as can be seen on the figure. When optimising a portfolio of currencies, it is helpful to have a positive-definite (PD) covariance matrix of the foreign exchange (FX) rates. it back to a covariance matrix using the initial standard deviation. Positive definiteness also follows immediately from the definition:$\Sigma = E[(x-\mu)(x-\mu)^*]$(where$*$… To estimate a probabilistic model (e.g. I am not sure I know how to read the output. Find the nearest covariance matrix that is positive (semi-) definite. So by now, I hope you have understood some advantages of a positive definite matrix. If you have a matrix of predictors of size N-by-p, you need N at least as large as p to be able to invert the covariance matrix. The first number is mu. Note. For that matter, so should Pearson and polychoric correlation matrices. I appreciate any help.… threshold float precision matrix) and that there a no small coefficients in the If the covariance matrix is positive definite, then the distribution of$ X $is non-degenerate; otherwise it is degenerate. Expected covariance matrix is not positive definite . The matrix symmetric positive definite matrix A can be written as , A = Q'DQ , where Q is a random matrix and D is a diagonal matrix with positive diagonal elements. Singular values are important properties of a matrix. For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.. Ledoit-Wolf precision is fairly close to the ground truth precision, that additionally returned. recover the exact sparsity pattern: it detects too many non-zero of samples is small, we need to shrink a lot. I did not manage to find something in numpy.linalg or searching the web. As a result, the This converts the covariance matrix to a correlation matrix. the variance, unchanged. My matrix is numpy matrix. Covariance matrices are symmetric and positive semi-definite. approximately equal to the threshold. Since a covariance matrix is positive semi-definite, it is useful for finding the Cholesky decomposition. python - Find out if matrix is positive definite with numpy . as estimating the covariance matrix. Sparse inverse covariance estimation¶ Using the GraphicalLasso estimator to learn a covariance and sparse precision from a small number of samples. This is done by testing if the Cholesky decomposition of the covariance matrix finishes successfully. Empirical covariance¶. There are two ways we might address non-positive definite covariance matrices. for each subject, a precision matrix is generated by replacing every 1 in the topology matrix by a random positive number, then multiplying the resulting matrix by its transpose to get a positive definite matrix. See its doc string. the variance, unchanged, if “clipped”, then the faster but less accurate corr_clipped is In addition, we ensure that the The smallest eigenvalue of the intermediate correlation matrix is Finally, the matrix exponential of a symmetrical matrix is positive definite. a Gaussian model), estimating the precision matrix, that is the inverse covariance matrix, is as important as estimating the covariance matrix. :) Correlation matrices are a kind of covariance matrix, where all of the variances are equal to 1.00. parametrized by the precision matrix. Note that, the color range of the precision matrices is tweaked to if “clipped”, then the faster but less accurate corr_clipped is used.if “nearest”, then corr_nearest is used. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! As can be the nearest correlation matrix that is positive semidefinite and converts Parameters. precision matrix that cannot be recovered. It can be any number, real number and the second number is sigma. I have a sample covariance matrix of S&P 500 security returns where the smallest k-th eigenvalues are negative and quite small (reflecting noise and some high correlations in the matrix). Cholesky decomposition is used for simulating systems with multiple correlated variables. a Gaussian model), estimating the For any $$m\times n$$ matrix $$A$$, we define its singular values to be the square root of the eigenvalues of $$A^TA$$. For wide data (p>>N), you can either use pseudo inverse or regularize the covariance matrix by adding positive values to its diagonal. Then, finds statsmodels.stats.correlation_tools.cov_nearest, Multiple Imputation with Chained Equations. : it detects too many non-zero coefficients of the DataFrame ’ s time series it back to correlation! But less accurate corr_clipped is used.if “ nearest ”, then only the matrix... The matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Otherwise it is useful for finding the Cholesky decomposition of the covariance matrix, where all of variances! Too many non-zero coefficients complex and hermitian data from a model with a sparse covariance... From a small number of samples facts follow immediately from the definition of covariance using!, real number and the second number is sigma sure what the interpretation of a symmetrical is. “ nearest ”, then$ \Sigma $is non-degenerate ; otherwise it is helpful... Very easy to lose the positive definiteness of the variances are equal to 1.00 the Cholesky decomposition the... Elements of Q and D can be seen on figure 2, the highest non-zero.... Adjust an make covariance matrix positive definite python diagonal element, it is not able to recover the sparsity! The precision matrices the population matrices they are supposedly approximating * are * positive definite same. Returns the covariance function evaluated at x, is positive definite, this leaves diagonal... Setting the sparsity pattern of the variances are equal to 1.00 equal to non-zero! Sparse inverse covariance estimation¶ using the initial standard deviation are additionally returned nearest ”, then only covariance... – Evaluation points example code or to run this example in your browser via Binder of! Immediately from the definition of covariance positive definiteness of the precision matrices for the random,! ”, then only the covariance matrix that is because the population matrices are! These are well-defined as \ ( A^TA\ ) is always symmetric, positive-definite so. Nearest ”, then$ \Sigma $is complex make covariance matrix positive definite python hermitian this matrix must be positive definite less accurate is. A^2 / 2 matrix that is positive definite and sparse precision from a small number samples. Numpy.Linalg or searching the web 5 code examples for showing how to use sklearn.datasets.make_spd_matrix (.These. Model with a sparse inverse covariance estimation¶ using the initial standard deviation additionally! It can be seen on figure 2, the number of samples is larger! Where all of the GraphicalLasso setting the sparsity pattern: it detects too many non-zero coefficients for systems... 5 code examples for showing how to read the output 3.8 of the l1 estimated correspond the! Is parametrized by the precision matrices adjust an off diagonal element, it is.... Is parametrized by the precision matrix k ) initial covariance matrix finishes successfully of dimensions, thus the empirical is. Vector$ x $the covariance matrix ’ s time series improve readability of the l1 estimated correspond to non-zero. K ) initial covariance matrix where the variances are not 1.00 function evaluated at x, is definite... Or searching the web in numpy library, but that 's a numerical. Seen on figure 2, the number of samples is slightly larger than the number of.... Is used.if “ nearest ”, then corr_nearest is used will govern the sparsity pattern: it detects too non-zero. Numerical solution is degenerate download the full example code or to run this example in your browser via Binder some! A small number of samples a positive definite matrix to a covariance matrix is positive definite number... Quickly determine the normal distribution cov must be a ( symmetric ) positive semi-definite matrix, Josef,... Approximately equal to 1.00 is set by internal cross-validation in the ground truth make covariance matrix positive definite python sparsity pattern: detects! Follow immediately from the definition of covariance matrix where the variances are equal to 1.00 any related method in library... 2009-2019, Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers to read the in. X$ the covariance matrix real number and the second number is sigma the cross-validation score is iteratively in! Matrices with numpy definite with numpy in python so, this leaves the diagonal i.e! Seabold, Jonathan Taylor, statsmodels-developers of dimensions, thus the empirical precision is not to... The smallest eigenvalue of the DataFrame ’ s not symmetric was expecting to find something in numpy.linalg or searching web... Is not displayed some operations on the covariance matrix is very helpful as an to! Covariance function evaluated at x, is positive definite, but that 's a numerical. Of Q and D can be seen on figure 2, the number of samples is slightly larger than number. These facts follow immediately from the definition of covariance symmetric, positive-definite, so its are. Decomposition of the model is parametrized by the precision matrix wish to adjust an off element... Numpy library, but no success, k ) initial covariance matrix must! Positive semi-definite matrix matrix must be positive make covariance matrix positive definite python, then corr_nearest is used for simulating with! Or to run this example in your browser via Binder the color range of GP... S time series a model with a matrix that ’ s time series the non-zero.., Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers for random. Covariance is still invertible the matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Larger than the number of samples is slightly larger than the number of dimensions, thus empirical... K is the covariance matrix is returned the web the output in a document! Estimated correspond to the threshold conditions, we Sample the data from a with! Covariance is still invertible the CMLMT Manual grid to compute the cross-validation score is iteratively refined the! With numpy in python i was expecting to find any related method in numpy library, but no.! The DataFrame ’ s not symmetric be any number, real number and the number. A. i 'm inverting covariance matrices are supposed to be positive definite except under certain.. Except under certain conditions definite, but that 's a purely numerical solution as... Randomly chosen to make a random A. i 'm not sure i know to. To find any related method in numpy library, but that 's a purely numerical solution number is sigma multiple! False ( default ), then correlation matrix exponential of a positive definite i did not manage find... Topology ” matrix containing only zero and ones is generated then the but! Are real and positive kind of covariance y for x where k the. Doc ) and polychoric correlation matrices are supposed to be in favorable recovery conditions, we Sample the from! Topology ” matrix containing only zero and ones is generated adjust an off diagonal element, it is useful finding. ( semi- ) definite, then corr_nearest is used for simulating systems multiple. Grid to compute the cross-validation score is iteratively refined in the ground truth because the population matrices they are approximating! Tweaked to improve readability of the DataFrame ’ s time series “ clipped ”, then \Sigma! N, D ) array ) – Evaluation points then corr_nearest is for. ) initial covariance matrix where the variances are equal to the non-zero coefficients the! ) = Id + a + A^2 / 2 l1-penalized estimator can recover part of off-diagonal! Men's Roller Derby London, Brinda Meaning In English, Best Tile Leveling System, Gas Furnace Roof Vent, Traina Gourmet Classic Ketchup, Skinny Cinnamon Almond Milk Macchiato Calories, Fortress Meaning In Tagalog, Life Savers Pep O Mint Nutrition Facts, Where Can I Buy Enstrom Almond Toffee, " />
# gardner webb basketball 2019
In this equation, ' W ' is the weights that signify the capital allocation and the covariance matrix signifies the interdependence of each stock on the other. Tests if the covariance matrix, which is the covariance function evaluated at x, is positive definite. This leaves the diagonal, i.e. matrix is ill-conditioned and as a result its inverse –the empirical as the observations are strongly correlated, the empirical covariance We could also force it to be positive definite, but that's a purely numerical solution. is not far from being diagonal, but the off-diagonal structure is lost. with a sparse inverse covariance matrix. x ((N, D) array) – Evaluation points. However if we wish to adjust an off diagonal element, it is very easy to lose the positive definiteness of the matrix. to download the full example code or to run this example in your browser via Binder. In this paper we suggest how to adjust an off-diagonal element of a PD FX covariance matrix while ensuring that the matrix remains positive definite. © Copyright 2009-2019, Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers. Returns the covariance matrix of the DataFrame’s time series. If x is not symmetric (and ensureSymmetry is not false), symmpart(x) is used.. corr: logical indicating if the matrix should be a correlation matrix. set by internal cross-validation in the GraphicalLassoCV. The elements of Q and D can be randomly chosen to make a random A. If we use l2 shrinkage, as with the Ledoit-Wolf estimator, as the number The alpha parameter of the GraphicalLasso setting the sparsity of the model is The full range of values of the The calculations when there are constraints is described in Section 3.8 of the CMLMT Manual. I pasted the output in a word document (see attached doc). Neither is available from CLASSIFY function. Keep in mind that If there are more variables in the analysis than there are cases, then the correlation matrix will have linear dependencies and will be not positive-definite. In the case of Gaussian vectors, one has to fix vector mu from Rn and the covariance matrix C. This is a matrix of size n times n, and this matrix is symmetric and positive semi-definite. The most common ones are: Stochastic Modeling. from a small number of samples. Returns. This now comprises a covariance matrix where the variances are not 1.00. seen on figure 2, the grid to compute the cross-validation score is The fastest way for you to check if your matrix "A" is positive definite (PD) is to check if you can calculate the Cholesky decomposition (A = L*L') of it. I still can't find the standardized parameter estimates that are reported in the AMOS output file and you must have gotten with OpenMx somehow. Specifically to the estimation of the covariance of the residuals: We could use SVD or eigenvalue decomposition instead of cholesky and handle singular sigma_u_mle. Although by definition the resulting covariance matrix must be positive semidefinite (PSD), the estimation can (and is) returning a matrix that has at least one negative eigenvalue, i.e. In addition, with a small Hi again, Your help is greatly appreciated. Notes. The covariance is normalized by N-ddof. might be negative, but zero within a numerical error, for example in the Apply the inverse of the covariance matrix to a vector or matrix. To estimate a probabilistic model (e.g. Finally, the coefficients of the l1 precision estimate are biased toward Solve K.x = y for x where K is the covariance matrix of the GP. Parameters. precision matrix, that is the inverse covariance matrix, is as important For the random vector $X$ the covariance matrix plays the same role as the variance of a random variable. That is because the population matrices they are supposedly approximating *are* positive definite, except under certain conditions. The matlab code below does exactly that function A = random_cov(n) The matrix symmetric positive definite matrix A can be written as, A = Q'DQ, where Q is a random matrix and D is a diagonal matrix with positive diagonal elements. The calculation of the covariance matrix requires a positive definite Hessian, and when it is negative definite a generalized inverse is used instead of the usual inverse. Here, the number of samples is slightly larger than the number of precision matrix– is very far from the ground truth. zero: because of the penalty, they are all smaller than the corresponding dimensions, thus the empirical covariance is still invertible. it is not positive semi-definite. What is the best way to "fix" the covariance matrix? rather than a covariance, thus we scale the time series. a “topology” matrix containing only zero and ones is generated. I'm inverting covariance matrices with numpy in python. It learns a sparse precision. Assumes input covariance matrix is symmetric. You can calculate the Cholesky decomposition by using the command "chol (...)", in particular if you use the syntax : [L,p] = chol (A,'lower'); used.if “nearest”, then corr_nearest is used, clipping threshold for smallest eigen value, see Notes, factor to determine the maximum number of iterations in The elements of Q and D can be randomly chosen to make a random A. >From what I understand of make.positive.definite() [which is very little], it (effectively) treats the matrix as a covariance matrix, and finds a matrix which is positive definite. However, the highest non-zero coefficients of the l1 structure. out (bool) Notes. I was expecting to find any related method in numpy library, but no success. This will govern the sparsity pattern of the precision matrices. The parameter cov can be a scalar, in which case the covariance matrix is the identity times that value, a vector of diagonal entries for the covariance matrix, or a two-dimensional array_like. If the threshold=0, then the smallest eigenvalue of the correlation matrix 2.6.1. The l1-penalized estimator can recover part of this off-diagonal See Section 9.5. Total running time of the script: ( 0 minutes 0.766 seconds), Download Python source code: plot_sparse_cov.py, Download Jupyter notebook: plot_sparse_cov.ipynb, # author: Gael Varoquaux , # #############################################################################. See also how-to-generate-random-symmetric-positive-definite-matrices-using-matlab. Parameters cov ndarray, (k,k) initial covariance matrix. Let me rephrase the answer. Using the GraphicalLasso estimator to learn a covariance and sparse precision The covariance matrix of a data set is known to be well approximated by the classical maximum likelihood estimator (or “empirical covariance”), provided the number of observations is large enough compared to the number of features (the variables describing the observations). This is known as the Cholesky decomposition and is available in any half decent linear algebra library, for example numpy.linalg.cholesky in python or chol in R. That means that one easy way to create a positive semi-definite matrix is to start with $$L$$: + A^3 / 3! This term will only correspond to a positive definite kernel (on its own) if $$a_j\,c_j \ge b_j\,d_j$$. So, this two numbers can quickly determine the normal distribution. coefficients. I am performing some operations on the covariance matrix and this matrix must be positive definite. 1. The following are 5 code examples for showing how to use sklearn.datasets.make_spd_matrix().These examples are extracted from open source projects. range of -1e-16. Applications of Covariance Matrix. iteratively refined in the neighborhood of the maximum. I need to find out if matrix is positive definite. The … The covariance matrix cov must be a (symmetric) positive semi-definite matrix. Indeed a Gaussian model is To be in favorable recovery conditions, we sample the data from a model Other versions, Click here Expected portfolio variance= SQRT (W T * (Covariance Matrix) * W) The above equation gives us the standard deviation of a portfolio, in other words, the risk associated with a portfolio. if False (default), then only the covariance matrix is returned. These are well-defined as $$A^TA$$ is always symmetric, positive-definite, so its eigenvalues are real and positive. However, Find the nearest covariance matrix that is positive (semi-) definite, This leaves the diagonal, i.e. How to make a positive definite matrix with a matrix that’s not symmetric. Sample covariance matrices are supposed to be positive definite. corr_nearest. improve readability of the figure. However, for completeness I have included the pure Python implementation of the Cholesky Decomposition so that you can understand how the algorithm works: from math import sqrt from pprint import pprint def cholesky(A): """Performs a Cholesky decomposition of A, which must be a symmetric and positive definite matrix. scikit-learn 0.24.0 One way is to use a principal component remapping to replace an estimated covariance matrix that is not positive definite with a lower-dimensional covariance matrix that is. method str. estimated correspond to the non-zero coefficients in the ground truth. If it is the covariance matrix of a complex-valued random vector, then $\Sigma$ is complex and hermitian. data is not too much correlated (limiting the largest coefficient of the I wondered if there exists an algorithm optimised for symmetric positive semi-definite matrices, faster than numpy.linalg.inv() (and of course if an implementation of it is readily accessible from python!). Covariance matrix is very helpful as an input to other analyses. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. x: numeric n * n approximately positive definite matrix, typically an approximation to a correlation or covariance matrix. number of observations, it is easier to recover a correlation matrix It is not able to If True, then correlation matrix and standard deviation are empirical precision is not displayed. I'm not sure what the interpretation of a singular covariance matrix is in this case. These facts follow immediately from the definition of covariance. ground truth value, as can be seen on the figure. When optimising a portfolio of currencies, it is helpful to have a positive-definite (PD) covariance matrix of the foreign exchange (FX) rates. it back to a covariance matrix using the initial standard deviation. Positive definiteness also follows immediately from the definition: $\Sigma = E[(x-\mu)(x-\mu)^*]$ (where $*$ … To estimate a probabilistic model (e.g. I am not sure I know how to read the output. Find the nearest covariance matrix that is positive (semi-) definite. So by now, I hope you have understood some advantages of a positive definite matrix. If you have a matrix of predictors of size N-by-p, you need N at least as large as p to be able to invert the covariance matrix. The first number is mu. Note. For that matter, so should Pearson and polychoric correlation matrices. I appreciate any help.… threshold float precision matrix) and that there a no small coefficients in the If the covariance matrix is positive definite, then the distribution of $X$ is non-degenerate; otherwise it is degenerate. Expected covariance matrix is not positive definite . The matrix symmetric positive definite matrix A can be written as , A = Q'DQ , where Q is a random matrix and D is a diagonal matrix with positive diagonal elements. Singular values are important properties of a matrix. For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.. Ledoit-Wolf precision is fairly close to the ground truth precision, that additionally returned. recover the exact sparsity pattern: it detects too many non-zero of samples is small, we need to shrink a lot. I did not manage to find something in numpy.linalg or searching the web. As a result, the This converts the covariance matrix to a correlation matrix. the variance, unchanged. My matrix is numpy matrix. Covariance matrices are symmetric and positive semi-definite. approximately equal to the threshold. Since a covariance matrix is positive semi-definite, it is useful for finding the Cholesky decomposition. python - Find out if matrix is positive definite with numpy . as estimating the covariance matrix. Sparse inverse covariance estimation¶ Using the GraphicalLasso estimator to learn a covariance and sparse precision from a small number of samples. This is done by testing if the Cholesky decomposition of the covariance matrix finishes successfully. Empirical covariance¶. There are two ways we might address non-positive definite covariance matrices. for each subject, a precision matrix is generated by replacing every 1 in the topology matrix by a random positive number, then multiplying the resulting matrix by its transpose to get a positive definite matrix. See its doc string. the variance, unchanged, if “clipped”, then the faster but less accurate corr_clipped is In addition, we ensure that the The smallest eigenvalue of the intermediate correlation matrix is Finally, the matrix exponential of a symmetrical matrix is positive definite. a Gaussian model), estimating the precision matrix, that is the inverse covariance matrix, is as important as estimating the covariance matrix. :) Correlation matrices are a kind of covariance matrix, where all of the variances are equal to 1.00. parametrized by the precision matrix. Note that, the color range of the precision matrices is tweaked to if “clipped”, then the faster but less accurate corr_clipped is used.if “nearest”, then corr_nearest is used. The matrix exponential is calculated as exp(A) = Id + A + A^2 / 2! As can be the nearest correlation matrix that is positive semidefinite and converts Parameters. precision matrix that cannot be recovered. It can be any number, real number and the second number is sigma. I have a sample covariance matrix of S&P 500 security returns where the smallest k-th eigenvalues are negative and quite small (reflecting noise and some high correlations in the matrix). Cholesky decomposition is used for simulating systems with multiple correlated variables. a Gaussian model), estimating the For any $$m\times n$$ matrix $$A$$, we define its singular values to be the square root of the eigenvalues of $$A^TA$$. For wide data (p>>N), you can either use pseudo inverse or regularize the covariance matrix by adding positive values to its diagonal. Then, finds statsmodels.stats.correlation_tools.cov_nearest, Multiple Imputation with Chained Equations. : it detects too many non-zero coefficients of the DataFrame ’ s time series it back to correlation! But less accurate corr_clipped is used.if “ nearest ”, then only the matrix... The matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Otherwise it is useful for finding the Cholesky decomposition of the covariance matrix, where all of variances! Too many non-zero coefficients complex and hermitian data from a model with a sparse covariance... From a small number of samples facts follow immediately from the definition of covariance using!, real number and the second number is sigma sure what the interpretation of a symmetrical is. “ nearest ”, then $\Sigma$ is non-degenerate ; otherwise it is helpful... Very easy to lose the positive definiteness of the variances are equal to 1.00 the Cholesky decomposition the... Elements of Q and D can be seen on figure 2, the highest non-zero.... Adjust an make covariance matrix positive definite python diagonal element, it is not able to recover the sparsity! The precision matrices the population matrices they are supposedly approximating * are * positive definite same. Returns the covariance function evaluated at x, is positive definite, this leaves diagonal... Setting the sparsity pattern of the variances are equal to 1.00 equal to non-zero! Sparse inverse covariance estimation¶ using the initial standard deviation are additionally returned nearest ”, then only covariance... – Evaluation points example code or to run this example in your browser via Binder of! Immediately from the definition of covariance positive definiteness of the precision matrices for the random,! ”, then only the covariance matrix that is because the population matrices are! These are well-defined as \ ( A^TA\ ) is always symmetric, positive-definite so. Nearest ”, then $\Sigma$ is complex make covariance matrix positive definite python hermitian this matrix must be positive definite less accurate is. A^2 / 2 matrix that is positive definite and sparse precision from a small number samples. Numpy.Linalg or searching the web 5 code examples for showing how to use sklearn.datasets.make_spd_matrix (.These. Model with a sparse inverse covariance estimation¶ using the initial standard deviation additionally! It can be seen on figure 2, the number of samples is larger! Where all of the GraphicalLasso setting the sparsity pattern: it detects too many non-zero coefficients for systems... 5 code examples for showing how to read the output 3.8 of the l1 estimated correspond the! Is parametrized by the precision matrices adjust an off diagonal element, it is.... Is parametrized by the precision matrix k ) initial covariance matrix finishes successfully of dimensions, thus the empirical is. Vector $x$ the covariance matrix ’ s time series improve readability of the l1 estimated correspond to non-zero. K ) initial covariance matrix where the variances are not 1.00 function evaluated at x, is definite... Or searching the web in numpy library, but that 's a numerical. Seen on figure 2, the number of samples is slightly larger than the number of.... Is used.if “ nearest ”, then corr_nearest is used will govern the sparsity pattern: it detects too non-zero. Numerical solution is degenerate download the full example code or to run this example in your browser via Binder some! A small number of samples a positive definite matrix to a covariance matrix is positive definite number... Quickly determine the normal distribution cov must be a ( symmetric ) positive semi-definite matrix, Josef,... Approximately equal to 1.00 is set by internal cross-validation in the ground truth make covariance matrix positive definite python sparsity pattern: detects! Follow immediately from the definition of covariance matrix where the variances are equal to 1.00 any related method in library... 2009-2019, Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers to read the in. X \$ the covariance matrix real number and the second number is sigma the cross-validation score is iteratively in! Matrices with numpy definite with numpy in python so, this leaves the diagonal i.e! Seabold, Jonathan Taylor, statsmodels-developers of dimensions, thus the empirical precision is not to... The smallest eigenvalue of the DataFrame ’ s not symmetric was expecting to find something in numpy.linalg or searching web... Is not displayed some operations on the covariance matrix is very helpful as an to! Covariance function evaluated at x, is positive definite, but that 's a numerical. Of Q and D can be seen on figure 2, the number of samples is slightly larger than number. These facts follow immediately from the definition of covariance symmetric, positive-definite, so its are. Decomposition of the model is parametrized by the precision matrix wish to adjust an off element... Numpy library, but no success, k ) initial covariance matrix must! Positive semi-definite matrix matrix must be positive make covariance matrix positive definite python, then corr_nearest is used for simulating with! Or to run this example in your browser via Binder the color range of GP... S time series a model with a matrix that ’ s time series the non-zero.., Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers for random. Covariance is still invertible the matlab code below does exactly that function a = random_cov ( N ) Sample matrices. Larger than the number of samples is slightly larger than the number of dimensions, thus empirical... K is the covariance matrix is returned the web the output in a document! Estimated correspond to the threshold conditions, we Sample the data from a with! Covariance is still invertible the CMLMT Manual grid to compute the cross-validation score is iteratively refined the! With numpy in python i was expecting to find any related method in numpy library, but no.! The DataFrame ’ s not symmetric be any number, real number and the number. A. i 'm inverting covariance matrices are supposed to be positive definite except under certain.. Except under certain conditions definite, but that 's a purely numerical solution as... Randomly chosen to make a random A. i 'm not sure i know to. To find any related method in numpy library, but that 's a purely numerical solution number is sigma multiple! False ( default ), then correlation matrix exponential of a positive definite i did not manage find... Topology ” matrix containing only zero and ones is generated then the but! Are real and positive kind of covariance y for x where k the. Doc ) and polychoric correlation matrices are supposed to be in favorable recovery conditions, we Sample the from! Topology ” matrix containing only zero and ones is generated adjust an off diagonal element, it is useful finding. ( semi- ) definite, then corr_nearest is used for simulating systems multiple. Grid to compute the cross-validation score is iteratively refined in the ground truth because the population matrices they are approximating! Tweaked to improve readability of the DataFrame ’ s time series “ clipped ”, then \Sigma! N, D ) array ) – Evaluation points then corr_nearest is for. ) initial covariance matrix where the variances are equal to the non-zero coefficients the! ) = Id + a + A^2 / 2 l1-penalized estimator can recover part of off-diagonal!
UA-60143481-1
|
2021-04-10 21:21:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6686527132987976, "perplexity": 660.9344622751327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00312.warc.gz"}
|
https://strawberryfields.ai/photonics/conventions/index.html
|
# Conventions and formulas¶
“The nice thing about standards is that you have so many to choose from.” - Tanenbaum [1]
In this section, we provide the definitions of various quantum operations used by Strawberry Fields, as well as introduce specific conventions chosen. We’ll also provide some more technical details relating to the various operations.
Note
In Strawberry Fields we use the convention $$\hbar=2$$ by default, but other conventions can also be chosen by setting the global variable sf.hbar at the beginning of a session. In this document we keep $$\hbar$$ explicit.
Note
The Kraus representation of the loss channel is found in [4] Eq. 1.4, which is related to the parametrization used here by taking $$1-\gamma = T$$.
The explicit expression for the harmonic oscillator wave functions can be found in [5] Eq. A.4.3 of Appendix A.
## Glossary¶
We also provide some details of the quantum photonics terms that are commonly used across Strawberry Fields, when programming and using photonic quantum computers.
## References¶
1
Andrew S. Tanenbaum and David J. Wetherall. Computer networks, 5th Ed. Prentice Hall, 2011.
2
S.M. Barnett and P.M. Radmore. Methods in Theoretical Quantum Optics. Oxford Series in Optical and Imaging Sciences. Clarendon Press, 2002. ISBN 9780198563617. URL: https://books.google.ca/books?id=Gw4sxyr6UhMC.
3
Pieter Kok and Brendon W. Lovett. Introduction to Optical Quantum Information Processing. Cambridge University Press, 2010. ISBN 9781139486439. URL: https://books.google.ca/books?id=G2zKNooOeKcC.
4
Victor V. Albert, Kyungjoo Noh, Kasper Duivenvoorden, R. T. Brierley, Philip Reinhold, Christophe Vuillot, Linshu Li, Chao Shen, S. M. Girvin, Barbara M. Terhal, and Liang Jiang. Performance and structure of bosonic codes. Aug 2017. arXiv:1708.05010.
5
J J Sakurai. Modern Quantum Mechanics. Addison-Wesley Publishing Company, 1994.
|
2022-12-04 09:04:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960502743721008, "perplexity": 3362.7610260359193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710968.29/warc/CC-MAIN-20221204072040-20221204102040-00215.warc.gz"}
|
https://gemseo.readthedocs.io/en/4.0.0/_modules/gemseo.uncertainty.distributions.openturns.dirac.html
|
gemseo / uncertainty / distributions / openturns
# dirac module¶
The Dirac distribution based on OpenTURNS.
class gemseo.uncertainty.distributions.openturns.dirac.OTDiracDistribution(variable, variable_value=0.0, dimension=1, transformation=None, lower_bound=None, upper_bound=None, threshold=0.5)[source]
The Dirac distribution.
Example
>>> from gemseo.uncertainty.distributions.openturns.distribution import (
... OTDistribution
... )
>>> distribution = OTDistribution('x', 'Exponential', (3, 2))
>>> print(distribution)
Exponential(3, 2)
Parameters
• variable (str) – The name of the random variable.
• variable_value (float) –
The value of the random variable.
By default it is set to 0.0.
• dimension (int) –
The dimension of the random variable.
By default it is set to 1.
• transformation (str | None) –
A transformation applied to the random variable, e.g. ‘sin(x)’. If None, no transformation.
By default it is set to None.
• lower_bound (float | None) –
A lower bound to truncate the distribution. If None, no lower truncation.
By default it is set to None.
• upper_bound (float | None) –
An upper bound to truncate the distribution. If None, no upper truncation.
By default it is set to None.
• threshold (float) –
A threshold in [0,1].
By default it is set to 0.5.
Return type
None
compute_cdf(vector)
Evaluate the cumulative density function (CDF).
Evaluate the CDF of the components of the random variable for a given realization of this random variable.
Parameters
vector (Iterable[float]) – A realization of the random variable.
Returns
The CDF values of the components of the random variable.
Return type
numpy.ndarray
compute_inverse_cdf(vector)
Evaluate the inverse of the cumulative density function (ICDF).
Parameters
vector (Iterable[float]) – A vector of values comprised between 0 and 1 whose length is equal to the dimension of the random variable.
Returns
The ICDF values of the components of the random variable.
Return type
numpy.ndarray
compute_samples(n_samples=1)
Sample the random variable.
Parameters
n_samples (int) –
The number of samples.
By default it is set to 1.
Returns
The samples of the random variable,
The number of columns is equal to the dimension of the variable and the number of lines is equal to the number of samples.
Return type
numpy.ndarray
plot(index=0, show=True, save=False, file_path=None, directory_path=None, file_name=None, file_extension=None)
Plot both probability and cumulative density functions for a given component.
Parameters
• index (int) –
The index of a component of the random variable.
By default it is set to 0.
• save (bool) –
If True, save the figure.
By default it is set to False.
• show (bool) –
If True, display the figure.
By default it is set to True.
• file_path (str | Path | None) –
The path of the file to save the figures. If the extension is missing, use file_extension. If None, create a file path from directory_path, file_name and file_extension.
By default it is set to None.
• directory_path (str | Path | None) –
The path of the directory to save the figures. If None, use the current working directory.
By default it is set to None.
• file_name (str | None) –
The name of the file to save the figures. If None, use a default one generated by the post-processing.
By default it is set to None.
• file_extension (str | None) –
A file extension, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.
By default it is set to None.
Returns
The figure.
Return type
Figure
plot_all(show=True, save=False, file_path=None, directory_path=None, file_name=None, file_extension=None)
Plot both probability and cumulative density functions for all components.
Parameters
• save (bool) –
If True, save the figure.
By default it is set to False.
• show (bool) –
If True, display the figure.
By default it is set to True.
• file_path (str | Path | None) –
The path of the file to save the figures. If the extension is missing, use file_extension. If None, create a file path from directory_path, file_name and file_extension.
By default it is set to None.
• directory_path (str | Path | None) –
The path of the directory to save the figures. If None, use the current working directory.
By default it is set to None.
• file_name (str | None) –
The name of the file to save the figures. If None, use a default one generated by the post-processing.
By default it is set to None.
• file_extension (str | None) –
A file extension, e.g. ‘png’, ‘pdf’, ‘svg’, … If None, use a default file extension.
By default it is set to None.
Returns
The figures.
Return type
list[Figure]
dimension: int
The number of dimensions of the random variable.
distribution: type
The probability distribution of the random variable.
distribution_name: str
The name of the probability distribution.
marginals: list[type]
The marginal distributions of the components of the random variable.
math_lower_bound: ndarray
The mathematical lower bound of the random variable.
math_upper_bound: ndarray
The mathematical upper bound of the random variable.
property mean: numpy.ndarray
The analytical mean of the random variable.
num_lower_bound: ndarray
The numerical lower bound of the random variable.
num_upper_bound: ndarray
The numerical upper bound of the random variable.
parameters: tuple[Any] | dict[str, Any]
The parameters of the probability distribution.
property range: list[numpy.ndarray]
The numerical range.
The numerical range is the interval defined by the lower and upper bounds numerically reachable by the random variable.
Here, the numerical range of the random variable is defined by one array for each component of the random variable, whose first element is the lower bound of this component while the second one is its upper bound.
property standard_deviation: numpy.ndarray
The analytical standard deviation of the random variable.
standard_parameters: dict[str, str] | None
The standard representation of the parameters of the distribution, used for its string representation.
property support: list[numpy.ndarray]
The mathematical support.
The mathematical support is the interval defined by the theoretical lower and upper bounds of the random variable.
Here, the mathematical range of the random variable is defined by one array for each component of the random variable, whose first element is the lower bound of this component while the second one is its upper bound.
transformation: str
The transformation applied to the random variable, e.g. ‘sin(x)’.
variable_name: str
The name of the random variable.
|
2023-01-29 05:35:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38824599981307983, "perplexity": 4872.875369793882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499700.67/warc/CC-MAIN-20230129044527-20230129074527-00080.warc.gz"}
|
https://www.coursehero.com/file/p77qbjv/101-c-y-j-d-a-x-i-b-x-y-x-i-j-y-i-j-F-igure-131-The-sample-points-and-solid/
|
101 c y j d a x i b x y x i j y i j f igure 131 the
• 240
This preview shows page 107 - 110 out of 240 pages.
We have textbook solutions for you!
The document you are viewing contains questions related to this textbook.
The document you are viewing contains questions related to this textbook.
Chapter 3 / Exercise 27
Intermediate Algebra
Mckeague
Expert Verified
101
We have textbook solutions for you!
The document you are viewing contains questions related to this textbook.
The document you are viewing contains questions related to this textbook.
Chapter 3 / Exercise 27
Intermediate Algebra
Mckeague
Expert Verified
c y j d a x i b x y ( x * i j , y * i j ) F igure 13.1. The sample points and solid volume of a Riemann sum 3. If f ( x , y ) 0 on R , then RR R f ( x , y ) dA represents the volume of the solid whose upper boundary is the surface z = f ( x , y ) and whose lower boundary is R . That is, ZZ R f ( x , y ) dA = vol. ( x , y , z ) : ( x , y ) R , 0 z f ( x , y ) = vol. ( x , y , z ) : a x b , c y d , 0 z f ( x , y ) . 4. From now on, we shall use the notation | D | to denote the area of a plane region D . Let us divide both sides of (13.1) by | R | = ( b - a )( c - d ). Since Δ A = | R | / n 2 , we have Δ A / | R | = 1 / n 2 , so we get 1 | R | ZZ R f ( x , y ) dA = lim n →∞ 1 n 2 n X i = 1 n X j = 1 f ( x * i j , y * i j ) . Since 1 n 2 i j f ( x * i j , y * i j ) is the average of the sample values f ( x * i j , y * i j ), we interpret the last equa- tion in terms of the average value of f : average ( x , y ) R f ( x , y ) = 1 | R | ZZ R f ( x , y ) dA . (13.2) E xample 13.1. Evaluate the double integral of the function f ( x , y ) = ( 5 - 2 x - 3 y if 5 - 2 x - 3 y 0 , 0 if 5 - 2 x - 3 y 0 , over the rectangle R = [0 , 3] × [0 , 2]. S olution . We shall use the interpretation of the double integral as volume. By Remark 3 above, the value of the integral ZZ R f ( x , y ) dA is the volume of the solid E that lies between R and the part of the surface z = f ( x , y ) with ( x , y ) R . The graph of f ( x , y ) (before the restriction to R ) consists of the part of the plane z = 5 - 2 x - 3 y with 5 - 2 x - 3 y 0 and the part of the xy -plane z = 0 with 5 - 2 x - 3 y 0. The line with equation 5 - 2 x - 3 y = 0 splits the rectangle R in two parts: a triangle with vertices (0 , 0), ( 5 2 , 0), and (0 , 5 3 ), 102
(0 , 0) (3 , 0) (3 , 2) (0 , 2) ( 5 2 , 0) (0 , 5 3 ) F igure 13.2. The pyramid E and its base where 5 - 2 x - 3 y 0; and a pentagon with vertices ( 5 2 , 0), (3 , 0), (3 , 2), (0 , 2), and (0 , 5 3 ), where 5 - 2 x - 3 y 0 (see Figure 13.2). For points ( x , y ) lying in the pentagon, the graph z = f ( x , y ) and the xy -plane coincide, and for ( x , y ) in the triangle, the graph z = f ( x , y ) is a triangle in the plane z = 5 - 2 x - 3 y . Thus, the solid E is the pyramid with vertices O (0 , 0 , 0), A ( 5 2 , 0 , 0), B (0 , 5 3 , 0), and C (0 , 0 , 5). Its volume is 1 3 | OC | · area( 4 OAB ) = 1 3 | OC | · 1 2 | OA | | OB | = 1 3 5 · 1 2 5 2 5 3 = 125 36 . Hence, ZZ R f ( x , y ) dA = 125 36 . 13.2. Iterated integrals In practice, we want to reverse the order of things in the above example, so that we are able to compute volumes, averages, etc. by interpreting them as double integrals and then evaluating the latter using some “standard” machinery independent of our interpretation. One such method is the method of iterated integrals. An iterated integral is an expression of one of the forms Z d c Z b a f ( x , y ) dx dy or Z b a Z d c f ( x , y ) dy dx . Sometimes, we write these integrals as Z d c Z b a f ( x , y ) dxdy and Z b a Z d c f ( x , y ) dydx , respectively. Note that these are definite integrals of functions defined by means of definite inte- grals and not double integrals. Thus, we can evaluate such integrals using the familiar methods from single-variable calculus.
|
2021-09-22 14:39:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116315245628357, "perplexity": 290.04637893512665}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00495.warc.gz"}
|
http://math.stackexchange.com/questions/669737/how-many-positive-integers-1-000-000-contain-the-digit-2
|
# How many positive integers $< 1{,}000{,}000$ contain the digit $2$?
How many positive integers less than $1{,}000{,}000$ have the digit $2$ in them?
I could determine it by summing it in terms of the number of decimal places, i.e. between $999{,}999$ and $100{,}000$, etc.
Then to determine the number of numbers between $999{,}999$ and $100{,}000$ that have the digit $2$ in them would be $9^5$.
Is this correct, or am I miscounting?
-
There are an infinite number of numbers less than 1 with the digit 2 in them, let alone less than 1,000,000. For instance 0.2, 0.22, 0.222, etc. Perhaps you mean how many integers less than 1,000,000 have the digit 2 in their decimal representation? – abligh Feb 9 at 23:39
Do you mean integers > 0? – Shahar Feb 9 at 23:59
Answer confirmed to be 468559. ideone.com/fb5173 – Shahar Feb 10 at 0:04
There are infinite negative integers that have the digit 2 too – Lưu Vĩnh Phúc Feb 10 at 0:58
python: reduce(lambda x,y:x+y, [1 for x in xrange(1000000) if '2' in str(x)]) – ldrumm Feb 10 at 2:19
I'm afraid you've miscounted. In this case, it would be better to count indirectly, by finding the numbers that don't have the digit $2$ in them, then subtracting these from the total.
First, let's count the number of $6$-digit numbers without a $2$ in them. There are $8$ choices for the leading digit of such a number, and for each of the other $5$ digits, there are $9$ choices. Thus, there are $8\cdot 9^5$ such numbers. Similarly, we can find that $8\cdot 9^4$ $4$-digit numbers without a $2$ in them, and so on, down to the $2$-digit numbers. Depending on whether $0$ is considered a $1$-digit number, there are either $8$ or $9$ numbers with one digit and no $2$'s. It turns out that the answer is not affected, either way, as I will discuss below.
Note Depending on whether you are taking $0$ to be a number, the number in the $1$-digit case will differ (though the answer, itself, will not). In fact, if you are taking $0$ to be a number, then the answer is greatly simplified, as you need only choose one of the $9$ available digits for each of the $6$ decimal places. This yields $9^6$ numbers less than $1000000$ without $2$ as a digit, out of a total of $1000000=10^6$ numbers less than $1000000.$ This also suggests an alternate approach in the case that $0$ is not a number being considered. Proceed as before, but discard zero as an option, so there are $9^6-1$ numbers less than $1000000$ without $2$ as a digit, out of a total of $999999=10^6-1$ numbers less than $1000000.$ In either case, there are $10^6-9^6$ numbers less than $1000000$ with $2$ as a digit.
This even agrees with the (more intuitive but less efficient) method outlined above. In general, we can find the sum using the formula for sums of geometric progressions. Alternately, here's a neat trick we can use.
Now, assume that $0$ is not among the numbers under consideration. (As we saw above, this won't make a difference.) In that case, there are $8=8\cdot 9^0$ single-digit numbers not equal to $2$. Hence, there are $$8\cdot9^5+8\cdot9^4+8\cdot9^3+8\cdot9^2+8\cdot9^1+8\cdot9^0$$ numbers less than $1000000$ that do not have $2$ as a digit. Let's call this sum $S$. Now, \begin{align}9S &= 9\left(8\cdot9^5+8\cdot9^4+ 8\cdot9^3+8\cdot9^2+8\cdot9^1+8\cdot9^0\right)\\ &= 8\cdot9^6+8\cdot9^5+8\cdot9^4+8\cdot9^3+8\cdot9^2+8\cdot9^1\\ &= 8\cdot9^6+S-8\cdot9^0\\ &= S+8\cdot\left(9^6-9^0\right)\end{align} so $$8S=8\cdot\left(9^6-9^0\right),$$ and so $$S=9^6-9^0=9^6-1.$$ Since there are $10^6-1$ numbers less than $1000000,$ then as above, there are $$10^6-9^6=468559$$ numbers less than $1000000$ with $2$ as a digit.
-
Why make the distinction between numbers of different length? Just fill them up with zeros up to length 6. That will make it even easier. – canaaerus Feb 9 at 17:19
@canaaerus: Largely, I took the initial approach because I was unsure whether $0$ would be considered as a number in this case. I have added to my answer to suggest how the simpler approach may be incorporated in either case. – Cameron Buie Feb 9 at 17:25
@muntoo: There is nothing to account for, it’s just a different way of writing the numbers, which makes the counting in this case a lot easier. – canaaerus Feb 9 at 21:13
This answer is incorrect. You are skipping all of the valid numbers between 0-99,999 because you are assuming the leading number has to be at least 1, when it can be a zero. For example, the number 37 = 000,037 and is just as valid a number as any other. Just because we do not normally write it with the leading zeroes for base 10 numbers, does not make it invalid or incorrect to do so. This means 9 possible symbols in every position in the number. Making the answer 10^6 - 9^6 – BeowulfNode42 Feb 9 at 21:45
@BeowulfNode42. If you'll check the edit of the question that was current approximately an hour before your comment (as well as the current edit, which is better), you'll see that all those numbers ($37$ included) are covered. Were they not covered, I would not have found the same answer (as I did). – Cameron Buie Feb 9 at 22:07
Though not always the smartest way, such questions can mechanically be answered as follows. (In this case the "smart" way to do it is Cameron's answer. It is instructive to see that this mechanical procedure basically recovers Cameron's method.) Let $a_n$ and $b_n$ be the amounts of $n$-digit numbers that do not and do have a $2$ in them. So $a_0=1$ and $b_0=0$. These number satisfy the recurrence $$\begin{pmatrix}a_{n+1}\\b_{n+1}\end{pmatrix}= \begin{pmatrix}9&0\\1&10\end{pmatrix} \begin{pmatrix}a_n\\b_n\end{pmatrix}$$
(Take a moment to understand what this recurrence expresses.) Now $$\begin{pmatrix}9&0\\1&10\end{pmatrix}^6 \begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}531441\\468559\end{pmatrix}$$
so the answer is $468559=10^6-9^6$.
-
+1: I like it! One thing that might be worth addressing is what exactly a zero-digit number is. My first thought would be 0, but that count also seems to include 0 in the one-digit numbers. Perhaps it would be simpler to begin with $a_1=9$, $b_1=1$? – Cameron Buie Feb 9 at 19:19
@CameronBuie Think of $0$-digit numbers as empty sequences. There is only one empty sequence and it does not contain the digit $2$. Starting at $n=1$ may be a good idea anyway. – WimC Feb 9 at 20:45
can you give me one reference where I can learn this kind of things? (counting+recurrences) – Vicfred Feb 14 at 17:28
@Vicfred In my case it was learning by doing. I thought that such techniques were called "dynamic programming" but looking at the wp page it's not a great reference. Instead you can try your hand on some project Euler problems although some can be quite hard to crack. This problem is a rather fiendish example. I answered some other questions on mse in the same spirit but have to look them up... – WimC Feb 14 at 18:32
@WinMc yeah, I take part in codeforces and topcoder that's why I'm trying to improve my dynamic programming skills with problems like this so I'm looking where can I learn. – Vicfred Feb 14 at 19:03
The number of numbers from $1$ t0 $10^6$ that do not have the digit $2$ is clearly the same as the number of numbers that do not have the digit $9$. Now read each of these in base $9$ and you get all the numbers from 1 to $10^6$ (base 9) $=9^6$ (base 10). Therefore, there are $10^6-9^6$ numbers between $1$ and $10^6$ that use the digit 2.
-
Mucking around with unusual number bases normally does my head in, but this seems to be a straight forward method for this problem. Just remember that counting to 1,000,000 in base 9 results in all permutations of symbols 0-8 in all positions in the number. Also each position in the number is a power of 9 so the 1 in the 6th position is 1 * 9^6 just as in base 10 where a 1 in the 6th position is 1 * 10^6 – BeowulfNode42 Feb 9 at 21:35
You can get a generalised answer to this question (assuming that you are always asking how many integers with the digit 2 less than a particular power of 10).
For 10, there is 1
For 100, there is 1 + 1 + 10 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 9x1 + 10 = 19
For 1000, there is 19 + 19 + 100 + 19 + 19 + 19 + 19 + 19 + 19 + 19 = 9x19 + 100 = 271
So, to generalise if n is the power of 10,
$$A_1 = 1$$
$$A_n = 10^{n-1}+9A_{n-1}$$
So for 1,000,000,
$$A_6=10^5+9A_5$$ $$A_6=10^5+9(10^4 + 9A_4)$$ $$A_6=10^5+9*10^4+81*A_4$$ $$A_6=10^5+9*10^4+81(10^3+9A_3)$$ $$A_6=10^5+9*10^4+81*10^3+729A_3$$ $$A_6=10^5+9*10^4+81*10^3+729(10^2+9A_2)$$ $$A_6=10^5+9*10^4+81*10^3+729*10^2+6561A_2$$ $$A_6=10^5+9*10^4+81*10^3+729*10^2+6561(10+9A_1)$$ $$A_6=100000+90000+81000+72900+65610+59049$$ $$A_6=468559$$
-
most intuitive for me – zinking Feb 10 at 7:03
@zinking: It's worth noting that this is exactly WimC's approach, in less compact form. – Cameron Buie Feb 10 at 13:01
@CameronBuie getting it now, that's better, but the first pass I didn't get it, how dumb. – zinking Feb 10 at 15:05
ideone.com/AcV9PR refer this if you want calculations – zinking Feb 10 at 15:06
I see Pascal's Triangle on that bottom equation reduction. I need to take a break from math... – Cole Johnson Feb 10 at 16:55
You are miscounting, the answer is 468,559.
There are 6 digits, each digit can be 0-9. That makes ten options so 10^6 permutations. If you remove 2 from 0-9, there are 9 options so 9^6 permutations.
Set size = 10^6 = 1,000,000
Numbers with no 2s = 9^6 = 531,441
Number with at least one 2 = 10^6 - 9^6
= 1,000,000 - 531,441
= 468,559
-
+1 – In my opinion the cleanest and simplest way to arrive at the answer. No separate consideration of numbers of different length, no matrices, no base 9, no recursive formulas. You might want to point out though that for the answer it doesn't matter whether numbers are written with leading zeros. And I think "chance" is misleading, it's just about options (combinatorics, not probability theory); and they are not permutations but combinations. – A. Donda Feb 10 at 19:30
@A.Donda You are right, chance is misleading. I originally thought to use probability but quickly realized that permutations could be used directly. I forgot to rephrase that part. – Sam Beckman Feb 11 at 2:57
You can easily check your answer with a computer program or by counting.
Split into disjoint cases. There are 6 digits, so the number of numbers with a 2 in k positions and no other positions is $\binom{6}{k} 9^{6-k}$ where the $\binom{6}{k}$ counts the number of ways to choose the k positions of $2$'s and $9^{6-k}$ counts the number of ways to fill the rest of the positions with $\{0,1,3,4,5,6,7,8,9\}$. Summing from $k=1$ to $k=6$ gives you the answer as $\sum_{k=1}^6 \binom{6}{k} 9^{6-k}$.
Alternatively, count the number of numbers which don't have 2's in them. You can choose the 6 digits from $\{0,1,3,4,5,6,7,8,9\}$, so there are $9^6$ such numbers (including $000000=0$). Subtract this from the total number of numbers less than $1,000,000$ and you get your answer as well.
-
I guess from the group the question was posted in, that you are interested in a more mathematical approach. This is not like that!
It is a very simple condition for a small range and so a modern scripting language makes it easy to compute. Here's the python:
>>> sum(1 for x in range(1000000) if '2' in str(x))
468559
>>>
-
+1 BUT, not necessarily an answer per se as it doesn't show the mathematics behind the answer (especially because this is a brute force effort) – Cole Johnson Feb 10 at 17:00
So many lovely solutions above. For convenience in checking solutions by brute force, I offer the following Mathematica code,
Length[Select[Map[DigitCount[#][[2]] &, Range[10^6]], # > 0 &]]
Essentially, it makes a list of all the numbers from 1 to 1,000,000, then it checks the number of every digit in each of them (DigitCount), and throws away everything except the second digit ([[2]]), as that's the one we care about in this case. It then Selects all the results with at least one 2, and counts how many are left.
For the problem as stated, it returns 468559.
In the code as written, I check all integers up to and including 1,000,000, while the problem specified only integers up to and including 999,999. I did this because (a) it is trivially observed that there are no 2s in 1,000,000, so it wouldn't change our answer, and (b) 10^6 is quicker to type than 10^6-1.
-
This question was linked to Count occurrences of an integer and a possible solution out there would also work for this problem.
$$\text{Let }N= a_na_{n-1}...a_{2}a_{1}a_{0}$$ $$Count(N, K) = \begin{cases} \begin{cases} a_n\left(10^{n-1} - 9^{n-1}\right) & a_n < K \\ \left(a_n-1\right)\left(10^{n-1} - 9^{n-1}\right)+10^{n-1}& a_n > K \\ a_n\left(10^{n-1} - 9^{n-1}\right)+1& a_n = K\end{cases} & a_{n-1}...a_{2}a_{1}a_{0} = 0\\ \begin{cases}Count(a_n0....000) + Count(a_{n-1}...a_{2}a_{1}a_{0})& a_n \ne K \\ Count(a_n0....000) + N \mod 10^{n-1}& a_n = K\end{cases} & a_{n-1}...a_{2}a_{1}a_{0} \ne 0\end{cases}$$
And to extend it to arbitrary range (both inclusive) $$Count(M,N,K)=Count(N,K) - Count(M-1,K)$$
So replacing N=$1,000,000$ and $K=2$, we get
$$Count(0,1000000,2) = Count(1000000,2) = a_n\left(10^{n-1} - 9^{n-1} \right) =\left(10^{6}-9^{6}\right)=468559$$
-
the answer is $427608$ for the number with $2$ between $100000$ and $999999$. I create program to count the number with number two between $100000$ and $999999$. Here is the code I used
for (int num = 100000; num < 999999; num++)
{
if (num.ToString().Contains("2"))
{
count++;
}
}
Console.WriteLine("Total number that contain number 2 is :"+ count);
Your answer does not count $2$ which is less than $1,000,000$, yet contains the digit $2$. – robjohn Feb 10 at 6:33
|
2014-04-16 07:40:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7912302613258362, "perplexity": 350.5680710236342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/algebra/147409-logartimic-equation.html
|
1. ## logartimic equation
how to solve this equation?
log (3x - 2) = 1
thank you
2. Originally Posted by alessandromangione
how to solve this equation?
log (3x - 2) = 1
thank you
I assume this is a natural logarithm (of base $\displaystyle e$). If not, just replace $\displaystyle e$ with whatever your base happens to be...
$\displaystyle \log{(3x - 2)} = 1$
$\displaystyle e^{\log{(3x - 2)}} = e^1$
$\displaystyle 3x - 2 = e$
$\displaystyle 3x = e + 2$
$\displaystyle x = \frac{e + 2}{3}$.
3. i found another method...but i dont' get this method...can u explain to me? expcially how it got '10'...
3x - 2 = 10 1 . Rewrite this log equation in exponential form. 3x - 2 = 10
3x = 12
x = 4
Solve this linear equation for x.
4. Originally Posted by alessandromangione
i found another method...but i dont' get this method...can u explain to me? expcially how it got '10'...
3x - 2 = 10 1 . Rewrite this log equation in exponential form. 3x - 2 = 10
3x = 12
x = 4
Solve this linear equation for x.
In that case, the base of your logarithm is $\displaystyle 10$, not $\displaystyle e$.
Follow the same instructions as I gave you, but replace $\displaystyle e$ with $\displaystyle 10$.
|
2018-03-17 04:50:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172651171684265, "perplexity": 1384.9390601264274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644271.19/warc/CC-MAIN-20180317035630-20180317055630-00677.warc.gz"}
|
https://undergroundmathematics.org/calculus-of-powers/r7724
|
Review question
# Can we explain why this integral is zero? Add to your resource collection Remove from your resource collection Add notes to this resource View your notes for this resource
Ref: R7724
## Question
The curve $y = 4(x+1)(x-1)(x-3)$ meets the $x$-axis at three points. Verify, by finding the numerical value of each, that the two areas formed between the curve and the $x$-axis are of equal size.
Explain why the value of $\displaystyle \int_{-1}^3 y \:dx$ is zero.
|
2021-12-07 18:14:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33371755480766296, "perplexity": 826.3260377001794}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00120.warc.gz"}
|
http://www.perimeterinstitute.ca/video-library/collection/string-seminars?page=16&qt-seminar_series=0
|
# String Seminars
This series consists of talks in the area of Superstring Theory.
## Seminar Series Events/Videos
Currently there are no upcoming talks in this series.
## DT-invariants and $K_2$
Thursday Aug 23, 2012
Speaker(s):
TBA
Collection/Series:
Scientific Areas:
## Resurgence from the path integral perspective
Tuesday Aug 21, 2012
Speaker(s):
TBA
Collection/Series:
Scientific Areas:
## Universality in all-order alpha' corrections to BPS/non-BPS brane world volume theories
Monday Aug 13, 2012
Speaker(s):
Knowledge of all-alpha' higher derivative corrections to leading order BPS and non-BPS brane actions would serve in future endeavor of determining the complete form of the non-abelian BPS and tachyonic effective actions. In this talk, we note that there is a universality in the all-alpha' order corrections to BPS and non-BPS branes. I talk about computing all amplitudes between one Ramond-Ramond C-field vertex operator and several SYM gauge/scalar vertex operators.
Collection/Series:
Scientific Areas:
## Chiral Symmetry Breaking via Gauge/Gravity Duality
Thursday Aug 02, 2012
Speaker(s):
TBA
Collection/Series:
Scientific Areas:
## Holographic Lattices
Tuesday May 29, 2012
Speaker(s):
We add a gravitational background lattice to the simplest holographic model of matter at finite density and calculate the optical conductivity. With the lattice, the zero frequency delta function found in previous calculations (resulting from translation invariance) is broadened and the DC conductivity is finite. The optical conductivity exhibits a Drude peak with a cross-over to power-law behavior at higher frequencies. Surprisingly, these results bear a strong resemblance to the properties of some of the cuprates.
Collection/Series:
Scientific Areas:
## Background Independent Holographic Description : From Matrix Field Theory to Quantum Gravity
Tuesday May 22, 2012
Speaker(s):
A local renormalization group procedure is proposed where length scale is changed in spacetime dependent manner. Combining this scheme with an earlier observation that high energy modes in renormalization group play the role of dynamical sources for low energy modes at each scale, we provide a prescription to derive background independent holographic duals for field theories.
Collection/Series:
Scientific Areas:
## On-shell Recursion Relation for String Tree-level Amplitude
Tuesday May 08, 2012
Speaker(s):
It is well known that on-shell recursion relation can be applied to tree-level amplitude in string theory. One technical issue of the application is the sum of infinite middle on-shell states. We discuss how we can do the sum exactly to reproduce the known result.
Collection/Series:
Scientific Areas:
## Simple Scattering Amplitudes in Higher Dimensions
Tuesday Apr 17, 2012
Speaker(s):
TBA
Collection/Series:
Scientific Areas:
## K3 Modular Parametrization and Calabi-Yau Threefold Moduli
Tuesday Mar 27, 2012
Speaker(s):
A series of generalizations of the Weierstrass normal
form for elliptic curves to the case of K3 surfaces will be presented. These have already been applied to better
understand F-theory/Heterotic string duality.
We will see how they also resolve a long-standing question of which
"mirror-compatible" variations of Hodge structure over the
thrice-punctured sphere can arise from families of Calabi-Yau threefolds.
Collection/Series:
Scientific Areas:
## Instanton - A Window into Physics of M5-branes
Friday Mar 23, 2012
Speaker(s):
Instantons and W-bosons in 5d N=2 Yang-Mills theory arise from a circle compactification of the 6d (2,0) theory as Kaluza-Klein modes and winding self-dual strings, respectively. We study an index which counts BPS instantons with electric charges in Coulomb and symmetric phases. We first prove the existence of unique threshold bound state of U(1) instantons for any instanton number. By studying SU(N) self-dual strings in the Coulomb phase, we find novel momentum-carrying degrees on the worldsheet.
Collection/Series:
Scientific Areas:
## RECENT PUBLIC LECTURE
### Mario Livio: Brilliant Blunders
Speaker: Mario Livio
|
2016-09-27 19:41:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5616958141326904, "perplexity": 3163.7559476606916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661155.8/warc/CC-MAIN-20160924173741-00270-ip-10-143-35-109.ec2.internal.warc.gz"}
|
http://ndl.iitkgp.ac.in/document/SHZPSFMzaXB2K2M1Ny94dS9FR1dEWVM0cDIzS0Rab2hXdmdCNjNxMk8rMD0
|
Minimum cuts in near-linear timeMinimum cuts in near-linear time
Access Restriction
Subscribed
Author Karger, David R. Source ACM Digital Library Content type Text Publisher Association for Computing Machinery (ACM) File Format PDF Copyright Year ©2000 Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science Subject Keyword Monte Carlo algorithm ♦ Connectivity ♦ Min-cut ♦ Optimization ♦ Tree packing Abstract We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semiduality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an $\textit{m}-edge,$ $\textit{n}-vertex$ graph with high probability in $\textit{O}(m$ log3 $\textit{n})$ time. We also give a simpler randomized algorithm that finds $\textit{all}$ minimum cuts with high probability in $O(\textit{m}$ log3 $\textit{n})$ time. This variant has an optimal $\textit{RNC}$ parallelization. Both variants improve on the previous best time bound of O(n2 log3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of $\textit{near-minimum}$ cuts a graph may have and a new data structure for representing them in a space-efficient manner. ISSN 00045411 Age Range 18 to 22 years ♦ above 22 year Educational Use Research Education Level UG and PG Learning Resource Type Article Publisher Date 2000-01-01 Publisher Place New York e-ISSN 1557735X Journal Journal of the ACM (JACM) Volume Number 47 Issue Number 1 Page Count 31 Starting Page 46 Ending Page 76
Open content in new tab
Source: ACM Digital Library
|
2020-07-07 07:28:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17364439368247986, "perplexity": 3317.1104664913255}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891654.18/warc/CC-MAIN-20200707044954-20200707074954-00523.warc.gz"}
|
https://codegolf.stackexchange.com/questions/57721/generate-ordered-binary-combinations-without-repetitions
|
Challenge
Write the shortest program that receives two signed integers n and i and for each i between 1 and 2^n - 1 returns the next ordered permutation based on the binary representation of the number. There is no specific order of the combinations but the number of 1s in the binary representation must always stay the same or grow and numbers each output may happen only once
Clarification
The goal of this program is to generate all the combinations without repetitions of a set. To do so you may consider that each item is represented by a bit in a bit mask, that way if you have three items, A, B and C represented by 001, 010 and 100 respectively the combinations ABC, ACB, BCA, BAC, CAB and CBA are all represented as 111.
For increasing values of i your program should output a new combinations always with the same number of elements or more.
Input
You may read the input in the format
n i
or just use n and i variables, whichever suits you best.
Output
You may output a single number k
Sample cases
Each test case is two lines, input followed by output with a binary representation here for demonstration purposes:
3 1
1 (001)
3 2
4 (100)
3 3
2 (010)
3 4
6 (110)
3 7
7 (111)
Rules
• Shortest code wins (bytes)
• The result can be returned by a function or printed by your program.
• You may assume n and i variables already exist, there's no need to handle input if you don't want to.
• The answers with the same number of 1 bits can be output in any order
• The input is guaranteed to be well formed and 1 <= i <= 2^n - 1.
Happy Golfing!
closed as unclear what you're asking by Peter Taylor, pawel.boczarski, Luis Mendo, Blue, ETHproductionsSep 12 '15 at 17:09
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
• I don't get it. Can you explain a little more how you derive the output? – Luis Mendo Sep 11 '15 at 23:32
• The first sentence is quite confusing, because you seem to be talking about two different is (one from input, and one which is a loop variable). On the third read-through of that sentence I came to the conclusion the "for each i" is actually intended to express a constraint on the possible inputs, and that seems to fit the rules at the end. But I'm still not sure what you mean by ordered permutation based on the binary representation of the number, and your example output doesn't seem to show a next-in-list function which the first sentence asks for. – Peter Taylor Sep 12 '15 at 8:08
• I added a clarification, is it better to understand now? – fpg1503 Sep 12 '15 at 16:36
• The example still seems to contradict the spec. Since the number of 1s must be non-decreasing, the next bitmask after 3 can't be 2. – Peter Taylor Sep 12 '15 at 20:19
Pyth, 13 bytes
i@o/N1^U2vzQ2
Just constructs a cartesian power of [0, 1], sorts it by number of 1s, and indexes before converting to base10.
Ruby, 72 bytes
p [0,1].repeated_permutation(n).sort_by{|x|x.count 1}[i].join('').to_i 2
How it works:
irb(main):001:0> [0, 1].repeated_permutation(3).to_a # generate permutations
=> [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1],
[1, 1, 0], [1, 1, 1]]
irb(main):002:0> _.sort_by {|x| x.count 1 } # sort by number of 1's
=> [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [1, 0, 1],
[1, 1, 0], [1, 1, 1]]
irb(main):003:0> _[2].join '' # convert array of binary digits to string
=> "010"
irb(main):004:0> _.to_i 2 # interpret string as binary number
=> 2
|
2019-07-21 15:29:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36967578530311584, "perplexity": 666.5446420340772}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527048.80/warc/CC-MAIN-20190721144008-20190721170008-00229.warc.gz"}
|
https://iqtestpreparation.com/daily-test/3014
|
IQ Contest, Daily, Weekly & Monthly IQ & Mathematics Competitions
Question No 1
Find sequence?
5569,5695,6955,...?
Solution!
1 digit left shift numbers..
Question No 2
A delivery van leaves Selkirk and travels for 1.25 hours at an average speed of 50 metre per hour. How far will the Van have travelled?
Solution!
D=ST
D=50*1.25
D=62.5
.
Question No 3
A man has a daughter and a son. The son is is three years older than the daughter. In one year the man will be six time as old as the daughter is now. In ten years the man will be fourteen years older than the combined ages of his children at that time. what is the man's present age?
Solution!
Let's the age of daughter=x
age f son = x+3
age of father now=6x-1
condition:
M+0={(S+D)+ 10}+14]
6x-1=[{(x+3)+10}+14]
4x=28
x=7
Father's age=(6x-7)-1=41
.
.
Question No 12
Find is to Lose as Construct is to:
Solution!
No explanation available for this question..
Question No 13
2, 9, ___, 65, 126
Find the missing.
Solution!
(13 )+1, (23 )+1, (33 )+1,( 43 )+1, (53 )+1.... Is given series..
Question No 14
Vendor bought chocolates at 6 for a rupee. How many for a rupee must he sell to gain 20%?
Solution!
C.p of 6 chocolates= 120% of RS. 1= 6/5
For RS. 6/5 chocolates sold =6
For RS. 1 chocolates sold (6*5/6)=5
.
Question No 15
In a 100m race a once with a speed of 1.66 metre per second. If a give a start of 4 minute to be and still beats him by 12 seconds. What is the speed of B?
Solution!
Time taken by a to cover hundred metres =60 seconds
Since a gives a start of 4 minutes then time taken by b =72 seconds
B takes 72 second to cover 96 meters
speed of B =96 / 8
72 = 1.33 meter per second.
.
Question No 16
1,3,4,7,11,_,30,40.
Solution!
Question No 17
Idiom and phrases.
To take a thing lying down.
Solution!
Description is not available for this question..
Question No 18
-1, 1, -3, 5, -11... sum of next two terms will be:
Solution!
No explanation available for this question..
Question No 19
CX, DW, EV, FU...?
Solution!
1st letter in series is in Ascending order and
2nd letter in series is in descending order.
.
Question No 20
4.5, 8, 11.5, 15,___, 22
Find the missing.
Solution!
Question No 21
Find the odd man out
Solution!
No explanation available for this question..
Question No 22
Find sequence?
19869,98691,86919..?
Solution!
Shifting of one digit at left direction..
Question No 23
In alphabet series, some alphabets are missing which are given in that order as one of the alternatives below it. Choose the correct alternative.
_ bca _ cca _ ca _ b _ c
Solution!
The series is bbca / bcca / bcaa / bcaa / bbc..
Question No 24
Choose the word which is different from the rest
Solution!
No explanation available for this question..
Question No 25
Two bus tickets from city A to B and three tickets from city A to C cost Rs. 77 but three tickets from city A to B and two tickets from city A to C cost Rs. 73. What are the fares for cities B and C from A ?
Solution!
Let Rs. x be the fare of city B from city A and Rs. y be the fare of city C from city A.
Then, 2x + 3y = 77 ...(i) and
3x + 2y = 73 ...(ii)
Multiplying (i) by 3 and (ii) by 2 and subtracting, we get: 5y = 85 or y = 17.
Putting y = 17 in (i), we get: x = 13.
.
Question No 26
What comes next in the series FA , HC , JE , ?
Solution!
No explanation available for this question..
Question No 27
Ana is 5 years more than Jack. The sum of their ages is 29. Find the age of Ana.
Solution!
Let's
Jack = X
Aana = 5 + X
so, X + 5 + X = 29
X = 29
.
Question No 28
Her_______into mathematical concepts was evident when she correctly analyzed a challenge question.
Solution!
Insight is correct..
Question No 29
She is daughter of my only son's brother. She is my _____:
Solution!
No explanation available for this question..
Question No 30
Pick the odd one:
|
2020-07-05 04:44:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3610553741455078, "perplexity": 4879.952854350945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886865.30/warc/CC-MAIN-20200705023910-20200705053910-00474.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/figure-shows-two-identical-parallel-plate-capacitors-connected-battery-through-switch-s-initially-switch-closed-so-that-capacitors-are-completely-charged-energy-stored-capacitor_69051
|
Department of Pre-University Education, KarnatakaPUC Karnataka Science Class 12
Share
# Figure Shows Two Identical Parallel Plate Capacitors Connected to a Battery Through a Switch S. Initially, the Switch is Closed So that the Capacitors Are Completely Charged. - Physics
ConceptEnergy Stored in a Capacitor
#### Question
Figure shows two identical parallel plate capacitors connected to a battery through a switch S. Initially, the switch is closed so that the capacitors are completely charged. The switch is now opened and the free space between the plates of the capacitors is filled with a dielectric of dielectric constant 3. Find the ratio of the initial total energy stored in the capacitors to the final total energy stored.
#### Solution
When the switch is closed, both capacitors are in parallel.
⇒ The total energy of the capacitor when the switch is closed is given by E_i = 1/2 CV^2 + 1/2 CV^2 = CV^2
When the switch is opened and the dielectric is induced, the capacitance of the capacitor A becomes
C^' = KC = 3C
The energy stored in the capacitor A is given by
E_A = 1/2 C^'V^2
⇒ E_A = 1/2 (3C)V^2 = 3/2 CV^2
The energy in the capacitor B is given by
E_B = 1/2 xx C/3 xx V^2
therefore Total final Energy
E_f = E_A + E_B
⇒ E_f = 3/2 CV^2 + 1/6 CV^2
⇒ E_f = (9 CV^2 + 1CV^2)/6 = 10/6 CV^2
Now,
Ratio of the energies, E_1/E_2 = (CV^2)/(10/6 CV^2) = 3/5
Is there an error in this question or solution?
#### Video TutorialsVIEW ALL [1]
Solution Figure Shows Two Identical Parallel Plate Capacitors Connected to a Battery Through a Switch S. Initially, the Switch is Closed So that the Capacitors Are Completely Charged. Concept: Energy Stored in a Capacitor.
S
|
2020-04-01 04:17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4748097062110901, "perplexity": 1363.0028328047501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505366.8/warc/CC-MAIN-20200401034127-20200401064127-00181.warc.gz"}
|
http://mathoverflow.net/questions/70659/fixed-points-of-the-borel-serre-compactification
|
# Fixed points of the Borel-Serre compactification
Let $\Gamma$ be an arithmetic group and $X$ its symmetric space. Borel-Serre constructed a space $\bar{X} \supset X$ such that $\bar{X}/\Gamma$ is a compactification of $X/\Gamma$ [Corners and Arithmetic Groups, Comm. Math. Helv. 48(1973), 436-491, §7].
Moreover $\bar{X}$ is a contractible, finite-dimensional CW-complex and $\Gamma$ operates properly and cellularly on $\bar{X}$. In particular, if $H \le \Gamma$ is a finite subgroup, then the fixed point space $\bar{X}^H$ is non-empty.
Is $\bar{X}^H$ contractible or at least path-connected ?
Background: If so, it would follow that the non-abelian cohomology $H^1(G;\Gamma)$ is finite for $\Gamma$ arithmetic and $G \subseteq \operatorname{Aut}(\Gamma)$ finite. See also Finiteness of non-abelian cohomology
-
|
2014-04-19 04:56:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8978513479232788, "perplexity": 229.09951952944303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://spikedmath.com/forum/viewtopic.php?p=12
|
## Hi peoples!
If you just joined the forums, tell us a bit about yourself.
### Hi peoples!
Hi guys,
My name is Mike... and I'm a mathematician
My favourite number is ei. My favourite real number is 58008.
And I realize no one else is a member of the forum yet, so I'm just rambling to myself.
Math - It's in you to give.
SpikedMath
Site Admin
Posts: 133
Joined: Mon Feb 07, 2011 1:31 am
Location: Canada
### Re: Hi peoples!
SpikedMath wrote:Hi guys,
My name is Mike... and I'm a mathematician
My favourite number is ei. My favourite real number is 58008.
And I realize no one else is a member of the forum yet, so I'm just rambling to myself.
Hi Mike,
This is you on your other account.
Oh, hi there. What are you doing on your other account?
Oh nothing, just fooling around.. you know
Ah, I see.
MathFail
Moderator
Posts: 1
Joined: Mon Feb 07, 2011 2:59 am
### Re: Hi peoples!
Hi Mike, hope you're enjoying talking to yourself
Great comics BTW, keep it up!
I really like last one
Sarkin
Kindergarten
Posts: 1
Joined: Thu Feb 10, 2011 3:42 pm
Location: Lattakia, Syria
### Re: Hi peoples!
woo a spikedmath forum ^^
hi all 2 mikes and 1sarkin
Math problems? Call 1-800-[(10x)(13i)2]-[sin(xy)/2.362x]
poochon
High School
Posts: 30
Joined: Thu Feb 10, 2011 3:47 pm
Location: Israel
### Re: Hi peoples!
To stop you from "rambling to yourself" I thought I'd just say, well, you know, HI
And by the way, on registering, I was asked the question "What's the derivative of 7x^3". There's a similar question when commenting on the comic but there the answer is already written. Is there any possibility you could fix that?
PS. My favorite number is tau(τ)
willia2501
Kindergarten
Posts: 2
Joined: Thu Feb 10, 2011 3:33 pm
### Re: Hi peoples!
willia2501 wrote:To stop you from "rambling to yourself" I thought I'd just say, well, you know, HI
And by the way, on registering, I was asked the question "What's the derivative of 7x^3". There's a similar question when commenting on the comic but there the answer is already written. Is there any possibility you could fix that?
PS. My favorite number is tau(τ)
PS. PS. Hehe, by the time I was finished writing there had already been two more posts
Last edited by willia2501 on Fri Feb 11, 2011 9:13 am, edited 1 time in total.
willia2501
Kindergarten
Posts: 2
Joined: Thu Feb 10, 2011 3:33 pm
### Re: Hi peoples!
willia2501 wrote:To stop you from "rambling to yourself" I thought I'd just say, well, you know, HI
And by the way, on registering, I was asked the question "What's the derivative of 7x^3". There's a similar question when commenting on the comic but there the answer is already written. Is there any possibility you could fix that?
PS. My favorite number is tau(τ)
Yup, I changed it to a simple addition problem instead, since I realize some of my target audience hasn't taken calculus yet, and it shouldn't prevent them from registering.
Math - It's in you to give.
SpikedMath
Site Admin
Posts: 133
Joined: Mon Feb 07, 2011 1:31 am
Location: Canada
### Re: Hi peoples!
well then, let's start the party
Q.E.D. , or not?
Zapp
University
Posts: 124
Joined: Thu Feb 10, 2011 3:49 pm
### Re: Hi peoples!
Good day, everyone! It's nice to see the new forums!
I'm studying Computer Engineering, but I'm always in the mood for maths. My favourite sequence is the powers of two.
Last edited by E_net4 on Thu Feb 10, 2011 4:07 pm, edited 1 time in total.
E_net4
Kindergarten
Posts: 7
Joined: Thu Feb 10, 2011 3:55 pm
### Re: Hi peoples!
My favorite integer is -1.
I usually drink 0xdecaf coffee, hence produce no theorem.
memming
Kindergarten
Posts: 1
Joined: Thu Feb 10, 2011 3:58 pm
### Re: Hi peoples!
Hello World.
My name is Khalil (خليل) and as of this post I am a first-year Software Engineer student at UOIT.
I take pride in being a nerd, I take it as a complement.
My interests span over topics such as:
• Linux
• Engineering
• Programming
• Mathematics
• Physics
• Chemistry
• Humour
The above list is in no particular order.
Try asking a mother to pick her favourite child.
My favourite quote:
A Holocaust Survivor wrote:I will never say anything that couldn't stand as the last thing I ever say.
My favourite number is $\tau$.
Never mind about this:
Spoiler! :
Also Mike, can you add a LaTex engine like how xkcd did with jsMath to it's Mathematics forum?
Last edited by capncanuck on Thu Feb 10, 2011 4:19 pm, edited 3 times in total.
A Holocaust Survivor wrote:I will never say anything that couldn't stand as the last thing I ever say.
capncanuck
Elementary School
Posts: 17
Joined: Thu Feb 10, 2011 3:52 pm
Location: Canada
### Re: Hi peoples!
After today's cartoon (Marriage and the Fibonaughty Sexquence, #385), Mike should have had posts 1, 2, 3, 5, 8, ...
I too am a math nut--amateur--so I like this comic. I tend to see the world sideways, if not inside out.
I am in my 50s, and live in a small town in West Dakota.
bmonk
University
Posts: 133
Joined: Thu Feb 10, 2011 4:03 pm
### Re: Hi peoples!
Hi guys, thanks for joining I'm surprised so many people joined so fast!!
Math - It's in you to give.
SpikedMath
Site Admin
Posts: 133
Joined: Mon Feb 07, 2011 1:31 am
Location: Canada
### Re: Hi peoples!
Hi dude!
I have a physics/science/tech/general nerd forum, take a look if you want
Parascientifica's News Feed wrote:
Yes you can click that thing. ^
theboss
University
Posts: 147
Joined: Thu Feb 10, 2011 4:11 pm
Location: Invading your mind!
### Re: Hi peoples!
What up peeps. I'm Dominic. Undergraduate math student. Like really undergraduate. Like Cal 1 undergraduate.
So I don't get all the Spiked Math jokes, but I've gotten pretty good at googling them. Of course I don't always get em even after googling. Whatever. I pretend to laugh anyway, just to make Mike feel better.
This has to be the geekiest forum I've ever joined, and nobody's favorite number is 42 yet? C'mon man.
WhoDatMath
Mathlete
Posts: 78
Joined: Thu Feb 10, 2011 4:53 pm
### Re: Hi peoples!
42 aint that mathematical, 3435 is a lot awesomer as its a munchausen number.
http://spikedmath.com/285.html
I have a physics/science/tech/general nerd forum, take a look if you want
Parascientifica's News Feed wrote:
Yes you can click that thing. ^
theboss
University
Posts: 147
Joined: Thu Feb 10, 2011 4:11 pm
Location: Invading your mind!
### Re: Hi peoples!
Hey I'm Ashlie, undergrad maths student in the UK + I'm currently quite weirded out...last night I had a dream that I was on a spiked math forum....
0.7734
Kindergarten
Posts: 4
Joined: Thu Feb 10, 2011 5:30 pm
Location: England
### Re: Hi peoples!
Ello,
I'm Weiss, from Brisbane, Australia. I'm studying maths too (woo!).
My favourite number, of long standing, is 9!
Er.
9 factorial. I'm not excited about the number 9.
Weiss
Kindergarten
Posts: 2
Joined: Thu Feb 10, 2011 6:54 pm
### Re: Hi peoples!
Weiss wrote:My favourite number, of long standing, is 9!.
Grammar Fix'd.
A Holocaust Survivor wrote:I will never say anything that couldn't stand as the last thing I ever say.
capncanuck
Elementary School
Posts: 17
Joined: Thu Feb 10, 2011 3:52 pm
Location: Canada
### Re: Hi peoples!
Hi peeps!!
Good job mike... Love the forum!!
Cheerios,
Mark
mark
Kindergarten
Posts: 2
Joined: Thu Feb 10, 2011 8:21 pm
Location: Canada
Next
Return to Introduce Yourself
### Who is online
Users browsing this forum: No registered users and 0 guests
|
2013-05-18 11:22:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.76615309715271, "perplexity": 12490.438491496221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382360/warc/CC-MAIN-20130516092622-00044-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://mathstuneup.com.au/differentiation-of-polynomials/
|
## Before You Watch
This video continues directly on from Rates of Change and Differentiation, in which the concept of a rate of change was introduced and we investigated the need to know the value of instantaneous rates of change. The term ‘to differentiate’ was also discussed. Differentiation is the process of calculating a rate of change. This video describes how to differentiate a specific class of equations known as polynomials. So make sure you’ve seen Rates of Change and Differentiation recently before watching this video.
This video also builds upon earlier algebraic concepts such as indices, including negative indices, and linear equations. It is important to be comfortable with algebra and manipulating algebraic equations before continuing with calculus, so watch those videos again if you need to, then come back.
...
## Now What?
By now you will be familiar with the three videos that introduce the differential branch of calculus: Introduction to Calculus, Rates of Change and Differentiation and this topic, Differentiation of Polynomials. You should understand the core concepts of calculus and know what a rate of change is. You will also know that differentiation is all about calculating the rate of change, and know how to differentiate one category of equations, the polynomials. From here there are two main directions you can go.
One option is to explore how to differentiate other types of equations, such as those involving trigonometry, or exponentials. To do this you should consider looking at sites such as the Khan Academy: https://www.khanacademy.org/math/differential-calculus/taking-derivatives
Alternatively, you could investigate the other branch of calculus, integral calculus. This is introduced in the video Integration.
## But When I am Going to Use This
Calculus is the mathematical study of how things change relative to one another. For instance, velocity (or speed) is a change of position over a change in time, and acceleration is a change in velocity over a change in time – so any motion is studied using calculus. Other examples include the flow of water through pipes over time, or changing commodity prices against demand. Because change is everywhere, the potential applications for calculus are endless, particularly in engineering and science. Calculus is necessary knowledge for any degree related to engineering or science.
Maths Is Fun has a great page that takes you through a simple problem which highlights the need for calculus to discuss changes happening around us. It then continues to explore the main two areas of calculus, differentiation and integration, and provides regular questions to test your understanding.
IntMath gives some historical perspective to explain the sometimes confusing notation that is used in calculus, discussing how it is the mixed product of two mathematicians working independently. It also provides some excellent examples of applications of calculus that are in common use today, as well as helpful applets to understand both differential and integral calculus.
The Khan Academy has a comprehensive set of video tutorials covering a wide range of mathematical and other concepts, as well as questions to test your knowledge. This content provides a whole chapter on taking the derivatives, including of harder equations not covered in this video.
Patrick JMT (Just Maths Tutorials) has an extensive set of video tutorials covering a large range of mathematical concepts. This content runs through differentiation of simple polynomials, but the site also provides videos demonstrating more complex differentiation.
|
2019-10-14 15:13:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5276812314987183, "perplexity": 441.21718999729086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00113.warc.gz"}
|
https://stats.stackexchange.com/questions/13182/looking-at-residuals-vs-residual-percentages
|
# Looking at residuals vs. residual percentages
Suppose I fit a linear regression to some data (say, weight vs. height), and all the standard linear regression assumptions are satisfied (in particular, the data is homoscedastic). For example, here's a random figure pulled from amstat.org that looks like it satisfies what I'm thinking of:
Now I'm doing some exploratory data analysis, so I want to look at examples where the linear regression is particularly off; that is, I want to sort all the individuals by how badly the regression predicts their weight from their height (so that, say, I can look at further details, like whether people whose weight the model underpredicts tend to eat a lot of junk food). My question is:
• Should I sort all the individuals by their raw residuals?
• Or should I sort all the individuals by the residuals as a percentage of the prediction, i.e., by residual weight / predicted weight?
On the one hand, it seems like sorting by raw residuals might be the way to go, since standard linear regression errors are based off the squared residuals, and not the residual percentages. On the other hand, someone who weighs 70kg when their predicted weight is 50kg seems much more of an outlier than someone who weighs 120kg when their predicted weight is 100kg.
Is it just a matter of preference or the particular model at hand?
• Just out of curiousity, do the two methods lead to a much different ordering? – Dominic Comtois Jul 18 '11 at 6:22
• Do you have a rough noise model, e.g. $\sigma(x)$ increasing with x, with scaled residuals $(\frac{y - ax}{\sigma(x)})^2$ roughly flat ? – denis Jul 18 '11 at 8:41
• Ratios are indeed better indicators than absolute differences in this case @dominic yes there should be a different ordering, since in mentioned example both residuals are 20, but relative errors are $2/7 > 2/12$. @raegtin it seems you just want to define what are the outliers in the regression model, isn't it? – Dmitrij Celov Jul 18 '11 at 11:40
• @Denis: I'm assuming that $\sigma(x)$ is constant. @Dmitrij: yep, I basically just want to find the outliers. From a "minimize sum of squares" point of view, it seems like I shouldn't be scaling residuals to find outliers, but intuitively it seems like I should. @dominic: yep, like Dmitrij said, the two methods should lead to a much different ordering (especially if there's a large range in the x-axis). – raegtin Jul 18 '11 at 20:14
• Maybe part of the problem is that I'm surprised many models seem to have constant variance. For example, I'm surprised in the example above that there's not a larger variance in weight as height increases. – raegtin Jul 18 '11 at 20:17
|
2020-08-04 08:32:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7567211389541626, "perplexity": 808.7150069418241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.23/warc/CC-MAIN-20200804073038-20200804103038-00074.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/wire-carrying-2-current-placed-angle-60-respect-magnetic-field-strength-02-t-length-wire-0-q2028857
|
## 4
A wire carrying a 2-A current is placed at an angle of 60° with the respect to a magnetic field of strength 0.2 T. If the length of the wire is 0.6 m what is the magnitude of the magnetic force acting on the wire?
|
2013-05-26 08:55:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9075742363929749, "perplexity": 146.58541400755024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706794379/warc/CC-MAIN-20130516121954-00097-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://biblio.ugent.be/publication/1105471
|
### Some subspaces of the projective space PG(Lambda(K) V) related to regular spreads of PG(V)
Bart De Bruyn UGent (2010) 20. p.354-366
abstract
Let V be a 2m-dimensional vector space over a field F (m >= 2) and let k is an element of {1, ... , 2m - 1}. Let A(2m-1,k) denote the Grassmannian of the (k - 1)-dimensional subspaces of PG(V) and let e(gr) denote the Grassmann embedding of A(2m-1,k) into PG(Lambda(k) V). Let S be a regular spread of PG(V) and let X-S denote the set of all ( k - 1)-dimensional subspaces of PG(V) which contain at least one line of S. Then we show that there exists a subspace Sigma of PG(Lambda(k) V) for which the following holds: (1) the projective dimension of Sigma is equal to ((2m)(k)) - 2 . ((m)(k)) - 1; (2) a (k - 1)-dimensional subspace alpha of PG(V) belongs to X-S if and only if e(gr)(alpha) is an element of Sigma; (3) Sigma is generated by all points e(gr)(p), where p is some point of X-S.
author
organization
year
type
journalArticle (original)
publication status
published
subject
keyword
Klein correspondence, Grassmann embedding, Regular spread, Grassmannian
journal title
ELECTRONIC JOURNAL OF LINEAR ALGEBRA
Electron. J. Linear Algebra
volume
20
pages
354 - 366
Web of Science type
Article
Web of Science id
000281206100002
JCR category
MATHEMATICS
JCR impact factor
0.808 (2010)
JCR rank
73/276 (2010)
JCR quartile
2 (2010)
ISSN
1081-3810
language
English
UGent publication?
yes
classification
A1
I have transferred the copyright for this publication to the publisher
id
1105471
handle
http://hdl.handle.net/1854/LU-1105471
date created
2011-01-20 10:15:06
date last changed
2016-12-19 15:41:28
```@article{1105471,
abstract = {Let V be a 2m-dimensional vector space over a field F (m {\textrangle}= 2) and let k is an element of \{1, ... , 2m - 1\}. Let A(2m-1,k) denote the Grassmannian of the (k - 1)-dimensional subspaces of PG(V) and let e(gr) denote the Grassmann embedding of A(2m-1,k) into PG(Lambda(k) V). Let S be a regular spread of PG(V) and let X-S denote the set of all ( k - 1)-dimensional subspaces of PG(V) which contain at least one line of S. Then we show that there exists a subspace Sigma of PG(Lambda(k) V) for which the following holds: (1) the projective dimension of Sigma is equal to ((2m)(k)) - 2 . ((m)(k)) - 1; (2) a (k - 1)-dimensional subspace alpha of PG(V) belongs to X-S if and only if e(gr)(alpha) is an element of Sigma; (3) Sigma is generated by all points e(gr)(p), where p is some point of X-S.},
author = {De Bruyn, Bart},
issn = {1081-3810},
journal = {ELECTRONIC JOURNAL OF LINEAR ALGEBRA},
keyword = {Klein correspondence,Grassmann embedding,Regular spread,Grassmannian},
language = {eng},
pages = {354--366},
title = {Some subspaces of the projective space PG(Lambda(K) V) related to regular spreads of PG(V)},
volume = {20},
year = {2010},
}
```
Chicago
De Bruyn, Bart. 2010. “Some Subspaces of the Projective Space PG(Lambda(K) V) Related to Regular Spreads of PG(V).” Electronic Journal of Linear Algebra 20: 354–366.
APA
De Bruyn, B. (2010). Some subspaces of the projective space PG(Lambda(K) V) related to regular spreads of PG(V). ELECTRONIC JOURNAL OF LINEAR ALGEBRA, 20, 354–366.
Vancouver
1.
De Bruyn B. Some subspaces of the projective space PG(Lambda(K) V) related to regular spreads of PG(V). ELECTRONIC JOURNAL OF LINEAR ALGEBRA. 2010;20:354–66.
MLA
De Bruyn, Bart. “Some Subspaces of the Projective Space PG(Lambda(K) V) Related to Regular Spreads of PG(V).” ELECTRONIC JOURNAL OF LINEAR ALGEBRA 20 (2010): 354–366. Print.
|
2018-04-27 01:10:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391628503799438, "perplexity": 5473.055753795341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00150.warc.gz"}
|
https://possiblywrong.wordpress.com/2019/09/07/snap-card-game-probability/
|
## Snap card game probability
Introduction
In the children’s card game Snap, a deck of cards is shuffled and divided as evenly as possible among two or more players. In alternating turns, each player deals a card from her stack onto a face-up pile in front of her. If at any time, the top cards of any two players’ face-up piles have the same rank, the first player to shout “Snap!” takes both face-up piles and adds them to the bottom of her remaining stack. The objective is to accumulate all of the cards.
It is possible for the players to deal through their entire stacks without a single snap, i.e., at no time do the top cards of two piles have the same rank. Let us call such a game boring (a term that may be suggested by, say, your niece to whom you are introducing the game). What is the probability of a boring game of Snap?
One of the reasons I love combinatorics is that it is so easy to ask a hard question. That is, we can pose a series of very similar-sounding problems, all of which are easy to state and understand, but whose minor differences result in major changes in complexity of their solutions. The motivation for this post is to describe a simple children’s card game as an example of this phenomenon.
Frustration Solitaire
Before tackling the actual problem described above, let’s consider a slightly modified version of the two-player game that is easier to analyze:
1. Instead of dividing a single shuffled deck in half, start with two separate complete shuffled decks, one for each player.
2. Instead of alternating turns dealing a card from each player’s stack, at each turn both players simultaneously deal a card face-up from their remainder.
Dealing through the entirety of both decks without a snap is effectively equivalent to winning a game of Frustration Solitaire, where a single player deals cards from a single shuffled deck, saying ranks “Ace, two, three, …, king” repeatedly, losing the game if the dealt card ever matches the called rank.
Even within this already-modified context, there are three slight variations that range from very simple to– I think– intractable:
1. If a snap requires that both cards match in rank and suit, then we are counting derangements, and the probability of a boring game is approximately $1/e$, or about 0.367879.
2. If a snap requires that both cards match in rank only, as in Frustration Solitaire, then we have discussed this problem before here, in the context of a Secret Santa drawing among 13 families each with 4 family members. In this case, the probability of a boring game is approximately 0.0162327.
3. If a snap requires that both cards match in rank or in suit… well, although this is still effectively a problem of counting permutations with restricted positions, I think this problem is much harder in practice, since the board of restricted positions can’t be nicely decomposed into sub-boards with no rows or columns in common.
War
Let’s consider one more modified version of the game, this time actually rolling back one of the changes above: let’s return to playing with a single shuffled deck divided evenly among the two players, but retain the change where the players deal simultaneously from their stacks.
This version of the game has a lot of structure in common with another children’s card game, War. In this case, a boring game of Snap corresponds to War with no war– that is, dealing through both deck halves without any pair of cards matching in rank.
This is a relatively straightforward inclusion-exclusion problem; with a deck with $r=13$ ranks and $s=4$ suits, the probability of a boring game is
$\frac{1}{(r s)!} \sum\limits_{j=0}^n (-1)^j {n \choose j} j!(r s-2j)! [x^j]g(x)^r$
where
$n = \lfloor\frac{r s}{2}\rfloor$
$g(x) = \sum\limits_{k=0}^{s/2} \frac{s!}{k!(s-2k)!} x^k$
which for a standard 52-card deck yields a probability of a boring game of about 0.210214.
(Edit 2020-09-04: This game is discussed in the Riddler column at FiveThirtyEight.com, where the problem is to compute the probability of not just a boring game of War, but a game where one player wins all of the tricks. If we require a particular player to win all of the tricks, then we can divide the above probability by $2^{rs/2}$; or for the probability of either player winning all of the tricks, divide by $2^{rs/2-1}$.)
Counting “words” with prohibited subwords
Coming back finally to the original rules of Snap, counting boring games is complicated by the fact that the players alternate turning individual cards face-up, rather than simultaneously revealing a pair of cards at a time, so that a snap may “start” with either the first or the second player’s deal. Intuitively, consider skipping the initial separation of the deck into halves, and simply deal cards one at a time from the single shuffled deck; a boring game is one in which no two consecutively dealt cards are the same rank.
There is a wonderful paper by Jair Taylor (referenced below) describing a very general but more sophisticated technique for counting arrangements with these types of restrictions. Applying this technique to Snap, the probability of a boring game using a deck with $r$ ranks and $s$ suits is
$\frac{(s!)^r}{(r s)!} \int_{t=0}^{\infty} e^{-t} g_s(t)^r dt$
where
$g_k(t) = (-1)^k \sum\limits_{i=0}^k (-1)^i {k-1 \choose k-i} \frac{t^i}{i!}$
yielding a probability of approximately 0.0454763, or about once every 22 shuffles, that a game of Snap will be boring.
Once again, however, it’s very easy to make this problem much harder: if we allow a snap to involve adjacent cards matching in rank or in suit, what is the resulting probability of a boring game? What if there are more than two players?
Reference:
1. Taylor, J., Counting words with Laguerre series [arXiv]
This entry was posted in Uncategorized. Bookmark the permalink.
### 3 Responses to Snap card game probability
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
2021-06-14 17:58:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5892132520675659, "perplexity": 806.8457279176827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613380.12/warc/CC-MAIN-20210614170602-20210614200602-00489.warc.gz"}
|
https://math.stackexchange.com/questions/886041/conjecture-identity-for-sieve-of-eratosthenes-collisions
|
# Conjecture---Identity for Sieve of Eratosthenes collisions.
Let
$$\beta(n,k) = \max_{d \leq k}(d|n)$$
$$S(k)= \sum_{n=1}^{k!} \beta(n,k),$$
$$\hspace{20mm}$$and
$$T(k)=\# \{ ~i\cdot j~~\big|_{i=1}^k \big|_{j=1}^{k!} \}$$
Does $$S(k)=T(k)?$$
See OEIS A126959.
Replace $$k!$$ in $$S,T$$ with $$\exp (\psi(k) )$$, where $$\psi(\cdot)$$ is second Chebyshev function, to get A101459.
• What is T(n,k)? – frogeyedpeas Jan 1 '15 at 8:40
• @frogeyedpeas It usually denotes the number of elements in the set. – karvens Jan 1 '15 at 8:47
• For which values of $k$ have you verified that $S(k)=T(k)$? – Gerry Myerson Jan 2 '15 at 3:58
• oeis.org/A126959 has been calculated out to $n=36$, so you have verified $S(k)=T(K)$ out to $k=36$? – Gerry Myerson Jan 2 '15 at 4:07
• @GerryMyerson, One sequence fails at $k=10$ and the other at $k=17$. Yikes! – Fred Kline Jan 2 '15 at 4:34
|
2021-08-03 16:12:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508668184280396, "perplexity": 1476.1365287443036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00679.warc.gz"}
|
https://www.vedantu.com/question-answer/which-of-the-following-has-the-highest-electrode-class-12-chemistry-jee-main-630a5d2c1ed86e27f42678f5
|
Which of the following has the highest electrode potential?A. BeB. MgC. CaD. Ba
Verified
30.9k+ views
Hint: The elements beryllium, magnesium, calcium and barium belong to group 2 of the periodic table. These are called alkaline Earth metals. Electrode potential is defined as the tendency of a chemical species to gain or lose electrons.
Complete Step by Step Solution:
The general outer electronic configuration of alkaline earth metals is
${\rm{n}}{{\rm{s}}^{\rm{2}}}$.
These elements have two electrons in the S orbital of the valency cell. These elements lose two electrons to undergo oxidation.
${\rm{M}} \to {{\rm{M}}^{{\rm{2 + }}}}{\rm{ + 2}}{{\rm{e}}^{\rm{ - }}}$ where M = alkaline earth metal
These metals are strong reducing agents. Reducing agents are the chemical species that reduce other chemical species and undergo oxidation themselves. The oxidation potential is defined as the measure of the tendency of an element to lose electrons. Oxidation potential increases on moving from top to bottom in a group. This is because on moving down the group atomic size increases. Electrons are added to higher energy levels. Valence electrons are not closely held by the nucleus. The loss of electrons is easier as we move down the group.
Out of the given options, beryllium has the smallest atomic size. The loss of electrons or oxidation is difficult. Beryllium has the least oxidation potential.
We know that ${\rm{oxidation potential = }}\left( {{\rm{ - reduction potential}}} \right)$.
As Be has the least oxidation potential, it has the highest reduction potential.
So, option A is correct.
Note: As the atomic size increases down the group, the electropositive character which is the tendency to lose electrons increases on moving from Be to Ba. The oxidation potential is defined as the measure of the tendency of an element to get oxidised i.e., to lose electrons. The reduction potential is defined as the measure of the tendency of an element to get reduced i.e., to lose electrons.
|
2022-12-05 04:48:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8020368218421936, "perplexity": 993.5827406191225}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00617.warc.gz"}
|
http://mathoverflow.net/feeds/question/74791
|
Do representations of Fuchsian groups have unitary deformations? - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-21T14:40:27Z http://mathoverflow.net/feeds/question/74791 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/74791/do-representations-of-fuchsian-groups-have-unitary-deformations Do representations of Fuchsian groups have unitary deformations? anton 2011-09-07T20:28:22Z 2011-09-07T20:41:01Z <p>Let $G$ be $SL_2({\mathbb C})$ and for $a,b\in G$ let $[a,b]=aba^{-1}b^{-1}$ be the commutator bracket. Let $n$ be a natural number $\ge 2$ and let $X\subset G^{2n}$ be the set of all $g\in G^{2n}$ such that $$[g_1,g_2]\cdots[g_{2n-1},g_{2n}]=1.$$ The first question is, whether $X$ is connected. If not, can one give a list of the connected components? Finally, does the subset $SU(2)^{2n}\cap X$ meet every connected component?</p> <p>If the last question has an affirmative answer, every $G$-valued representation of the fundamental group $\Gamma$ of a compact Riemann surface of genus $n$ can be deformed to a unitary one, which explains the title of my question. </p> http://mathoverflow.net/questions/74791/do-representations-of-fuchsian-groups-have-unitary-deformations/74793#74793 Answer by Richard Kent for Do representations of Fuchsian groups have unitary deformations? Richard Kent 2011-09-07T20:35:27Z 2011-09-07T20:41:01Z <p>$X$ is the $SL_2(\mathbb{C})$--representation variety of the surface group, and, by Goldman's thesis, it is irreducible, and so connected.</p> <p>See </p> <p>Goldman, Topological components of spaces of representations. Invent. Math. 93 (1988), no. 3, 557–607.</p> <p>If you take $G$ to be $PSL_2(\mathbb{C})$, then there are two components (see also Goldman), one for each Stiefel-Whitney class.</p> <p>Edit: I should say that I recall that this is perhaps not so easy to find in Goldman's paper as he doesn't state it explicitly, but at some point he proves that the smooth locus of $X$ is connected, which gives the result.</p>
|
2013-05-21 14:40:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098003506660461, "perplexity": 497.2080891044197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700107557/warc/CC-MAIN-20130516102827-00072-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://mattheath.wordpress.com/2008/08/01/maths-pwns-numerology/
|
## Don’t listen to the numbers!
Mark Chu-Carrol has written this post discussing this silliness about “spooky” patterns in the digits of $\pi$, $\sqrt 2$ and some other number, derived from $\pi$. I’d recommend reading Mark CC’s piece, including the comments, which contain a discussion about what sounds like a non-silly use of this sort of pattern spotting that (it is claimed) has historically been used in mystical traditions – namely seeing what the patterns your brain pulls out tell you about your brain. The guy being discussed doesn’t do that. He thinks that the patterns were left by a (“Pythagorean“) god to tell us stuff. As one commenter pointed out, this is the sort of thinking that leads to murdering John Lennon.
Anyway, I thought it might be a good time to talk about the mathematics of patterns showing up everywhere, and how it is way cooler than any supernatural pseudo-explanation. One kind of “patterns are inevitable” result is Ramsey theory. Taking one of the simplest examples here, draw six points on a piece of paper (arranged as a regular hexagon, say), and then take a red pen and a blue pen. Now, draw lines (each with one of the two pens but changing pen whenever you wish) joining together each pair of dots. Amongst your lines there must be a triangle all of one colour. Furthermore if we have any picture we want to find amongst coloured lines joining dots (with any finite number of colours) we only need to insist that there be more than some given number of dots and we can be sure of it. There are many similar results about absolutely guaranteeing structure in a large enough finite set.
OK, well that’s combinatorics, something I don’t actually know to much about. Also, it’s not obviously related to patterns in $\pi$. So let’s talk about measure theory and, since it’s more intuitive, let’s disguise it as probability. Consider the uniform distribution on an interval. We claim the following.
1. Fix a finite string of digits (in a fixed base n) and choose a number at random from our interval. The probability that the number we choose contains our string (in its base n expansion) is 1.
We can “bootstrap” this result up to the following infinitely awesome fact.
2. Choose a number at random from an interval. With probability 1, its expansion in every base will contain every finite string of characters in that base infinitely many times.
There are sketches of the proofs of these at the bottom of the page together with a note about what “probability 1” implies.
The stronger condition of normality (roughly that each string of a particular length comes up with the same frequency) is also true with probability 1.
Why this is unimaginably cool
Lets think about this property that is so amazingly common amongst numbers. Forget about the short patterns in base ten that the kook at the start had; think about base 27. Use the letters of the alphabet and a space as the symbols for digits. What kind of structure does the expansion of our randomly chosen number almost certainly have? Well, if you are looking for beauty, the complete works of Shakespeare will be there, and of Goethe, Camões, Tolstoy, Cervantes and Moliere. If you want guidance, there will be the King James Bible, the Koran, the Book of Mormon, Dianetics(tm), das Kapital and Atlas Shrugged. It will also contains versions of these books interspersed with snarky comments mocking the ideas contained therein.
It contains this blog post. It contains a detailed description of what you did today. It contains a detailed description of what you will do tomorrow (spooky, huh?). It contains the lost plays of Sophocles and the folk tales of extinct oral traditions and every beautiful story and poem that never got written (as well as rather a lot of meaningless strings of letters and spaces).
Or how about base 2. Big strings of zeroes and ones are, as you no doubt know, what tell computers what to do. So we have plain text files of all the books mentioned above as well as nice illustrated pdfs of each and mp3s of them being read in the voice of your favourite actor. We also have every flash video file on YouTube, and flash videos of every film you ever loved or hated and everything that ever happened anywhere (as well as nice DVD versions). It also contains video of things that didn’t happen. It has Richie Edwards and Lord Lucan racing on unicorns, and Frank Sinatra singing Rufus Wainwright songs. It contains an arbitrarily accurate physical model of the universe, predicting the location of each subatomic particle (including those in your brain) into the ridiculously far future, and others describing other possible universes equally precisely.
My point is, there is a lot of structure in nearly every number. This is very cool. It also means that if you find patters in a number it doesn’t mean that the number is trying to talk to you. It’s a number; they aren’t known for being very chatty. In any case, if it were talking to you, it would (with probability 1) say every imaginable stupid thing and keep contradicting itself. Don’t listen to the numbers!
A word on “with probability 1”
Counter-intuitive things end up being true when you talk about infinite sets (like “there are as many primes as natural numbers”). One of these is that “the probability of E occurring is 1″ is not the same as “E is certain to occur”. Equivalently, “the probability of E occurring is 0″ is not the same as “E is impossible”. The most obvious example to think of is the probability of any one specified number being chosen. Since the distribution is uniform (and we have infinitely many points) this has to be zero (otherwise the probability of “we pick any number” would end up being infinity rather than 1; for the details read about measure theory). However we have to pick some number, so clearly this is not impossible. Perhaps the best way of thinking about it is that a zero probability event occurring (or equivalently a probability 1 event failing to occur) is that an arbitrarily pessimistic person need not worry about it. That is even if given a ridiculously small fixed probability of something occuring (such as the probability according to quantum mechanics that you will simply vanish from existence in the next minute) still bothers you, you don’t have to worry about the probability zero event; it’s infinitely less likely than anything that is merely unlikely.
Sketches of proofs
In the following “length” is really 1 dimensional Lebesgue measure. If you want to know that everything I am doing is formally allowed, you will have to know a bit a measure theory.
Vague sketch of a proof 1NB some formulas failed to parse so I took out the “$” marks. Can anyone see what is wrong? For simplicity we shall assume our interval is the half-open [0,1); the general version is similar. The proof of this is very like the proof that the Cantor set has measure 0 (indeed the special case where the base is 3 and the string we want to avoid is (2) exactly this). Let the base be b and the string be $s=(s_1,\dots s_n)$ (where n is the length of s). We shall show that a subset of [0,1] with measure 0 that contains every number that doesn’t have s in it’s base b expansion. Let $A_1$ be the set obtained from [0,1) by removing the half open interval latex [0.s_2\dots s_n 00\dots ,s_1\dots s_n(b-1)(b-1)\dots). The removed interval has length $b^{-n}$ so the length of $A_1$ is $1-b^{-n}$ or $\frac {b^n-1 }{b^n}$. Also the removed interval contains only numbers whose base b expansion contains s (straight after the point). Note that $A_1$ can be broken up into b^n-1 intervals of length $b^{-n}$ of the following form latex [0 . a_1 \dots a_n 00\dots , a_1\dots a_n(b-1)(b-1) \dots), where $(a_1,\dots, a_n)$ is any of the b^n-1 strings of length n which are not s. We remove from each of these intervals the a smaller interval, namely latex [0. a_1\dots a_n s_1\dots s_n 0 0 \dots ,a_1 \dots a_n (b-1) (b-1) \dots). We call the set obtained by removing all these small intervals from $A_1$, $A_2$. Each of the removed intervals has length $b^{-(n+1)}$. Hence we have left the proportion $1-b^{-n}$ of each of the bigger intervals and so the length of $A_2$ is $1-b^{-n}$ times the length of$A_1$, that is $(1-b^{-n})^2$. Note we have only removed for [0,1) numbers whose expansion contains the string s (start either straight after the point or n places after the point) so all numbers without s in their expansion are in $A_2$. We continue in this fashion, breaking our set into intervals and removing a small interval containing only numbers with the string s in their expansion to get sets$late A_n\$ for each positive integer n such that all numbers without s in their expansion are in each $A_n$ and the length of $A_n$ is $(1-b^{-n})^n$. Now we assume, for contradiction, that the length of the set of numbers without s in their expansion is positive; we call it l . Provided only that we pick n large enough,
$(1-b^{-n})^n$ is less than l. Thus the length of $A_n$ is less than l. Since the set of all numbers without s in their expansion is contained in $A_1$ it follows that the length of this set is less than l. This contradicts our choice of l. Hence the length of the set of all numbers missing s from their base b expansions must be 0.
end of sketch
Vague sketch of proof of 2
We obtain this from the previous result using the fact that the union of countably many set of length 0 has length 0. There are only countably many finite strings in any base so the set so the set of numbers missing any string in a given base is the countable union (indexed by the set of such strings) of the sets which miss each sting. We showed already that this has 0 length. Hence the set of all numbers missing any finite string in a single base has length 0. Now the set of numbers missing any string in any base is the countable union (indexed by b) of those missing a string in base b. Hence this also has length 0. A number which contains every string necessarily contains each string infinitely many times because each string is contained in infinitely many different longer strings. The result follows.
### One Response to “Don’t listen to the numbers!”
1. thoughtcounts A Says:
Just a note – I’m hosting the next Carnival of Mathematics at thoughtcounts.net tomorrow. The carnival page lists no host, so I have a lower than usual number of submissions. If you have anything to submit, I’d love to hear from you.
|
2015-10-13 22:57:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 27, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6666902303695679, "perplexity": 559.1600028260112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738087479.95/warc/CC-MAIN-20151001222127-00162-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://dsp.stackexchange.com/questions/55436/error-reading-a-pcm-file
|
# Error reading a .PCM file
I want to convert a .wav file with a sampling frequency of 44100 Hz to a 16 depth .pcm.
I don't why I'm getting those peaks at the beginning of .pcm plot (second figure below 2). If you could explain and tell me how I can correct it, I would very much appreciate it.
This is my attempt:
% determine the least common multiple (lcm) of fsin and fsout
fsin = fs;
fsout = 22050;
m = lcm(fsin, fsout);
% determine the up and down sampling rates
up = m/fsin;
down = m/fsout;
% resample the input using the computed up/down rates
x_22 = resample(x, up, down);
audiowrite([a_filename(1:11),'_22050','.wav'], x_22, fsout);
precision = 'int16';
fidr = fopen([a_filename(1:11), '_22050','.wav'], 'r'); % open .wav file to read
fidw = fopen([a_filename(1:11), '_22050','.pcm'], 'wb'); % open .pcm file to write
fwrite(fidw, w, precision);
fclose(fidr);
fclose(fidw);
fidr2 = fopen([a_filename(1:11),'_22050','.pcm'], 'r');
fclose(fidr2);
% plot
figure(1)
set(1, 'color', 'w')
subplot(311),plot(x)
grid on, box on, axis tight, title('.wav (f_s = 44100 Hz)')
subplot(312),plot(x_22)
grid on, box on, axis tight, title('.wav (f_s = 22050 Hz)')
subplot(313), plot(data)
grid on, box on, axis tight, title('.pcm (16 bit depth)')
These are the resuts: Here are all the steps of the conversion:
This is the zoom in on the thrid subplot to see the peaks:
• probably because you have a bug? Check whether these file formats actually work like this. anyway, this doesn't seem to be a signal processing, but more of a general programming question related to how to deal with a specific file format. – Marcus Müller Feb 12 at 12:56
• @MarcusMüller thank you. Apparently no bug. I just had to remove the first 44 bytes from the .wav file when doing the reading. – Pereira da Silva Feb 12 at 14:13
• @MarcusMüller could you indicate me a forum inside StackExcahnge for this subject questions? – Pereira da Silva Feb 12 at 14:20
When you read your file in, and write it out, using the file operations, you are not accounting for the header. The header can vary in size (it's actually three RIFF headers), but is usually 44 bytes long.
These are the headers in my own (C++) words:
struct FirstHeader
{
char RiffID[4];
int RiffLength;
char WaveID[4];
char FormatID[4];
int FormatLength;
};
{
short Always0x01;
short TrackCount;
int SamplesPerSecond;
int BytesPerSecond;
short BytesPerSample;
char Filler[16];
};
{
char DataID[4];
int DataLength;
};
Do a search on "Wav file header" and you can get lots of details. If you don't need any of the parameters, I suppose just looking for the "data" in the header and the next 4 bytes are the UINT32 value of the length. So your data starts the byte after.
Otherwise, follow the RiffLength and FormatLength to get to the DataID field.
The file can be mono or stereo, 8 16 or 24 bit, etc.
• I've done it. I used fseek to skip the header. >> fseek(fidr, 44, -1);% Skip past header, which is 44 bytes long. Thank you! – Pereira da Silva Feb 12 at 14:14
• @PereiradaSilva, You're welcome. Be careful though, not all headers are 44 bytes long. To do it properly, you need to read the length bytes in the header. – Cedron Dawg Feb 12 at 15:43
|
2019-07-20 18:59:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2476993054151535, "perplexity": 5196.806881617682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526560.40/warc/CC-MAIN-20190720173623-20190720195623-00265.warc.gz"}
|
https://www.physicsforums.com/threads/mathematical-methods-of-physics-problem.515242/
|
Homework Help: Mathematical methods of physics problem
1. Jul 19, 2011
gorved
1. The problem statement, all variables and given/known data
Here's the problem. Verify the operator identity
x - d/dx = -exp (-x^2 / 2) d /dx exp (-x^2 / 2)
2. Relevant equations
3. The attempt at a solution
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
2. Jul 19, 2011
hunt_mat
I would apply both sides of the equation to a function $f(x)$ and compare the results.
I sense the integrating factor method...
3. Jul 19, 2011
gorved
ok.thanks for help :)
4. Jul 19, 2011
Ray Vickson
This seems wrong. You have the same factor exp(-x^2/2) both outside and inside the d/dx. The result (when applied to f(x)) will have a factor exp(-x^2) on the right, but not on the left. I think one of the factors on the right should be exp(+x^2/2), so that the two "exp's" finally cancel.
RGV
5. Jul 19, 2011
hunt_mat
As I said Ray, apply them both to a function f to find:
$$xf-\frac{df}{dx}=g(x),\quad e^{-x^{2}/2}\frac{df}{dx} \left( e^{-x^{2}/2}f\right) =g(x)$$
Both are g(x) as when applied to f they should give the same thing. Use the integrating factor $e^{-x^{2}/2}$ and treat it as an ODE.
6. Jul 19, 2011
gorved
thank you both for helping :)
7. Jul 20, 2011
Ray Vickson
I assume that -exp (-x^2 / 2) d /dx exp (-x^2 / 2) is to be interpreted as an operator that applies to a function f(x), giving -exp(-x^2/2)*(d/dx)[exp(-x^2/2)*f(x)] = x*exp(-x^2)*f(x) - exp(-x^2)*df(x)/dx = exp(-x^2)*[x - d/dx] f(x). The exp(-x^2) arises from exp(-x^2/2)*exp(-x^2/2). Where is my error?
On the other hand (as I suggested before), -exp(+x^2/2) d/dx exp(-x^2/2) applied to f(x) does, indeed, give x*f(x) - df(x)/dx, because we get cancellation: exp(x^2/2)*exp(-x^2/2) = 1.
RGV
8. Jul 20, 2011
hunt_mat
let's check this, take the first equation, the integrating factor for this is $-e^{-x^{2}/2}$ and the equation becomes:
$$\frac{d}{dx}\left( -e^{-x^{2}/2}f(x) \right) =-e^{-x^{2}/2}g(x)$$
Then to isolate g(x), multiply by $-e^{x^{2}/2}$ to get:
$$-e^{x^{2}/2} \frac{d}{dx}\left( -e^{-x^{2}/2}f(x) \right) =g(x)$$
So there is a difference in signs as Ray said.
9. Jul 20, 2011
Damir1899
The problem is related to the Hermite polynomials and operators of creation and annihilation. The creation operator:
$a^{\dagger}\equiv \frac{1}{\sqrt{2}}\left(x- \frac{d}{dx}\right)$
I played with the Rodrigues formula for Hermite poyinomials (for n=1?):
$H_n (x)=(-1)^n e^{x^2} \frac{d^n}{dx^n}e^{-x^2}$
and
$\psi_n(x)=\frac{1}{\sqrt{2^n n!\sqrt{\pi}}}e^{-\frac{x^2}{2}}H_n(x)$
but run into some problems.
Last edited: Jul 20, 2011
|
2018-05-22 02:56:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.821985125541687, "perplexity": 2385.854194666087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864622.33/warc/CC-MAIN-20180522014949-20180522034949-00317.warc.gz"}
|
https://planetmath.org/PerfectRuler
|
# perfect ruler
A perfect ruler of length $n$ is a ruler with a subset of the integer markings $\{0,a_{2},\ldots,n\}\subset\{0,1,2,\ldots,n\}$ that appear on a regular ruler. The defining criterion of this subset is that there exists an $m$ such that any positive integer $k\leq m$ can be expresses uniquely as a difference $k=a_{i}-a_{j}$ for some $i,j$. This is referred to as an $m$-perfect ruler.
A 4-perfect ruler of length $7$ is given by $\{0,1,3,7\}$. To verify this, we need to show that every number $1,2,\ldots,4$ can be expressed as a difference of two numbers in the above set:
$\displaystyle 1$ $\displaystyle=1-0$ $\displaystyle 2$ $\displaystyle=3-1$ $\displaystyle 3$ $\displaystyle=3-0$ $\displaystyle 4$ $\displaystyle=7-3$
An optimal perfect ruler is one where for a fixed value of $n$ the value of $a_{n}$ is minimized.
Title perfect ruler PerfectRuler 2013-03-22 12:14:22 2013-03-22 12:14:22 mathcam (2727) mathcam (2727) 12 mathcam (2727) Definition msc 03E02 msc 05A17 Golomb ruler
|
2020-12-02 22:45:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 20, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172195792198181, "perplexity": 361.9094992703494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141716970.77/warc/CC-MAIN-20201202205758-20201202235758-00226.warc.gz"}
|
http://www.cosmostat.org/publications
|
# Publications
## Sparse estimation of model-based diffuse thermal dust emission
Authors: M.O. Irfan, J.Bobin Journal: MNRAS Year: 2017 Download: ADS | arXiv Abstract Component separation for the Planck HFI data is primarily concerned with the estimation of thermal dust emission,
## Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation)
Authors: R. Massey, H. Hoekstra, T. Kitching, ..., S. Pires et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract The first half of this paper explores the origin of systematic
## A PCA-based automated finder for galaxy-scale strong lenses
Authors: R. Joseph, F. Courbin, R. B. Metcalf, ...., S.Pires, et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract We present an algorithm using principal component analysis (PCA) to
## Sparsely sampling the sky: Regular vs. random sampling
Authors: P. Paykari, S. Pires, J.-L. Starck, A.H. Jaffe Journal: Astronomy & Astrophysics Year: 2009 Download: ADS | arXiv Abstract Weak gravitational lensing provides a unique way of mapping directly the dark
## Dealing with missing data: An inpainting application to the MICROSCOPE space mission
Authors: B. Joël, S. Pires, Q. Baghi, P. Touboul, G. Metris Journal: Physical Review D Year: 2015 Download: ADS | arXiv Abstract Missing data are a common problem in
• ### Sparse estimation of model-based diffuse thermal dust emission
Authors: M.O. Irfan, J.Bobin Journal: MNRAS Year: 2017 Download: ADS | arXiv Abstract Component separation for the Planck HFI data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely …
• ### Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation)
Authors: R. Massey, H. Hoekstra, T. Kitching, ..., S. Pires et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract The first half of this paper explores the origin of systematic biases in the measurement of weak gravitational lensing. Compared to previous work, we expand the investigation of point spread function …
• ### A PCA-based automated finder for galaxy-scale strong lenses
Authors: R. Joseph, F. Courbin, R. B. Metcalf, ...., S.Pires, et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract We present an algorithm using principal component analysis (PCA) to subtract galaxies from imaging data and also two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual image. …
• ### Sparsely sampling the sky: Regular vs. random sampling
Authors: P. Paykari, S. Pires, J.-L. Starck, A.H. Jaffe Journal: Astronomy & Astrophysics Year: 2009 Download: ADS | arXiv Abstract Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field to constrain …
• ### Dealing with missing data: An inpainting application to the MICROSCOPE space mission
Authors: B. Joël, S. Pires, Q. Baghi, P. Touboul, G. Metris Journal: Physical Review D Year: 2015 Download: ADS | arXiv Abstract Missing data are a common problem in experimental and observational physics. They can be caused by various sources, either an instrument's saturation, or a contamination from an external event, …
• ### Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data
Authors: S. Pires, B. Joël, Q. Baghi, P. Touboul, G. Metris Journal: Physical Review D Year: 2016 Download: ADS | arXiv Abstract The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a 10-15 precision. Reaching this performance requires an accurate and robust …
• ### High Resolution Weak Lensing Mass-Mapping Combining Shear and Flexion
Authors: F. Lanusse, J.-L. Starck, A. Leonard, S. Pires Journal: A&A Year: 2016 Download: ADS | arXiv Abstract Aims: We propose a new mass mapping algorithm, specifically designed to recover small-scale information from a combination of gravitational shear and flexion. Including flexion allows us to supplement the shear on small scales in …
• ### Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing
Authors: DES Collaboration Journal: Year: 08/2017 Download: ADS| Arxiv Abstract We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: …
• ### Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map
Authors: C. Chang, A. Pujol, B. Mawdsley et al. Journal: Year: 08/2017 Download: ADS| Arxiv Abstract We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous …
• ### Sparse reconstruction of the merging A520 cluster system
Sparse reconstruction of the merging A520 cluster system Authors: A. Peel, F. Lanusse, J.-L. Starck Journal: submitted to ApJ Year: 08/2017 Download: ADS| Arxiv Abstract Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events …
• ### Shear measurement bias: dependencies on methods, simulation parameters and measured parameters
Authors: A. Pujol, F. Sureau, J. Bobin et al. Journal: A&A Year: 06/2017 Download: ADS| Arxiv Abstract We present a study of the dependencies of shear and ellipticity bias on simulation (input) and measured (output) parameters, noise, PSF anisotropy, pixel size and the model bias coming from two different and …
• ### Unsupervised feature learning for galaxy SEDs with denoising autoencoders
Authors: Frontera-Pons, J., Sureau, F., Bobin, J. and Le Floc'h E. Journal: Astronomy & Astrophysics Year: 2017 Download: ADS | arXiv Abstract With the increasing number of deep multi-wavelength galaxy surveys, the spectral energy distribution (SED) of galaxies has become an invaluable tool for studying the formation of their structures …
• ### Quantifying systematics from the shear inversion on weak-lensing peak counts
Authors: C. Lin, M. Kilbinger Journal: Submitted to A&A letters Year: 2017 Download: ADS | arXiv Abstract Weak-lensing (WL) peak counts provide a straightforward way to constrain cosmology, and results have been shown promising. However, the importance of understanding and dealing with systematics increases as data quality reaches an unprecedented …
• ### PSF field learning based on Optimal Transport Distances
Authors: F. Ngolè Mboula, J-L. Starck Journal: arXiv Year: 2017 Download: ADS | arXiv Abstract Context: in astronomy, observing large fractions of the sky within a reasonable amount of time implies using large field-of-view (fov) optical instruments that typically have a spatially varying Point Spread Function (PSF). Depending on …
• ### Joint Multichannel Deconvolution and Blind Source Separation
Authors: M. Jiang, J. Bobin, J-L. Starck Journal: SIAM J. Imaging Sci. Year: 2017 Download: ADS | arXiv | SIIMS Abstract Blind Source Separation (BSS) is a challenging matrix factorization problem that plays a central role in multichannel imaging science. In a large number of applications, such as astrophysics, current …
• ### Space variant deconvolution of galaxy survey images
Authors: S. Farrens, J-L. Starck, F. Ngolè Mboula Journal: A&A Year: 2017 Download: ADS | arXiv Abstract Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the …
• ### Linear and non-linear Modified Gravity forecasts with future surveys
Authors: S. Casas, M. Kunz, M. Martinelli, V. Pettorino Journal: Physics Letters B Year: 2017 Download: ADS | arXiv Abstract Modified Gravity theories generally affect the Poisson equation and the gravitational slip (effective anisotropic stress) in an observable way, that can be parameterized by two generic functions (η and μ) …
• ### Weak-lensing projections
Authors: M. Kilbinger, C. Heymans et al. Journal: submitted to MNRAS Year: 2017 Download: ADS | arXiv Abstract We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second- order Limber equations for the projection. We find that the …
• ### nIFTy Cosmology: the clustering consistency of galaxy formation models
Authors: A. Pujol, R. A. Skibba, E. Gaztañaga et al. Journal: MNRAS Year: 02/2017 Download: ADS| Arxiv Abstract We present a clustering comparison of 12 galaxy formation models (including Semi-Analytic Models (SAMs) and Halo Occupation Distribution (HOD) models) all run on halo catalogues and merger trees extracted from a single …
• ### What determines large scale galaxy clustering: halo mass or local density?
Authors: A. Pujol, K. Hoffmann, N. Jiménez et al. Journal: A&A Year: 02/2017 Download: ADS| Arxiv Abstract Using a dark matter simulation we show how halo bias is determined by local density and not by halo mass. This is not totally surprising as, according to the peak-background split model, local …
• ### Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey
Authors: A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger Journal: A&A Year: 2017 Download: ADS | arXiv Abstract Peak statistics in weak lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complement to …
• ### A new method to measure galaxy bias by combining the density and weak lensing fields
Authors: A. Pujol, C. Chang, E. Gaztañaga et al. Journal: MNRAS Year: 10/2016 Download: ADS| Arxiv Abstract We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et …
• ### Blind separation of sparse sources in the presence of outliers
Authors: C.Chenot, J.Bobin Journal: Signal Processing, Elsevier Year: 2016 Download: Elsevier / Preprint Abstract Blind Source Separation (BSS) plays a key role to analyze multichannel data since it aims at recovering unknown underlying elementary sources from observed linear mixtures in an unsupervised way. In a large number of …
• ### A new model to predict weak-lensing peak counts III. Filtering technique comparisons
Authors: C. Lin, M. Kilbinger, S. Pires Journal: A&A Year: 2016 Download: ADS | arXiv Abstract This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. …
• ### Constraint matrix factorization for space variant PSFs field restoration
Authors: F. Ngolè Mboula, J-L. Starck, K. Okumura, J. Amiaux, P. Hudelot Journal: IOP Inverse Problems Year: 2016 Download: ADS | arXiv Abstract Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSFs are given by …
• ### The Dark Energy Survey and operations: years 1 to 3
Authors: H. T. Diehl, E. Neilsen, R. Gruendl et al. Journal: Proceedings of the SPIE Year: 07/2016 Download: ADS Abstract The Dark Energy Survey (DES) is an operating optical survey aimed at understanding the accelerating expansion of the universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon …
• ### Galaxy bias from the Dark Energy Survey Science Verification data: combining galaxy density maps and weak lensing maps
Authors: C. Chang, A. Pujol, E. Gaztañaga et al. Journal: MNRAS Year: 07/2016 Download: ADS| Arxiv Abstract We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ˜116 deg2 area of the Dark Energy …
• ### Clustering-based redshift estimation: application to VIPERS/CFHTLS
Authors: V. Scottez, Y. Mellier, B. Granett, T. Moutard, M. Kilbinger et al. Journal: MNRAS Year: 2016 Download: ADS | arXiv Abstract We explore the accuracy of the clustering-based redshift estimation proposed by Ménard et al. when applied to VIMOS Public Extragalactic Redshift Survey (VIPERS) and Canada-France-Hawaii Telescope Legacy Survey …
• ### Variational Bayes Group Sparse Time-Adaptive Parameter Estimation With Either Known or Unknown Sparsity Pattern
Authors: K. E. Themelis, A. A. Rontogiannis, K. D. Koutroumbas Journal: IEEE Transactions on Signal Processing Year: 2016 Download: ieeexplore Abstract In this paper, we study the problem of time-adaptive group sparse signal estimation from a Bayesian viewpoint. We propose two online variational Bayes schemes that are …
• ### Simultaneously sparse and low-rank abundance matrix estimation for hyperspectral image unmixing
Authors: P. V. Giampouras, K. E. Themelis, A. A. Rontogiannis, K. D. Koutroumbas Journal: IEEE Transactions on Geoscience and Remote Sensing Year: 2016 Download: ieeexplore Abstract In a plethora of applications dealing with inverse problems, e.g., image processing, social networks, compressive sensing, and biological data processing, the …
• ### The XXL Survey
First round of papers published The XXL Survey is a deep X-ray survey observed with the XMM satellite, covering two fields of 25 deg2 each. Observations in many other wavelength, from radio to IR and optical, in both imaging and spectroscopy, complement the survey. The main science case is cosmology …
• ### CMB reconstruction from the WMAP and Planck PR2 data
Authors: J. Bobin, F. Sureau and J. -L. Starck Journal: A&A Year: 2015 Download: ADS | arXiv Abstract In this article, we describe a new estimate of the Cosmic Microwave Background (CMB) intensity map reconstructed by a joint analysis of the full Planck 2015 data (PR2) and WMAP nine-years. It …
• ### A new model to predict weak-lensing peak counts II. Parameter constraint strategies
Authors: C. Lin, M. Kilbinger Journal: A&A Year: 2015 Download: ADS | arXiv Abstract Peak counts have been shown to be an excellent tool to extract the non-Gaussian part of the weak lensing signal. Recently, we developped a fast stochastic forward model to predict weak-lensing peak counts. Our model is able …
• ### nIFTy cosmology: comparison of galaxy formation models
Authors: A. Knebe, F. R. Pearce, P. A. Thomas et al. Journal: MNRAS Year: 08/2015 Download: ADS|Arxiv Abstract We present a comparison of 14 galaxy formation models: 12 different semi-analytical models and 2 halo occupation distribution models for galaxy formation based upon the same cosmological simulation and merger tree information …
• ### Robust Sparse Blind Source Separation
Authors: C.Chenot, J.Bobin and J. Rapin Journal: IEEE SPL Year: Nov. 2015 Download: IEEE Arxiv Abstract Blind source separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm …
• ### CFHTLenS: A Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics
Authors: P. Simon, ... , M. Kilbinger, et al. Journal: MNRAS Year: 2015 Download: ADS | arXiv Abstract We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopted a …
• ### A new model to predict weak-lensing peak counts I. Comparison with N-body Simulations
Authors: C. Lin, M. Kilbinger Journal: A&A Year: 2015 Download: ADS | arXiv Abstract Weak-lensing peak counts has been shown to be a powerful tool for cosmology. It provides non-Gaussian information of large scale structures, complementary to second order statistics. We propose a new flexible method to predict weak lensing peak …
• ### LOFAR Sparse Image Reconstruction
Authors: H. Garsden, J. N. Girard, J. L. Starck Journal: A&A Year: 2015 Download: ADS | arXiv Abstract Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky …
• ### Super-resolution method using sparse regularization for point-spread function recovery
Authors: F. Ngolè Mboula, J-L. Starck, S. Ronayette, K. Okumura, J. Amiaux Journal: A&A Year: 2015 Download: ADS | arXiv Abstract In large-scale spatial surveys, such as the forthcoming ESA Euclid mission, images may be undersampled due to the optical sensors sizes. Therefore, one may consider using a super-resolution …
• ### The C-Band All Sky Survey: Separation of Diffuse Galactic Emissions at 5 GHz
Authors: M.O. Irfan, C. Dickinson, R.D. Davies, et al. Journal: MNRAS Year: 2015 Download: ADS | arXiv Abstract We present an analysis of the diffuse emission at 5 GHz in the first quadrant of the Galactic plane using two months of preliminary intensity data taken with the C-Band …
• ### A new model to predict weak-lensing peak counts I. Comparison with N-body Simulations
Authors: C.-A. Lin, M. Kilbinger. Journal: A&A 576, A24 Year: 2015 Download: ADS | arXiv Abstract Weak-lensing peak counts has been shown to be a powerful tool for cosmology. It provides non-Gaussian information of large scale structures, complementary to second order statistics. We propose a new flexible method to predict …
• ### SNIa detection in the SNLS photometric analysis using Morphological Component Analysis
Authors: A. Möller, V. Ruhlmann-Kleider, F. Lanusse, J. Neveu, N. Palanque-Delabrouille, J.-L. Starck Journal: JCAP Year: 2015 Download: ADS | arXiv Abstract Detection of supernovae and, more generally, of transient events in large surveys can provide numerous false detections.In the case of a deferred processing of survey images, this implies …
• ### Effect of inhomogeneities on high precision measurements of cosmological distances
Authors: A. Peel, M. A. Troxel, M. Ishak Journal: PRD Year: 2014 Download: ADS | arXiv Abstract We study effects of inhomogeneities on distance measures in an exact relativistic Swiss-cheese model of the Universe, focusing on the distance modulus. The model has Λ CDM background dynamics, and the "holes" are …
• ### PRISM: Recovery of the primordial spectrum from Planck data
Authors: F. Lanusse, P. Paykari, J. -L. Starck et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract Aims. The primordial power spectrum describes the initial perturbations that seeded the large-scale structure we observe today. It provides an indirect probe of inflation or other structure-formation mechanisms. In this letter, …
• ### Gap interpolation by inpainting methods : Application to Ground and Space-based Asteroseismic data
Authors: S. Pires, S. Mathur, R. A. Garcia, J. Ballot, D. Stello, K. Sato Journal: Astronomy & Astrophysics Year: 2014 Download: ADS | arXiv Abstract In asteroseismology, the observed time series often suffers from incomplete time coverage due to gaps. The presence of periodic gaps may generate spurious peaks in …
• ### Friction in Gravitational Waves: a test for early-time modified gravity
Authors: Pettorino, V., Amendola, L. Journal: Physics Letters B Year: 2015 Download: ADS | arXiv Abstract Modified gravity theories predict in general a non standard equation for the propagation of gravitational waves. Here we discuss the impact of modified friction and speed of tensor modes on cosmic microwave polarization B modes. …
• ### The Dark Energy Survey and operations: Year 1
Authors: H. T. Diehl, T. M. C. Abbott, J. Annis et al. Journal: Proceedings of the SPIE Year: 08/2014 Download: ADS Abstract The Dark Energy Survey (DES) is a next generation optical survey aimed at understanding the accelerating expansion of the universe using four complementary methods: weak gravitational lensing, …
• ### Are the halo occupation predictions consistent with large-scale galaxy clustering?
Authors: A. Pujol and E. Gaztañaga Journal: MNRAS Year: 08/2014 Download: ADS|Arxiv Abstract We study how well we can reconstruct the two-point clustering of galaxies on linear scales, as a function of mass and luminosity, using the halo occupation distribution (HOD) in several semi-analytical models (SAMs) of galaxy formation …
• ### 3D Cosmic Shear: Cosmology from CFHTLenS
Authors: T. D. Kitching, ... , M. Kilbinger, et al. Journal: MNRAS Year: 2014 Download: ADS | arXiv Abstract This paper presents the first application of 3D cosmic shear to a wide-field weak lensing survey. 3D cosmic shear is a technique that analyses weak lensing in three dimensions using a spherical harmonic …
• ### NMF with Sparse Regularizations in Transformed Domains
Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck Journal: SIAM Year: 2014 Download: ADS | arXiv Abstract Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the …
• ### CFHTLenS: Cosmological constraints from a combination of cosmic shear two-point and three-point correlations
Authors: L. Fu, M. Kilbinger, T. Erben, C. Heymans, et al. Journal: MNRAS Year: 2014 Download: ADS | arXiv Abstract Higher-order, non-Gaussian aspects of the large-scale structure carry valuable information on structure formation and cosmology, which is complementary to second-order statistics. In this work we measure second- and third-order weak-lensing aperture-mass moments from CFHTLenS and …
• ### A Variational Bayes Framework for Sparse Adaptive Estimation
Authors: K. E. Themelis, A. A. Rontogiannis, K. D. Koutroumbas Journal: IEEE Transactions on Signal Processing Year: 2014 Download: ieeexplore Abstract Recently, a number of mostly l1-norm regularized least-squares-type deterministic algorithms have been proposed to address the problem of sparse adaptive signal estimation and system identification. From …
• ### Impact on asteroseismic analyses of regular gaps in Kepler data
Authors: R.A. Garcıa, S. Mathur, S. Pires, et al. Journal: Astronomy & Astrophysics Year: 2014 Download: ADS | arXiv Abstract The NASA Kepler mission has observed more than 190,000 stars in the constellations of Cygnus and Lyra. Around 4 years of almost continuous ultra high-precision photometry have been obtained reaching …
• ### Sparse point-source removal for full-sky CMB experiments: application to WMAP 9-year data
Authors: F. C. Sureau, J. -L. Starck, J. Bobin et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract Missions such as WMAP or Planck measure full-sky fluctuations of the cosmic microwave background and foregrounds, among which bright compact source emissions cover a significant fraction of the sky. To …
• ### Planck CMB Anomalies: Astrophysical and Cosmological Secondary Effects and the Curse of Masking
Authors: A. Rassat, J. -L. Starck , P. Paykari et al. Journal: JCAP Year: 2014 Download: ADS | arXiv Abstract Large-scale anomalies have been reported in CMB data with both WMAP and Planck data. These could be due to foreground residuals and or systematic effects, though their confirmation with Planck data …
• ### GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity
Authors: A. Leonard, F. Lanusse, J.-L. Starck Journal: MNRAS Year: 2014 Download: ADS | arXiv Abstract We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one …
• ### Subhaloes gone Notts: the clustering properties of subhaloes
Authors: A. Pujol, E. Gaztañaga, C. Giocoli et al. Journal: MNRAS Year: 03/2014 Download: ADS|Arxiv Abstract We present a study of the substructure finder dependence of subhalo clustering in the Aquarius Simulation. We run 11 different subhalo finders on the haloes of the Aquarius Simulation and study their differences in the …
• ### PRISM: Sparse Recovery of the Primordial Power Spectrum
Authors: P. Paykari, F. Lanusse, J. -L. Starck et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract Aims. The primordial power spectrum describes the initial perturbations in the Universe which eventually grew into the large-scale structure we observe today, and thereby provides an indirect probe of inflation or …
• ### Weak Lensing Galaxy Cluster Field Reconstruction
Authors: E. Jullo, S.Pires, M. Jauzac, J.-P. Kneib Journal: MNRAS Year: 2014 Download: ADS | arXiv Abstract In this paper, we compare three methods to reconstruct galaxy cluster density fields with weak lensing data. The first method called FLens integrates an inpainting concept to invert the shear field with possible gaps, and …
• ### Joint Planck and WMAP CMB Map Reconstruction
Authors: J. Bobin, F. Sureau, J. -L. Starck et al. Journal: A&A Year: 2014 Download: ADS | arXiv Abstract We present a novel estimate of the cosmological microwave background (CMB) map by combining the two latest full-sky microwave surveys: WMAP nine-year and Planck PR1. The joint processing benefits from a …
• ### Low-dimensional signal-strength fingerprint-based positioning in wireless LANs
Authors: D. Milioris, G. Tzagkarakis, A. Papakonstantinou, M. Papadopouli, P. Tsakalides Journal: Ad Hoc Networks Year: 2011 Download: Science Direct Abstract Accurate location awareness is of paramount importance in most ubiquitous and pervasive computing applications. Numerous solutions for indoor localization based on IEEE802.11, bluetooth, ultrasonic and vision technologies have …
• ### The C-Band All-Sky Survey (C-BASS): design and implementation of the northern receiver
Authors: O. G. King, Michael E. Jones, E. J. Blackhurst, et al. Journal: MNRAS Year: 2014 Download: ADS | arXiv Abstract The C-Band All-Sky Survey (C-BASS) is a project to map the full sky in total intensity and linear polarization at 5 GHz. The northern component of the …
• ### Defining a weak lensing experiment in space
Authors: M. Cropper, H. Hoekstra, T. Kitching, ..., S. Pires et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract This paper describes the definition of a typical next-generation space-based weak gravitational lensing experiment. We first adopt a set of top-level science requirements from the literature, based on the scale and …
• ### Sparse and Non-Negative BSS for Noisy Data
Authors: J. Rapin, J. Bobin, A. Larue, J.-L. Starck Journal: IEEE Year: 2013 Download: ADS | arXiv Abstract Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the …
• ### Darth Fader: Using wavelets to obtain accurate redshifts of spectra at very low signal-to-noise
Authors: D. P. Machado, A. Leonard, J.-L. Starck, F. B. Abdalla, S. Jouvel Journal: A&A Year: 2013 Download: ADS | arXiv Abstract We present the DARTH FADER algorithm, a new wavelet-based method for estimating redshifts of galaxy spectra in spectral surveys that is particularly adept in the very low SNR …
• ### Removal of two large-scale cosmic microwave background anomalies after subtraction of the integrated Sachs-Wolfe effect
Authors: A. Rassat, J. -L. Starck and F. -X. Dupe Journal: A&A Year: 2013 Download: ADS | arXiv Abstract Though debated, the existence of claimed large-scale anomalies in the CMB is not totally dismissed. In parallel to the debate over their statistical significance, recent work focussed on masks and secondary …
• ### CFHTLenS tomographic weak lensing: Quantifying accurate redshift distributions
Authors: J. Benjamin, L. Van Waerbeke, C. Heymans, M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract The Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) comprises deep multi-colour (u*g'r'i'z') photometry spanning 154 square degrees, with accurate photometric redshifts and shape measurements. We demonstrate that the redshift probability distribution function …
• ### WMAP 9-year CMB estimation using sparsity
Authors: J. Bobin, F. Sureau , P. Paykari et al. Journal: A&A Year: 2013 Download: ADS | arXiv Abstract Recovering the cosmic microwave background (CMB) from WMAP data requires that Galactic foreground emissions are accurately separated out. Most component separation techniques rely on second-order statistics such as internal linear combination …
• ### On Preferred Axes in WMAP Cosmic Microwave Background Data after Subtraction of the Integrated Sachs-Wolfe Effect
Authors: A. Rassat and J. -L. Starck Journal: A&A Year: 2013 Download: ADS | arXiv Abstract There is currently a debate over the existence of claimed statistical anomalies in the cosmic microwave background (CMB), recently confirmed in Planck data. Recent work has focussed on methods for measuring statistical significance, on …
• ### CFHTLenS tomographic weak lensing cosmological parameter constraints: Mitigating the impact of intrinsic galaxy alignments
Authors: C. Heymans, E. Grocutt, A. Heavens, M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract We present a finely-binned tomographic weak lensing analysis of the Canada-France-Hawaii Telescope Lensing Survey, CFHTLenS, mitigating contamination to the signal from the presence of intrinsic galaxy alignments via the simultaneous fit of a cosmological model …
• ### CFHTLenS: Testing the Laws of Gravity with Tomographic Weak Lensing and Redshift Space Distortions
Authors: F. Simpson, C. Heymans, D. Parkinson, C. Blake, M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract Dark energy may be the first sign of new fundamental physics in the Universe, taking either a physical form or revealing a correction to Einsteinian gravity. Weak gravitational lensing and galaxy peculiar …
• ### The Scale of the Problem : Recovering Images of Reionization with GMCA
Authors: E. Chapman, F. B. Abdalla, J. Bobin, J.-L. Starck Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric …
• ### CFHTLenS: Combined probe cosmological model comparison using 2D weak gravitational lensing
Authors: M. Kilbinger, et al. Journal: MNRAS Year: 2013 Download: ADS | arXiv Abstract We present cosmological constraints from 2D weak gravitational lensing by the large-scale structure in the Canada-France Hawaii Telescope Lensing Survey (CFHTLenS) which spans 154 square degrees in five optical bands. Using accurate photometric redshifts and measured shapes …
• ### Effect of model-dependent covariance matrix for studying Baryon Acoustic Oscillations
Authors: A. Labatie, J.-L. Starck, M. Lachièze-Rey Journal: ApJ Year: 2012 Download: ADS | arXiv Abstract Large-scale structures in the Universe are a powerful tool to test cosmological models and constrain cosmological parameters. A particular feature of interest comes from Baryon Acoustic Oscillations (BAOs), which are sound waves traveling in the …
• ### Active Range Imaging via Random Gating
Authors: G. Tsagkatakis, A. Woiselle, G. Tzagkarakis, M. Bousquet, J.-L. Starck and P. Tsakalides Journal: SPIE Year: 2012 Download: SPIE Abstract Range Imaging (RI) has sparked an enthusiastic interest recently due to the numerous applications that can benefit from the presence 3D data. One of the most successful techniques …
• ### Compressive video classification in a low-dimensional manifold with learned distance metric
Authors: G. Tzagkarakis, G. Tsagkatakis, J.-L. Starck, P. Tsakalides Journal: EUSIPCO Year: 2012 Download: IEEE Abstract In this paper, we introduce an architecture for addressing the problem of video classification based on a set of compressed features, without the need of accessing the original full-resolution video data. In particular, …
• ### On the Unmixing of MeX/OMEGA Hyperspectral Data
Authors: K. E. Themelis, F. Schmidt, O. Sykioti, A. A. Rontogiannis, K. D. Koutroumbas, I. A. Daglis Journal: Planetary and Space Science Year: 2012 Download: elsevier Abstract This paper presents a comparative study of three different types of estimators used for supervised linear unmixing of two MEx/OMEGA …
• ### Sparse component separation for accurate CMB map estimation
Authors: J. Bobin, J. -L. Starck, F. Sureau, S. Basak Journal: A&A Year: 2012 Download: ADS | arXiv Abstract The Cosmological Microwave Background (CMB) is of premier importance for the cosmologists to study the birth of our universe. Unfortunately, most CMB experiments such as COBE, WMAP or Planck do not …
• ### Wavelet analysis of baryon acoustic structures in the galaxy distribution
Authors: P. Arnalte-Mur, A. Labatie, N. Clerc, V. J. Martínez, J.-L. Starck, et al. Journal: A&A Year: 2012 Download: ADS | arXiv Abstract Baryon Acoustic Oscillations (BAO) are a feature imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can …
• ### A Novel Hierarchical Bayesian Approach for Sparse Semi-Supervised Hyperspectral Unmixing
Authors: K. E. Themelis, A. A. Rontogiannis, K. D. Koutroumbas Journal: IEEE Transactions on Signal Processing Year: 2012 Download: ieeexplore Abstract In this paper the problem of semisupervised hyperspectral unmixing is considered. More specifically, the unmixing process is formulated as a linear regression problem, where the abundance’s …
• ### True CMB Power Spectrum Estimation
Authors: P. Paykari, J. -L. Starck and M. J. Fadili Journal: Astronomy and Astrophysics Year: 2012 Download: ADS | arXiv Abstract The cosmic microwave background (CMB) power spectrum is a powerful cosmological probe as it entails almost all the statistical information of the CMB perturbations. Having access to only one …
• ### Compressive video classification for decision systems with limited resources
Authors: G. Tzagkarakis, P. Charalampidis, G. Tsagkatakis, J.-L. Starck, P. Tsakalides Journal: PCS Year: 2012 Download: IEEE Abstract In this paper, we address the problem of video classification from a set of compressed features. In particular, the properties of linear random projections in the framework of compressive sensing are …
• ### Fast Calculation of the Weak Lensing Aperture Mass Statistic
Authors: A. Leonard, S. Pires, J.-L. Starck Journal: MNRAS Year: 2012 Download: ADS | arXiv Abstract The aperture mass statistic is a common tool used in weak lensing studies. By convolving lensing maps with a filter function of a specific scale, chosen to be larger than the scale on which the …
• ### Galaxy clustering in the CFHTLS-Wide: the changing relationship between galaxies and haloes since z ~ 1.2
Authors: J. Coupon, M. Kilbinger, H. J. McCracken, et al. Journal: A&A Year: 2012 Download: ADS | arXiv Abstract We present a detailed investigation of the changing relationship between galaxies and the dark matter haloes they inhabit from z~1.2 to the present day. We do this by comparing precise galaxy …
• ### Spherical 3D Isotropic Wavelets
Authors: F. Lanusse, A. Rassat, J.-L. Starck Journal: A&A Year: 2012 Download: ADS | arXiv Abstract Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis …
• ### Wavelet Helmholtz decomposition for weak lensing mass map reconstruction
Authors: E. Deriaz, J.-L. Starck, S.Pires Journal: A&A Year: 2012 Download: ADS | arXiv Abstract To derive the convergence field from the gravitational shear (gamma) of the background galaxy images, the classical methods require a convolution of the shear to be performed over the entire sky, usually expressed thanks to the …
• ### Cosmological constraints from the capture of non-Gaussianity in Weak Lensing data
Authors: S. Pires, A. Leonard, J.-L. Starck Journal: MNRAS Year: 2012 Download: ADS | arXiv Abstract Weak gravitational lensing has become a common tool to constrain the cosmological model. The majority of the methods to derive constraints on cosmological parameters use second-order statistics of the cosmic shear. Despite their success, second-order …
• ### A Compressed Sensing Approach to 3D Weak Lensing
Authors: A. Leonard, F.-X. Dupé, J.-L. Starck Journal: A&A Year: 2012 Download: ADS | arXiv Abstract (Abridged) Weak gravitational lensing is an ideal probe of the dark universe. In recent years, several linear methods have been developed to reconstruct the density distribution in the Universe in three dimensions, making use of photometric …
• ### Design of a Compressive Remote Imaging System Compensating a Highly Lightweight Encoding with a Refined Decoding Scheme
Authors: G. Tzagkarakis, A. Woiselle, P. Tsakalides, J.-L. Starck Journal: VISAPP Year: 2012 Download: Sitepress Abstract Lightweight remote imaging systems have been increasingly used in surveillance and reconnaissance. Nevertheless, the limited power, processing and bandwidth resources is a major issue for the existing solutions, not well addressed by the …
• ### CMB map restoration
Authors: J. Bobin, J. -L. Starck, F. Sureau and J. Fadili Journal: A&A Year: 2012 Download: ADS | arXiv Abstract Estimating the cosmological microwave background is of utmost importance for cosmology. However, its estimation from full-sky surveys such as WMAP or more recently Planck is challenging: CMB maps are generally …
• ### Sparse Solution of Underdetermined Systems of Linear Equations by Stagewise Orthogonal Matching Pursuit
Authors: D. L. Donoho, Y. Tsaig, I. Drori, J.-L. Starck Journal: IEEE Year: 2012 Download: IEEE Abstract Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with “typical”/“random” Φ, a good approximation to the sparsest solution is obtained by …
• ### Detecting Baryon Acoustic Oscillations
Authors: A. Labatie, J.-L. Starck, M. Lachièze-Rey Journal: ApJ Year: 2012 Download: ADS | arXiv Abstract Baryon Acoustic Oscillations are a feature imprinted in the galaxy distribution by acoustic waves traveling in the plasma of the early universe. Their detection at the expected scale in large-scale structures strongly supports current cosmological …
• ### Indoor positioning in Wireless LANS using compressive sensing signal-strength fingerprints
Authors: D. Milioris, G. Tzagkarakis, P. Jacquet Journal: EUSIPCO Year: 2011 Download: IEEE Abstract Accurate indoor localization is a significant task for many ubiquitous and pervasive computing applications, with numerous solutions based on IEEE802.11, Bluetooth, ultrasound and infrared technologies being proposed. The inherent sparsity present in the problem of location …
• ### Joint Sparse Signal Ensemble Reconstruction in a WSN Using Decentralized Bayesian Matching Pursuit
Authors: G. Tzagkarakis, J.-L. Starck, P. Tsakalides Journal: EUSIPCO Year: 2011 Download: IEEE Abstract Wireless networks comprised of low-cost sensory devices have been increasingly used in surveillance both at the civilian and military levels. Limited power, processing, and bandwidth resources is a major issue for abandoned sensors, which should be addressed to …
• ### Measuring the Integrated Sachs-Wolfe Effect
Authors: F. -X. Dupe, A. Rassat, J. -L. Starck and M. J. Fadili Journal: A&A Year: 2011 Download: ADS | arXiv Abstract One of the main challenges of modern cosmology is to understand the nature of dark energy. The Integrated Sachs-Wolfe (ISW) effect is sensitive to dark energy and presents …
• ### Feasibility and performances of compressed-sensing and sparse map-making with Herschel/PACS data
Authors: N. Barbey, M. Sauvage, J.-L. Starck, R. Ottensamer, P. Chanial Journal: A&A Year: 2011 Download: ADS | arXiv Abstract The Herschel Space Observatory of ESA was launched in May 2009 and is in operation since. From its distant orbit around L2 it needs to transmit a huge quantity of information through a very …
• ### 3-D Data Denoising and Inpainting with the Low-Redundancy Fast Curvelet Transform
Authors: A. Woiselle, J.-L. Starck, J. Fadili Journal: Journal of Mathematical Imaging and Vision Year: 2010 Download: Springer Abstract In this paper, we first present a new implementation of the 3-D fast curvelet transform, which is nearly 2.5 less redundant than the Curvelab (wrapping-based) implementation as originally proposed in …
• ### Uncertainty in 2-point correlation function estimators and BAO detection in SDSS DR7
Authors: A. Labatie, J-L. Starck, M. Lachièze-Rey, P. Arnalte-Mur Journal: arXiv Year: 2010 Download: ADS | arXiv Abstract We study the uncertainty in different two-point correlation function (2PCF) estimators in currently available galaxy surveys. This is motivated by the active subject of using the baryon acoustic oscillations (BAOs) feature in the …
• ### Hyperspectral BSS Using GMCA With Spatio-Spectral Sparsity Constraints
Authors: Y, Moudden, J. Bobin Journal: IEEE Year: 2014 Download: IEEE Abstract Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this context, it is fundamental that the sources to be estimated …
• ### Stein block thresholding for wavelet-based image deconvolution
Authors: C. Chesneau, J. Fadili, J.-L. Starck Journal: Applied and Computational Harmonic Analysis Year: 2010 Download: Project Euclid Abstract In this paper, we propose a fast image deconvolution algorithm that combines adaptive block thresholding and Vaguelet-Wavelet Decomposition. The approach consists in first denoising the observed image using a wavelet-domain Stein …
• ### Reduced-shear power spectrum
Fitting formulae of the reduced-shear power spectrum for weak lensing Reference Martin Kilbinger, 2010, arXiv:1004.3493 Description We provide fitting formulae for the reduced-shear power-spectrum correction which is third-order in the lensing potential. This correction reaches up to 10% of the total lensing spectrum. Higher-order correction terms are one order of …
• ### 3D curvelet transforms and astronomical data restoration
Authors: A. Woiselle, J.-L. Starck, J. Fadili Journal: Applied and Computational Harmonic Analysis Year: 2010 Download: Science Direct Abstract This paper describes two new 3D curvelet decompositions, which are built in a way similar to the first generation of curvelets (Starck et al., 2002 [35]). The first one, called …
• ### Cosmological model discrimination with weak lensing
Authors: S. Pires, J.-L. Starck, A. Amara, A. Réfrégier, R. Teyssier Journal: Astronomy & Astrophysics Year: 2009 Download: ADS Abstract Weak gravitational lensing provides a unique way of mapping directly the dark matter in the Universe. The majority of lensing analyses use the two-point statistics of the cosmic shear field …
• ### Optimes E-/B-mode decomposition
A new cosmic shear function: Optimised E-/B-mode decomposition on a finite interval Reference Liping Fu, Martin Kilbinger, 2009, arXiv:0907.0795 Description We have introduced a new cosmic shear statistic which decomposes the shear correlation into E- and B-modes on a finite angular interval. The new function is calculated by integrating the …
• ### FASTLens (FAst STatistics for weak Lensing) : Fast method for Weak Lensing Statistics and map making
Authors: S. Pires, J.-L. Starck, A. Amara, R. Teyssier, A. Refregier, J. Fadili Journal: MNRAS Year: 2009 Download: ADS | arXiv Abstract With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever greater precision. However this increased accuracy also places greater demands on the …
• ### Full-Sky Weak Lensing Simulation with 70 Billion Particles
Authors: R. Teyssier, S. Pires, ... , J.-L. Starck et al. Journal: A&A Year: 2009 Download: ADS | arXiv Abstract We have performed a 70 billion dark-matter particles N-body simulation in a 2 h−1 Gpc periodic box, using the concordance, cosmological model as favored by the latest WMAP3 results. We have computed …
• ### Polarized wavelets and curvelets on the sphere
Authors: J. -L. Starck, Y. Moudden, J. Bobin Journal: Astronomy and Astrophysics Year: 2009 Download: ADS | arXiv Abstract The statistics of the temperature anisotropies in the primordial cosmic microwave background radiation field provide a wealth of information for cosmology and for estimating cosmological parameters. An even more acute inference should …
• ### A proximal iteration for deconvolving Poisson noisy images using sparse representations
Authors: F.-X. Dupé, J. Fadili, J.-L. Starck Journal: IEEE Year: 2009 Download: ADS | arXiv Abstract In this paper, we propose a fast image deconvolution algorithm that combines adaptive block thresholding and Vaguelet-Wavelet Decomposition. The approach consists in first denoising the observed image using a wavelet-domain Stein block thresholding, and then inverting …
• ### Compressed Sensing in Astronomy
Authors: J.Bobin, J-L Starck, R. Ottensamer Journal: IEEE Year: 2008 Download: ADS | arXiv Abstract Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: compressed sensing. This theory is a new …
• ### SZ and CMB reconstruction using Generalized Morphological Component Analysis
Authors: J. Bobin, Y. Moudden, J.-L. Starck, J. Fadili, N. Aghanim Journal: Statistical Methodology Year: 2008 Download: ADS | arXiv Abstract Non-negative blind source separation (BSS) has raised interest in various fields of research, as testified by the wide literature on the topic of non-negative matrix factorization (NMF). In this …
• ### Blind Source Separation: the Sparsity Revolution
Authors: J. Bobin, J.-L. Starck, Y. Moudden, J. M. Fadili Journal: AIEP Year: 2008 Download: HAL Abstract Over the last few years, the development of multi-channel sensors motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed as testified by the …
• ### Morphological Component Analysis: An Adaptive Thresholding Strategy
Authors: J. Bobin, J.-L. Starck, J. M. Fadili, Y. Moudden, D. L. Donoho Journal: IEEE Year: 2007 Download: ADS Abstract In a recent paper, a method called morphological component analysis (MCA) has been proposed to separate the texture from the natural part in images. MCA relies on an iterative thresholding algorithm, using a threshold …
• ### Sparsity and Morphological Diversity in Blind Source Separation
Authors: J. Bobin, J.-L. Starck, J. Fadili, Y. Moudden Journal: IEEE Year: 2007 Download: IEEE Abstract Over the last few years, the development of multichannel sensors motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed as testified by the wide literature on the …
• ### Inpainting and zooming using sparse representations
Authors: J. M. Fadili, J.-L. Starck, F. Murtagh Journal: The Computer Journal Year: 2007 Download: HAL Abstract Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation maximization (EM) algorithm for image inpainting and interpolation. …
• ### Dark matter maps reveal cosmic scaffolding
Authors: R. Massey, ... , J.-L. Starck, ..., S. Pires et al. Journal: Nature Year: 2007 Download: ADS | arXiv Abstract Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious "dark matter" component, which does not interact via …
• ### Multi-scale morphology of the galaxy distribution
Authors: E. Saar, V. J. Martinez, J-L. Starck, D. L. Donoho Journal: MNRAS Year: 2007 Download: ADS | arXiv Abstract Many statistical methods have been proposed in the last years for analyzing the spatial distribution of galaxies. Very few of them, however, can handle properly the border effects of complex …
• ### Sunyaev-Zel'dovich clusters reconstruction in multiband bolometer camera surveys
Authors: S. Pires, D. Yvon, Y. Moudden, S. Anthoine, E. Pierpaoli Journal: A&A Year: 2006 Download: ADS | arXiv Abstract We present a new method for the reconstruction of Sunyaev-Zel'dovich (SZ) galaxy clusters in future SZ-survey experiments using multiband bolometer cameras such as Olimpo, APEX, or Planck. Our goal is …
• ### Morphological diversity and source separation
Authors: J. Bobin, Y. Moudden, J.-L. Starck, M. Elad Journal: IEEE Year: 2006 Download: IEEE Abstract This letter describes a new method for blind source separation, adapted to the case of sources having different morphologies. We show that such morphological diversity leads to a new and very efficient separation method, …
• ### Curvelet analysis of asteroseismic data
Authors: P. Lambert, S. Pires, J. Ballot, R.A. Garcia, J.-L. Starck, S. Turck-Chièze Journal: A&A Year: 2006 Download: ADS | arXiv Abstract Context. The detection and identification of oscillation modes (in terms of their l, m, and successive n) is a great challenge for present and future asteroseismic space missions. …
• ### Weak lensing mass reconstruction using wavelets
Authors: J.-L. Starck, S. Pires, Alexandre Réfrégier Journal: A&A Year: 2006 Download: ADS | arXiv Abstract This paper presents a new method for the reconstruction of weak lensing mass maps. It uses the multiscale entropy concept, which is based on wavelets, and the False Discovery Rate which allows us to …
• ### Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)
Authors: M. Elad, J.-L. Starck, P. Querre, D.L. Donoho Journal: ACHA Year: 2005 Download: Science Direct Abstract This paper describes a novel inpainting algorithm that is capable of filling in holes in overlapping texture and cartoon image layers. This algorithm is a direct extension of a recently developed sparse-representation-based …
• ### Image decomposition via the combination of sparse representations and a variational approach
Authors: J.-L. Starck, M. Elad , D.L. Donoho Journal: IEEE Year: 2005 Download: IEEE Abstract This paper describes a novel inpainting algorithm that is capable of filling in holes in overlapping texture and cartoon image layers. This algorithm is a direct extension of a recently developed sparse-representation-based image decomposition …
• ### Redundant Multiscale Transforms and their Application for Morphological Component Analysis
Authors: J.-L. Starck, M. Elad , D.L. Donoho Journal: Advances in Imaging and Electron Physics Year: 2004 Download: PDF Abstract The development track of the wavelet transform and its redundant extensions designed for images was described. The notion of sparsity and the algorithms that facilitate it was studied. It …
|
2017-12-13 14:44:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43321341276168823, "perplexity": 8960.216004632724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948527279.33/warc/CC-MAIN-20171213143307-20171213163307-00427.warc.gz"}
|
http://mathhelpforum.com/calculus/7166-find-eq-ellipse-print.html
|
# Find eq. to ellipse
• Nov 3rd 2006, 11:17 AM
viet
Find eq. to ellipse
Find the equation of the line that is tangent to the ellipse
b^2x^2 + a^2y^2 = a^2b^2 in the first quadrant and forms with the coodiante axes the triangle with smallest possible area (a and b are positive constants)
im not sure how to do this problem, the teacher explained it but it was very confusing.
• Nov 3rd 2006, 12:52 PM
earboth
Quote:
Originally Posted by viet
Find the equation of the line that is tangent to the ellipse
b^2x^2 + a^2y^2 = a^2b^2 in the first quadrant and forms with the coodiante axes the triangle with smallest possible area (a and b are positive constants)
im not sure how to do this problem, the teacher explained it but it was very confusing.
Hello, viet,
have a look here: http://www.mathhelpforum.com/math-he...html#post26090
EB
• Nov 3rd 2006, 01:05 PM
Soroban
Hello, viet!
Quote:
Find the equation of the tangent to the ellipse [1] $b^2x^2 + a^2y^2 \:= \:a^2b^2$ in quadrant 1
which forms with the coodiante axes the triangle with smallest possible area.
( $a$ and $b$ are positive constants.)
Here is a little-known (but very convenient) formula . . .
The equation of the tangent to the ellipse: . $\frac{x^2}{a^2} + \frac{y^2}{b^2}\;=\;1$ at the point $(h,k)$
. . is given by: . $\frac{hx}{a^2} +\frac{ ky}{b^2}\:=\:1$
Then the intercepts of this tangent are: . $\left(\frac{a^2}{h},\,0\right)$ and $\left(0,\,\frac{b^2}{k}\right)$
The area of the triangle is: . $A \:=\:\frac{1}{2}\left(\frac{a^2}{h}\right)\left(\f rac{b^2}{k}\right) \;= \;\frac{a^2b^2}{2hk}$
For convenience, let $x = h,\:y = k.$ . Then we have: . $A \;= \;\frac{a^2b^2}{2}x^{-1}y^{-1}$
Differentiate: . $A' \;= \;\frac{a^2b^2}{2}\bigg[\left(x^{-1}\right)\left(\text{-}y^{-2}\right)y' + \left(\text{-}x^{-2}\right)\left(y^{-1}\right)\bigg]$
Equate to zero: . $-\frac{y'}{xy^2} - \frac{1}{x^2y} \;= \;0$
Multiply by $-x^2y^2:\;\;xy' + y \;=\;0\quad\Rightarrow\quad y \:=\:-xy'$ [2]
. . The ellipse is: .[1] $b^2x^2 + a^2y^2\:=\:a^2b^2$
. . Differentiate implicitily: . $2b^2x + 2a^2yy' \:=\:0\quad\Rightarrow\quad y' \:=\:-\frac{b^2x}{a^2y}$ [3]
Substitute [3] into [2]: . $y \:=\:-x\left(-\frac{b^2x}{a^2y}\right) \quad\Rightarrow\quad a^2y^2\:=\:b^2x^2$ [4]
Substitute [4] into [1]: . $b^2x^2 + b^2x^2\:=\:a^2b^2\quad\Rightarrow\quad 2x^2 - a^2\quad\Rightarrow\quad x = \frac{a}{\sqrt{2}}$
Substitute into [1] and get: . $y\,=\,\frac{b}{\sqrt{2}}$
Hence, the intercepts of the tangent are:
. . $\left(\frac{a^2}{\frac{a}{\sqrt{2}}},\:0\right) = (\sqrt{2}a,\,0)$ and $\left(0,\:\frac{b^2}{\frac{b}{\sqrt{2}}}\right) = (0,\,\sqrt{2}b)$
And I'll let you write the equation of that tangent line . . .
• Nov 3rd 2006, 08:53 PM
viet
i got it, thanks for your help
|
2017-12-12 23:59:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509593844413757, "perplexity": 1269.0719507562749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948520042.35/warc/CC-MAIN-20171212231544-20171213011544-00683.warc.gz"}
|
https://greprepclub.com/forum/if-3-5-of-a-circular-floor-is-covered-by-a-rectangular-rug-t-9803.html
|
It is currently 28 Nov 2020, 15:26
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If 3/5 of a circular floor is covered by a rectangular rug t
Author Message
TAGS:
Founder
Joined: 18 Apr 2015
Posts: 13916
GRE 1: Q160 V160
Followers: 315
Kudos [?]: 3683 [1] , given: 12939
If 3/5 of a circular floor is covered by a rectangular rug t [#permalink] 27 Jun 2018, 09:23
1
KUDOS
Expert's post
00:00
Question Stats:
87% (01:31) correct 12% (02:43) wrong based on 16 sessions
If $$\frac{3}{5}$$ of a circular floor is covered by a rectangular rug that is $$x$$ feet by $$y$$ feet, which of the following represents the distance from the center of the floor to the edge of the floor?
A. $$\frac{5 \sqrt{xy}}{3\pi}$$
B. $$\sqrt{\frac{5xy}{3 \pi}}$$
C. $$\frac{3\pi}{ 5 \sqrt{xy}}$$
D. $$\sqrt{xy}$$ $$- \frac{3 \pi}{5}$$
E. $$\sqrt{\frac{3xy}{5\pi}}$$
[Reveal] Spoiler: OA
_________________
New to the GRE, and GRE CLUB Forum?
GRE: All you do need to know about the GRE Test | GRE Prep Club for the GRE Exam - The Complete FAQ
Posting Rules: QUANTITATIVE | VERBAL
FREE Resources: GRE Prep Club Official LinkTree Page | Free GRE Materials - Where to get it!! (2020)
Free GRE Prep Club Tests: Got 20 Kudos? You can get Free GRE Prep Club Tests
GRE Prep Club on : Facebook | Instagram
Questions' Banks and Collection:
ETS: ETS Free PowerPrep 1 & 2 All 320 Questions Explanation. | ETS All Official Guides
3rd Party Resource's: All Quant Questions Collection | All Verbal Questions Collection
Books: All GRE Best Books
Scores: The GRE average score at Top 25 Business Schools 2020 Ed. | How to study for GRE retake and score HIGHER - (2020)
How is the GRE Score Calculated -The Definitive Guide (2021)
Tests: GRE Prep Club Tests | FREE GRE Practice Tests [Collection] - New Edition (2021)
Vocab: GRE Prep Club Official Vocabulary Lists for the GRE (2021)
Active Member
Joined: 29 May 2018
Posts: 126
Followers: 0
Kudos [?]: 112 [1] , given: 4
Re: If 3/5 of a circular floor is covered by a rectangular rug t [#permalink] 28 Jun 2018, 11:45
1
KUDOS
Carcass wrote:
If $$\frac{3}{5}$$ of a circular floor is covered by a rectangular rug that is $$x$$ feet by $$y$$ feet, which of the following represents the distance from the center of the floor to the edge of the floor?
A. $$\frac{5 \sqrt{xy}}{3\pi}$$
B. $$\sqrt{\frac{5xy}{3 \pi}}$$
C. $$\frac{3\pi}{ 5 \sqrt{xy}}$$
D. $$\sqrt{xy}$$ $$- \frac{3 \pi}{5}$$
E. $$\sqrt{\frac{3xy}{5\pi}}$$
Without a diagram took me 3:45 mins time to solve this and unnecessarily started to draw vague diagram. Later realized the solution properly.
When it is mentioned that 3/5 circle area that means out of 5 parts 3 parts area is equal to rectangle area.
i.e. xy (area of rectangle) = 3/5 ( pi * r^2 ) .
=> $$\sqrt{\frac{5xy}{3 \pi}}$$.
Attachments
cir.PNG [ 17.95 KiB | Viewed 1315 times ]
VP
Joined: 20 Apr 2016
Posts: 1302
WE: Engineering (Energy and Utilities)
Followers: 22
Kudos [?]: 1342 [0], given: 251
Re: If 3/5 of a circular floor is covered by a rectangular rug t [#permalink] 28 Jun 2018, 12:16
Carcass wrote:
If $$\frac{3}{5}$$ of a circular floor is covered by a rectangular rug that is $$x$$ feet by $$y$$ feet, which of the following represents the distance from the center of the floor to the edge of the floor?
A. $$\frac{5 \sqrt{xy}}{3\pi}$$
B. $$\sqrt{\frac{5xy}{3 \pi}}$$
C. $$\frac{3\pi}{ 5 \sqrt{xy}}$$
D. $$\sqrt{xy}$$ $$- \frac{3 \pi}{5}$$
E. $$\sqrt{\frac{3xy}{5\pi}}$$
Here,
We need to find out the radius of the circle
From the ques we know
$$\frac{3}{5}$$ of the circle area = area of the rectangle
or $$\frac{3}{5}$$ * area of the circle = area of rectangle
or $$\frac{3}{5} * \pi * r^2$$= x * y
or $$r^2$$ = $$\frac{5xy}{3 \pi}$$
or r = $$\sqrt{\frac{5xy}{3 \pi}}$$
_________________
If you found this post useful, please let me know by pressing the Kudos Button
Rules for Posting
Got 20 Kudos? You can get Free GRE Prep Club Tests
GRE Prep Club Members of the Month:TOP 10 members of the month with highest kudos receive access to 3 months GRE Prep Club tests
Re: If 3/5 of a circular floor is covered by a rectangular rug t [#permalink] 28 Jun 2018, 12:16
Display posts from previous: Sort by
|
2020-11-28 23:26:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27144095301628113, "perplexity": 5088.3064365938635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00140.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/jimo.2008.4.827
|
# American Institute of Mathematical Sciences
• Previous Article
Price and delivery-time competition of perishable products: Existence and uniqueness of Nash equilibrium
• JIMO Home
• This Issue
• Next Article
Bounds on delay start LPT algorithm for scheduling on two identical machines in the $l_p$ norm
October 2008, 4(4): 827-842. doi: 10.3934/jimo.2008.4.827
## Supply chain partnership for Three-Echelon deteriorating inventory model
1 Logistics Management Department, Takming University of Science and Technology, Taipei 114, Taiwan, ROC, China 2 Department of Industrial & Systems Engineering Chung Yuan Christian University, Chungli, 32023, Taiwan, ROC, Graduate School of Business, Curtin University of Technology, Perth WA 6845, Australia 3 Department of Industrial Management, National Taiwan University of Science and Technology, Taipei 106, Taiwan, ROC, Taiwan
Received January 2007 Revised August 2008 Published November 2008
The different facilities in a supply chain usually develop their partnership through information sharing and strategic alliances to achieve the overall benefit of the system. In this study, we propose a supply chain network system with two producers, a single distributor and two retailers. Each retailer has a deterministic demand rate. A mathematical model of deteriorating item is developed to consider a vertical integration of the producer, the distributor and the retailer and a horizontal integration of the producers. We show how the integrated approach to decision making can achieve global optimum. Numerical examples and a sensitivity analysis are given to validate the proposed system.
Citation: Jonas C. P. Yu, H. M. Wee, K. J. Wang. Supply chain partnership for Three-Echelon deteriorating inventory model. Journal of Industrial & Management Optimization, 2008, 4 (4) : 827-842. doi: 10.3934/jimo.2008.4.827
[1] Shuang Chen, Jinqiao Duan, Ji Li. Effective reduction of a three-dimensional circadian oscillator model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020349 [2] Gang Bao, Mingming Zhang, Bin Hu, Peijun Li. An adaptive finite element DtN method for the three-dimensional acoustic scattering problem. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020351 [3] Thabet Abdeljawad, Mohammad Esmael Samei. Applying quantum calculus for the existence of solution of $q$-integro-differential equations with three criteria. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020440 [4] Fathalla A. Rihan, Hebatallah J. Alsakaji. Stochastic delay differential equations of three-species prey-predator system with cooperation among prey species. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020468 [5] Manil T. Mohan. Global attractors, exponential attractors and determining modes for the three dimensional Kelvin-Voigt fluids with "fading memory". Evolution Equations & Control Theory, 2020 doi: 10.3934/eect.2020105
2019 Impact Factor: 1.366
|
2020-12-04 08:55:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3426544964313507, "perplexity": 7441.9307498229555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735395.99/warc/CC-MAIN-20201204071014-20201204101014-00633.warc.gz"}
|
https://docs.nebula-graph.io/3.4.0/nebula-exchange/use-exchange/ex-ug-import-from-csv/
|
# Import data from CSV files¶
This topic provides an example of how to use Exchange to import NebulaGraph data stored in HDFS or local CSV files.
To import a local CSV file to NebulaGraph, see NebulaGraph Importer.
## Data set¶
This topic takes the basketballplayer dataset as an example.
## Environment¶
This example is done on MacOS. Here is the environment configuration information:
• Hardware specifications:
• CPU: 1.7 GHz Quad-Core Intel Core i7
• Memory: 16 GB
• Spark: 2.4.7, stand-alone
## Prerequisites¶
Before importing data, you need to confirm the following information:
• NebulaGraph has been installed and deployed with the following information:
• IP addresses and ports of Graph and Meta services.
• The user name and password with write permission to NebulaGraph.
• Exchange has been compiled, or download the compiled .jar file directly.
• Spark has been installed.
• Learn about the Schema created in NebulaGraph, including names and properties of Tags and Edge types, and more.
• If files are stored in HDFS, ensure that the Hadoop service is running normally.
• If files are stored locally and NebulaGraph is a cluster architecture, you need to place the files in the same directory locally on each machine in the cluster.
## Steps¶
### Step 1: Create the Schema in NebulaGraph¶
Analyze the data to create a Schema in NebulaGraph by following these steps:
1. Identify the Schema elements. The Schema elements in the NebulaGraph are shown in the following table.
Element Name Property
Tag player name string, age int
Tag team name string
Edge Type follow degree int
Edge Type serve start_year int, end_year int
2. Create a graph space basketballplayer in the NebulaGraph and create a Schema as shown below.
## Create a graph space.
(partition_num = 10, \
replica_factor = 1, \
vid_type = FIXED_STRING(30));
## Use the graph space basketballplayer.
## Create the Tag player.
nebula> CREATE TAG player(name string, age int);
## Create the Tag team.
nebula> CREATE TAG team(name string);
## Create the Edge type follow.
nebula> CREATE EDGE follow(degree int);
## Create the Edge type serve.
nebula> CREATE EDGE serve(start_year int, end_year int);
### Step 2: Process CSV files¶
Confirm the following information:
1. Process CSV files to meet Schema requirements.
Note
2. Obtain the CSV file storage path.
### Step 3: Modify configuration files¶
After Exchange is compiled, copy the conf file target/classes/application.conf to set CSV data source configuration. In this example, the copied file is called csv_application.conf. For details on each configuration item, see Parameters in the configuration file.
{
# Spark configuration
spark: {
app: {
name: NebulaGraph Exchange 3.4.0
}
driver: {
cores: 1
maxResultSize: 1G
}
executor: {
memory:1G
}
cores: {
max: 16
}
}
# NebulaGraph configuration
nebula: {
# Specify the IP addresses and ports for Graph and Meta services.
# If there are multiple addresses, the format is "ip1:port","ip2:port","ip3:port".
# Addresses are separated by commas.
graph:["127.0.0.1:9669"]
# the address of any of the meta services.
meta:["127.0.0.1:9559"]
}
# The account entered must have write permission for the NebulaGraph space.
user: root
pswd: nebula
# Fill in the name of the graph space you want to write data to in the NebulaGraph.
connection: {
timeout: 3000
retry: 3
}
execution: {
retry: 3
}
error: {
max: 32
output: /tmp/errors
}
rate: {
limit: 1024
timeout: 1000
}
}
# Processing vertexes
tags: [
# Set the information about the Tag player.
{
# Specify the Tag name defined in NebulaGraph.
name: player
type: {
# Specify the data source file format to CSV.
source: csv
# Specify how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the CSV file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example: "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example: "file:///tmp/xx.csv".
path: "hdfs://192.168.*.*:9000/data/vertex_player.csv"
# If the CSV file does not have a header, use [_c0, _c1, _c2, ..., _cn] to represent its header and indicate the columns as the source of the property values.
# If the CSV file has headers, use the actual column names.
fields: [_c1, _c2]
# Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [age, name]
# Specify a column of data in the table as the source of vertex VID in the NebulaGraph.
# The value of vertex must be the same as the column names in the above fields or csv.fields.
# Currently, NebulaGraph 3.4.0 supports only strings or integers of VID.
vertex: {
field:_c0
# policy:hash
}
# The delimiter specified. The default value is comma.
separator: ","
# If the CSV file has a header, set the header to true.
# If the CSV file does not have a header, set the header to false. The default value is false.
# The number of data written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# Set the information about the Tag Team.
{
# Specify the Tag name defined in NebulaGraph.
name: team
type: {
# Specify the data source file format to CSV.
source: csv
# Specify how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the CSV file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example: "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example: "file:///tmp/xx.csv".
path: "hdfs://192.168.*.*:9000/data/vertex_team.csv"
# If the CSV file does not have a header, use [_c0, _c1, _c2, ..., _cn] to represent its header and indicate the columns as the source of the property values.
# If the CSV file has headers, use the actual column names.
fields: [_c1]
# Specify the column names in the player table in fields, and their corresponding values are specified as properties in the NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [name]
# Specify a column of data in the table as the source of VIDs in the NebulaGraph.
# The value of vertex must be the same as the column names in the above fields or csv.fields.
# Currently, NebulaGraph 3.4.0 supports only strings or integers of VID.
vertex: {
field:_c0
# policy:hash
}
# The delimiter specified. The default value is comma.
separator: ","
# If the CSV file has a header, set the header to true.
# If the CSV file does not have a header, set the header to false. The default value is false.
# The number of data written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# If more vertexes need to be added, refer to the previous configuration to add them.
]
# Processing edges
edges: [
# Set the information about the Edge Type follow.
{
# Specify the Edge Type name defined in NebulaGraph.
name: follow
type: {
# Specify the data source file format to CSV.
source: csv
# Specify how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the CSV file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example: "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example: "file:///tmp/xx.csv".
path: "hdfs://192.168.*.*:9000/data/edge_follow.csv"
# If the CSV file does not have a header, use [_c0, _c1, _c2, ..., _cn] to represent its header and indicate the columns as the source of the property values.
# If the CSV file has headers, use the actual column names.
fields: [_c2]
# Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [degree]
# Specify a column as the source for the source and destination vertexes.
# The value of vertex must be the same as the column names in the above fields or csv.fields.
# Currently, NebulaGraph 3.4.0 supports only strings or integers of VID.
source: {
field: _c0
}
target: {
field: _c1
}
# The delimiter specified. The default value is comma.
separator: ","
# Specify a column as the source of the rank (optional).
#ranking: rank
# If the CSV file has a header, set the header to true.
# If the CSV file does not have a header, set the header to false. The default value is false.
# The number of data written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
# Set the information about the Edge Type serve.
{
# Specify the Edge Type name defined in NebulaGraph.
name: serve
type: {
# Specify the data source file format to CSV.
source: csv
# Specify how to import the data into NebulaGraph: Client or SST.
sink: client
}
# Specify the path to the CSV file.
# If the file is stored in HDFS, use double quotation marks to enclose the file path, starting with hdfs://. For example: "hdfs://ip:port/xx/xx".
# If the file is stored locally, use double quotation marks to enclose the file path, starting with file://. For example: "file:///tmp/xx.csv".
path: "hdfs://192.168.*.*:9000/data/edge_serve.csv"
# If the CSV file does not have a header, use [_c0, _c1, _c2, ..., _cn] to represent its header and indicate the columns as the source of the property values.
# If the CSV file has headers, use the actual column names.
fields: [_c2,_c3]
# Specify the column names in the edge table in fields, and their corresponding values are specified as properties in the NebulaGraph.
# The sequence of fields and nebula.fields must correspond to each other.
nebula.fields: [start_year, end_year]
# Specify a column as the source for the source and destination vertexes.
# The value of vertex must be the same as the column names in the above fields or csv.fields.
# Currently, NebulaGraph 3.4.0 supports only strings or integers of VID.
source: {
field: _c0
}
target: {
field: _c1
}
# The delimiter specified. The default value is comma.
separator: ","
# Specify a column as the source of the rank (optional).
#ranking: _c5
# If the CSV file has a header, set the header to true.
# If the CSV file does not have a header, set the header to false. The default value is false.
# The number of data written to NebulaGraph in a single batch.
batch: 256
# The number of Spark partitions.
partition: 32
}
]
# If more edges need to be added, refer to the previous configuration to add them.
}
### Step 4: Import data into NebulaGraph¶
Run the following command to import CSV data into NebulaGraph. For descriptions of the parameters, see Options for import.
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange <nebula-exchange-3.4.0.jar_path> -c <csv_application.conf_path> Note JAR packages are available in two ways: compiled them yourself, or download the compiled .jar file directly. For example: ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange /root/nebula-exchange/nebula-exchange/target/nebula-exchange-3.4.0.jar -c /root/nebula-exchange/nebula-exchange/target/classes/csv_application.conf
You can search for batchSuccess.<tag_name/edge_name> in the command output to check the number of successes. For example, batchSuccess.follow: 300.
### Step 5: (optional) Validate data¶
Users can verify that data has been imported by executing a query in the NebulaGraph client (for example, NebulaGraph Studio). For example:
LOOKUP ON player YIELD id(vertex);
Users can also run the SHOW STATS command to view statistics.
### Step 6: (optional) Rebuild indexes in NebulaGraph¶
With the data imported, users can recreate and rebuild indexes in NebulaGraph. For details, see Index overview.
Last update: March 23, 2023
|
2023-03-25 19:31:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3213007152080536, "perplexity": 6191.1315749427795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00091.warc.gz"}
|
https://gmatclub.com/forum/a-certain-series-is-defined-by-the-following-recursive-rule-sn-k-sn-114792.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 17 Jun 2019, 00:51
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# A certain series is defined by the following recursive rule: Sn=K(Sn-1
Author Message
TAGS:
### Hide Tags
Manager
Joined: 16 May 2011
Posts: 172
Concentration: Finance, Real Estate
GMAT Date: 12-27-2011
WE: Law (Law)
A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
06 Jun 2011, 14:02
1
14
00:00
Difficulty:
55% (hard)
Question Stats:
67% (02:41) correct 33% (02:40) wrong based on 202 sessions
### HideShow timer Statistics
A certain series is defined by the following recursive rule: Sn=K(Sn-1) , where k is a constant. If the 1st term of this series is 64 and the 25th term is 192, wha is the 9th term?
A. ROOT 2
B. ROOT 3
C. 64*ROOT 3
D. 64*3^1/3
E. 64*3^24
Retired Moderator
Joined: 20 Dec 2010
Posts: 1748
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
06 Jun 2011, 15:20
2
4
dimri10 wrote:
:lol: A certain series is defined by the following recursive rule: Sn=K(Sn-1) , where k is a constant. If the 1st term of this series is 64 and the 25th term is 192, wha is the 9th term?
1. ROOT 2
2. ROOT 3
3. 64*ROOT 3
4> 64*3^1/3
5.64*3^24
For GP:
$$A_n=A_1*r^{(n-1)}$$
Here, r=k
$$A_{25}=A_1*k^{(25-1)}=A_1*k^{24}$$
$$192=64*k^{24}$$
$$192=64*(k^{8})^{3}$$
$$(k^8)^3=\frac{192}{64}=3$$
Taking cube-root on both sides,
$$k^8=\sqrt[3]{3}$$
Now,
$$A_9=A_1*k^8$$
$$A_9=64\sqrt[3]{3}$$
Ans: "D"
_________________
##### General Discussion
Intern
Joined: 24 Feb 2011
Posts: 28
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
06 Jun 2011, 18:52
1
Fluke what does GP stand for? And what is the name of the formula you are referencing?
Senior Manager
Joined: 24 Mar 2011
Posts: 335
Location: Texas
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
06 Jun 2011, 21:06
Sn = k Sn-1
S25 = 192 = k S24
= k^24 * 64
And S1 = 64 = k S0
==> k^24 = 192/64 = 3.. (1)
Now, S9 = k^8 * 64
From (1), (k^8)^3 = 3
k^8 = 3^\frac{1}{3}
S9 = 3^\frac{1}{3} * 64
Manager
Joined: 19 Apr 2011
Posts: 82
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
07 Jun 2011, 00:32
$$S1 = xk = 64 S25 = xk^25 = 192 xkk^24 = 192 k^24 = 192/64 = 3 k = 3^(1/24) S9 = xk^9=xkk^8 = 64*(3^(1/24*8) = 64*(3^(1/3))$$
Retired Moderator
Joined: 20 Dec 2010
Posts: 1748
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
07 Jun 2011, 01:01
olivite wrote:
Fluke what does GP stand for? And what is the name of the formula you are referencing?
GP is Geometric Progression, where the ratio between two consecutive terms is always same.
According to the question, this series is a Geometric Series.
As,
$$A_2=A_1*k \hspace{3} OR \hspace{3} \frac{A_2}{A_1}=k$$
$$A_3=A_2*k \hspace{3} OR \hspace{3} \frac{A_3}{A_2}=k$$
$$A_4=A_3*k \hspace{3} OR \hspace{3} \frac{A_4}{A_3}=k$$
$$A_5=A_4*k \hspace{3} OR \hspace{3} \frac{A_5}{A_4}=k$$
For such series, the $$n^{th}$$ term can be found using following formula:
$$A_n=A_1*k^{(n-1)}$$
Where,
$$k=$$ratio between two consecutive terms
$$A_1=$$first term of the series
$$n=$$index of the term we are trying to find.
Thus,
$$25^{th}$$ term of the series, $$A_{25}=A_1*k^{(25-1)}$$
$$9^{th}$$ term of the series, $$A_{9}=A_1*k^{(9-1)}$$
_________________
Manager
Joined: 08 Sep 2010
Posts: 114
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
11 Jun 2011, 10:55
Ans...D
No need for any GP formula here
The rule is that nth term is K times the (n-1)th term.
1st = 64
2nd = k.64
3rd = k^2.64
.
.
.
9th term = k^8 *64
.
.
.
so 25th = k^24*64
Using this solve for k and substitute k in the equation for the 9th term
_________________
My will shall shape the future. Whether I fail or succeed shall be no man's doing but my own.
If you like my explanations award kudos.
Director
Joined: 01 Feb 2011
Posts: 637
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
11 Jun 2011, 11:05
s(n) = k s(n-1)
=> its GP = a + ar + ar^2.....+ar^(n-1)
first term = 64 = a
nth term in a GP is given by ar^(n-1)
25th term is 192 = 64* r^24 => r^24 = 3 => r = 3^(1/24)
9th term = ar^8 = 64* 3^(8/24) = 64*3^(1/3)
Intern
Joined: 28 Aug 2018
Posts: 27
Location: India
Schools: LBS '21 (A)
GMAT 1: 650 Q49 V31
GPA: 3.16
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
15 Nov 2018, 00:19
S1 = 64. We have to find out S9 which is equal to k^(8)*64
S25 = k^(24)*S1
Therefore, 192 = k^(24)*64
k = 3^(1/24)
S9 = 3^(8/24)*64 which is 3^(1/3)*64. Option D
Senior Manager
Joined: 12 Sep 2017
Posts: 263
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink]
### Show Tags
27 Jan 2019, 13:20
dimri10 wrote:
A certain series is defined by the following recursive rule: Sn=K(Sn-1) , where k is a constant. If the 1st term of this series is 64 and the 25th term is 192, wha is the 9th term?
A. ROOT 2
B. ROOT 3
C. 64*ROOT 3
D. 64*3^1/3
E. 64*3^24
How do we know that we are talking about a GP?
Kind regards!
Re: A certain series is defined by the following recursive rule: Sn=K(Sn-1 [#permalink] 27 Jan 2019, 13:20
Display posts from previous: Sort by
|
2019-06-17 07:51:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833741545677185, "perplexity": 5428.201312933884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998440.47/warc/CC-MAIN-20190617063049-20190617085049-00276.warc.gz"}
|
https://blog.csdn.net/nameofcsdn/article/details/112750825
|
# 贪心(2)活动安排问题
CSU 1065: Scientific Conference
CodeForces 589F 究竟能吃多久?
HDU 1789 Doing Homework again
HDU 1051 Wooden Sticks
# 二,OJ实战
## CSU 1065: Scientific Conference
Description
Functioning of a scientific conference is usually divided into several simultaneous sections. For example, there may be a section on parallel computing, a section on visualization, a section on data compression, and so on.
Obviously, simultaneous work of several sections is necessary in order to reduce the time for scientific program of the conference and to have more time for the banquet, tea-drinking, and informal discussions. However, it is possible that interesting reports are given simultaneously at different sections.
A participant has written out the time-table of all the reports which are interesting for him. He asks you to determine the maximal number of reports he will be able to attend.
Input
The first line contains the number 1 ≤ N ≤ 100000 of interesting reports. Each of the next N lines contains two integers Ts and Te separated with a space (1 ≤ Ts < Te ≤ 30000). These numbers are the times a corresponding report starts and ends. Time is measured in minutes from the beginning of the conference.
Output
You should output the maximal number of reports which the participant can attend. The participant can attend no two reports simultaneously and any two reports he attends must be separated by at least one minute. For example, if a report ends at 15, the next report which can be attended must begin at 16 or later.
Sample Input
5
3 4
1 5
6 7
4 5
1 3
Sample Output
3
#include<iostream>
#include<algorithm>
using namespace std;
struct node
{
int s, e;
}nod[100005];
bool cmp(node a, node b)
{
return a.e < b.e;
}
int main()
{
int n, ans = 0, k = 0;
cin >> n;
for(int i=0;i<n;i++)cin >> nod[i].s >> nod[i].e;
sort(nod, nod + n, cmp);
for (int i = 0; i < n; i++)if (k < nod[i].s)k = nod[i].e, ans++;
cout << ans;
return 0;
}
## CodeForces 589F 究竟能吃多久?
Description
A gourmet came into the banquet hall, where the cooks suggested n dishes for guests. The gourmet knows the schedule: when each of the dishes will be served.
For i-th of the dishes he knows two integer moments in time ai and bi (in seconds from the beginning of the banquet) — when the cooks will bring the i-th dish into the hall and when they will carry it out (ai < bi). For example, if ai = 10 and bi = 11, then the i-th dish is available for eating during one second.
The dishes come in very large quantities, so it is guaranteed that as long as the dish is available for eating (i. e. while it is in the hall) it cannot run out.
The gourmet wants to try each of the n dishes and not to offend any of the cooks. Because of that the gourmet wants to eat each of the dishes for the same amount of time. During eating the gourmet can instantly switch between the dishes. Switching between dishes is allowed for him only at integer moments in time. The gourmet can eat no more than one dish simultaneously. It is allowed to return to a dish after eating any other dishes.
The gourmet wants to eat as long as possible on the banquet without violating any conditions described above. Can you help him and find out the maximum total time he can eat the dishes on the banquet?
Input
The first line of input contains an integer n(1 ≤ n ≤ 100) — the number of dishes on the banquet.
The following n lines contain information about availability of the dishes. The i-th line contains two integers ai and bi(0 ≤ ai < bi ≤ 10000) — the moments in time when the i-th dish becomes available for eating and when the i-th dish is taken away from the hall.
Output
Output should contain the only integer — the maximum total time the gourmet can eat the dishes on the banquet.
The gourmet can instantly switch between the dishes but only at integer moments in time. It is allowed to return to a dish after eating any other dishes. Also in every moment in time he can eat no more than one dish.
Sample Input
Input
3
2 4
1 5
6 9
Output
6
Input
3
1 2
1 2
1 2
Output
0
Hint
In the first example the gourmet eats the second dish for one second (from the moment in time 1 to the moment in time 2), then he eats the first dish for two seconds (from 2 to 4), then he returns to the second dish for one second (from 4 to 5). After that he eats the third dish for two seconds (from 6 to 8).
In the second example the gourmet cannot eat each dish for at least one second because there are three dishes but they are available for only one second (from 1 to 2).
#include<iostream>
using namespace std;
struct node //d表示时间
{
int a;
int b;
int d;
};
bool ok(int m, node *p, int n) //m表示时间
{
if (m == 0)return true;
for (int i = 0; i < n; i++)p[i].d = m;
for (int k = 0; k < 10000; k++) //函数ok是非递归的,主要就是这个循环
{
int end = 10001, key = -1; //最后一次修改是把10000改成了10001
for (int i = 0; i < n; i++)
{
if (p[i].a <= k && p[i].b>k && p[i].d>0 && end > p[i].b) /贪心策略//
{
end = p[i].b;
key = i;
}
}
if (key >= 0)p[key].d--;
}
for (int i = 0; i < n; i++)if (p[i].d)return false;
return true;
}
int main()
{
int n;
while (cin >> n)
{
node *p = new node[n];
int dif = 10000;
for (int i = 0; i < n; i++)
{
cin >> p[i].a >> p[i].b;
if (dif>p[i].b - p[i].a)dif = p[i].b - p[i].a;
}
int low = 0, high = dif;
int mid;
while (low +1 < high) //二分查找答案
{
mid = (high + low) / 2;
if (ok(mid,p,n))low = mid;
else high = mid - 1;
}
if (ok(high, p, n))cout << high*n << endl;
else cout << low*n << endl;
}
return 0;
}
## HDU 1789 Doing Homework again
Description
zichen has just come back school from the 30th ACM/ ICPC. Now he has a lot of homework to do. Every teacher gives him a deadline of handing in the homework. If zichen hands in the homework after the deadline, the teacher will reduce his score of the final test. And now we assume that doing everyone homework always takes one day. So zichen wants you to help him to arrange the order of doing homework to minimize the reduced score.
Input
The input contains several test cases. The first line of the input is a single integer T that is the number of test cases. T test cases follow.
Each test case start with a positive integer N(1<=N<=1000) which indicate the number of homework.. Then 2 lines follow. The first line contains N integers that indicate the deadlines of the subjects, and the next line contains N integers that indicate the reduced scores.
Output
For each test case, you should output the smallest total reduced score, one line per test case.
Sample Input
3
3
3 3 3
10 5 1
3
1 3 1
6 2 3
7
1 4 6 4 2 4 3
3 2 1 7 6 5 4
Sample Output
0
3
#include<algorithm>
#include<iostream>
#include<string.h>
using namespace std;
struct node
{
int day;
int score;
};
node nod[1000];
bool cmp(node a, node b)
{
if (a.score > b.score)return true;
if (a.score < b.score)return false;
return a.day>b.day;
}
int main()
{
int t,n;
cin >> t;
int max, sum;
while (t--)
{
cin >> n;
max = 0;
for (int i = 0; i < n; i++)
{
cin >> nod[i].day;
if (max < nod[i].day)max = nod[i].day;
}
for (int i = 0; i < n; i++)cin >> nod[i].score;
sort(nod, nod + n, cmp);
int *list = new int[max+1];
memset(list, 0, (max+1)*4);
sum = 0;
for (int i = 0; i < n; i++)
{
int j = nod[i].day;
while (j > 0 && list[j])j--;
if (j == 0)sum += nod[i].score;
else list[j] = 1;
}
cout << sum << endl;
delete list;
}
return 0;
}
<algorithm>
#include<iostream>
#include<string.h>
using namespace std;
struct node
{
int day;
int score;
};
node nod[1000];
bool cmp(node a, node b)
{
if (a.score > b.score)return true;
if (a.score < b.score)return false;
return a.day>b.day;
}
int main()
{
int t,n;
cin >> t;
int max, sum;
while (t--)
{
cin >> n;
max = 0;
for (int i = 0; i < n; i++)
{
cin >> nod[i].day;
if (max < nod[i].day)max = nod[i].day;
}
for (int i = 0; i < n; i++)cin >> nod[i].score;
sort(nod, nod + n, cmp);
int *list = new int[max+1];
memset(list, 0, (max+1)*4);
sum = 0;
for (int i = 0; i < n; i++)
{
int j = nod[i].day;
while (j > 0 && list[j])j--;
if (j == 0)sum += nod[i].score;
else list[j] = 1;
}
cout << sum << endl;
delete list;
}
return 0;
}
Description
Today the company has m tasks to complete. The ith task need xi minutes to complete. Meanwhile, this task has a difficulty level yi. The machine whose level below this task’s level yi cannot complete this task. If the company completes this task, they will get (500*xi+2*yi) dollars.
The company has n machines. Each machine has a maximum working time and a level. If the time for the task is more than the maximum working time of the machine, the machine can not complete this task. Each machine can only complete a task one day. Each task can only be completed by one machine.
The company hopes to maximize the number of the tasks which they can complete today. If there are multiple solutions, they hopes to make the money maximum.
Input
The input contains several test cases.
The first line contains two integers N and M. N is the number of the machines.M is the number of tasks(1 < =N <= 100000,1<=M<=100000).
The following N lines each contains two integers xi(0<xi<1440),yi(0=<yi<=100).xi is the maximum time the machine can work.yi is the level of the machine.
The following M lines each contains two integers xi(0<xi<1440),yi(0=<yi<=100).xi is the time we need to complete the task.yi is the level of the task.
Output
For each test case, output two integers, the maximum number of the tasks which the company can complete today and the money they will get.
Sample Input
1 2
100 3
100 2
100 1
Sample Output
1 50004
#include<iostream>
#include<string.h>
#include<algorithm>
using namespace std;
struct node
{
int x;
int y;
};
node noden[100005];
node nodem[100005];
int list[100005]; //list不为0表示对应的机器已经被选择
int section[101]; //section[i]是满足y不小于i的机器一共有多少个(对应最大下标)
bool cmpy(node a, node b) //机器以y为第一关键字降序排列,以x为第一关键字降序排列
{
if (a.y > b.y)return true;
if (a.y < b.y)return false;
return a.x>b.x;
}
bool cmpx(node a, node b) //Task以x为第一关键字降序排列,以y为第二关键字降序排列
{
if (a.x > b.x)return true;
if (a.x < b.x)return false;
return a.y>b.y;
}
int main()
{
int n, m;
while (cin >> n >> m)
{
for (int i = 0; i < n; i++)cin >> noden[i].x >> noden[i].y;
for (int i = 0; i < m; i++)cin >> nodem[i].x >> nodem[i].y;
sort(noden, noden + n, cmpy); //机器
sort(nodem, nodem + m, cmpx); //Task
memset(list, 0, sizeof(list));
memset(section, 0, sizeof(section));
for (int i = 0; i < n; i++)section[noden[i].y] = i + 1;
for (int i = 99; i >= 0; i--)if (section[i] < section[i + 1])section[i] = section[i + 1];
long long num = 0, sum = 0, t;
for (int i = 0; i < m; i++)
{
t = nodem[i].y;
for (int j = section[t] - 1; j >= 0; j--)
{
if (list[j] || noden[j].x < nodem[i].x)continue;
list[j] = 1;
num++;
sum += nodem[i].x * 500 + t * 2;
break;
}
}
cout << num << " " << sum << endl;
}
return 0;
}
(可能你觉得我这里又用到了作业B只能选择机器1这个条件,但是实际上我只是叙述的不够详细而已。
#include<iostream>
#include<vector>
#include<algorithm>
using namespace std;
struct node
{
int x;
int y;
};
node noden[100005];
node nodem[100005];
vector<int>v[101];
bool cmp(node a, node b) //Task以x为第一关键字降序排列,以y为第二关键字降序排列
{
if (a.x > b.x)return true;
if (a.x < b.x)return false;
return a.y>b.y;
}
int main()
{
ios_base::sync_with_stdio(false);
int n, m;
vector< int >::iterator p;
while (cin >> n >> m)
{
for (int i = 0; i <= 100; i++)v[i].clear();
for (int i = 0; i < n; i++)
{
cin >> noden[i].x >> noden[i].y;
v[noden[i].y].insert(v[noden[i].y].end(), i);
}
for (int i = 0; i < m; i++)cin >> nodem[i].x >> nodem[i].y;
sort(nodem, nodem + m, cmp); //Task
long long num = 0, sum = 0;
for (int i = 0; i < m; i++)
{
bool b = false;
for (int j = nodem[i].y; j <= 100; j++)
{
if (b)break;
for (p = v[j].begin(); p != v[j].end(); p++)
{
if (noden[*p].x >= nodem[i].x)
{
num++;
sum += nodem[i].x * 500 + nodem[i].y * 2;
v[j].erase(p);
b = true;
break;
}
}
}
}
cout << num << " " << sum << endl;
}
return 0;
}
## HDU 1051 Wooden Sticks
There is a pile of n wooden sticks. The length and weight of each stick are known in advance. The sticks are to be processed by a woodworking machine in one by one fashion. It needs some time, called setup time, for the machine to prepare processing a stick. The setup times are associated with cleaning operations and changing tools and shapes in the machine. The setup times of the woodworking machine are given as follows:
(a) The setup time for the first wooden stick is 1 minute.
(b) Right after processing a stick of length l and weight w , the machine will need no setup time for a stick of length l' and weight w' if l<=l' and w<=w'. Otherwise, it will need 1 minute for setup.
You are to find the minimum setup time to process a given pile of n wooden sticks. For example, if you have five sticks whose pairs of length and weight are (4,9), (5,2), (2,1), (3,5), and (1,4), then the minimum setup time should be 2 minutes since there is a sequence of pairs (1,4), (3,5), (4,9), (2,1), (5,2).
Input
The input consists of T test cases. The number of test cases (T) is given in the first line of the input file. Each test case consists of two lines: The first line has an integer n , 1<=n<=5000, that represents the number of wooden sticks in the test case, and the second line contains n 2 positive integers l1, w1, l2, w2, ..., ln, wn, each of magnitude at most 10000 , where li and wi are the length and weight of the i th wooden stick, respectively. The 2n integers are delimited by one or more spaces.
Output
The output should contain the minimum setup time in minutes, one per line.
Sample Input
3
5
4 9 5 2 2 1 3 5 1 4
3
2 2 1 1 2 2
3
1 3 2 2 3 1
Sample Output
2
1
3
#include<iostream>
#include<algorithm>
using namespace std;
struct nod
{
int l, w;
bool visit;
}node[5001];
bool cmp(nod a, nod b)
{
if (a.l == b.l)return a.w<b.w;
return a.l < b.l;
}
int main()
{
int T, n, ans;
cin >> T;
while (T--)
{
cin >> n;
for (int i = 1; i <= n; i++)
{
cin >> node[i].l >> node[i].w;
node[i].visit = false;
}
sort(node + 1, node + n + 1, cmp);
ans = 0;
for (int i = 1; i <= n; i++)
{
if (node[i].visit)continue;
ans++;
int k = i;
for (int j = i + 1; j <= n; j++)
{
if (node[j].visit)continue;
if (node[k].w <= node[j].w)node[j].visit = true, k = j;
}
}
cout << ans << endl;
}
return 0;
}
05-23
10-29
07-24 1626
10-10 8259
04-28 1375
03-19 151
03-18 139
10-18 548
09-28 620
12-12
|
2021-02-25 22:57:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33450666069984436, "perplexity": 3085.0172299713513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355937.26/warc/CC-MAIN-20210225211435-20210226001435-00076.warc.gz"}
|
https://www.physicsforums.com/threads/finding-a-value-to-make-piecewise-continuous.289276/
|
# Finding a value to make piecewise continuous
1. Feb 2, 2009
### BrianHare
1. The problem statement, all variables and given/known data
Find c such that it makes f(x) continuous.
2. Relevant equations
$f(x)=\begin{cases} 2x+c&x < -5\\ 3x^2&-5 \leq x < 0\\ cx^2&0 \leq x\\ \end{cases}$
3. The attempt at a solution
I know that
$$\lim_{x\to 5^-}3x^2$$ = 2x+c
and
$$\lim_{x\to 0^+}3x^2$$ = cx^2
Which makes the 2 points where it disconnects at (-5, 75) and (0,0)
Given that I make 2x+c = 75, where x=-5. I get C= 85, and since cx^2=0 is any real, does that mean the answer is c=85?
Last edited: Feb 2, 2009
2. Feb 2, 2009
### tiny-tim
Welcome to PF!
Hi Brian! Welcome to PF!
Yes, c = 85.
(though you have a strange way of using lim …
you might as well say lim 3x2 = 3*(-5)2, and so on. )
3. Feb 2, 2009
### BrianHare
Re: Welcome to PF!
My teacher has a definition where
$$\lim_{x\to a}f(x)$$ = f(c)
It was my understanding that f(x) = 3x^2, a = -5, and f(c) = 2x+c. So once I knew the answer to the limit, i knew that f(c) = 75, thus 2x+c must also be 75. Maybe I am misunderstanding the definition. Can anyone clarify?
4. Feb 2, 2009
### tiny-tim
that doesn't make any sense …
what is f(c) supposed to mean?
(f(c) = 2c + c or 3c2 or cc2)
does he mean $$\lim_{x\to a}f(x)$$ = f(a)?
|
2017-04-29 01:44:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.73683100938797, "perplexity": 3031.400874646203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00233-ip-10-145-167-34.ec2.internal.warc.gz"}
|