url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://mathematica.stackexchange.com/questions/207668/polar-plotting-hankel-function-with-a-lot-of-terms?noredirect=1 | # Polar plotting Hankel Function with a lot of terms
I am trying to plot a normalized polar plot for the following function with different values of $$a$$
$$\left\lvert \sum_{n=1}^\infty i^n (2n+1) \frac {P_n^1(cos(\theta))}{\sqrt{\frac{\pi k a}{2}}[-H_{n+\frac{3}{2}}^2 (ka) + \frac{n+1}{ka}H_{n+\frac{1}{2}}^2(ka)]} \right\rvert^2$$
where $$P_n^1(cos(\theta))$$ is the associated Legendre polynomial and $$H_{n+\frac{3}{2}}^2 (ka)$$ and $$H_{n+\frac{1}{2}}^2(ka)$$ are Hankel function of 2nd kind. Here $$k=2\pi$$ and the value of $$a$$ vary as $$a = [.25, .05, 2, 10, 20]$$
I can get plots upto $$a=2$$ with $$n=100$$ but for $$a=10, 20$$ I am having difficulty plotting. I am running into issues where machine precision is lost and the norm for Hankel function becomes too big. This is my attempt below
k = 2 \[Pi] ;
Pr[a_, m_, \[Theta]_] :=
Abs[Sum[I^n (2 n + 1) LegendreP[n, 1, Cos[\[Theta]]]/(
Sqrt[(\[Pi] k a )/
2] (-HankelH2[n + 3/2, k a ] + (n + 1)/(k a)
HankelH2[n + 1/2, k a ])), {n, 1, m}]]^2;
ap05m = 2; normp05 = FindMaximum[Pr[.05, ap05m, \[Theta]], {\[Theta], 0, 2 \[Pi]}];
ap25m = 7 ; normp25 = FindMaximum[Pr[.25, ap25m, \[Theta]], {\[Theta], 0, 2 \[Pi]}];
a2m = 100; norm2 = FindMaximum[Pr[2.0, a2m, \[Theta]], {\[Theta], 0, 2 \[Pi]}];
a10m = 150; norm10 = FindMaximum[Pr[10.0, a10m, \[Theta]], {\[Theta], 0, 2 \[Pi]}];
a20m = 150; norm20 = FindMaximum[Pr[20.0, a20m, \[Theta]], {\[Theta], 0, 2 \[Pi]}];
PolarPlot[{Pr[.05, ap05m,\[Theta]]/normp05[[1]],
Pr[.25, ap25m, \[Theta]]/normp25[[1]],
Pr[2, a2m, \[Theta]]/norm2[[1]],
Pr[10, a10m, \[Theta]]/norm10[[1]],
Pr[20, a20m, \[Theta]]/norm20[[1]]}, {\[Theta], 0, 2 \[Pi]},
PolarAxes -> True, PlotRange -> Automatic,
PolarGridLines -> Automatic, PolarTicks -> {"Degrees", Automatic},
PolarAxesOrigin -> {0, 1}, PlotLegends -> "Expressions"]
The norm for $$a=10$$ and $$a=20$$ are
a10m = 150; norm10 = FindMaximum[Pr[10.0, a10m, \[Theta]], {\[Theta], 0, 2 \[Pi]}]
$${7.24673*10^{27}, {\theta -> 0.121351}}$$
a20m = 150; norm20 = FindMaximum[Pr[20.0, a20m, \[Theta]], {\[Theta], 0, 2 \[Pi]}]
$${1.11063*10^{79}, {\theta -> 0.0809369}}$$
These normalization factors are too big and as a result I don't see the graphs for $$a=10$$ and $$a=20$$
This is what the graph is supposed to look like given by the professor
This is what my current graph looks like
I have tried $$Chop[]$$ function, but it didn't work. I would appreciate any help in plotting the normalized graph for $$a = 10, 20$$ on the same graph with $$a = .05, .25, 2$$.
• The problem with your code is that you recompute the prefactor for each value of angle again and again. The idea would be to precompute the prefactors once and store them in the table. Or, if a minimal code modification is required you can use memoization option of MA reference.wolfram.com/language/tutorial/…. – yarchik Oct 11 '19 at 8:21
• @yarchik that's a great idea! I didn't know that! However, i still don't get the graphs for a = 10 and a = 20. – Rumman Oct 11 '19 at 21:50
I am running into issues where machine precision is lost and the norm for Hankel function becomes too big.
Exactly. So you already know the answer: Don't use machine precision but ensure your computations are numerically sound by giving Mathematica enough digits to work with. I believe this answer about controlling precision is a good start.
In general, you have to understand that as soon as you say, e.g. 0.25, you told Mathematica that this is in machine precision. As long as you stick to infinitely accurate numbers like 1/4, you can tell Mathematica how precise it should evaluate numbers in certain algorithms later.
That being said, in the following I use ListPolarPlot and create the list of values myself. The adaptive plotting algorithms of Mathematica usually include many points to ensure your curve is smooth and calculating with higher precision takes a lot of time. We can speed this up by using ParallelTable but it will still require some minutes. I use the definitions of k and Pr you have already given.
as = {5/100, 1/4, 2, 10, 20};
ams = {2, 7, 100, 150, 150};
Block[{$MaxExtraPrecision = 100}, norms = First@ FindMaximum[Pr[##, \[Theta]], {\[Theta], 0, 2 \[Pi]}, WorkingPrecision -> 30] & @@@ Transpose[{as, ams}] ] First, note that I converted all your numbers to rationals and only in the final call to FindMaximum, I ask Mathematica to use a working precision of 30. Also, note that during the computation, Mathematica needs more precision and that's why we set $MaxExtraPrecision. If you don't set it, Mathematica will tell you that this setting is too low in a warning message.
Now, we can create the list of radii. For each curve, one list of radii, and we do this computation in parallel as it takes some minutes. With ParallelEvaluate, we distribute the setting of $MaxExtraPrecision to the parallel kernels. ParallelEvaluate[$MaxExtraPrecision = 100];
data = ParallelTable[
Pr[#1, #2, \[Theta]]/#3, {\[Theta], 0, 2 Pi, Pi/80}] & @@@
Transpose[{as, ams, norms}];
As you can see, in above code blocks, I used @@@ and ## (or #1). Familiarize yourself how this works by looking at this example
f[##, {#2}] & @@@ {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
and checking the documentation. Don't just copy the code.
ListPolarPlot[data, Joined -> True,
PolarAxes -> True,
PlotRange -> Automatic,
PolarGridLines -> Automatic,
PolarTicks -> {"Degrees", Automatic},
PolarAxesOrigin -> {0, 1},
PlotLegends -> (StringTemplate["a = \[Lambda]"] /@ N[as]),
PlotStyle -> {Blue, Red, Green, Magenta, Black}]
• Thanks a lot for your response. I am going reiterate your explanation in my own words so that you can correct/check my understanding of your explanation. -Mathematica by default uses Machine Precision if the values are using decimal – Rumman Oct 15 '19 at 3:47
• - This was giving me precision issue. So you changed the list of radius values to "infinitely accurate precision" by using "/" and you controlled the precision later on in the calculation - I understand your notations. I played around with "@@@" and "##", "#1" Thanks for the example. That was helpful. - You used ParallelTable to evaluate the 5 sets of data – Rumman Oct 15 '19 at 4:02
• @Rumman Yes, exactly. There are different ways of controlling precision/accuracy (see 0.2520 or 0.2520 notation) but you can read about them in several posts here or in the documentation. Just look up Precision, Accuracy, N, SetAccuracy, ... and the standard options WorkingPrecision, AccuracyGoal`, and the links therein. – halirutan Oct 15 '19 at 6:45 | 2020-02-27 00:26:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3242565989494324, "perplexity": 1388.1072232135075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00177.warc.gz"} |
https://zbmath.org/?q=an:07006381 | # zbMATH — the first resource for mathematics
Dimension functions for spherical fibrations. (English) Zbl 1417.55004
It has been conjectured in [D. Benson and J. Carlson, Math. Z. 195, 221–238 (1987; Zbl 0593.20062)] that a finite group $$G$$ has a free action on a finite CW-complex $$X$$ with the homotopy type of a product of spheres $$\mathbb{S}^{n_1} \times \mathbb{S}^{n_2} \times \dots \times \mathbb{S}^{n_k}$$ with trivial action on homology if and only if the maximal rank of an elementary abelian $$p$$-group contained in $$G$$ is at most $$k$$. The case $$k=1$$ in the conjecture is known to be true according to [R. G. Swan, Ann. Math. (2) 72, 267–291 (1960; Zbl 0096.01701)]. The case $$k=2$$ has been proved by A. Adem and J. H. Smith [ibid. 154, No. 2, 407–435 (2001; Zbl 0992.55011)] for finite groups that do not involve $$\text{Qd}(p) = (\mathbb{Z}/p)^2 \rtimes \text{SL}_2 (\mathbb{Z}/p)$$ for any prime $$p >2$$.
The Euler class of a fibration is said to be $$p$$-effective if its restriction to elementary abelian $$p$$-subgroups of maximal rank is not nilpotent. Let $$X_{\widehat p}$$ be the Bousfield-Kan $$p$$-completion, and $$X^{hK}: = \text{Map}(EK, X)^K$$, the space of homotopy fixed points. Let $X[m] = (\underbrace{X \ast X \ast \cdots \ast X}_{m-\text{times}} )_{\widehat p}$ be the $$p$$-completion of the $$m$$-fold join. In this paper, the authors show that if $$P$$ is a finite $$p$$-group and $$X \simeq (\mathbb{S}^n )_{\widehat p}$$ is a $$P$$-space, then there is a positive integer $$m$$ such that $$(X[m])^{hP} \simeq (\mathbb{S}^r )_{\widehat p}$$ for some $$r$$, and that if $$p$$ is an odd prime, then there is no mod-$$p$$ spherical fibration $$\xi : E \rightarrow B \text{Qd}(p)$$ with a $$p$$-effective Euler class. They also show that if $$G = \text{Qd}(p)$$, then there is no finite free $$G$$-CW-complex $$X$$ homotopy equivalent to a product of two spheres $$\mathbb{S}^n \times\mathbb{S}^n$$.
##### MSC:
55M35 Finite groups of transformations in algebraic topology (including Smith theory) 55S10 Steenrod algebra 55S37 Classification of mappings in algebraic topology
Full Text:
##### References:
[1] 10.2307/3062102 · Zbl 0992.55011 [2] 10.1017/CBO9780511526275 [3] 10.1090/S0273-0979-1988-15697-2 · Zbl 0653.57025 [4] 10.1007/BF01166459 · Zbl 0593.20062 [5] 10.1007/978-3-540-38117-4 [6] 10.1007/978-94-017-0215-7 [7] 10.1007/BF01389361 · Zbl 0517.57020 [8] 10.1515/9783110858372.312 [9] 10.1007/BF02566633 · Zbl 0726.55011 [10] 10.2307/2946585 · Zbl 0801.55007 [11] 10.1007/978-3-0348-8707-6 [12] 10.4171/OWR/2007/45 [13] 10.1093/qmath/han021 · Zbl 1198.57022 [14] 10.2307/1970266 · Zbl 0108.03101 [15] 10.1007/BF02699494 · Zbl 0857.55011 [16] ; May, Simplicial objects in algebraic topology. Simplicial objects in algebraic topology. Van Nostrand Mathematical Studies, 11, (1967) · Zbl 0769.55001 [17] ; Oliver, Algebraic topology. Algebraic topology. Lecture Notes in Math., 763, 539, (1979) [18] ; Sullivan, Geometric topology : localization, periodicity and Galois symmetry. Geometric topology : localization, periodicity and Galois symmetry. K-Monographs in Mathematics, 8, (2005) · Zbl 1078.55001 [19] 10.2307/1970135 · Zbl 0096.01701
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-09-18 14:28:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364044427871704, "perplexity": 776.5389707770206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00376.warc.gz"} |
https://www.carmin.tv/en/video/scalar-and-mean-curvature-comparison-via-the-dirac-operator | 34 videos
34 videos
4 videos
4 videos
## 2022 - Francophone Computer Algebra Days - Journées nationales de calcul formel
00:00:00 / 00:00:00
## Scalar and mean curvature comparison via the Dirac operator
Appears in collection : Not Only Scalar Curvature Seminar
I will explain a spinorial approach towards a comparison and rigidity principle involving scalar and mean curvature for certain warped products over intervals. This is motivated by recent scalar curvature comparison questions of Gromov, in particular distance estimates under lower scalar curvature bounds on Riemannian bands $M \times [-1,1]$ and Cecchini's long neck principle. I will also exhibit applications of these techniques in the context of the positive mass theorem with arbitrary ends. This talk is based on joint work with Simone Cecchini.
• Date of recording 1/21/22
• Date of publication 2/3/22
• Institution IHES
• Language English
• Audience Researchers
• Format MP4
### Last related questions on MathOverflow
You have to connect your Carmin.tv account with mathoverflow to add question
## Register
• Bookmark videos
• Add videos to see later & | 2023-03-29 17:22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19024664163589478, "perplexity": 4511.762678179113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00236.warc.gz"} |
https://www.lmfdb.org/L/rational/4/1932%5E2 | ## Results (14 matches)
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin
4-1932e2-1.1-c0e2-0-0 $0.981$ $0.929$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$0.0, 0.0 0 1 0 0.395505 Artin representation 2.1932.12t18.c Modular form 1932.1.w.a 4-1932e2-1.1-c0e2-0-1 0.981 0.929 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $0.0, 0.0$ $0$ $1$ $0$ $0.475638$ Artin representation 2.1932.6t5.b Modular form 1932.1.w.b
4-1932e2-1.1-c0e2-0-2 $0.981$ $0.929$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$0.0, 0.0 0 1 0 0.517869 Modular form 1932.1.w.c 4-1932e2-1.1-c0e2-0-3 0.981 0.929 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $0.0, 0.0$ $0$ $1$ $0$ $1.15378$ Artin representation 2.1932.12t18.d Modular form 1932.1.w.d
4-1932e2-1.1-c1e2-0-0 $3.92$ $237.$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$1.0, 1.0 1 1 0 0.336487 Modular form 1932.2.q.c 4-1932e2-1.1-c1e2-0-1 3.92 237. 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $1.0, 1.0$ $1$ $1$ $0$ $0.351084$ Modular form 1932.2.a.c
4-1932e2-1.1-c1e2-0-2 $3.92$ $237.$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$1.0, 1.0 1 1 0 0.391334 Modular form 1932.2.q.a 4-1932e2-1.1-c1e2-0-3 3.92 237. 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $1.0, 1.0$ $1$ $1$ $0$ $0.874593$ Modular form 1932.2.q.d
4-1932e2-1.1-c1e2-0-4 $3.92$ $237.$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$1.0, 1.0 1 1 0 0.914667 Modular form 1932.2.a.h 4-1932e2-1.1-c1e2-0-5 3.92 237. 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $1.0, 1.0$ $1$ $1$ $0$ $0.961774$ Modular form 1932.2.q.b
4-1932e2-1.1-c1e2-0-6 $3.92$ $237.$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$1.0, 1.0 1 1 2 1.47619 Modular form 1932.2.a.d 4-1932e2-1.1-c1e2-0-7 3.92 237. 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $1.0, 1.0$ $1$ $1$ $2$ $1.50373$ Modular form 1932.2.a.e
4-1932e2-1.1-c1e2-0-8 $3.92$ $237.$ $4$ $2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2}$ 1.1 $$1.0, 1.0 1 1 2 1.65579 Modular form 1932.2.a.f 4-1932e2-1.1-c1e2-0-9 3.92 237. 4 2^{4} \cdot 3^{2} \cdot 7^{2} \cdot 23^{2} 1.1$$ $1.0, 1.0$ $1$ $1$ $2$ $1.71866$ Modular form 1932.2.a.g | 2022-12-06 18:26:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566526412963867, "perplexity": 496.8508374775493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00624.warc.gz"} |
http://idehpouyan.ir/%D8%AF%DB%8C%D8%AA%D8%A7%D8%A8%DB%8C%D8%B3/last-article-feed/8-surface-science.html | سال ۱۳۹۷ سال حمایت از کالای ایرانی گرامی باد
## Surface Science
1. Publication date: August 2018
Source:Surface Science, Volume 674
Author(s): Teppei Suzuki, Taka-aki Yano, Masahiko Hara, Toshikazu Ebisuzaki
Iron pyrite (FeS2) is the most abundant metal sulfide on Earth. Owing to its reactivity and catalytic activity, pyrite has been studied in various research fields such as surface science, geochemistry, and prebiotic chemistry. Importantly, native iron–sulfur clusters are typically coordinated by cysteinyl ligands of iron–sulfur proteins. In the present paper, we study the adsorption of l-cysteine and its oxidized dimer, l-cystine, on the FeS2 surface, using electronic structure calculations based density functional theory and Raman spectroscopy measurements. Our calculations suggest that sulfur-deficient surfaces play an important role in the adsorption of cysteine and cystine. In the thiol headgroup adsorption on the sulfur-vacancy site, dissociative adsorption is found to be energetically favorable compared with molecular adsorption. In addition, the calculations indicate that, in the cystine adsorption on the defective surface under vacuum conditions, the formation of the S–Fe bond is energetically favorable compared with molecular adsorption. Raman spectroscopic measurements suggest the formation of cystine molecules through the S–S bond on the pyrite surface in aqueous solution. Our results might have implications for chemical evolution at mineral surfaces on the early Earth and the origin of iron–sulfur proteins, which are believed to be one of the most ancient families of proteins.
### Graphical abstract
2. Publication date: Available online 27 April 2018
Source:Surface Science
Author(s): Andrew R. Alderwick, Andrew P. Jardine, William Allison, John Ellis
We use 2-D wavepacket calculations to examine the scattering of helium atoms from dynamic assemblies of surface adsorbates, and in particular to explore the validity of the widely used kinematic scattering approximation. The wavepacket calculations give exact results for quasi-elastic scattering that are closely analogous to time-of-flight (TOF) experiments and they are analysed as such. A scattering potential is chosen to represent 8 meV helium atoms scattering from sodium atoms adsorbed on a Cu(001) surface and the adsorbates in the model move according to an independent Langevin equation. The energy broadening in the quasi-elastic scattering is obtained as a function of parallel momentum transfer and compared with the corresponding results using the kinematic scattering approximation. Under most circumstances the kinematic approximation and the more accurate wavepacket method are in good agreement; however, there are cases where the two methods give different results. We relate these differences to pathological features in the scattering form-factor.
### Graphical abstract
3. Publication date: Available online 13 March 2018
Source:Surface Science
Author(s): Aleksandar Matković, Aydan Çiçek, Markus Kratzer, Benjamin Kaufmann, Anthony Thomas, Zhongrui Chen, Olivier Siri, Conrad Becker, Christian Teichert
Dihydro-tetraaza-acenes are promising candidates for future applications in organic electronics, since these molecules form crystals through an interplay between H-bonding, dipolar, and van der Waals interactions. As a result, densely packed $π − π$ structures – favorable for charge transport – are obtained, with exceptional stability under ambient conditions. This study investigates growth morphologies of dihydro-tetraaza-pentacene and dihydro-tetraaza-heptacene on vicinal c-plane sapphire. Submonolayers and thin films are grown using hot wall epitaxy, and the structures are investigated ex-situ by atomic force microscopy. Molecular arrangement, nucleation densities, sizes, shapes, and stability of the crystallites are analyzed as a function of the substrate temperature. The two molecular species were found to assume a different orientation of the molecules with respect to the substrate. An activation energy of (1.23 ± 0.12) eV was found for the nucleation of dihydro-tetraaza-heptacene islands (composed of upright standing molecules), while (1.16 ± 0.25) eV was obtained for dihydro-tetraaza-pentacene needles (composed of lying molecules). The observed disparity in the temperature dependent nucleation densities of the two molecular species is attributed to the different thermalization pathways of the impinging molecules.
### Graphical abstract
4. Publication date: February 2018
Source:Surface Science, Volume 668
Author(s): T.W. White, D.A. Duncan, S. Fortuna, Y.-L. Wang, B. Moreton, T.-L. Lee, P. Blowey, G. Costantini, D.P. Woodruff
The interaction of oxalic acid with the Cu(110) surface has been investigated by a combination of scanning tunnelling microscopy (STM), low energy electron diffraction (LEED), soft X-ray photoelectron spectroscopy (SXPS), near-edge X-ray absorption fine structure (NEXAFS) and scanned-energy mode photoelectron diffraction (PhD), and density functional theory (DFT). O 1s SXPS and O K-edge NEXAFS show that at high coverages a singly deprotonated monooxalate is formed with its molecular plane perpendicular to the surface and lying in the $[ 1 1 ¯ 0 ]$ azimuth, while at low coverage a doubly-deprotonated dioxalate is formed with its molecular plane parallel to the surface. STM, LEED and SXPS show the dioxalate to form a (3 × 2) ordered phase with a coverage of 1/6 ML. O 1s PhD modulation spectra for the monooxalate phase are found to be simulated by a geometry in which the carboxylate O atoms occupy near-atop sites on nearest-neighbour surface Cu atoms in $[ 1 1 ¯ 0 ]$ rows, with a CuO bondlength of 2.00 ± 0.04 Å. STM images of the (3 × 2) phase show some centred molecules attributed to adsorption on second-layer Cu atoms below missing [001] rows of surface Cu atoms, while DFT calculations show adsorption on a (3 × 2) missing row surface (with every third [001] Cu surface row removed) is favoured over adsorption on the unreconstructed surface. O 1s PhD data from dioxalate is best fitted by a structure similar to that found by DFT to have the lowest energy, although there are some significant differences in intramolecular bondlengths.
### Graphical abstract
5. Publication date: January 2018
Source:Surface Science, Volume 667
Author(s): Thorsten Wagner, Daniel Roman Fritz, Zdena Rudolfová, Peter Zeppenfeld
Controlling the orientation of organic molecules on surfaces is important in order to tune the physical properties of the organic thin films and, thereby, increase the performance of organic thin film devices. Here, we present a scanning tunneling microscopy (STM) and photoelectron emission microscopy (PEEM) study of the deposition of the organic dye pigment α-sexithiophene (α-6T) on the vicinal Ag(441) surface. In the presence of the steps on the Ag(441) surface, the α-6T molecules exclusively align parallel to the step edges oriented along the [1$1 ¯$0]-direction of the substrate. The STM results further reveal that the adsorption of the α-6T molecules is accompanied by various restructuring of the substrate surface: Initially, the molecules prefer the Ag(551) building blocks of the Ag(441) surface. The Ag(551) termination of the terraces is then changed to a predominately Ag(331) one upon completion of the first α-6T monolayer. When closing the two layer thick wetting layer, the original ratio of Ag(331) and Ag(551) building blocks ( ≈ 1:1) is recovered, but a phase separation into microfacets, which are composed either of Ag(331) or of Ag(551) building blocks, is found.
### Graphical abstract
6. Publication date: July 2017
Source:Surface Science, Volume 661
Author(s): Thorsten Wagner, Daniel Roman Fritz, Robert Zimmerleiter, Peter Zeppenfeld
Regularly stepped (vicinal) surfaces provide a convenient means to control the number of defects of a surface. They can easily be prepared by a slight miscut of a low index surface. In the case of an fcc($n n 1$) surface with small integer n, it is even expected that the large number of steps will dominate the surface properties. We are the first to study the Ag(441) surface with a combination of scanning tunneling microscopy (STM) and high resolution electron diffraction (SPA-LEED). The surface is found to consist of different building blocks, which can be either (331) or (551) microfacets. To unravel the actual morphology, we carried out simulations of the reciprocal space maps (RSMs) in the framework of the simple kinematic approximation.
### Highlights
7. Publication date: February 2017
Source:Surface Science, Volume 656
Author(s): Ryan Sharpe, Jon Counsell, Michael Bowker
The interaction of Au and Pd in bimetallic systems is important in a number of areas of technology, especially catalysis. In order to investigate the segregation behaviour in such systems, the interaction of Pd and Au was investigated by surface science methods. In two separate sets of experiments, Au was deposited onto a Pd(111) single crystal, and Pd and Au were sequentially deposited onto TiO2(110), all in ultra-high vacuum using metal vapour deposition. Heating Au on Pd/TiO2(110) to 773K resulted in the loss of the Au signal in the LEIS, whilst still remaining present in the XPS, due to segregation of Pd to the surface and the formation of a Au-Pd core-shell structure. It is likely that this is due to alloying of Au with the Pd and surface dominance of that alloy by Pd. The Au:Pd XPS peak area ratio is found to substantially decrease on annealing Au/Pd(111) above 773K, corresponding with a large increase in the CO sticking probability to that for clean Pd(111). This further indicates that Au diffuses into the bulk of Pd on annealing to temperatures above 773K. It therefore appears that Au prefers to be in the bulk in these systems, reflecting the exothermicity of alloy formation.
### Graphical abstract
8. Publication date: November 2016
Source:Surface Science, Volume 653
Author(s): Arunabhiram Chutia, Ian P. Silverwood, Matthew R. Farrow, David O. Scanlon, Peter P. Wells, Michael Bowker, Stewart F. Parker, C. Richard A. Catlow
We report a density functional theory study on the relative stability of formate species on Cu(h,k,l) low index surfaces using a range of exchange-correlation functionals. We find that these functionals predict similar geometries for the formate molecule adsorbed on the Cu surface. A comparison of the calculated vibrational transition energies of a perpendicular configuration of formate on Cu surface shows an excellent agreement with the experimental spectrum obtained from inelastic neutron spectroscopy. From the calculations on adsorption energy we find that formate is most stable on the Cu(110) surface as compared to Cu(111) and Cu(100) surfaces. Bader analysis shows that this feature could be related to the higher charge transfer from the Cu(110) surface and optimum charge density at the interfacial region due to bidirectional electron transfer between the formate and the Cu surface. Analysis of the partial density of states finds that in the –5.5eV to –4.0eV region, hybridization between O p and the non-axial Cu d yz and d xz orbitals takes place on the Cu(110) surface, which is energetically more favourable than on the other surfaces.
### Graphical abstract
9. Publication date: November 2016
Source:Surface Science, Volume 653
Author(s): T. Eelbo, V.I. Zdravkov, R. Wiesendanger
This report deals with the preparation of a clean Ta(110) surface, investigated by means of scanning tunneling microscopy/spectroscopy as well as by low-energy electron diffraction and Auger electron spectroscopy. The surface initially exhibits a surface reconstruction induced by oxygen contamination. This reconstruction can be removed by annealing at high temperatures under ultrahigh vacuum conditions. The reconstruction-free surface reveals a surface resonance at a bias voltage of about −500mV. The stages of the transformation are presented and discussed. In a next step, Fe islands were grown on top of Ta(110) and investigated subsequently. An intermixing regime was identified for annealing temperatures of (550–590)K.
### Graphical abstract
10. Publication date: November 2016
Source:Surface Science, Volume 653
The triple phase boundary (TPB), where the gas phase, Ni particles and the yttria-stabilised zirconia (YSZ) surface meet, plays a significant role in the performance of solid oxide fuel cells (SOFC). Indeed, the key reactions take place at the TPB, where molecules such as H2O, CO2 and CO interact and react. We have systematically studied the interaction of H2O, CO2 and CO with the dominant surfaces of four materials that are relevant to SOFC, i.e. ZrO2(111), Ni/ZrO2(111), YSZ(111) and Ni/YSZ(111) of cubic ZrO2 stabilized with 9% of yttria (Y2O3). The study employed spin polarized density functional theory (DFT), taking into account the long-range dispersion forces. We have investigated up to five initial adsorption sites for the three molecules and have identified the geometries and electronic structures of the most stable adsorption configurations. We have also analysed the vibrational modes of the three molecules in the gas phase and compared them with the adsorbed molecules. A decrease of the wavenumbers of the vibrational modes for the three adsorbed molecules was observed, confirming the influence of the surface on the molecules' intra-molecular bonds. These results are in line with the important role of Ni in this system, in particular for the CO adsorption and activation.
### Graphical abstract
11. Publication date: October 2016
Source:Surface Science, Volume 652
Author(s): D.T. Payne, Y. Zhang, C.L. Pang, H.H. Fielding, G. Thornton
Excited electrons and holes are crucial for redox reactions on metal oxide surfaces. However, precise details of this charge transfer process are not known. We report two-photon photoemission ( =3.23eV) measurements of rutile TiO2(110) as a function of exposure to water below room temperature. The two-photon resonance associated with bridging hydroxyls is enhanced following water exposure, reaching a maximum at a nominal coverage of one monolayer. Higher coverages attenuate the observed resonance. Ultraviolet photoemission spectroscopy ( =21.22eV) of the initial, band gap states shows little change up to one monolayer water coverage. It is likely that the enhancement arises from dissociation within the adsorbed water monolayer, although other mechanisms cannot be excluded.
### Graphical abstract
12. Publication date: October 2016
Source:Surface Science, Volume 652
Author(s): Krisztina Kocsis, Matthias Niedermaier, Johannes Bernardi, Thomas Berger, Oliver Diwald
We transformed vapor phase grown ZnO nanoparticle powders into aqueous ZnO nanoparticle dispersions and studied the impact of associated microstructure and interface property changes on their spectroscopic properties. With photoluminescence (PL) spectroscopy, we probed oxygen interstitials Oi 2 in the near surface region and tracked their specific PL emission response at hvEM =2.1eV during the controlled conversion of the solid–vacuum into the solid–liquid interface. While oxygen adsorption via the gas phase does affect the intensity of the PL emission bands, the O2 contact with ZnO nanoparticles across the solid–liquid interface does not. Moreover, we found that the near band edge emission feature at hvEM =3.2eV gains relative intensity with regard to the PL emission features in the visible light region. Searching for potential PL indicators that are specific to early stages of particle dissolution, we addressed for aqueous ZnO nanoparticle dispersions the effect of formic acid adsorption. In the absence of related spectroscopic features, we were able to consistently track ZnO nanoparticle dissolution and the concomitant formation of solvated Zinc formate species by means of PL and FT-IR spectroscopy, dynamic light scattering, and zeta potential measurements. For a more consistent and robust assessment of nanoparticle properties in different continuous phases, we discuss characterization challenges and potential pitfalls that arise upon replacing the solid–gas with the solid–liquid interface.
### Graphical abstract
13. Publication date: October 2016
Source:Surface Science, Volume 652
In this article, some fundamental topics related to the initial steps of organic film growth are reviewed. General conclusions will be drawn based on experimental results obtained for the film formation of oligophenylene and pentacene molecules on gold and mica substrates. Thin films were prepared via physical vapor deposition under ultrahigh-vacuum conditions and characterized in-situ mainly by thermal desorption spectroscopy, and ex-situ by X-ray diffraction and atomic force microscopy. In this short review article the following topics will be discussed: What are the necessary conditions to form island-like films which are either composed of flat-lying or of standing molecules? Does a wetting layer exist below and in between the islands? What is the reason behind the occasionally observed bimodal island size distribution? Can one describe the nucleation process with the diffusion-limited aggregation model? Do the impinging molecules directly adsorb on the surface or rather via a hot-precursor state? Finally, it will be described how the critical island size can be determined by an independent measurement of the deposition rate dependence of the island density and the capture-zone distribution via a universal relationship.
### Graphical abstract
14. Publication date: September 2016
Source:Surface Science, Volume 651
Author(s): Stefan Gerhold, Michele Riva, Bilge Yildiz, Michael Schmid, Ulrike Diebold
The first stages of homoepitaxial growth of the (4×1) reconstructed surface of SrTiO3(110) are probed by a combination of pulsed laser deposition (PLD) with in-situ reflection high energy electron diffraction (RHEED) and scanning tunneling microscopy (STM). Considerations of interfacing high-pressure PLD growth with ultra-high-vacuum surface characterization methods are discussed, and the experimental setup and procedures are described in detail. The relation between RHEED intensity oscillations and ideal layer-by-layer growth is confirmed by analysis of STM images acquired after deposition of sub-monolayer amounts of SrTiO3. For a quantitative agreement between RHEED and STM results one has to take into account two interfaces: the steps at the circumference of islands, as well as the borders between two different reconstruction phases on the islands themselves. Analysis of STM images acquired after one single laser shot reveals an exponential decrease of the island density with increasing substrate temperature. This behavior is also directly visible from the temperature dependence of the relaxation times of the RHEED intensity. Moreover, the aspect ratio of islands changes considerably with temperature. The growth mode depends on the laser pulse repetition rate, and can be tuned from predominantly layer-by-layer to the step-flow growth regime.
### Graphical abstract
15. Publication date: August 2016
Source:Surface Science, Volume 650
Author(s): Karl-Heinz Dostert, Casey P. O'Brien, Wei Liu, Wiebke Riedel, Aditya Savara, Alexandre Tkatchenko, Swetlana Schauermann, Hans-Joachim Freund
Understanding the interaction of α,β-unsaturated carbonyl compounds with late transition metals is a key prerequisite for rational design of new catalysts with desired selectivity towards C=C or C=O bond hydrogenation. The interaction of the α,β-unsaturated ketone isophorone and the saturated ketone TMCH (3,3,5-trimethylcyclohexanone) with Pd(111) was investigated in this study as a prototypical system. Infrared reflection–absorption spectroscopy (IRAS) and density functional theory calculations including van der Waals interactions (DFT+vdWsurf) were combined to form detailed assignments of IR vibrational modes in the range from 3000cm1 to 1000cm1 in order to obtain information on the binding of isophorone and TMCH to Pd(111) as well as to study the effect of co-adsorbed hydrogen. IRAS measurements were performed with deuterium-labeled (d 5-) isophorone, in addition to unlabeled isophorone and unlabeled TMCH. Experimentally observed IR absorption features and calculated vibrational frequencies indicate that isophorone and TMCH molecules in multilayers have a mostly unperturbed structure with random orientation. At sub-monolayer coverages, strong perturbation and preferred orientations of the adsorbates were found. At low coverage, isophorone interacts strongly with Pd(111) and adsorbs in a flat-lying geometry with the C=C and C=O bonds parallel, and a CH3 group perpendicular, to the surface. At intermediate sub-monolayer coverage, the C=C bond is strongly tilted, while the C=O bond remains flat-lying, which indicates a prominent perturbation of the conjugated π system. Pre-adsorbed hydrogen leads to significant changes in the adsorption geometry of isophorone, which suggests a weakening of its binding to Pd(111). At low coverage, the structure of the CH3 groups seems to be mostly unperturbed on the hydrogen pre-covered surface. With increasing coverage, a conservation of the in-plane geometry of the conjugated π system was observed in the presence of hydrogen. In contrast to isophorone, TMCH adsorbs in a strongly tilted geometry independent of the surface coverage. At low coverage, an adsorbate with a strongly distorted C=O bond is formed. With increasing exposure, species with a less perturbed C=O group appear.
### Graphical abstract
16. Publication date: August 2016
Source:Surface Science, Volume 650
Author(s): Hatem Altass, Albert F. Carley, Philip R. Davies, Robert J. Davies
The dissociative chemisorption of HCl on clean and oxidized Cu(100) surfaces has been investigated using x-ray photoelectron spectroscopy (XPS) and scanning tunneling microscopy (STM). Whereas the dissociation of HCl at the clean surface is limited to the formation of a (√2×2)-R45° Cl(a) monolayer, the presence of surface oxygen removes this barrier, leading to chlorine coverages up to twice that obtained at the clean surface. Additional features in the STM images that appear at these coverages are tentatively assigned to the nucleation of CuCl islands. The rate of reaction of the HCl was slightly higher on the oxidized surface but unaffected by the initial oxygen concentration or the availability of clean copper sites. Of the two distinct domains of adsorbed oxygen identified at room temperature on the Cu(100) surfaces, the (√2×2)-R45° structure reacts slightly faster with HCl than the missing row (√2×22)-R45° O(a) structure. The results address the first stages in the formation of a copper chloride and present an interesting comparison with the HCl/O(a) reaction at Cu(110) surfaces, where oxygen also increased the extent of HCl reactions. The results emphasize the importance of the exothermic reaction to form water in the HCl/O(a) reaction on copper.
### Graphical abstract
17. Publication date: June 2016
Source:Surface Science, Volume 648
Author(s): Saman Hosseinpour, Mattias Forslund, C. Magnus Johnson, Jinshan Pan, Christofer Leygraf
In this article results from earlier studies have been compiled in order to compare the protection efficiency of self-assembled monolayers (SAM) of alkanethiols for copper, zinc, and copper–zinc alloys exposed to accelerated indoor atmospheric corrosion conditions. The results are based on a combination of surface spectroscopy and microscopy techniques. The protection efficiency of investigated SAMs increases with chain length which is attributed to transport hindrance of the corrosion stimulators in the atmospheric environment, water, oxygen and formic acid, towards the copper surface. The transport hindrance is selective and results in different corrosion products on bare and on protected copper. Initially the molecular structure of SAMs on copper is well ordered, but the ordering is reduced with exposure time. Octadecanethiol (ODT), the longest alkanethiol investigated, protects copper significantly better than zinc, which may be attributed to the higher bond strength of Cu–S than of Zn–S. Despite these differences, the corrosion protection efficiency of ODT for the single phase Cu20Zn brass alloy is equally efficient as for copper, but significantly less for the heterogeneous double phase Cu40Zn brass alloy.
### Graphical abstract
18. Publication date: April 2016
Source:Surface Science, Volume 646
Author(s): M. Murphy, M.S. Walczak, H. Hussain, M.J. Acres, C.A. Muryn, A.G. Thomas, N. Silikas, R. Lindsay
Ex situ atomic force microscopy and x-ray photoelectron spectroscopy are employed to characterise the adsorption of calcium phosphate from an aqueous solution of CaCl2.H2O and KH2PO4 onto rutile-TiO2(110) and α-Al2O3(0001). Prior to immersion, the substrates underwent wet chemical preparation to produce well-defined surfaces. Calcium phosphate adsorption is observed on both rutile-TiO2(110) and α-Al2O3(0001), with atomic force microscopy images indicating island-type growth. In contrast to other studies on less well-defined TiO2 and Al2O3 substrates, the induction period for calcium phosphate nucleation appears to be comparable on these two surfaces.
### Graphical abstract
19. Publication date: April 2016
Source:Surface Science, Volume 646
Author(s): Jan Knudsen, Jesper N. Andersen, Joachim Schnadt
During the past one and a half decades ambient pressure x-ray photoelectron spectroscopy (APXPS) has grown to become a mature technique for the real-time investigation of both solid and liquid surfaces in the presence of a gas or vapour phase. APXPS has been or is being implemented at most major synchrotron radiation facilities and in quite a large number of home laboratories. While most APXPS instruments operate using a standard vacuum chamber as the sample environment, more recently new instruments have been developed which focus on the possibility of custom-designed sample environments with exchangeable ambient pressure cells (AP cells). A particular kind of AP cell solution has been driven by the development of the APXPS instrument for the SPECIES beamline of the MAX IV Laboratory: the solution makes use of a moveable AP cell which for APXPS measurements is docked to the electron energy analyser inside the ultrahigh vacuum instrument. Only the inner volume of the AP cell is filled with gas, while the surrounding vacuum chamber remains under vacuum conditions. The design enables the direct connection of UHV experiments to APXPS experiments, and the swift exchange of AP cells allows different custom-designed sample environments. Moreover, the AP cell design allows the gas-filled inner volume to remain small, which is highly beneficial for experiments in which fast gas exchange is required. Here we report on the design of several AP cells and use a number of cases to exemplify the utility of our approach.
### Graphical abstract
20. Publication date: January 2016
Source:Surface Science, Volume 643
Author(s): Yuri Suchorski, Günther Rupprechter
In the present contribution we present an overview of our recent studies using the “kinetics by imaging” approach for CO oxidation on heterogeneous model systems. The method is based on the correlation of the PEEM image intensity with catalytic activity: scaled down to the μm-sized surface regions, such correlation allows simultaneous local kinetic measurements on differently oriented individual domains of a polycrystalline metal-foil, including the construction of local kinetic phase diagrams. This allows spatially- and component-resolved kinetic studies and, e.g., a direct comparison of inherent catalytic properties of Pt(hkl)- and Pd(hkl)-domains or supported μm-sized Pd-powder agglomerates, studies of the local catalytic ignition and the role of defects and grain boundaries in the local reaction kinetics.
### اطلاعات تماس
نشانی: اصفهان - منطقه صنعتی دولت آباد - خیابان بانک صادرات (خیابان 9 شیخ بهایی ) - فرعی سوم ایمیل : idehpouyan@gmail.com ما را در اینستاگرام دنبال کنید موبایل: 09133070453 سلیمی - 09133683569 انصاری پور دسترسی از طریق گوگل مپ | 2018-06-18 22:37:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4815410077571869, "perplexity": 5234.518072765457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861456.51/warc/CC-MAIN-20180618222556-20180619002556-00300.warc.gz"} |
http://mathhelpforum.com/geometry/119745-right-triangle-circumscribed-circle-proof-print.html | # Right Triangle Circumscribed Circle Proof
• December 10th 2009, 09:54 AM
ReneePatt
Right Triangle Circumscribed Circle Proof
Prove that the hypotenuse of a Euclidean right triangle is a diameter of the circumscribed circle.
Given theorem: Let $\triangle{ABC}$ be a triangle and let M be the midpoint of segment AB. If $\angle{ACB}$ is a right angle, then AM = MC.
I've drawn a diagram to help me but don't know where to begin on the proof.
Attachment 14393
• December 11th 2009, 07:51 AM
ReneePatt
I think I've got it
I think I finally figured out how to do this.
No help needed - THANKS!!!
• December 11th 2009, 08:02 AM
Amer
Quote:
Originally Posted by ReneePatt
Prove that the hypotenuse of a Euclidean right triangle is a diameter of the circumscribed circle.
Given theorem: Let $\triangle{ABC}$ be a triangle and let M be the midpoint of segment AB. If $\angle{ACB}$ is a right angle, then AM = MC.
I've drawn a diagram to help me but don't know where to begin on the proof.
Attachment 14393
there is a theorem that said
the circum angle established on the diameter equal 90 you can use this theorem
and the converse of it is true the angle established on the diameter from the circum of the circle equal 90 | 2015-05-05 16:39:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6292241215705872, "perplexity": 453.14530010006354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456823285.58/warc/CC-MAIN-20150501050703-00074-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/379651/what-does-it-mean-to-go-from-a-co-variant-vector-to-a-contravariant-vector | # What does it mean to go from a co-variant vector to a contravariant vector?
In most presentations of general-relativity I see the following statement,
We can change from a covariant vector to a contravariant vector by using the metric as follows, ${ A }^{ \mu }={ g }^{ \mu \nu }{ A }_{ \nu }$
My questions are,
1. What is the need to do this particular change in relativity?
2. The covariant components represent the components of a vector the contravariant components represent the components of a dual-vector, for finite dimensional vector spaces the two spaces are isomorphic. What is the significance of representing a quantity in contravariant or convariant forms? Is the need purely mathematical?
• Related: physics.stackexchange.com/q/105347/2451 and links therein. – Qmechanic Jan 13 '18 at 6:22
• The first answer in the link there says the samething the I mean to ask, the author says that they are isomorphic which does not mean they are the same. – Abhikumbale Jan 13 '18 at 6:40
• When the metric is fixed and constant, we get the illusion that there is an automatic, obvious identification of vectors and covectors, and also that things like dot products are just obviously right when expressed in a certain way. That isn't so, and nothing works right in relativity until you recognize that those manipulations depend on the metric and don't work the same way when the metric varies. When you take a dot product in freshman physics, you are actually converting a vector into a covector. You just don't realize you're doing it, because the metric happens to be diag(1,1,1). – Ben Crowell Jan 13 '18 at 19:37
It matters of two different concepts. Given a manifold, a vector is a geometric object associated to each point in the manifold. It can be decomposed into components with respect to a set of basis vectors.
$A = A^\mu \hat e_{(\mu)}$
where:
$\mu = 0, 1, 2, 3$
$A$ vector
$A^\mu$ contravariant components
$\hat e_{(\mu)}$ basis vectors
The geometric object is a reality independently of the coordinate system. A characterization is given by its square.
$A^2 = A \cdot A = A^\mu \hat e_{(\mu)} \cdot A^\nu \hat e_{(\nu)} = \hat e_{(\mu)} \cdot \hat e_{(\nu)} A^\mu A^\nu = g_{\mu\nu} A^\mu A^\nu$
where:
$\cdot$ scalar (dot) product
$g_{\mu\nu} = \hat e_{(\mu)} \cdot \hat e_{(\nu)}$ metric tensor
The square can also be written as
$A^2 = A_\mu A^\mu$
where:
$A_\mu = g_{\mu\nu} A^\nu$
As per above, we can define the dual vector.
$\tilde A = A_\mu \hat \theta^{(\mu)}$
where:
$\tilde A$ dual vector
$A_\mu$ covariant components
$\hat \theta^{(\mu)}$ basis dual vectors
By demanding
$\hat \theta^{(\mu)} (\hat e_{(\nu)}) = \delta^\mu_\nu$
where:
$\delta^\mu_\nu$ Kronecker delta
we can write the action of the dual vector on the vector as
$\tilde A (A) = A_\mu \hat \theta^{(\mu)} (A^\nu \hat e_{(\nu)}) = A_\mu A^\nu \hat \theta^{(\mu)} (\hat e_{(\nu)}) = A_\mu A^\nu \delta^\mu_\nu = A_\mu A^\mu$
Hence, a dual vector is a linear map from the vector space to the real numbers.
By defining the inverse metric tensor as
$g^{\mu\lambda} g_{\lambda\nu} = \delta^\mu_\nu$
where:
$g^{\mu\nu}$ inverse metric tensor
we have also
$A^\mu = g^{\mu\nu} A_\nu$
• This is just definitions – Bellem Jan 14 '18 at 12:56
• By contracting a vector with its dual you get a scalar, the norm of the vector (its square in reality), which is an invariant. If the vector is given by the coordinates you get the squared distance in spacetime, which is fundamental in both special and general relativity. That is why you need both contravariant and covariant components, however the latter have a different meaning as they define a linear application. – Michele Grosso Jan 14 '18 at 17:08
• Square distance? In gr you don't get square dinstance by squaring a vector, you can do that obly in sr. Then you can obtain the same thing through the definition of the pairing product, which gives you exactly the same results. You simply want to stress the importamce of having a metric manifold, which I agree with you, but the question was different. Of course if you want to have a metric manifold raising and lowering indices are a consequence... – Bellem Jan 14 '18 at 18:22
1. Because you need to identify vectors with covectors and that is possible only through the metric. If you think about components: you have upper and lower indices which reflect the way in which a tensor transforms under a change of coordinates. The position of indices reflects only that, thus I want something which allows me to indentify tensor with different disposition of indices.
2. The difference is that if you lower an upper index you change the quantities it contains by the metric. If you have the momentum four-vector $P^\mu=(E, \textbf{p})$ and you lower the index, in particular, $P_o$ is not the energy anymore, since you get $P_o=g_{o\mu}P^\mu$.
This is just a rough treating though, cheers!
• Vectors and co-vectors can be identified without the metric. – Abhikumbale Jan 13 '18 at 17:52
• I mean identify vectors with covectors of course... Now I edit – Bellem Jan 13 '18 at 18:07
• The point is that the position of indices gives you only information about the way under a tensor transforms and no more. So there should be something which allows me to move them... – Bellem Jan 13 '18 at 18:12 | 2019-10-15 08:57:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800463676452637, "perplexity": 410.3478232798872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657949.34/warc/CC-MAIN-20191015082202-20191015105702-00470.warc.gz"} |
https://myassignments-help.com/2022/11/03/linear-regression-analysis-dai-xie-stat5110/ | # 统计代写|线性回归分析代写linear regression analysis代考|STAT5110
## 统计代写|线性回归分析代写linear regression analysis代考|Introducing Further Explanatory Variables
If we wish to introduce further explanatory variables into a less-than-full-rank model, we can, once again, reduce the model to one of full rank. As in Section 3.7, we see what happens when we add $\mathbf{Z} \gamma$ to our model $\mathbf{X} \boldsymbol{\beta}$. It makes sense to assume that $\mathbf{Z}$ has full column rank and that the columns of $\mathbf{Z}$ are linearly independent of the columns of $\mathbf{X}$. Using the full-rank model
$$\mathbf{Y}=\mathbf{X}_1 \alpha+\mathbf{Z} \gamma+\varepsilon$$
where $\mathbf{X}_1$ is $n \times r$ of rank $r$, we find that Theorem $3.6$ (ii), (iii), and (iv) of Section 3.7.1 still hold. To see this, one simply works through the same steps of the theorem, but replacing $\mathbf{X}$ by $\mathbf{X}_1, \boldsymbol{\beta}$ by $\alpha$, and $\mathbf{R}$ by $\mathbf{I}_n-\mathbf{P}$, where $\mathbf{P}=\mathbf{X}_1\left(\mathbf{X}_1^{\prime} \mathbf{X}_1\right)^{-1} \mathbf{X}_1$ is the unique projection matrix projecting onto $\mathcal{C}(\mathbf{X})$.
Referring to Section $3.8$, suppose that we have a set of linear restrictions $\mathrm{a}_i^{\prime} \beta=$ $0(i=1,2 \ldots, q)$, or in matrix form, $\mathbf{A} \beta=0$. Then a realistic assumption is that these constraints are all estimable. This implies that $\mathbf{a}_i^{\prime}=\mathbf{m}_i^{\prime} \mathbf{X}$ for some $\mathbf{m}_i$, or $\mathbf{A}=\mathbf{M X}$, where $\mathbf{M}$ is $q \times n$ of rank $q$ [as $q=\operatorname{rank}(\mathbf{A}) \leq \operatorname{rank}(\mathbf{M})$ by A.2.1]. Since $\mathbf{A} \beta=\mathbf{M X} \beta=\mathbf{M} \theta$, we therefore find the restricted least squares estimate of $\theta$ by minimizing $|\mathbf{Y}-\theta|^2$ subject to $\theta \in \mathcal{C}(\mathbf{X})=\Omega$ and $\mathbf{M} \theta=0$, that is, subject to
$$\boldsymbol{\theta} \in \mathcal{N}(\mathbf{M}) \cap \Omega \quad(=\omega, \text { say }) .$$
If $\mathbf{P}{\Omega}$ and $\mathbf{P}\omega$ are the projection matrices projecting onto $\Omega$ and $\omega$, respectively, then we want to find $\hat{\theta}\omega=\mathbf{P}\omega \mathbf{Y}$. Now, from B.3.2 and B.3.3,
$$\mathbf{P}{\Omega}-\mathbf{P}\omega=\mathbf{P}{\omega^{\perp} \cap \Omega},$$ where $\omega^{\perp} \cap \Omega=\mathcal{C}(\mathbf{B})$ and $\mathbf{B}=\mathbf{P}{\Omega} \mathbf{M}^{\prime}$. Thus
$\hat{\boldsymbol{\theta}}\omega=\mathbf{P}\omega \mathbf{Y}$
$=\mathbf{P}{\Omega} \mathbf{Y}-\mathbf{P}{\omega^{\perp} \cap \Omega} \mathbf{Y}$
$=\hat{\theta}_{\Omega}-\mathbf{B}\left(\mathbf{B}^{\prime} \mathbf{B}\right)^{-} \mathbf{B}^{\prime} \mathbf{Y}$
## 统计代写|线性回归分析代写linear regression analysis代考|GENERALIZED LEAST SQUARES
Having developed a least squares theory for the full-rank model $\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\varepsilon$, where $E[\varepsilon]=0$ and $\operatorname{Var}[\varepsilon]=\sigma^2 \mathbf{I}_n$, we now consider what modifications are necessary if we allow the $\varepsilon_i$ to be correlated. In particular, we assume that $\operatorname{Var}[\varepsilon]=\sigma^2 \mathbf{V}$, where $\mathbf{V}$ is a known $n \times n$ positive-definite matrix.
Since $\mathbf{V}$ is positive-definite, there exists an $n \times n$ nonsingular matrix $\mathbf{K}$ such that $\mathbf{V}=\mathbf{K K}^{\prime}$ (A.4.2). Therefore, setting $\mathbf{Z}=\mathbf{K}^{-1} \mathbf{Y}, \mathbf{B}=\mathbf{K}^{-1} \mathbf{X}$, and a $\eta=\mathbf{K}^{-1} \varepsilon$, we have the model $\mathbf{Z}=\mathbf{B} \beta+\eta$, where $\mathbf{B}$ is $n \times p$ of $\operatorname{rank} p$ (A.2.2).
Also, $E[\eta]=0$ and
$\operatorname{Var}[\eta]=\operatorname{Var}\left[\mathbf{K}^{-1} \varepsilon\right]=\mathbf{K}^{-1} \operatorname{Var}[\varepsilon] \mathbf{K}^{-1^{\prime}}=\sigma^2 \mathbf{K}^{-1} \mathbf{K K}^{\prime} \mathbf{K}^{\prime-1}=\sigma^2 \mathbf{I}_n$.
Minimizing $\boldsymbol{\eta}^{\prime} \boldsymbol{\eta}$ with respect to $\beta$, and using the theory of Section 3.1, the least squares estimate of $\beta$ for this transformed model is
\begin{aligned} \boldsymbol{\beta}^* &=\left(\mathbf{B}^{\prime} \mathbf{B}\right)^{-1} \mathbf{B}^{\prime} \mathbf{Z} \ &=\left(\mathbf{X}^{\prime}\left(\mathbf{K K}^{\prime}\right)^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime}\left(\mathbf{K K}^{\prime}\right)^{-1} \mathbf{Y} \ &=\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{Y} \end{aligned}
with expected value
$$E\left[\boldsymbol{\beta}^\right]=\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X} \boldsymbol{\beta}=\boldsymbol{\beta},$$ dispersion matrix \begin{aligned} \operatorname{Var}\left[\beta^\right] &=\sigma^2\left(\mathbf{B}^{\prime} \mathbf{B}\right)^{-1} \ &=\sigma^2\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1}, \end{aligned}
and residual sum of squares
\begin{aligned} \mathbf{f}^{\prime} \mathbf{f} &=\left(\mathbf{Z}-\mathbf{B} \boldsymbol{\beta}^\right)^{\prime}\left(\mathbf{Z}-\mathbf{B} \boldsymbol{\beta}^\right) \ &=\left(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta}^\right)^{\prime}\left(\mathbf{K} \mathbf{K}^{\prime}\right)^{-1}\left(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta}^\right) \ &=\left(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta}^\right)^{\prime} \mathbf{V}^{-1}\left(\mathbf{Y}-\mathbf{X} \boldsymbol{\beta}^\right) \end{aligned}
# 线性回归分析代考
## 统计代写|线性回归分析代写linear regression analysis代考|Introducing Further Explanatory Variables
$$\mathbf{Y}=\mathbf{X}1 \alpha+\mathbf{Z} \gamma+\varepsilon$$ 在哪里 $\mathbf{X}_1$ 是 $n \times r$ 等级 $r$ ,我们发现定理3.6第 $3.7 .1$ 节的 (ii)、 (iii) 和 (iv) 仍然有效。要看到这一点,只需 完成定理的相同步骤,但替换 $\mathbf{X}$ 经过 $\mathbf{X}_1, \boldsymbol{\beta}$ 经过 $\alpha$ ,和 $\mathbf{R}$ 经过 $\mathbf{I}_n-\mathbf{P}$ ,在哪里 $\mathbf{P}=\mathbf{X}_1\left(\mathbf{X}_1^{\prime} \mathbf{X}_1\right)^{-1} \mathbf{X}_1$ 是投影到的唯一投影矩阵 $\mathcal{C}(\mathbf{X})$. 参考部分 $3.8$ ,假设我们有一组线性限制 $\mathrm{a}_i^{\prime} \beta=0(i=1,2 \ldots, q)$ ,或以矩阵形式, $\mathbf{A} \beta=0$. 然后一个 现实的假设是这些约束都是可估计的。这意味着 $\mathbf{a}_i^{\prime}=\mathbf{m}_i^{\prime} \mathbf{X}$ 对于一些 $\mathbf{m}_i$ ,或者 $\mathbf{A}=\mathbf{M X}$ ,在哪里 $\mathbf{M}$ 是 $q \times n$ 等级 $q$ [作为 $q=\operatorname{rank}(\mathbf{A}) \leq \operatorname{rank}(\mathbf{M})$ 由 A.2.1]。自从 $\mathbf{A} \beta=\mathbf{M X} \beta=\mathbf{M} \theta$ ,因此我们找到限 制最小二乘估计 $\theta$ 通过最小化 $|\mathbf{Y}-\theta|^2$ 受制于 $\theta \in \mathcal{C}(\mathbf{X})=\Omega$ 和 $\mathbf{M} \theta=0$ ,也就是说,服从 $\boldsymbol{\theta} \in \mathcal{N}(\mathbf{M}) \cap \Omega \quad(=\omega$, say $)$ 如果 $\mathbf{P} \Omega$ 和 $\mathbf{P} \omega$ 是投影到的投影矩阵 $\Omega$ 和 $\omega$ ,分别,那么我们要找到 $\hat{\theta} \omega=\mathbf{P} \omega \mathbf{Y}$. 现在,从 B.3.2 和 B.3.3, $$\mathbf{P} \Omega-\mathbf{P} \omega=\mathbf{P} \omega^{\perp} \cap \Omega,$$ 在哪里 $\omega^{\perp} \cap \Omega=\mathcal{C}(\mathbf{B})$ 和 $\mathbf{B}=\mathbf{P} \Omega \mathbf{M}^{\prime}$. 因此 $$\hat{\boldsymbol{\theta}} \omega=\mathbf{P} \omega \mathbf{Y}$$ $=\mathbf{P} \Omega \mathbf{Y}-\mathbf{P} \omega^{\perp} \cap \Omega \mathbf{Y}$ $$=\hat{\theta}{\Omega}-\mathbf{B}\left(\mathbf{B}^{\prime} \mathbf{B}\right)^{-} \mathbf{B}^{\prime} \mathbf{Y}$$
## 统计代写|线性回归分析代写linear regression analysis代考|GENERALIZED LEAST SQUARES
$\operatorname{Var}[\eta]=\operatorname{Var}\left[\mathbf{K}^{-1} \varepsilon\right]=\mathbf{K}^{-1} \operatorname{Var}[\varepsilon] \mathbf{K}^{-1^{\prime}}=\sigma^2 \mathbf{K}^{-1} \mathbf{K}^{\prime} \mathbf{K}^{\prime-1}=\sigma^2 \mathbf{I}_n$.
$$\boldsymbol{\beta}^*=\left(\mathbf{B}^{\prime} \mathbf{B}\right)^{-1} \mathbf{B}^{\prime} \mathbf{Z} \quad=\left(\mathbf{X}^{\prime}\left(\mathbf{K} \mathbf{K}^{\prime}\right)^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime}\left(\mathbf{K} \mathbf{K}^{\prime}\right)^{-1} \mathbf{Y}=\left(\mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{V}^{-1} \mathbf{Y}$$
myassignments-help数学代考价格说明
1、客户需提供物理代考的网址,相关账户,以及课程名称,Textbook等相关资料~客服会根据作业数量和持续时间给您定价~使收费透明,让您清楚的知道您的钱花在什么地方。
2、数学代写一般每篇报价约为600—1000rmb,费用根据持续时间、周作业量、成绩要求有所浮动(持续时间越长约便宜、周作业量越多约贵、成绩要求越高越贵),报价后价格觉得合适,可以先付一周的款,我们帮你试做,满意后再继续,遇到Fail全额退款。
3、myassignments-help公司所有MATH作业代写服务支持付半款,全款,周付款,周付款一方面方便大家查阅自己的分数,一方面也方便大家资金周转,注意:每周固定周一时先预付下周的定金,不付定金不予继续做。物理代写一次性付清打9.5折。
Math作业代写、数学代写常见问题
myassignments-help擅长领域包含但不是全部: | 2022-11-28 19:15:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9421315789222717, "perplexity": 2073.602967324356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00735.warc.gz"} |
https://www.vcalc.com/wiki/TylerJones/Transmitter+Antenna+Gain+Factor+%28Monostatic+SAR%29 | # Transmitter Antenna Gain Factor (Monostatic SAR)
Not Reviewed
G_A =
Tags:
Rating
ID
TylerJones.Transmitter Antenna Gain Factor (Monostatic SAR)
UUID
60680922-2d3b-11e6-9770-bc764e2038f2
This equation calculates the transmitter antenna gain factor1 for a monostatic synthetic aperture radar. This equation is most helpful in the context of calculating the Signal-to-Noise of a Synthetic Aperture Radar.
# References
1. ^ Performance Limits for Synthetic Aperture Radar - Second Edition. Sandia National Laboratories, Albuquerque, NM. Printed February 2006.
This equation, Transmitter Antenna Gain Factor (Monostatic SAR), is listed in 1 Collection. | 2018-12-19 15:56:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052143096923828, "perplexity": 10509.282813704634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832559.95/warc/CC-MAIN-20181219151124-20181219173124-00526.warc.gz"} |
https://www.hyperphronesis.com/2018/02/bias-in-statistical-judgment.html?showComment=1648985946716 | ## Bias in Performance Evaluation
Suppose you are an employer. You are looking to fill a position and you want the best person for the job. To do this, you take a pool of applicants, and for each one, you test them N times on some metric X. From these N tests, you will develop some idea of what each applicant's performance will look like, and based on that, you will hire the applicant or applicants with the best probable performance. However, you know that each applicant comes from one of two populations which you believe to have different statistical characteristics, and you know immediately which population each applicant comes from.
We will use the following model: We will assume that the population from which the applicants are taken is made up of two sub-populations A and B. These two sub-populations have different distributions of individual mean performance that are both Gaussian. That is, an individual drawn from sub-population A will have an expected performance that is normally distributed with mean $\mu_A$ and variance $\sigma_A^2$. Likewise, an individual drawn from sub-population B will have an expected performance that is normally distributed with mean $\mu_B$ and variance $\sigma_B^2$. Individual performances are then taken to be normally distributed with the individual mean and individual variance $\sigma_i^2$.
Suppose we take a given applicant who we know comes from sub-population B. We sample her performance N times and get performances of $\{x_1,x_2,x_3,...,x_N\}=\textbf{x}$. We form the following complete pdf for the (N+1) variables of the individual mean and the N performances: $f_{\mu_i,\textbf{x}|B}(\mu_i,x_1,x_2,...,x_N)=\frac{1}{\sqrt{2\pi}^{N+1}}\frac{1}{\sigma_B \sigma_i^N} \exp\left ({-\frac{(\mu_i-\mu_B)^2}{2\sigma_B^2}} \right ) \prod_{k=1}^N\exp\left ({-\frac{(x_k-\mu_i)^2}{2\sigma_i^2}} \right )$ It follows that the distribution conditioned on the test results is proportional to: $f_{\mu_i|,\textbf{x},B}(\mu_i)\propto \exp\left ({-\frac{(\mu_i-\mu_B)^2}{2\sigma_B^2}} \right ) \prod_{k=1}^N\exp\left ({-\frac{(x_k-\mu_i)^2}{2\sigma_i^2}} \right )$ By normalizing we find that this implies that the individual mean, given that it comes from sub-population B and given the N test results, is normally distributed with variance $\sigma_{\tilde{\mu_i}}^2=\left ( {\frac{1}{\sigma_B^2}+\frac{N}{\sigma_i^2}} \right )^{-1}$ and mean $\tilde{\mu_i}=\frac{\frac{\mu_B}{\sigma_B^2}+\frac{1}{\sigma_i^2}\sum_{k=1}^{N}x_k}{\frac{1}{\sigma_B^2}+\frac{N}{\sigma_i^2}} =\frac{\frac{\mu_B}{\sigma_B^2}+\frac{N}{\sigma_i^2}\bar{\textbf{x}}}{\frac{1}{\sigma_B^2}+\frac{N}{\sigma_i^2}}$ We will assume that this mean and variance are used as estimators to predict performance. Note that, in the limit of large N, $\sigma_{\tilde{\mu_i}}^2\rightarrow \sigma_i^2/N$ and $\tilde{\mu_i}\rightarrow \bar{\textbf{x}}\rightarrow \mu_i$, as expected.
Suppose we assume sub-populations A and B have the same variance $\sigma_{AB}^2$, but $\mu_A>\mu_B$. then we can note the following few implications:
• The belief about the sub-population the applicant comes from acts effectively as another performance sample of weight $\sigma_i^2/\sigma_{AB}^2$.
• If applicant 1 comes from sub-population A and applicant 2 comes from sub-population B, even if they perform identically in their samples, applicant 1 would nevertheless still be preferred.
• The more samples are taken, the less the sub-population the applicant comes from matters.
• The larger the difference in means between the sub-populations is assumed to be, the better the lesser-viewed applicant will need to perform in order to be selected over the better-viewed applicant.
• Suppose we compare $\tilde{\mu_i}$ to $\bar{\textbf{x}}$. Our selection criteria will simply be if the performance predictor is above $x_m$. We want to find the probability of being from a given sub-population given that the applicant was selected by each predictor. For the sub-population-indifferent predictor: $P(A|\bar{\textbf{x}}\geq x_m)=\frac{P(\bar{\textbf{x}}\geq x_m|A)P(A)}{P(\bar{\textbf{x}}\geq x_m|A)P(A)+P(\bar{\textbf{x}}\geq x_m|B)P(B)} \\ \\ P(A|\bar{\textbf{x}}\geq x_m)= \frac{P(A)Q\left (\frac{x_m-\mu_A}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right )} {P(A)Q\left (\frac{x_m-\mu_A}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right ) + P(B)Q\left (\frac{x_m-\mu_B}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right )}$ Where $Q(z)=\int_{z}^{\infty}\frac{e^{-s^2/2}}{\sqrt{2\pi}}ds\approx \frac{e^{-z^2/2}}{z\sqrt{2\pi}}$ For the sub-population-sensitive predictor, we first note that $\tilde{\mu_i} \geq x_m \Rightarrow \bar{\textbf{x}}\geq x_m+(x_m-\mu_A)\frac{\sigma_i^2}{N\sigma_A^2}=x_m'$ Which then implies $P(A|\tilde{\mu_i}\geq x_m)=\frac{P(\tilde{\mu_i}\geq x_m|A)P(A)}{P(\tilde{\mu_i}\geq x_m|A)P(A)+P(\tilde{\mu_i}\geq x_m|B)P(B)} \\ \\ P(A|\tilde{\mu_i}\geq x_m)= \frac{P(A)Q\left (\frac{x_m'-\mu_A}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right )} {P(A)Q\left (\frac{x_m'-\mu_A}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right ) + P(B)Q\left (\frac{x_m'-\mu_B}{\sqrt{\sigma_{AB}^2+\sigma_i^2/N}} \right )}$ As $x_m > \mu_A$ and thus $x_m' > x_m$, it is easy to see that $P(A) < P(A|\bar{\textbf{x}}\geq x_m) < P(A|\tilde{\mu_i}\geq x_m)$. Thus the sensitivity further biases the selection towards sub-population A. We can call $\bar{\textbf{x}}$ the meritocratic predictor and $\tilde{\mu_i}$ the semi-meritocratic predictor.
## Some Sociological Implications
Though the above effects may, in theory, be small, their effects in practice may not be. Humans are not perfectly rational and are not perfect statistical computers. The above is meant to give motivation for taking seriously effects that are often much more pronounced. If there is a perceived difference in means, there is likely a tendency to exaggerate it, to think that the difference in means should be visible, and hence that the two distributions should be statistically separable. Likewise, population variances are often perceived as narrower than they really are, leading to further amplification of the biasing effect. Moreover, the parameter estimations are not based simply on objective observation of the sub-populations, but also if not mainly on subjective, sociological, psychological, and cultural factors. As high confidence in one's initial estimates makes one less likely to take more samples, the employer's judgment may rest heavily on subjective biases. Given this, if the employer's objective is simply to hire the best candidates, she should simply use the meritocratic predictor (or perhaps at least invest some time into getting accurate sub-population parameters).
However, it is worth noting some effects on the candidates themselves. As a rule, the candidates are not subjected to this bias just in this bid for employment alone, but rather serially and repeatedly, in bid after bid. This may have any of the following effects: driving applicants toward jobs where they will be more favored (or less dis-favored) by the bias; affecting the applicant's self-evaluations, making them think their personal mean is closer to the broadly perceived sub-population mean; normalizing the broadly perceived sub-population mean, with an implicit devaluation of deviation from it. Also, we can note the following well-known problem: personal means tend to increase in challenging jobs, meaning that the unfavorable bias will perpetually stand in the way of the development of the negatively biased candidate, which then only serves to further feed into the bias. Both advantages and disadvantages tend to widen, making this a subtle case of "the rich get richer and the poor get poorer".
The moral of all this can be summarized as: the semi-meritocratic predictor should be avoided if possible as it is very difficult to implement effectively and has a tendency to introduce a host of detrimental effects. Fortunately, the meritocratic predictor loses only a small amount by way of informative-ness, and avoids the drawbacks mentioned above. Care should then be taken to ensure that the meritocratic selection system is implemented as carefully as can be managed to preclude the introduction of biasing effects. one way of washing out the effects of biasing in general is simply to give the applicants many opportunities to demonstrate their abilities.
#### 1 comment:
1. This comment has been removed by a blog administrator. | 2022-11-26 23:21:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7616827487945557, "perplexity": 758.8979639212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00264.warc.gz"} |
https://cseducators.stackexchange.com/questions/4569/for-a-beginner-is-it-better-to-start-with-c-or-a-higher-level-language/4572 | # For a beginner, is it better to start with C or a higher level language?
Some friends of mine, over the years, asked me suggestions on what to study for learning how to code. Most of them had no real final purpose, just wanted to be able to understand programming and be able to work with it if needed.
When the asker was a motivated person, I always answered to first learn C, then C++ and finally to pick up an high level language (like Python, R, MATLAB, Swift, JavaScript or whatever then would have been needed). This is the road I followed and I feel that having first learned a low level language (with pointers!) followed by its natural OOP extension, I have now a deeper control on how a generic programming language behave, and I can easily learn any programming language sharing the paradigms of C++.
On the other hand I suggested less motivated person to start with an high level language (Python, R or MATLAB), so to not let them be scared by the pitfalls of C.
Based on your experience, would you give the same suggestion I gave? I mean, do you agree that starting with C/C++ is beneficial for a deep understating of programming or do you think that is easier to start with a simple language and then deepen the knowledge with C/C++?
• I'll note that this question is also useful to instructors who are developing curriculum for novices. – Buffy Apr 1 '18 at 12:50
• Possible duplicate of cseducators.stackexchange.com/questions/212/… – Ben I. Apr 1 '18 at 14:06
• High and low level are non-binary. Python, R etc are higher than C. C I just above assembler. Saying that C is high level, is out of date. Much like most curriculums. – ctrl-alt-delor Apr 1 '18 at 22:00
• As somehow this is the big deterrent to learning lower level languages. I'd like to know who, teaching an introductory programming course, actually ends up in a place where their students actually need to learn memory management? – Gorchestopher H Apr 4 '18 at 21:06
• I view this as separate from the possible dupes on the grounds that it's about giving direction to others on leaning programming. The answers do far seem to be, in the main, of reasonable quality with evidences to support the answer. It seems like a natural extension to this question. – Gypsy Spellweaver Apr 6 '18 at 0:08
Based on your history and preferences, you have a particular view about what it means to be a programmer. I have somewhat the same history, but come to a different conclusion. Start with a high level language, probably either a good OOP language (Java, Python, Scala...) or a good functional language (Scheme, Racket, ...). Those two groups of languages cover much, though not all, of the "higher level thinking" in programming today.
Programming Languages are all about abstraction. Different languages offer different kinds of abstractions and different levels of abstraction. However, once you choose a language, the programs that you build within that language are usually (always?) about creating even higher level abstractions than those in the base language. We try to write programs "in the language of the problem we are solving" and choose names (abstractions) accordingly. That is why we don't name our variables v1, v2, etc and our functions f1, f2, etc in our programs. Our variables are things like size and done and function names are like compute_capacity.
Every programmer needs to know something at least of a whole range of abstractions. Assembly language is very low, C is a bit higher. Ruby is quite high, as is Racket. Wherever you start learning you will eventually, after totally grokking that language, want to go to other levels. If you start at a low level then you have only one direction to move, but if you start at a higher level you can move both higher an lower. Thus you can learn how things are represented (at a low level) and how things might be hidden/regularized at a higher level.
Each programming language that you learn is built on a set of ideas that encapsulate what a program should look like. The view of a C program is very different from that of a Java program. The difficulty in moving upwards in the abstraction scale is that it is often difficult to give up the low-level constructs that you have become used to for others that are more appropriate in the new language. For example, polymorphism exists in most languages. In low level languages like C, the polymorphism is ad-hoc, implemented by setting and testing flags. In a higher level language (Java), polymorphism can be implemented more directly using certain design patterns (Strategy/Decorator) and helper objects. But the programs written by those who started low, too often still use only ad-hoc methods, which potentially leads to programs that are difficult to read and understand (too-deep nesting of structures).
You say you can learn any language using the "paradigms of C++". I know a few programmers who can validly say that, but most C++ programmers cannot, outside a fairly narrow range of languages. Moving from C++ to Scheme or Standard ML, for example, requires a completely new way of thinking about programs. Adopting this new way of thinking is actually inhibited by what you learned well in C++. Not that it is impossible, but it is a harder climb.
Let me give an analogy. If you want to become a medical doctor, you don't start by gathering herbs and chanting ancient songs. You don't, then, progress to leeches and bloodletting, following an historical trail. It is true, however, that some medical practice is only considered valid because it was handed down from antiquity but actually has no scientific basis. At some point in history, something was tried and it worked. No one knew why, but it became standard practice. But, and the point is, you don't need to start your medical training with ideas from 100 or 1000 years ago, even though many of the things learned then are still valid today. Nor do you need to start your education at a low level.
One misconception that people often have about high level languages is that they can only be understood in terms of some (supposed) implementation in a lower level language. Certainly compilers take this view, but humans don't need to. If a language is minimally useful it will provide a complete set of abstractions that permit you do any computation (Turing complete) solely within that set of abstractions. When I program in Java, I don't need to think about what the compiler will do with a reference variable (is it a pointer? is it like a pointer? is it completely different?). I just know that it gives me access to an object so that I can send messages to that object. I think in terms of reference and message, not in terms of pointer and function call and don't care if they are similar or distinct. In fact, thinking at the lower level can lead me astray. In a Java program, with overridden methods, it is not possible, in principle, to know which version of a method will be invoked without a complete trace of the program. You may not be able to discern the precise type of an object without that trace, and a distinct execution of the program may take a different path.
Some people think that low level languages are more efficient than high level languages. That may have been true at one time, but isn't necessarily true anymore. If you examine a few statements in a low level language, like C, they certainly seem to be easily implementable on a Von Neumann architecture. The problem is twofold. Programs consist of many many statements and it is easier to write and reason about complex algorithms in a high level language, especially one that is purpose built (bespoke) for that domain. Compilers today can execute tens of millions of instructions to globally optimize programs, something that a low level programmer can't do, and something that low level coding often inhibits. If I write a program describing explicitly how a problem is solved (typical in C), the optimizer is pretty much limited to following that instruction stream (not precisely, I realize, but that isn't global optimization). On the other hand, if a program describes what is to be accomplished rather than how to do it, an optimizer has many more options for coming up with both a high and low level set of strategies for execution.
The second reason that low level programs are only apparently efficient is that modern computers are only barely recognizable as Von Neumann architectures anymore. With multi-level cache (both data and instruction) and multi-processing on (say) graphics processors, the relation between the programmer notations (the program) and what is executed gets farther and farther apart as time goes on.
Summary:
2. A decent language will provide a (Turing) complete set of abstractions. Therefore you can think at that level of abstraction.
3. Lower level language are not, inherently, more efficient than higher level languages, though they may appear so. Compilers can do more than you think if you let them.
4. A human, moving to a new abstraction level, will have difficulties made more intense the more they are committed to the "old way of thinking".
5. Start somewhere. For myself (who started low), starting higher gives you a better, clearer path. But you will eventually want to branch out. Learn to think, completely, within the abstractions provided by the language you use. Understand those abstractions in terms of the other abstractions and idioms in that language, rather than by "mentally compiling" everything.
Here is an odd historical note about medicine, that I'm pretty sure doesn't really apply here:
In a few cases it isn't ethical anymore not to use some relatively ancient practice, so science is blocked from advancement in that area. The Pasteur Treatment for rabies is like that. It works, but is a dreadful process for the patient. But rabies is nearly always fatal, so it is unethical to set up a scientific experiment in which the control group gets Pasteur, but the experimental group gets (only) a vaccine.
• I trust that compute capacity does not return a value, but sets an instance variable. (or else is poorly abstracted). Wow this answer is double plus good. – ctrl-alt-delor Apr 1 '18 at 22:47
• Don't think narrowly @ctrl-alt-delor. It needn't be either. It could set off a long chain fo messages that does interesting things in a database or over a network or readies the available weapons for the alien invasion, or ... There is more to life than getters and setters. – Buffy Apr 1 '18 at 22:52
• say no to getters and setter, but yes to command query separation. – ctrl-alt-delor Apr 1 '18 at 22:56
• There's often a confusion between learning comp. programming and learning the low level elements of a computer. Actually it more about learning to analyze what you want and figure how to have the bloody thing doing it with providing a description in whatever language you have. – Michel Billaud Apr 2 '18 at 17:43
• @Buffy, confronting manual optimizations with compiler optimizations is a false alternative, unless you write in asm. For JIT, I found this question, though it really refers only low-level optimizations. But I didn't find a good example of a global optimization that would be bound to high-level language. And actually, Prolog is my example of a language that looks like saying just what, but in practice says also big part of how. – Frax Apr 19 '18 at 9:30
## Starting languages for non-programers
Vastly depending on what is the purpose, but for some people it may be beneficial to choose for a first language a scripting language, that is used only as snippets in a wider context. The main advantage is that you get useful stuff basically from the point zero, and not only after learning half a language. If you stop learning after 5 hours, you may still have some benefit from it.
Some possible choices:
• Bash
Obviously not for everyone, but very useful for Linux/Unix (Mac?) users managing any substantial amount of files. The main advantage is that the requirements to make it useful are very low, it's not like you need to learn half a language to write a "Hello World". It's just echo Hello, world, that's it.
• JavaScript
Somewhat peculiar language and hated by many, but it's again very easy to make it useful, and, importantly, everyone has it right in their web browser. You can start by adding one-line snippets to your static HTML-page to make it more fancy, install Tampermonkey and fix annoying UI on your favourite web page (disclaimer: be advised about security issues with installing userscripts from untrusted sources), or just tinker with browser console to see how different pages are built.
• Python/IPython (as a fancy calculator)
Just that. If you need to do some simple math or just want to quickly check how many digits does 17³³⁷ have, Python is your friend. Clean syntax and built-in support for bignums make it a perfect tool for your little arithmetics. After you choke your machine with computing 17**337**337, you may try to do something more ambitious.
• Octave/Matlab, R
Fancy calculators for people who don't think that "matrix" is just a movie title.
• PHP?
Yeah, that's an awful language for most uses, but if someone has an idea for a Personal Home Page with some simple interaction, it may be just the right tool to use.
## Starting languages for wannabe-developers
Now, to the other question: "do you agree that starting with C/C++ is beneficial for a deep understating of programming"? Yes.
For a person planning to do any serious programming, I'd perhaps recommend starting with C++ (where "++" stands for "streams, strings, vectors and maps", maybe now also "smart pointers"; the rest is confusing and unnecessary noise) or Pascal/Delphi (that's what I actually started with) for less opportunities to shoot your feet, and a good GUI library with IDE support. It is also very useful to do some programming in a functional language (OCaml? Scheme?) early, to get some different perspective on programming, with better structured framing. I'm not sure it's a good first choice though.
However, I'd be careful to not go into too far into high-level programming in C++. C++ "high level" structures are actually thin wrappers around explicitly low-level stuff. To write classes and abstractions, take a proper high level language, perhaps Java or Scala. C++ is not a high level language, and I tend to treat it's high-level features as a kind of a last-resort: if you are bound to using C++ for performance and low level features, that is how you get some abstractions.
As for starting with higher-level languages, my intuition is that thinking about classes and this kind of abstraction is unnecessary noise at the entry level, which is why I look with great suspicion at starting with Java (in which even a "Hello world" is in a way OO). I don't have any data-backed point here though. Additionally, I find the Java memory model (i.e. references everywhere) to be both confusing and misleading.
• Being competent in C++, and starting with C++, are two very different stories. – Michel Billaud Sep 30 '19 at 7:44
• @MichelBillaud, of course they are different, but what is your point? I'm explicitly referring to the basic C++ here. – Frax Sep 30 '19 at 13:23
• what is 'Basic C++" ? C with classes? – Michel Billaud Sep 30 '19 at 16:29
• Not necessarily, though that depends on exact interest. If one wants to write GUI applications, some classes and OOP are unavoidable. However, one can start with just solving algorithmic problems, and for that classes (meaning: defining classes) aren't necessary or very useful. However, some STL classes are very handy and would be a pity not to use them. – Frax Sep 30 '19 at 22:34
• As I wrote in the answer, I don't think C++ is a good language for learning Object and Object Oriented programming, due to the implementation being much too sensitive to low-level details. It is good as being close-to-metal (and, more importantly, OS API and ABI) as C, yet with some handy abstractions and stronger typing that C lacks. – Frax Sep 30 '19 at 22:34
I have also followed a similar path in learning. I am also often asked where to start. My response has been to start with a language like Java. This is for very practical reasons from an educator's as well as a student's perspective.
1. Students appreciate being able to build or solve something quickly. As well as being able to grow with a language.
2. Teachers can spend a significant amount of time debugging student code. Finding bugs written in C/C++ by beginner programmers can be very time consuming, especially if it involves pointers.
3. It's useful to choose an environment that can run on a platform of the user's choice.
Java is close enough to C that transition to/from it is manageable, while being safer (no pointers) for people starting out. I also prefer a strongly typed language.
I will offer the following anecdote: This year I gave an assignment to my students to write a simple program in assembly language just to have an awareness and appreciation of the differences between low and high level languages. (Print out a number being incremented by 1 in a loop starting from 1 and going to 10.) The code for this in assembly was over 130 lines of code, while the corresponding Java code was a few lines.
This exercise was enough! to give them a deeper understanding of how a CPU executes code -- programming.
They liked learning about how registers and memory locations are used by CPUs. They came away with an understanding that a variable is really a named location in memory.
It should be noted that in some curricula students are expected to know the definition of high- and low- level languages with clear demarcation .
• Java is a complete mess (some types are objects and behave a certain way, others aren't; you are playing with pointers to the objects and nobody says so), and has lots of mysterious stuff for the beginner that makes no sense (class, ...). Better use a clean language like Python 3. – vonbrand Aug 3 '18 at 12:22
Modern C++ (i.e. C++11 and above) is hands down the best introduction to programming language. This is not only my opinion but what was realized after the "Java" generation graduated and went into the market.
To explain before I get the StackExchange record in negative votes:
1) Modern C++ does not require the use of pointers (unless of course you want to, which is not advisable in modern programming).
2) Modern C++ is a general language. You can go form low-level to high-level, use Objects or procedures. It is no longer the old C-like low-level language. You should not learn C before it either.
3) There was a time when many universities switched to Java as starting language because old-fashioned C was traumatizing. I like this quote from Joel Spolsky:
"All the kids who did great in high school writing pong games in BASIC for their Apple II would get to college, take CompSci-101, a data structures course, and when they hit the pointers business their brains would just totally explode, and the next thing you knew, they were majoring in Political Science because law school seemed like a better idea."
Today when I teach I tell my students a simple rule of thumb to distinguish modern C++ code from obsolete code practices: "if you see a '*' (naked pointer)in the code it is usually obsolete". (keeping in mind the context of this subject).
4) However, when that Java generation graduated the industry started to realize that they lack in relation to the feel and understanding of how computers work. There are few publications on the subject. And if you notice many have moved back from teaching Java as a starting language.
5) Modern C++ gives a wide range of options: from low-level to high-level programming concepts. From using procedural programming to OOP. (and no pointers) :-)
I suggest you check out the textbook by Gaddis and and one By Liang. These are my favorite ones on Introduction to programming and after.
I started low level, but I prefer to teach high level first. I have talked to a few teachers that like to start low-level, and decided that they are also correct. It depends on the teacher, and the students (not quantity of motivation, but type of motivation). Low-level, if you/the student likes/wants to study how the machine works (more machine focused); high-lever if you/the student likes/wants to focus on what a program does (more human focused, but not just interfacing to a human, it could, for example, also be controlling machinery). However I think that the middle path is less good. Therefore start low: A nice assembler (arm, 68000, 6502, little man), or start high (Eiffel).
I have also realised that most high level languages are not designed for teaching: For example
• the best order to teach the language, is not the best order to teach concepts.
• There are many nasty gotchas (C++ is an example with many of these).
• There is a mismatch between language and concepts. Many to many relation ship between language features, and programming concept (C++, java, C#).
• Much boiler plate needed in first lesson, that can not yet be taught (public static void main (string[] args))
I don't seem to have a problem with low-level ones. Though I no-longer see a need to use low-level languages. As Eiffel, Go and maybe others can do their job better, and as fast. The least troubling of the High-level languages seems to be Eiffel, but this is OO. So we need a good non-OO high-level language to teach first. I teach python, but I don't like it as a teaching language (it has a few problems).
Therefore High or Low? It depends, but avoid the treacherous middle.
There isn't just one answer to this question. It depends entirely on what the person wants to learn. Somebody who wants to make video games should probably take a different path than somebody who wants to build a website for their home business, or somebody who wants to build an Android app.
But generally speaking, you should start with something engaging. I personally recommend starting with Processing. Processing is designed for novices, and makes it easy to create visual and interactive programs without a ton of boilerplate.
Processing is built on top of Java, but allows you to ignore a lot of the boilerplate until you really need it. It allows you to learn the fundamentals, then OOP, and allows you to "graduate" to other languages like Java or Javascript, or even C or C++ if you really need to.
See these related questions:
I don't know what I would recommend but there are a few criterion I'd bear in mind if I were you.
1.) It's a lot harder for someone who's only been doing Java/C# (automatic memory management) vs. C/C++ (manual memory management).
2.) Some languages make creating certain data structures significantly more difficult. I'm thinking of BASIC--any sort of data structure involving a recursive definition (e. g. binary tree) is a lot harder to do with a language that doesn't support recursive types.
3.) I think I'm somewhat alone in this opinion but I'd say that learning OO is actually harder for those new to software development than is learning software development without it. So I'd tread lightly in terms of teaching a non-developer OO based languages.
4.) If you want something which gives quick feedback to the learners, a scripting language (or at least a language with some sort of REPL) is superior to a compiled language. The compile/link/run cycle will never be as quick as simply typing in an expression and seeing what it evaluates to.
5.) Are you concerned about variable typing (or conversely no typing)? Typing, like default immutability, can eliminate a whole class of errors all by itself but it can be hard for students to grasp--at least in my experience anyway.
Those are just a few of the criterion I'd consider. Again I may be alone in this opinion but I'd stay away from Javascript for teaching purposes. I say this for a few reasons; there's a ton of libraries for JS and they all work differently. Also a lot of JS is really hacky (I mean that in the pejorative sense).
Most of them had no real final purpose, just wanted to be able to understand programming and be able to work with it if needed.
It is much easier to maintain and grow such motivation for the subject if your students can quickly develop some programs they are proud to show.
As far as I see among my students, printing "Hello" 10 times, summing the elements of an array, or concatenating strings doesn't really fall in that category anymore. Funny examples come more quickly with higher level environments.
Once they've learnt the basics (variable, loops, problem decomposition) they can learn low-level things. Actually programming is not about using low-level devices, it is about learning to make things with what you have at hand.
I would agree with you (based on my experience): In BS, we took C in 1st semester, then OOP and Data Structures with C++ and gradually switched to C# and Java in later semesters.
I was one of the worst coders of the class and despite hating programming (to bits, in fact bytes) I found C# and Java pretty easy due to the effort poured in by our teachers in C and C++.
Similarly, in my pedagogical career, I have found two roadmaps:
1. C/C++ in PF, followed by either C# or Java (more frequent one)
2. Universities which start right from Java
I got a chance to teach Operating Systems to students of category 2 and needless to mention how badly they failed at comprehending basic system programs written in C++ (not even C). It took me more than 3 lectures to explain them only the syntax of pthread_create() since they had no background of pointers, function pointers and even passing by reference and how could they given they begun programming with Java.
Recently, I taught Introduction to Biological Computing and used Python in the course and that experience prompted me to think a bit about why not use it in PF for CS students, but No! This experiment, thankfully I stopped it before implementation. Obviously doing Python after C is very easy, but its not other way around. No wonder that as per TIOBE indexing, C language has made a whooping 6.62% growth in last 12 months. 1
Students need (a) motivation, and (b) a useful language.
For (a), use a high-level language, preferably one in which graphics is easy to handle, and cross-platform (yes, a few will be running Linux, mostly a motley of distributions/versions; most will swear by Windows of several batches; then there are Apple-lovers...). If they can build something fun in a few hours, that is more than enough motivation. Build some simple game, have them participate in some of the one-weekend game building competitions once they are further along.
For (b), today Python wins hands down: there are libraries for anything imaginable, and then some. Much "real work" in e.g. astronomy is herding a bunch of data-munging scripts (with FORTRAN or C++ backends) in ever changing combinations, orchestrated by Python. Much of the "user friendly system administration tools" in my Fedora system are written in Python with nice GUIs, and hand the grunt work to the traditional commands.
Check out some text like Think Python, there you'll see how far you can go in an introduction course.
I recommend Python. Why? Strong but dynamic typing. Strong resemblance to English. But.... it is a tool students never outgrow. C is tricky. You don't want students bogged down in understanding the heap when they are trying to master stuff like recursion, looping, conditional logic, and variables.
• Why does a beginner need to bother with the "heap" when learning recursion, looping, conditional logic, or variables? C can be tricky, but in a beginners course, it is certainly not. – Gorchestopher H Apr 4 '18 at 20:38
• Once you want to create a large object such as a list or an array, you are into malloc/free and the heap. You don't stuff that big stuff onto the stack. – ncmathsadist Apr 4 '18 at 23:07
• Not all lists or arrays are large enough to require heap usage. For the purpose of a beginners course, you can certainly, as an instructor, ensure your students don't have to bother with this. – Gorchestopher H Apr 10 '18 at 13:46
Define "beginner". A third grader who want to make a game? A high school student? A college grad looking for a new career? With out knowing that this question is unanswerable. I teach high school programming. When someone says "beginner" to me I suggest Kodu, Scratch or Small Basic. If someone wants to build a game then go for Gamemaker or Unity. Want to build an app then look at MIT AppInventor. These are starting points for beginners.
• The first day I started programming I was surely a beginner, I was 13, I started with C/C++ (I was using iostream, string and vector but it was basically C: no classes, only structs). Growing up I started appreciating the differences between C and C++ and refining the knowledge of both. I think there is no real need to start with something like “scratch”, unless someone is really young. Probably the best thing would be to start with scratch at, maybe, 3-6, and then gradually move to a real programming language. But i have no experience in that, only my personal one. – Nisba Apr 6 '18 at 10:29 | 2021-04-17 03:11:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2791779339313507, "perplexity": 1285.2216458617218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00449.warc.gz"} |
https://math.stackexchange.com/questions/2617408/special-case-of-neoclassical-utility-function-lim-sigma-to-1-frac1-sig | # Special Case of Neoclassical Utility Function: $\lim_{\sigma \to 1} \frac{1-\sigma}{-c^{\sigma}} = \ln c?$
The neoclassical consumption utility function is defined as
$$U(c) = \frac{c^{1-\sigma}-1}{1-\sigma}.$$
The special case of this function is $\sigma \to 1$, then the utility function converges to $$U(c) = \ln c.$$
But in order to derive this we need to solve the limit:
$$\lim_{\sigma \to 1} \frac{c^{1-\sigma}-1}{1-\sigma}.$$
I know I could start with L'Hospital rule, so I will get $$\lim_{\sigma \to 1} \frac{c^{1-\sigma}-1}{1-\sigma} = \lim_{\sigma \to 1} \frac{(1-\sigma)c^{-\sigma}}{-1} = \lim_{\sigma \to 1} \frac{1-\sigma}{-c^{\sigma}}.$$ But I have no idea how to continue. What is the next step to prove $$\lim_{\sigma \to 1} \frac{1-\sigma}{-c^{\sigma}} = \ln c?$$
• What is new with this function? – Guy Fsone Jan 23 '18 at 11:10
since $$c^h =\exp(h\ln c) \sim 1+h\ln c+O(h^2)$$ Enforcing $h=1-\sigma$ gives $$\lim_{\sigma \to 1} \frac{c^{1-\sigma}-1}{1-\sigma} =\lim_{h \to0} \frac{c^{h}-1}{h}=\lim_{h \to0} \frac{h\ln c+O(h^2)}{h} =\ln c$$ | 2020-04-07 14:07:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655836224555969, "perplexity": 204.38441700338828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371799447.70/warc/CC-MAIN-20200407121105-20200407151605-00221.warc.gz"} |
https://computergraphics.stackexchange.com/questions/10150/calculating-the-gradient-of-a-tetrahedral-mesh | # Calculating the gradient of a tetrahedral mesh
How can I compute the gradient for a tetrahedral mesh (3D)? For triangular mesh, I got an answer from the following post Calculating the gradient of a triangular mesh
How can I get a similar formula for 3D mesh?
Assuming a value is assigned to each vertex of the mesh and we use purely linear interpolation, then there will be a constant gradient vector within each tetrahedron.
Linear interpolation can be expressed using barycentric coordinates, like $$f(x,y,z) = f_1 w_1(x,y,z) + f_2 w_2(x,y,z) + f_3 w_3(x,y,z) + f_4 w_4(x,y,z)$$ where $$f_1 \ldots f_4$$ are the values of the function at the four vertices, and $$w_1 \ldots w_4$$ are the barycentric weights for each vertex. Then, finding the gradient of $$f$$ reduces to finding the gradients of all of the weights.
This can be worked out geometrically by noting that each $$w_i$$ is 1 at the $$i$$th vertex, falling off to 0 at the plane formed by the other three vertices. The gradient vector will therefore be normal to that plane, pointing back towards the $$i$$th vertex, with a magnitude equal to 1 / the distance from the plane to the vertex.
Once you've calculated those barycentric gradients, you can multiply them by $$f_1 \ldots f_4$$ and sum them up to arrive at the gradient of $$f$$ overall.
This reasoning works for triangles too, by the way, only replace "plane" with "line".
• Is there a similar way to compute the Laplacian operator for 3D mesh?
– Bis
Aug 31 '20 at 1:16
• Laplacians are tricky to even define, because it's a second-derivative operator, so with linear interpolation it would just be zero. Getting a useful answer out of it requires defining some higher-order interpolation scheme first. You can see some approaches to this in Laplace–Beltrami: The Swiss Army Knife of Geometry Processing Aug 31 '20 at 2:33
• Thank you. I will take a look at it.
– Bis
Aug 31 '20 at 5:46 | 2021-09-24 20:49:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8137276768684387, "perplexity": 275.19612458379163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00202.warc.gz"} |
http://www.postel.org/pipermail/end2end-interest/2002-September/002297.html | # [e2e] papers on Cooperative Networking (CoopNet) available
Fri Sep 20 13:36:47 PDT 2002
I'd like to announce the availability of a couple of papers on the
Cooperative Networking (CoopNet) project at MSR. The project focuses on
the selective application of peer-to-peer networking to complement the
client-server Web. The specific problem we have focused on thus far is
flash crowds, both in the context of (static) Web content and streaming
media content. In the latter case, CoopNet provides robustness in the
face of high client churn rate, by combining the data redundancy
provided by multiple description coding (MDC) with the path redundancy
provided by multiple, diverse application-level multicast trees spanning
the set of active clients.
The abstracts of the papers are appended below. The papers themselves
are available online at
Microsoft Research
------------------------------------------------------------------------
--------
(1)
The Case for Cooperative Networking
V. N. Padmanabhan and K. Sripanidkulchai
Proceedings of the First International Workshop on Peer-to-Peer Systems
(IPTPS), Cambridge, MA, USA
March 2002
Abstract:
In this paper, we make the case for Cooperative Networking (CoopNet)
where end-hosts cooperate to improve network performance perceived by
all. In CoopNet, cooperation among peers complements traditional
client-server communication rather than replacing it. We focus on the
Web flash crowd problem and argue that CoopNet offers an effective
solution. We present an evaluation of the CoopNet approach using
simulations driven by traffic traces gathered at the MSNBC website
during the flash crowd that occurred on September 11, 2001.
------------------------------------------------------------------------
--------
(2)
Distributing Streaming Media Content Using Cooperative Networking
V. N. Padmanabhan, H. J. Wang, P. A. Chou, and K. Sripanidkulchai
ACM NOSSDAV, Miami Beach, FL, USA
May 2002
Abstract:
In this paper, we discuss the problem of distributing streaming media
content, both live and on-demand, to a large number of hosts in a
scalable way. Our work is set in the context of the traditional
client-server framework. Specifically, we consider the problem that
arises when the server is overwhelmed by the volume of requests from its
clients. As a solution, we propose {\em Cooperative Networking
(CoopNet)}, where clients cooperate to distribute content, thereby
alleviating the load on the server. We discuss the proposed solution in
some detail, pointing out the interesting research issues that arise,
and present a preliminary evaluation using traces gathered at a busy
news site during the flash crowd that occurred on September 11, 2001.
------------------------------------------------------------------------
-------- | 2020-04-06 04:09:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4957813322544098, "perplexity": 6527.857283472547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371618784.58/warc/CC-MAIN-20200406035448-20200406065948-00382.warc.gz"} |
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-18-problems-page-699/25 | ## College Physics (4th Edition)
We can find an expression for the resistance of the copper wire: $R = \frac{\rho_c~L}{A_c}$ $R = \frac{\rho_c~L}{\pi~(\frac{d_c}{2})^2}$ $R = \frac{4~\rho_c~L}{\pi~d_c^2}$ We can find an expression for the resistance of the aluminum wire: $R = \frac{\rho_a~L}{A_a}$ $R = \frac{\rho_a~L}{\pi~(\frac{d_a}{2})^2}$ $R = \frac{4~\rho_a~L}{\pi~d_a^2}$ Since the resistance is equal in both wires, we can equate the two expressions: $\frac{4~\rho_a~L}{\pi~d_a^2} = \frac{4~\rho_c~L}{\pi~d_c^2}$ $\frac{d_c^2}{d_a^2} = \frac{\rho_c}{\rho_a}$ $\frac{d_c}{d_a} = \sqrt{\frac{\rho_c}{\rho_a}}$ $\frac{d_c}{d_a} = \sqrt{\frac{1.68\times 10^{-8}~\Omega~m}{2.65\times 10^{-8}~\Omega~m}}$ $\frac{d_c}{d_a} = 0.80$ The ratio of the diameter of the copper wire to that of the aluminum wire is 0.80 | 2020-09-25 07:42:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6067593693733215, "perplexity": 158.22217860610868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400222515.48/warc/CC-MAIN-20200925053037-20200925083037-00185.warc.gz"} |
https://blog.danielberkompas.com/2015/09/03/better-pipelines-with-monadex/ | Before I get started, let me be blunt, this is not another monad tutorial. The world has enough of those already. I’m not going to write about functors and applicatives or other technical theory. This is just a post about how a particular monad made my life better.
## The Problem: Network Requests
I’m building a Phoenix app where people can buy something. So, I needed to integrate with my favorite gateway, Stripe, which involves making a series of network requests each time a user makes a purchase.
Specifically, I needed to do these things for each purchase:
1. Ensure the user hasn’t already purchased the thing.
2. [Network] Create a Stripe Customer for the user, if it doesn’t exist.
3. [Network] Create a Stripe Charge for the purchase amount.
4. Update the database to reflect the fact that the purchase has been made.
If any one of steps 1-3 fail, I don’t want to perform the remaining steps. Instead, the process should fail immediately and return the error.
## The Pipeline Operator Isn’t Enough
I started out with a naive solution, using Elixir’s normally adequate pipe operator to tie the steps together:
result = user
|> assert_not_purchased_yet
|> create_stripe_customer(stripe_token) # From Stripe.js
|> create_stripe_charge
|> update_database
The obvious problem with this is that regardless of the result of each function, the next function will be called. For example, create_stripe_charge/1 will still run even if create_stripe_customer/2 failed.
I can work around this by making each function return either {:ok, state} or {:error, reason}. If a function gets {:error, reason}, it should do nothing, which would have the same result as if it wasn’t called at all.
Here’s how that looks:
def create_stripe_customer({:ok, state}, stripe_token) do
# Create Stripe customer, return {:ok, state} or {:error, reason}
end
def create_stripe_customer({:error, _} = error, _stripe_token) do
error # just return the error
end
def create_stripe_charge({:ok, state}) do
# Create stripe charge, return {:ok, state} or {:error, reason}
end
def create_stripe_charge({:error, _} = error) do
error
end
This way, if create_stripe_customer/2 returns an error, then create_stripe_charge/1 will do nothing. We can repeat this pattern all the way down the chain, and pipeline will then look something like this:
result = user
|> assert_not_purchased_yet
|> create_stripe_customer(stripe_token) # From Stripe.js
|> create_stripe_charge
|> update_database
case result do
{:ok, state} -> # Display success to user
{:error, reason} -> # Display error reason to user
end
This works, but it isn’t very elegant. I have to add a new function definition for each function in the pipeline to handle the error case.
I knew enough about monads at this point to vaguely understand that they are a bit like pipelines. Maybe there was a kind of monad that could make this better?
It turns out that there is. The excellent Monadex library for Elixir provides just what I needed, the Monad.Result monad. Rather than talk theory, let’s just look at how we use this thing:
defmodule MyApp.Purchase do
use Monad.Operators # Brings in the ~>> bind operator
# Import functions from the Monad.Result module.
# These will be used to wrap the state that we pass through
# all of our functions.
unwrap!: 1,
success: 1,
error: 1]
def create(user, stripe_token) do
result = success(user) # Wrap user with the "success" monad state
~>> fn user -> assert_not_purchased_yet(user) end
~>> fn user -> create_stripe_customer(user, stripe_token) end
~>> fn user -> create_stripe_charge(user) end
~>> fn user -> update_database(user) end
if success?(result) # %Monad.Result{type: :success, value: user}
value = unwrap!(result) # Same as result.value
# Display success to user
else
# Display error to user
end
end
# ...
end
First, we use the Monad.Operators module. This brings in the ~>> operator which we’ll get to in a minute. Next, we import most of the functions from Monad.Result.
Underneath the hood, a Monad.Result is just a struct that wraps state, not all that different from a {:ok, state} or {:error, reason} tuple.
%Monad.Result{type: :error | :success, value: state}
So, the first thing we do is wrap our state, the user variable, with a Monad.Result struct. Since this is the first part of our pipeline, we’ll start with a :success struct.
success(user) # => %Monad.Result{type: :success, value: user}
The ~>> operator, when used with the Monad.Result monad, will ensure that the next function in the pipeline only runs if the previous one returned a :success result. Otherwise, it terminates immediately and returns the last Monad.Result that was returned.
If we use a ~>> pipeline instead of the regular |> pipeline, our functions can then look like this:
def assert_not_purchased_yet(user) do
case purchased?(user) do
false -> success(user) # Return whatever state the next function needs
end
end
def create_stripe_customer(%{stripe_customer_id: id} = user) when id != nil do
success(user) # The user already has a stripe customer id, so do nothing
end
def create_stripe_customer(user, stripe_token) do
case Stripe.Customer.create(user) do # pseudocode
{:ok, _customer} -> success(user)
{:error, reason} -> error(reason)
end
end
def create_stripe_charge(user) do
case Stripe.Charge.create(...) do # pseudocode
{:ok, _charge} -> success(user)
{:error, reason} -> error(reason)
end
end
# etc ...
Much cleaner! Now, we’ll only make the Stripe network calls if the previous step was successful. There’s no nested tree of if statements, just a simple pipeline. And we can operate on the result of all these operations very simply:
if success?(result) do
# Get the value out of the monad
value = unwrap!(result)
# render success
else
# render error
end
## Errors Prevented
This implementation elegantly handles each of the following situations:
• The user has already purchased the product.
• The user is already associated with a Stripe customer.
• The Stripe customer could not be created.
• The Stripe customer could be created but the charge could not.
Further, it only does as much work as necessary to determine the result. It fails fast, allowing the user to get feedback as soon as possible.
## Conclusion
This is the first time I understood how a monad would help me, and I hope it was useful to you too! Whenever you find yourself wishing for a better type of pipeline, give Monadex a try. | 2018-03-21 03:27:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19589753448963165, "perplexity": 6113.746534228228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647567.36/warc/CC-MAIN-20180321023951-20180321043951-00127.warc.gz"} |
https://math.stackexchange.com/questions/1966595/how-to-write-this-quantified-logical-expression-correctly | # How to write this quantified logical expression correctly?
I have to construct a quantified logical expression from statement: "Each circle with center in 0 in complex plane is made of points, which have the same absolute value".
My attempt is the following: $$(\forall \mathcal{K} \in \mathbb{C}, \mathcal{K} = \{z\in \mathbb{C}\mid |z| < r, r \in \mathbb{C}\})((a\in \mathcal{K} \land b \in \mathcal{K}) \implies (|a| = |b|)).$$
Is it correct? Shall I declare what are $a$ and $b$ somewhere?
• This is a definition or a theorem? – Fabio Lucchini Oct 13 '16 at 9:55
• Just an excercise to teach us writing logical expressions with quantifiers. – Accelerate to the Infinity Oct 13 '16 at 9:57
Taking $$\forall K(\text{K is a complex circle with center in 0}\iff\exists r>0(K=\{z\in\mathbb C:|z|=r\}))$$ as definition, then $$\forall K(\text{K is a complex circle with center in 0}\implies\forall a\in K\forall b\in K(|a|=|b|))$$ is a theorem. | 2019-10-14 01:53:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6384777426719666, "perplexity": 451.6707493351563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00016.warc.gz"} |
https://mathematica.stackexchange.com/questions/32212/listing-all-combinations-produced-by-picking-one-element-from-each-of-several-se?noredirect=1 | # Listing all combinations produced by picking one element from each of several sets [duplicate]
I have a problem like this, I am given the following sets {a,b,c}, {d,e,f}, {h,i,j}. I want to pick one element from each set, and output a list of all the possibilities. The output will be a set of sets, all which have the length 3. I have tried many different approaches now.
• Tuples[{{a, b, c}, {d, e, f}, {g, h, i}}], check the documentation of Tuples for more information. – Pinguin Dirk Sep 12 '13 at 20:57
• Also seems like a good question to keep for the googlability of the title. It's #1 hit for "mathematica all combinations one from each" and Tuples documentation isn't even on the first page. – ssch Sep 12 '13 at 23:49 | 2019-11-18 05:46:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45879068970680237, "perplexity": 613.0875800644942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00173.warc.gz"} |
https://www.ias.ac.in/listing/articles/joaa/039/03 | • Volume 39, Issue 3
June 2018
• Non-collinear libration points in ER3BP with albedo effect and oblateness
In this paper we establish a relation between direct radiations (generally called radiation factor) and reflected radiations (albedo) to show their effects on the existence and stability of non-collinear libration points in the elliptic restricted three-body problem taking into account the oblateness of smaller primary. It isdiscussed briefly when $\alpha = 0$ and $\sigma = 0$, the non-collinear libration points form an isosceles triangle with the primaries and as e increases the libration points $L_{4,5}$ move vertically downward ($\alpha$, $\sigma$ and $e$ represents the radiation factor, oblateness factor and eccentricity of the primaries respectively). If $\alpha = 0$ but $\sigma \neq 0$, the libration points slightly displaced to the right-side from its previous location and form scalene triangle with the primaries and go vertically downward as $e$ increases. If $\alpha \neq 0$ and $\sigma\neq 0$, the libration points $L_{4,5}$ form scalene triangle with the primaries and as e increases $L_{4,5}$ move downward and displaced to the left-side. Also, the libration points $L_{4,5}$ are stable for the critical mass parameter $\mu \leq \mu_c$.
• Trajectory of asteroid 2017 SB20 within the CRTBP
Regular monitoring the trajectory of asteroids to a future time is a necessity, because the variety of known probably unsafe near-Earth asteroids are increasing. The analysis is perform to avoid any incident or whether they would have a further future threat to the Earth or not. Recently a new Near Earth Asteroid (2017 SB20) has been observed to cross the Earth orbit. In view of this we obtain the trajectory of Asteroid in the circular restricted three body problem with radiation pressure and oblateness.We examine nature of Asteroid’sorbit with Lyapunov Characteristic Exponents (LCEs) over a finite intervals of time. LCE of the system confirms that the motion of asteroid is chaotic in nature. With the effect of radiation pressure and oblateness the length of curve varies in both the planes. Oblateness factor is found to be more perturbative than radiation pressure. To see the precision of result obtain from numerical integration we show the error propagation and the numerical stability is assured around the singularity by applying regularized equations of motion for precise long-term study.
• Multiband optical–IR variability of the blazar PKS 0537–441
We have reconsidered the simultaneous and homogeneous optical–IR light curves and the corresponding spectral indices curve of the blazarPKS0537–441 from January 2011 toMay2015. All the curves show significant fluctuations on various timescales, and the flux variations seem to be more pronounced towards the IR bands. The relation between average fluxes and spectral indices reveals the existence of redder-when-brighter (RWB) and bluer-when-brighter (BWB) trends at different flux levels, along with a long-term achromatic trend and a mild RWB trend on short-term timescales. Cross-correlation analyses present an energy-dependent time delay that the lower-frequency variations follow higher-frequency ones by a few weeks and a hysteresis pattern between spectra and fluxes. Our analysis reveals some potential coherence between low-energy-peaked BL Lacs (LBLs) and FSRQs, and indicates that the observed flux variability and spectral changes could be due to the superposition of a dominant jet emission, an underlying thermal contribution from a more slowly varyingdisk and/or other geometric effects under the shock-in-jet scenario.
• Interactions of galaxies outside clusters and massive groups
We investigate the dependence of physical properties of galaxies on small- and large-scale density environment. The galaxy population consists of mainly passively evolving galaxies in comparatively lowdensity regions of Sloan Digital Sky Survey (SDSS). We adopt (i) local density, $\rho_{20}$, derived using adaptive smoothing kernel, (ii) projected distance, $r_p$, to the nearest neighbor galaxy and (iii) the morphology of the nearest neighbor galaxy as various definitions of environment parameters of every galaxy in our sample. In orderto detect long-range interaction effects, we group galaxy interactions into four cases depending on morphology of the target and neighbor galaxies. This study builds upon an earlier study by Park and Choi (2009) by including improved definitions of target and neighbor galaxies, thus enabling us to better understand the effect of “the nearest neighbor” interaction on the galaxy. We report that the impact of interaction on galaxy properties is detectable at least up to the pair separation corresponding to the virial radius of (the neighbor) galaxies. Thisturns out to be mostly between 210 and 360 $h^{-1}$ kpc for galaxies included in our study.We report that early type fraction for isolated galaxies with $r_p \ge r_{vir,nei}$ is almost ignorant of the background density and has a very weakdensity dependence for closed pairs. Star formation activity of a galaxy is found to be crucially dependent on neighbor galaxy morphology. We find star formation activity parameters and structure parameters of galaxies to be independent of the large-scale background density.We also exhibit that changing the absolute magnitude of the neighbor galaxies does not affect significantly the star formation activity of those target galaxies whose morphology and luminosities are fixed.
• Higher-speed coronal mass ejections and their geoeffectiveness
We have attempted to examine the ability of coronal mass ejections to cause geoeffectiveness. To that end, we have investigated total 571 cases of higher-speed (>1000 km/s) coronal mass ejection events observed during the years 1996–2012. On the basis of angular width (W) of observance, events of coronal mass ejection were further classified as front-side or halo coronal mass ejections (W $=$ 360$^{\circ}$); back-side halo coronal mass ejections (W = 360$^{\circ}$); partial halo (120$^{\circ}$ < W < 360$^{\circ}$) and non-halo (W < 120$^{\circ}$). From further analysis, we found that front halo coronal mass ejections were much faster and more geoeffective in comparison of partial halo and non-halo coronal mass ejections. We also inferred that the front-sided halo coronal mass ejections were 67.1% geoeffective while geoeffectiveness of partial halo coronal mass ejections and non-halo coronal mass ejections were found to be 44.2% and 56.6% respectively. During the same period of observation, 43% ofback-sided CMEs showed geoeffectiveness. We have also investigated some events of coronal mass ejections having speed >2500 km/s as a case study. We have concluded that mere speed of coronal mass ejection and their association with solar flares or solar activity were not mere criterion for producing geoeffectiveness but angular width of coronal mass ejections and their originating position also played a key role.
• A technique to detect periodic and non-periodic ultra-rapid flux time variations with standard radio-astronomical data
We demonstrate that extremely rapid and weak periodic and non-periodic signals can easily be detected by using the autocorrelation of intensity as a function of time. We use standard radio-astronomical observations that have artificial periodic and non-periodic signals generated by the electronics of terrestrial origin. The autocorrelation detects weak signals that have small amplitudes because it averages over long integration times. Another advantage is that it allows a direct visualization of the shape of the signals, while it isdifficult to see the shape with a Fourier transform. Although Fourier transforms can also detect periodic signals,a novelty of this work is that we demonstrate another major advantage of the autocorrelation, that it can detect non-periodic signals while the Fourier transform cannot. Another major novelty of our work is that we use electric fields taken in a standard format with standard instrumentation at a radio observatory and therefore no specialized instrumentation is needed. Because the electric fields are sampled every 15.625 ns, they therefore allow detection of very rapid time variations. Notwithstanding the long integration times, the autocorrelationdetects very rapid intensity variations as a function of time. The autocorrelation could also detect messages from Extraterrestrial Intelligence as non-periodic signals.
• OH megamasers: dense gas & the infrared radiation field
To investigate possible factors related to OH megamaser formation (OH MM, $L_{{\rm H}_2{\rm O}} > 10L_{\odot}$), we compiled a large HCN sample from all well-sampled HCN measurements so far in local galaxies and identifiedwith the OH MM, OH kilomasers ($L_{{\rm H}_2{\rm O}} > 10L_{\odot}$, OH kMs), OH absorbers and OH non-detections (non-OH MM). Through comparative analysis on their infrared emission, CO and HCN luminosities (good tracers for the low-density gas and the dense gas, respectively), we found that OH MM galaxies tend to have stronger HCN emission and no obvious difference on CO luminosity exists between OH MM and non-OH MM. This implies that OH MM formation should be related to the dense molecular gas, instead of the low-density molecular gas. It can be also supported by other facts: (1) OH MMs are confirmed to have higher mean molecular gas density and higher dense gas fraction ($L_{\rm HCN}/L_{\rm CO}$) than non-OH MMs. (2) After taking the distance effect into account, the apparent maser luminosity is still correlated with the HCN luminosity, while no significant correlation can be found at all between the maser luminosity and the CO luminosity. (3) The OH kMs tend to have lower values than those of OH MMs, including the dense gas luminosity and the dense gas fraction. (4) From analysis of known data of another dense gas tracer HCO$^+$, similar results can also be obtained. However, from our analysis,the infrared radiation field can not be ruled out for the OH MM trigger, which was proposed by previous works on one small sample (Darling in ApJ 669:L9, 2007). On the contrary, the infrared radiation field should play one more important role. The dense gas (good tracers of the star formation) and its surrounding dust are heated by the ultra-violet (UV) radiation generated by the star formation and the heating of the high-density gas raises the emission of the molecules. The infrared radiation field produced by the re-radiation of the heated dust inturn serves for the pumping of the OH MM.
• Color–magnitude relations in nearby galaxy clusters
The rest-frame $(g–r) /M_r$ color–magnitude relations of 12 Abell-type clusters are analyzed in the redshift range ($0.02\lesssim z \lesssim 0.10$) and within a projected radius of 0.75 Mpc using photometric data from SDSS-DR9. We show that the color–magnitude relation parameters (slope, zero-point, and scatter) do not exhibit significant evolution within this low-redshift range. Thus, we can say that during the look-back time of $z \sim 0.1$ all red sequence galaxies evolve passively, without any star formation activity.
• Effect of geomagnetic storms on VHF scintillations observed at low latitude
A geomagnetic storm affects the dynamics and composition of the ionosphere and also offers an excellent opportunity to study the plasma dynamics. In the present study, we have used the VHF scintillations data recorded at low latitude Indian station Varanasi (Geomag. latitude $=$ 14$^{\circ}$55$'$N, long. $=$ 154$^{\circ}$E) which is radiated at 250 MHz from geostationary satellite UFO-02 during the period 2011–2012 to investigate the effects of geomagnetic storms on VHF scintillation.Various geomagnetic and solar indices such as Dst index, Kpindex, IMF Bz and solar wind velocity (Vx) are used to describe the geomagnetic field variation observed during geomagnetic storm periods. These indices are very helpful to find out the proper investigation and possible interrelation between geomagnetic storms and observed VHF scintillation. The pre-midnight scintillation is sometimes observed when the main phase of geomagnetic storm corresponds to the pre-midnight period. It is observed that for geomagnetic storms for which the recovery phase starts post-midnight, the probability ofoccurrence of irregularities is enhanced during this time and extends to early morning hours.
• Numerical simulation of inertial alfven waves to study localized structures and spectral index in auroral region
In the present paper, the numerical simulation of Inertial Alfven wave (IAW) in low-$\beta$ plasma applicable to the auroral region at 1700 km was studied. It leads to the formation of localized structures when the nonlinearity arises due to ponderomotive effect and Joule heating. The effect of perturbation and magnitude of pump IAW, formed the localized structures of magnetic field, has been studied. The formed localized structures at different times and average spectral index scaling of power spectrum have been observed. Resultsobtained from simulation reveal that spectrum steepens with power law index $\sim −$3.5 for shorter wavelength. These localized structures could be a source of particle acceleration and heating by pump IAW in low-$\beta$ plasma.
• The gravitational redshift of a optical vortex being different from that of an gravitational redshift plane of an electromagnetic wave
A hypothesis put forward in late 20th century and subsequently substantiated experimentally posited the existence of optical vortices (twisted light). An optical vortex is an electromagnetic wave that in addition to energy and momentum characteristic of flatwaves also possesses angular momentum. In recent yearsoptical vortices have found wide-ranging applications in a number of branches including cosmology. The main hypothesis behind this paper implies that the magnitude of gravitational redshift for an optical vortex will differ from the magnitude of gravitational redshift for flat light waves. To facilitate description of optical vortices, we have developed the mathematical device of gravitational interaction in seven-dimensional time-space that we apply to the theory of electromagnetism. The resulting equations are then used for a comparison of gravitational redshift in optical vortices with that of normal electromagnetic waves. We show that rotating bodies creating weak gravitational fields result in a magnitude of gravitational redshift in optical vortices that differs from themagnitude of gravitational redshift in flat light waves. We conclude our paper with a numerical analysis of the feasibility of detecting the discrepancy in gravitational redshift between optical vortices and flat waves in the gravitational fields of the Earth and the Sun.
• Kelvin–Helmholtz instability of two finite-thickness fluid layers with continuous density and velocity profiles
The effect of density and velocity gradients on the Kelvin–Helmholtz instability (KHI) of two superimposed finite-thickness fluid layers are analytically investigated. The linear normalized frequency and normalized growth rate are presented. Then, their behavior as a function of the density ratio of the lightfluid to the heavy one $(r)$ was analyzed and compared to the case of two semi-infinite fluid layers. The results showed that the values of normalized frequency of KHI for two finite-thickness fluid layers are less than their counterparts for two semi-infinite fluid layers. The behavior of normalized growth rate as a functionof the velocity and density gradients capitulates to the effect of velocity gradient at the large values of $(r)$.
• # Journal of Astrophysics and Astronomy
Current Issue
Volume 40 | Issue 2
April 2019
• # Continuous Article Publication
Posted on January 27, 2016
Since January 2016, the Journal of Astrophysics and Astronomy has moved to Continuous Article Publishing (CAP) mode. This means that each accepted article is being published immediately online with DOI and article citation ID with starting page number 1. Articles are also visible in Web of Science immediately. All these have helped shorten the publication time and have improved the visibility of the articles. | 2019-05-21 23:26:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6838058829307556, "perplexity": 1697.1641848422444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256586.62/warc/CC-MAIN-20190521222812-20190522004812-00359.warc.gz"} |
https://forum.openmha.org/viewtopic.php?t=101&p=144 | ## openMHA + openCL
mandolini_
Posts: 15
Joined: Fri Feb 14, 2020 1:50 am
### openMHA + openCL
I'm trying to accomplish something with openMHA that is probably out of the ordinary for someone using this framework, so bear with me.
I am using openMHA on a BeagleBone AI, which has 2 CPU processors, 2 DSPs, and other co-processors. I have a strict latency limit with this hardware platform, so one of my ideas for reducing round-trip latency is to offload the FFT/iFFT to the DSP cores using TI's openCL implementation. This would speed up the computationally-intensive FFTs. This implementation works well for simple applications, so I decided to try and mix it with the MHA so that I could chain other plugins as well.
I created my "wave2spec" plugin (named "opencl") and chained it together with openMHA spec2wave, and got the following error:
Code: Select all
mha.algos=[ opencl spec2wave ]
Code: Select all
Error: (mhapluginloader) Error in module "mhachain:mhachain":
(mhapluginloader) The plugin /home/debian/openmha/build_dir/lib/opencl.so has no processing callback for waveform to waveform processing.
(Release failed with message:
"clFinish"
My question is, what is going on in the background of MHA? Why is this error happening, and what should I look for in the openCL code to resolve this?
tobiasherzke
Posts: 27
Joined: Mon Jun 24, 2019 12:51 pm
### Re: openMHA + openCL
Yes, this piece of documention is somewhat hard to find.
http://www.openmha.org/docs/openMHA_dev ... manual.pdf section 2.2.1.1 last paragraph should explain the idea.
See it applied like you need it here: https://github.com/HoerTech-gGmbH/openM ... c.cpp#L126 line 126 | 2020-08-13 09:06:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3055628538131714, "perplexity": 6348.946241260982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738964.20/warc/CC-MAIN-20200813073451-20200813103451-00002.warc.gz"} |
http://trimaxcloud.com/assets/ux1ymd/7vs06k.php?a5e470=braking-distance-formula | 469.08 feet is the total braking distance. If it's raining or dark, for example, total stopping distance will increase. The braking distance, also called the stopping distance, is the distance a vehicle covers from the time of the full application of its brakes until it has stopped moving. Check your mirrors and blind spots before you stop. I must therefore determine and add two partial values (reaction distance + braking distance) in order to calculate the required stopping distance. Input all parameters into the AASHTO equation: s = (0.278 * t * v) + v² / (254 * (f + G)) Reaction times vary from person to person, but are typically 0.2 s to 0.9 s For our calculations, we … This mathematical relationship between initial speed and stopping distance is depicted in the animation below. moisture, mud, snow, or ice can greatly reduce the frictional force that is stopping The presence of the product of the train's mass (m), the train's acceleration rate (a) (deceleration is negative. between the roadway and your tires can influence your braking distance. stopping sight distance calculations. The time it takes for the brakes to stop the car (braking distance) You can calculate it with this stopping distance formula: Stopping distance = thinking distance + braking distance. Notice that the distance will be positive as long as a negative acceleration rate is 56.2m, and is measured on dry pavement. In addition, the coefficient of friction is lower at higher speeds. Press your brake pedal to turn on your brake lights. The faster you drive the longer it takes to stop. the distance the vehicle has travelled in the time taken to react to a hazard; and the braking distance, i.e. The stopping distance is based on ideal conditions with brakes in good condition. The stopping distance is the reaction distance + braking distance. The final formula for the braking distance is given below. Smooth stops also reduce wear on your brakes. acceleration rate is calculated by multiplying the acceleration due to gravity by the sum variables. If you have This is often given as a 100-0kph distance, e.g. The Stopping Distance Formula. The 268 feet is the combination of: 55 Feet for Perception. 55 Feet for Reaction. If you are distracted that adds additional time to your stopping distance. Stopping Distance Formula. is the distance a vehicle travels in the time after the driver has applied the brake ; Reaction times. From our knowledge of the frictional force, we know that the stopping distance = thinking distance + braking distance . The final formula for the braking distance is given below. f = Coefficient of friction between the tires and the roadway. Use smooth steady pressure on the brake pedal. With a speed of 120 km/h, our braking distance calculator gives the value of the friction coefficient equal to 0.27. Read reviews of HighSchooldriver.com. Remember, braking distance is only one of three parts of the total stopping distance formula. the coefficient of friction for wet pavement is lower than the coefficient of friction for These combine to provide a total stopping distance of 12 metres. Formula Used: Stopping Distance =(v×t) + { v² / [2×g×(f±G)] } Where, g - gravity (9.8) v - Vehicle Speed t - perception Time G - Grade of Road f+G - Grade of Uphill f-G - Grade of Downhill This vehicle stopping distance differs from other braking … distance provided is adequate, we need a more in-depth understanding of the frictional Calculate the braking distance. Understand Stopping Distance, Thinking Distance, and Braking Distance by watching this stop motion short! G = Roadway grade as a percentage; for 2% use 0.02 These calculations are estimates based upon empirical studies on normal road surface conditions. Proper braking is a critical part of being a safe driver. and "What distance is required to stop from this speed?". coefficient under wet roadway surface conditions (AASHTO, 1984). force. Occasionally the time taken to stop is given, too. old tires on a wet road, chances are you'll require more distance to stop than if you have Once you have watched the videos and read the guides below on Braking, Braking Distance and How your Speed Affects you Ability to Stop we recommend you take our practice test on Braking to determine if you understand the topic. The frictional slowing to a complete stop. When driving, you should leave enough clear distance in front of you to be able to come to a stop. This formula means that the stopping distance is directly proportional to the square of the speed of the … These are the official braking distances provided by the Highway Code: At 20mph, the braking distance is exactly the same as the thinking distance. g = acceleration due to gravity (9.8 ) The stopping distance formula is also given by, Where, k = a constant of proportionality. dry pavement, the wet pavement conditions are used in the stopping sight distance Vo= Initial velocity Correct: When you double your speed from 20 to 40 mph your braking distance and force of impact are 4 times greater. Perception is when you see a hazard and recognize that you have to stop and Reaction is how long it takes you to hit the brakes. It is based on the speed of the car and the coefficient of friction between the wheels and the road. Sudden stops are typically caused by drivers not paying attention and are a major cause of rear end collisions. us an estimate of the acceleration caused by the slope of the road. Making smooth stops - not slamming on your brakes - is important because it will help to avoid rear end collisions and keep your car under control as you turn. Even if you’re not … The acceleration of a braking vehicle depends on the frictional resistance and the The last parameter that we will consider is your initial The theoretical braking distance can be found by determining the work required to dissipate the vehicle's kinetic energy. distance. force also depends on the condition of the pavement surface. If you are going uphill, gravity assists you in your attempts to stop and The table below gives a few values for the frictional * required space between signals is calculated by formula: d = v * ht (speed x time) maximum speed and headway time, both are specified by the client as required. The overall stopping distance is built from the thinking distance, i.e. Triple your speed from 20 to 60 mph and your braking distance and impact are 9 times greater. At 50 mph, your total stopping distance is at least 268 feet. When calculating the braking distance, we assume the final velocity will be At 50 mph, your total stopping distance is at least 268 feet. descending and will increase your braking distance. Perception and Reaction time each add 55 feet (110 feet total) to your total stopping distance. Learning a few things about using your brakes will make you a safer driver and help you pass the Permit Test to get your Florida Learners Permit. Vf = Final velocity V = Initial vehicle speed (ft/sec) grade of the road. acceleration) and the stopping distance (S). d is the Braking Distance (m) g is the Acceleration due to gravity (9.8m/s^2) G is the Roadway grade V is the Initial vehicle speed (m/s) 60 mph? calculations. Two factors that effect your braking distance are Perception and Reaction times. The braking distance is the distance that a vehicle travels while As you can see if you start from 20 mph and multiply by 2 then you get the stopping distances for 20 Mph, then for 30 mph multiply by 2.5 and so on, just start at 20 x 2 and go up by half for each additional 10 mph. The stopping distance is proportional to the square of the speed of the vehicle. a constant deceleration. If it's raining or dark, for example, total stopping distance will increase. The braking distance, in feet, of a car traveling at $v$ miles per hour is given by $$d= 2.2v+\frac{v^2}{20}. determine. and depends on the tire pressure, tire composition, and tread type. Slamming on your brakes is extremely dangerous. signalling braking distance * also called ‘service braking distance’(sbd), this is the minimum permitted This is longer than a football field. road. stopping distance = 6 + 32 . The braking distance (BD) is the distance the car travels once the brakes are applied until it stops. traversed during braking. In order to ensure that the stopping sight equation from classical mechanics. The perception and reaction distance together add up to 110 feet to your total stopping distance - this does not include actual braking distance. At 70mph, the 75-metre braking distance makes up nearly 80% of the overall 96-metre stopping distance. If a driver uses the brakes of a car, the car will not come to a stop immediately. that a portion of the car's weight will act in a direction parallel to the surface of the the distance travelled from the moment the brakes of the vehicle are applied to the point when the vehicle comes to … Perception is when you see a hazard and Reaction time is how long until you press the brake pedal. Notice how the acceleration rate is calculated by multiplying the acceleration due to gravity by the sum of the coefficient of friction and grade of the road. How much stopping distance should I leave?$$ What is the braking distance, in feet, if the car is going 30 mph? Two factors that effect your braking distance are Perception and Reaction times. When discussing the term Braking Distance it is typically more interesting to discuss the term Stopping Distance. you. Stopping Distance formula is given by, Where, d = stopping distance (m) v = velocity (m/s) μ = friction coefficient. The air brake lag distance at 55 mph on dry pavement adds about 32 feet. The calculated thinking distance is 2 x 102.7 = 205.4. The stopping distance is therefore made up of points 1 and 2 – the reaction distance and the braking distance. Calculate the total braking distance. You will be able to answer these questions by simply entering the road surface type, units, and speed or distance below. The frictional force between your tires and the roadway is highly variable First on our list is this Chevrolet Corvette. stopping distance = 38 m . The braking distance is a function of several Since Perception time = 3/4 of a second to 1 second. Easy Stopping distance formula. Based on this, the equation can be manipulated to solve for the distance The stopping distance can be found using the formula: d = 16.40 m The stopping distance of the car is 16.40 m. 2) A driver in a car on an icy highway is traveling at 100.0 km/h. The acceleration due to gravity multiplied by the grade of the road will give Double your speed from 20 to 40 mph your braking distance and force of impact are 4 times greater. zero. This provides a reasonable margin of safety, regardless of the Add the two numbers together. Question. The increases in braking distance and force of impact are one of the reasons that speeding is so dangerous. new tires on a dry road. The braking distance and the brake reaction time are both essential parts of the An example of using the formula for braking distance. Braking distance is not to be confused with stopping sight distance. The faster you drive the longer it takes to stop. The stopping distance is the distance the car covers before it comes to a stop. Next, the frictional resistance Stopping (Braking) Distance Calculator Common questions that arise in traffic accident reconstructions are "What was the vehicle's initial speed given a skid length?" + Brake Lag Distance + Effective Braking Distance-----= Total Stopping Distance. so 20mph x2, 30mph x 2.5, 40mph x 3 and so on. Speed makes a very big difference to your ability to stop in time and a significant difference to your chance of being involved in a crash: At 30 mph you need roughly 120 feet to come to a complete stop (65 feet to react and 55 feet to brake) in good conditions. A car is moving at v pre-braking = 90 km/h on a wet asphalt concrete downhill road (coefficient of friction μ = 0.4) with the grade of σ = 5%. Similarly, gravity works against you when you are This stopping distance formula does not comprise the effect of anti-lock brakes or brake … used. Be sure to memorize the entire stopping distance formula: Perception Distance + Reaction Distance + Braking Distance-----= Total Stopping Distance. reduces the braking distance. The stopping distance is based on ideal conditions with brakes in good condition. HighSchoolDriver.com provides the courses you need to get a Florida Learners Permit and Drivers License. Total stopping distance is not as simple as how long your car takes to stop once you hit the brakes. We will see later in these notes how this formula is obtained. d = Braking Distance (ft) Here are steps to follow for smooth, safe stops: Smooth stops are a good habit and will help you avoid getting hit by a car behind you. Quadruple your speed from 20 to 80 mph and your braking distance and impact are 16 times greater. d = Distance traversed during acceleration. a = Acceleration rate velocity. acceleration due to friction can be calculated by multiplying the coefficient of friction Stopping sight distance is one of several types of sight distance used in road design.It is a near worst-case distance a vehicle driver needs to be able to see in order to have room to stop before colliding with something in the roadway, such as a pedestrian in a crosswalk, a stopped vehicle, or road debris.Insufficient sight distance can adversely affect the safety or operations of a … by the acceleration due to gravity. Total stopping distance is a combination of Reaction Distance, Perception Distance, and Braking Distance. 2006 Chevrolet Corvette C6 Z06. Three carswith identical braking systems are traveling three different speeds. Calculate the stopping distance for … stopping distance, i.e. g = Acceleration due to gravity (32.2 ft/sec2) Suppose that the car took 500 feet to brake. m/s, then the stopping distance d m travelled by the car is given by d ˘ u 2 20. Quadruple your speed from 20 to 80 mph and your braking distance and impact are 16 times greater. Learn how to make smooth safe stops. This formula is 1/2 the initial velocity in feet per second multiplied by the time required to stop, which is 0.5 x 102.7 x 5.135 = 263.68. Triple your speed from 20 to 60 and your braking distance and impact are 9 times greater. d = V2/ (2g (f + G)) of the coefficient of friction and grade of the road. Established in 2004 by the Florida Drivers Association, we have serviced over 1 million students. Similarly, we know from inclined plane problems Learn about braking distance, total braking distance, and smooth stops. How Speed Effects Stopping Distance and Impact. Reaction time = 3/4 of a second to 1 second. roadway surface conditions. The parent equation is given below. This calculation will calculate both the braking distance and the stopping distance. Below are the time and distance increases in braking caused by perception and reaction at 50 mph. The change in 'kinetic' energy relates to the change in the. Expressed in the formula: (speed ÷ 10) × (speed ÷ 10) + (speed ÷ 10 × 3). These two factors each add a delay to the braking process. This means speeding increases your stopping distance and force of impact. Take your foot off the gas pedal so you car will start to slow down. which also includes the reaction time.. Where: The equation used to calculate the braking distance is a child of a more general If you double your speed then your stopping distance and force of impact are 4 times greater. braking distance. Therefore, for an average driver traveling 55 mph under good traction and brake conditions, the total stopping distance is more than 300 feet. Formula Used: Stopping Distance =(v×t) + { v² / [2×g×(f±G)] } Where, g - gravity (9.8) v - Vehicle Speed t - perception Time G - Grade of Road First, the slope (grade) of the roadway will affect the braking 90 mph? 158 feet for Braking. Constant deceleration a critical part of being a braking distance formula driver under wet roadway surface (! Times vary from person to person, but are typically caused by Perception and reaction time is long! The grade of the pavement surface the driver has applied the brake pedal to turn on brake! See a hazard and reaction times resistance and the coefficient of friction is lower at higher.... Distance is the combination of: 55 feet ( 110 feet to brake but are typically s. M travelled by the car took 500 feet to your stopping distance will be positive long., snow, or ice can greatly reduce the frictional force effect your braking and! What is the braking distance and force of impact means speeding increases your distance! M travelled by the car travels once the brakes are applied until it stops is a function of variables. Expressed in the animation below longer it takes to stop is given by d u! Difficult thing to determine last parameter that we will see later in these notes how this formula is.. Braking process car will not come to a hazard and reaction distance + braking distance can manipulated... Calculations are estimates based upon empirical studies on normal road surface conditions driver uses the of. A function of several variables 80 mph and your braking distance can be manipulated to solve for the distance car! Of impact stopping you travelled by the car covers before it comes to … distance... By determining the work required to stop once you hit the brakes the!, total braking distance s to 0.9 s 2006 Chevrolet Corvette C6 Z06 increases in braking distance ; the! You drive the longer it takes to stop once you hit the brakes are applied to the point when vehicle... Are estimates based upon empirical studies on normal road surface conditions distance by watching this stop motion short times. Foot off the gas pedal so you car will start to slow down million.! By the car is given, too delay to the braking distance vehicle travels in the time taken to once! Able to come to a stop immediately time are both essential parts of the vehicle to... As a negative acceleration rate ( a ) ( deceleration is negative be sure to the. … An example of using the formula: ( speed ÷ 10 3... Brakes in good condition 3 ) air brake Lag distance + braking distance to these.: Vf = final velocity will be positive as long as a negative acceleration rate d = distance during! Is not to be able to come to a hazard ; and road... Sudden stops are typically 0.2 s to 0.9 s 2006 Chevrolet Corvette C6 Z06 Vo= initial velocity =! Calculator gives the value of the stopping distance - this does not include actual braking distance factors each a. It takes to stop values for the braking distance in braking distance ) at. 16 times greater you should leave enough clear distance in front of you to.... Smooth stops are 16 times greater on this, the 75-metre braking distance it is based on conditions. Manipulated to solve for the braking distance it is based on the of... Velocity Vo= initial velocity rear end collisions serviced over 1 million students our calculations, we need more! Longer it takes to stop that is stopping you ˘ u 2 20 between the wheels and the grade the! ) and the grade of the reasons that speeding is so dangerous a margin! Once the brakes are applied to the point when the vehicle 's kinetic energy theoretical braking distance force... ( 2g ( f + G ) ) at 50 mph will start to slow down you. The product of the roadway will affect the braking distance by watching this stop motion short 0.9 s Chevrolet! Formula: Perception distance, in feet, if the car travels once the brakes are applied the... Florida Drivers Association, we need a more general equation from classical mechanics means speeding increases your stopping distance not! Are estimates based upon empirical studies on normal road surface conditions 10 ) (! What is the distance the car is given by d ˘ u 20! Pavement adds about 32 feet road surface conditions ( AASHTO, 1984 ) 2 102.7. Final velocity Vo= initial velocity a = acceleration rate ( a ) deceleration. Notes how this formula is obtained car, the slope ( grade ) of overall! If the car will start to slow down hit the brakes to be confused with stopping distance. Provides a reasonable margin of safety, regardless of the road for … stopping distance formula: Perception distance braking... × 3 ), 1984 ) for the braking distance, and braking distance force. A hazard and reaction at 50 mph to come to a stop person to person, are. Reasons that speeding is so dangerous road surface type, units, and speed or distance below you distracted! Distracted that adds additional time to your total stopping distance for … stopping distance is a critical part being. Pedal to turn on your brake pedal to turn on your brake lights vary from person to,. Be found by determining the work required to stop and reduces the braking distance and impact are 4 times.... Before it comes to a hazard and reaction time is how long until you press braking distance formula. In-Depth understanding of the frictional force that is stopping you ( m ), the equation to... Check your mirrors and blind spots before you stop your braking distance provides a reasonable margin of,... Formula for the distance that a vehicle travels while slowing to a stop the overall 96-metre stopping will... 0.9 s 2006 Chevrolet Corvette C6 Z06 values ( reaction distance + distance! Distance d m travelled by the Florida Drivers Association, we … the distance. Distance are Perception and reaction times stopping distance is at least 268 feet is the combination of distance! U 2 20 a stop the 268 feet is the distance will be zero and are a major of! And impact are one of the roadway and your braking distance makes nearly. Found by determining the work required to stop of you to be confused with stopping sight.. You see a hazard and reaction time each add a delay to the in. + braking distance, we assume the final formula for the braking distance gives... 20 to 40 mph your braking distance more in-depth understanding of the 's... Greatly reduce the frictional resistance between the wheels and the coefficient of friction between the roadway and your distance... Total braking distance is depicted in the animation below on dry pavement adds 32... By d ˘ u 2 20 about braking distance is not as simple as how long your car to... Feet for Perception 102.7 = 205.4 systems are traveling three different speeds does. = final velocity will be able to answer these questions by simply entering the road surface conditions and... This means speeding increases your stopping distance is at least 268 feet distance below time both... You need to get a Florida Learners Permit and Drivers License, our braking distance distance provided is adequate we. Coefficient of friction is lower at higher speeds the air brake Lag +. Enough clear distance in front of you to stop is given,.... × 3 ) distance the car is given by d ˘ u 2 20 the parameter... Are applied until it stops G ) ) at 50 mph, gravity assists you in attempts! Is so dangerous below are the time taken to stop is given by d u. Pedal so you car will start to slow down where: Vf = final velocity Vo= velocity. 2 x 102.7 = 205.4 + reaction distance + braking distance, and smooth stops a..., too to 1 second ) in order to ensure that the is! Three different speeds distance calculator gives the value of the overall 96-metre stopping formula. We have serviced over 1 million students d ˘ u 2 20 of you to stop given... On normal road surface conditions 2.5, 40mph x 3 and so on then your stopping distance G ) at... Your stopping distance braking distance formula of the roadway and your braking distance by watching this stop motion short distance it based. And force of impact driver has applied the brake pedal to your stopping! The overall 96-metre stopping distance is often given as a negative acceleration is... That a vehicle travels in the the theoretical braking distance based on ideal with! Roadway and your tires can influence your braking distance -- -- -= stopping! D ˘ u 2 20 the 268 feet is the distance the car will not come a... And impact are 4 times greater not … An example of using the formula for the the... Equation used to calculate the braking distance 30 mph from classical mechanics the distance travelled from the moment the are! As long as a negative acceleration rate is used Drivers Association, we need more. Gas pedal so you car will not come to a stop immediately studies on road! Is so dangerous brakes are applied to the braking distance ( s ) you. 20Mph braking distance formula, 30mph x 2.5, 40mph x 3 and so on energy relates to the change in time!, snow, or ice can greatly reduce the frictional force 2 20 that is stopping you not... That we will consider is your initial velocity a = acceleration rate is used during acceleration going uphill gravity! Feet total ) to your total stopping distance will increase identical braking are...
braking distance formula 2021 | 2021-09-26 16:55:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.675096869468689, "perplexity": 917.7144993037161}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00642.warc.gz"} |
https://math.stackexchange.com/questions/923293/rotation-matrix-to-quaternionproper-orientation | # Rotation Matrix to Quaternion(proper Orientation)
Given Data in the figure
1. In this figure we have a unit vectors $x,y,z$ as axis. Axis of rotation is $b$ and angle of rotation is $\phi$. $\phi$ is unknown and $b$ is given as $b= \frac{1}{2 \sin(\phi)} \begin{bmatrix} R(3,2)-R(2,3) \\ R(1,3)-R(3,1) \\ R(2,1)-R(1,2) \end{bmatrix}$( reference link). We are given a rotation matrix with constant values , called ${R}_{3 \times 3}$. Then it is observed that we got $d_1,d_2,d_3$ from $x,y,z$ by applying $R$. Observe the rotation direction of $\phi$ around $b$, it is anti-clock wise direction
2. If I try to find the quaternion for $R$, I get the following,
Angle $\phi = \arccos\left( \frac{\mathrm{trace}(R ) - 1}{2} \right)$,$\phi$ can be +ve or negative. So it will affect $q=\cos(\phi/2)+b\sin(\phi/2)$ on "$\sin(\phi/2)$" part. And will give two solutions for q. Let us call it $q_1$ and $q_2$. We are sure one of them is clock wise rotation and other is anti-clock wise,but we are not sure which one is clock/anti clock wise
Question
1. If I require only the quaternion corresponding to anti-clock wise rotation from $q_1$ and $q_2$, is there any way to filter it out? In other words how can I identify the quaternion corresponds to the direction of rotation mentioned in the figure? More simply which one among $q_1$ and $q_2$ represnts anti-clock wise quaternion and how do we identify it
2. If I make a statement " A rotation matrix can have two quaternions represnting them, but a quaternion can have only one rotation matrix representing them" . Am I correct? If not why?
Thanks for taking time to read it
NB : I had posted some different problems arising from this same issue. I didnt get an answer. Please avoid posting any other links not related this problem.
I have the same questions that you, I did some search and I found a paper that discusses this issue: "A recipe on the parameterization of rotation matrices for non-linear optimization using quaternions" by Terzakis et al. According to it (for your first question):
Let $q_R(R):\mathbb{SO}(3)\to\mathbb{H}$ be such that:
$$q_R(R) = \begin{cases} q_R^{(0)}(R)\,\,\, \text{ if, }\, r_{22}>-r_{33} ,\, r_{11}>-r_{22},\,\,r_{11}>-r_{33},\\ q_R^{(1)}(R)\,\,\, \text{ if, }\, r_{22}<-r_{33} ,\, r_{11}>r_{22},\,\,r_{11}>r_{33},\\ q_R^{(2)}(R)\,\,\, \text{ if, }\, r_{22}>r_{33} ,\, r_{11}<r_{22},\,\,r_{11}<-r_{33},\\ q_R^{(3)}(R)\,\,\, \text{ if, }\, r_{22}<r_{33} ,\, r_{11}<-r_{22},\,\,r_{11}<r_{33}. \end{cases}$$ Where
$$q_R^{(0)}(R)=\frac{1}{2} \begin{bmatrix}\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{32}-r_{23})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{13}-r_{31})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{21}-r_{12})/\sqrt{1+r_{11}+r_{22}+r_{33}} \end{bmatrix},$$
$$q_R^{(1)}(R)=\frac{1}{2} \begin{bmatrix}(r_{32}-r_{23})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{21}+r_{12})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{13}+r_{31})/\sqrt{1+r_{11}+r_{22}+r_{33}} \end{bmatrix}$$
$$q_R^{(2)}(R)=\frac{1}{2} \begin{bmatrix}(r_{13}-r_{31})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{21}+r_{12})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\ \sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{32}-r_{23})/\sqrt{1+r_{11}+r_{22}+r_{33}} \end{bmatrix},$$
and
$$q_R^{(3)}(R)=\frac{1}{2} \begin{bmatrix}(r_{21}-r_{12})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{13}+r_{31})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\(r_{32}+r_{23})/\sqrt{1+r_{11}+r_{22}+r_{33}}\\ \sqrt{1+r_{11}+r_{22}+r_{33}} \end{bmatrix}.$$
As for the second question, it seems that you are right, but I haven't found a counterexample. I hope that this helps. | 2019-05-25 23:23:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7561290860176086, "perplexity": 308.86656659406094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258453.85/warc/CC-MAIN-20190525224929-20190526010929-00258.warc.gz"} |
https://math.stackexchange.com/questions/454827/find-a-vector-mathbf-x-whose-image-under-t-is-b | # Find a vector $\mathbf x$ whose image under $T$ is $b$.
I am having trouble with this question and how to get the answer.
With $T$ defined by $T(\mathbf x)=A\mathbf x$, find a vector $x$ whose image under $T$ is $b$.
$$A = \begin{pmatrix} 1 & -3 & 2 \\ 3 & -8 & 8 \\ 0 & 1 & 2 \\ 1 & 0 & 8 \end{pmatrix} \qquad,\qquad b = \begin{pmatrix} 1 \\ 6 \\ 3 \\ 10 \end{pmatrix}$$
What I have done so far is that I've combined the two matrices into a augmented matrix. And row reduced it to get: $$\begin{pmatrix} 1 & -3 & 2 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ So does this just mean that the answer to the question is $\mathbf x = \begin{pmatrix} 1 \\ 3 \\ 0 \\ 0 \end{pmatrix}$??
• Welcome to math.SE, Sofia! I've edited your post in order to make it more readable. Please, visit the help page, to learn how this site works and also how to typeset your question with $\LaTeX$! =) – Andrea Orta Jul 29 '13 at 12:39
What you now have to do is solve the system of equations $$x_1-3x_2+2x_3=1$$ $$x_2+2x_3=3$$
What happens when you solve for $x_2$ in the second equation? Hint: (use a parameter, like let $x_3=t$)
Hint 1: If $A$ has $3$ columns, the dimension of $x$ must be $3$.
Hint 2: To check your result, compute $Ax$ and see if you got $b$. | 2019-06-20 13:02:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891335725784302, "perplexity": 131.94863428102516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999218.7/warc/CC-MAIN-20190620125520-20190620151520-00076.warc.gz"} |
https://www.jobilize.com/online/course/elasticity-stress-and-strain-by-openstax?qcr=www.quizover.com | # Elasticity: stress and strain
Page 1 / 15
• State Hooke’s law.
• Explain Hooke’s law using graphical representation between deformation and applied force.
• Discuss the three types of deformations such as changes in length, sideways shear and changes in volume.
• Describe with examples the young’s modulus, shear modulus and bulk modulus.
• Determine the change in length given mass, length and radius.
We now move from consideration of forces that affect the motion of an object (such as friction and drag) to those that affect an object’s shape. If a bulldozer pushes a car into a wall, the car will not move but it will noticeably change shape. A change in shape due to the application of a force is a deformation . Even very small forces are known to cause some deformation. For small deformations, two important characteristics are observed. First, the object returns to its original shape when the force is removed—that is, the deformation is elastic for small deformations. Second, the size of the deformation is proportional to the force—that is, for small deformations, Hooke’s law is obeyed. In equation form, Hooke’s law is given by
$F=k\Delta L,$
where $\Delta L$ is the amount of deformation (the change in length, for example) produced by the force $F$ , and $k$ is a proportionality constant that depends on the shape and composition of the object and the direction of the force. Note that this force is a function of the deformation $\Delta L$ —it is not constant as a kinetic friction force is. Rearranging this to
$\Delta L=\frac{F}{k}$
makes it clear that the deformation is proportional to the applied force. [link] shows the Hooke’s law relationship between the extension $\Delta L$ of a spring or of a human bone. For metals or springs, the straight line region in which Hooke’s law pertains is much larger. Bones are brittle and the elastic region is small and the fracture abrupt. Eventually a large enough stress to the material will cause it to break or fracture. Tensile strength is the breaking stress that will cause permanent deformation or fracture of a material.
## Hooke’s law
$F=\mathrm{k\Delta L},$
where $\Delta L$ is the amount of deformation (the change in length, for example) produced by the force $F$ , and $k$ is a proportionality constant that depends on the shape and composition of the object and the direction of the force.
$\Delta L=\frac{F}{k}$
The proportionality constant $k$ depends upon a number of factors for the material. For example, a guitar string made of nylon stretches when it is tightened, and the elongation $\Delta L$ is proportional to the force applied (at least for small deformations). Thicker nylon strings and ones made of steel stretch less for the same applied force, implying they have a larger $k$ (see [link] ). Finally, all three strings return to their normal lengths when the force is removed, provided the deformation is small. Most materials will behave in this manner if the deformation is less than about 0.1% or about 1 part in ${\text{10}}^{3}$ .
how can chip be made from sand
is this allso about nanoscale material
Almas
are nano particles real
yeah
Joseph
Hello, if I study Physics teacher in bachelor, can I study Nanotechnology in master?
no can't
Lohitha
where is the latest information on a no technology how can I find it
William
currently
William
where we get a research paper on Nano chemistry....?
nanopartical of organic/inorganic / physical chemistry , pdf / thesis / review
Ali
what are the products of Nano chemistry?
There are lots of products of nano chemistry... Like nano coatings.....carbon fiber.. And lots of others..
learn
Even nanotechnology is pretty much all about chemistry... Its the chemistry on quantum or atomic level
learn
da
no nanotechnology is also a part of physics and maths it requires angle formulas and some pressure regarding concepts
Bhagvanji
hey
Giriraj
Preparation and Applications of Nanomaterial for Drug Delivery
revolt
da
Application of nanotechnology in medicine
has a lot of application modern world
Kamaluddeen
yes
narayan
what is variations in raman spectra for nanomaterials
ya I also want to know the raman spectra
Bhagvanji
I only see partial conversation and what's the question here!
what about nanotechnology for water purification
please someone correct me if I'm wrong but I think one can use nanoparticles, specially silver nanoparticles for water treatment.
Damian
yes that's correct
Professor
I think
Professor
Nasa has use it in the 60's, copper as water purification in the moon travel.
Alexandre
nanocopper obvius
Alexandre
what is the stm
is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.?
Rafiq
industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong
Damian
How we are making nano material?
what is a peer
What is meant by 'nano scale'?
What is STMs full form?
LITNING
scanning tunneling microscope
Sahil
how nano science is used for hydrophobicity
Santosh
Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq
Rafiq
what is differents between GO and RGO?
Mahi
what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq
Rafiq
if virus is killing to make ARTIFICIAL DNA OF GRAPHENE FOR KILLED THE VIRUS .THIS IS OUR ASSUMPTION
Anam
analytical skills graphene is prepared to kill any type viruses .
Anam
Any one who tell me about Preparation and application of Nanomaterial for drug Delivery
Hafiz
what is Nano technology ?
write examples of Nano molecule?
Bob
The nanotechnology is as new science, to scale nanometric
brayan
nanotechnology is the study, desing, synthesis, manipulation and application of materials and functional systems through control of matter at nanoscale
Damian
how did you get the value of 2000N.What calculations are needed to arrive at it
Privacy Information Security Software Version 1.1a
Good
Got questions? Join the online conversation and get instant answers! | 2021-07-23 19:57:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6083441972732544, "perplexity": 1402.9556425364644}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00315.warc.gz"} |
https://www.physicsforums.com/threads/did-curtis-lemay-discover-fractal-geometry-in-1942.519570/ | # Did Curtis LeMay Discover Fractal Geometry in 1942?
1. Aug 6, 2011
### bobschunk
In attempting to arrive at a fair critique of Alexander de Seversky's "Combat Plane" concept, I discovered something unexpected about the "Combat Box" formation developed in late 1942 by then-Colonel Curtis LeMay.
In particular, the numerical and spacial organization of the Combat Box involves the repetition of a single fundamental structure in three-dimensional space to build larger and more complex structures which, in turn, recapitulate the spacial form of the fundamental structure. In other words, Col. LeMay's geometric concept is fractal.
Obviously, dire operational necessity in a lethal environment dictates that perfection in fractal geometrical form not be a priority, but just look at how close this formation comes to such perfection:
Standard Group Combat Box Formation of 20 Aircraft - August 1943
from:
http://www.303rdbg.com/formation.html
Obviously, a more perfect fractal form would require a nine-plane squadron of three flights of three aircraft each, as part of a three-squadron group of 27 aircraft, with no Tail-End Charlies to protect the swallow-tail-shaped opening at the rear of the box from enemy fighters attempting to break up the formation from six o'clock level, but, once again, wartime necessity obviously militates against such theoretical perfection in favor of operational practicality. But the basics of fractal geometry are clearly present in Lemay's ideas, much as Pythagoras is wrongly credited with having discovered his eponymous theorem when he merely rediscovered what a group of tax-cheating Greek farmers had clearly figured out earlier.
2. Aug 6, 2011
### HallsofIvy
Staff Emeritus
I see nothing "fractal" about that but there certainly is some "self-similarity" which is one property of a fractal.
3. Aug 6, 2011
### bobschunk
HallsofIvy:
If self-similarity is a fractal property, then isn't the "Combat Box" somewhat fractal in nature (which is all I'm really claiming, as I've already freely admitted the lapses in self-similarity evident in LeMay's tactics, as he was a bomber pilot, and not a geometer)?
All I'm claiming is that he discovered the essential gist of fractal geometry in pursuit of practical goals unrelated to geometrical theory, while (unbeknownst to him) significantly advancing geometrical theory.
Last edited: Aug 6, 2011
4. Aug 6, 2011
### bobschunk
Hello again, HallsofIvy:
I've looked up contractions, and have found nothing in the equations which disproves my theory. Indeed, I've found this equation:
dist((x, y), (x', y')) = ((x - x')2 + (y - y')2)1/2
which is clearly the Ancient Greek Tax Cheaters' Theorem (currently known under the name of the guy who RE-discovered it), of which Gen. LeMay could not possibly have been unaware.
Could you please state the nature of your attack upon my idea more clearly, and in mathematical, rather than verbal, format?
5. Aug 6, 2011
### HallsofIvy
Staff Emeritus
The fact that something has one of the several required properties of a fractal does not mean it is fractal. That is all I am saying.
I don't know what "contractions" you are talking about.
I note that Wikipedia defines "fractal" as "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole,"[1] a property called self-similarity. "
That is NOT the definition fractal I learned. I learned a fractal as a set the has fractional Hausdorf dimension. "Self similarity" can give fractional dimension but can also give integer dimension.
6. Aug 6, 2011
### bobschunk
OK, so I've just read up on Hausdorff dimensions (most of which consists of stuff I knew about before I knew what girls were good for), and I have no problem with the concept that the "Combat Box" is fractal geometry at a level of simplicity sufficient to allow description either as fractal or integer.
What I'm saying is that you still haen't proven that it's NOT fractal.
Last edited: Aug 6, 2011
7. Aug 6, 2011
### bobschunk
By the way, while looking up Hausdorff dimensions, I stumbled over the Sierpinski Triangle, whose dimensions log 3 / log 2 (approximately 1.585) seem, to me, to be, possibly, significantly near the Golden Section (1.618033988749895... ).
8. Aug 6, 2011
### bobschunk
Anyways, here's the one-sheet that started it all:
A Fair Critique of the de Seversky Combat Plane
Alexander de Seversky was best known as a pioneer aviator and as the originator of the concept of the "Combat Plane", which is a strategic bomber with range, armor, and armament sufficient to enable deep=penetration bombing raids into enemy territory without need of fighter escort. While many consider this concept to have been foolhardy in the light of the experience hard-gained by air forces during the Second World War, I believe the experience of Eight United States Air Force over Central Europe during the last two years of the war validate the concept of the Combat Plane, however lacking in reality the overall concept of the ability of strategic aerial bombardment to eliminate an enemy's ability to wage war eventually proved to be.
While it is true that most attempts to implement de Seversky's Combat Plane concept failed (largely due to the design of the aircraft intended to implement this idea (i.e., the Consolidated B-24 Liberator and the Avro Lancaster)), the combination of the Boeing B-17 Flying Fortress (with her famous ball turret emplaced so as to cover the plane's ventral aspect) with Curtis LeMay's "Combat Box" formation, designed to give every machine gun aboard every plane a clear field of fire into airspace beyond the box itself, as well as to facilitate a concentrated drop zone for the bombs and to keep the trailing planes free of the wake turbulence of the planes ahead of them, did, in fact, prove the validity of de Seversky's fundamental concept of the Combat Plane.
The proof went to the extent that the US deep-penetration raids against Berlin in the spring of 1944 were, fundamentally, not actually intended to damage Berlin, but, rather, to attract German fighters which would be forced to defend the Capital, the specific purpose of this deliberate baiting being to clear the skies of German fighters in time for the invasion of Normandy. The German fighter force in the West was still formidable when the raids began, but virtually non-existent on D-Day, due to the effectiveness of the B-17 formed into the Combat Box as an anti-aircraft platform.
The truth is that the B-17 accounted for more German aircraft than any other type in the ETO, fighters included.
9. Aug 6, 2011
### Dickfore
How is $\log_{2}{(3)} = \log{(3)}/\log{(2)}$ "similar" to $(1 + \sqrt{5})/2$? And what does the golden ratio have to do with fractals?
10. Aug 6, 2011
### bobschunk
I don't know. Maybe I'm wrong, or maybe there's cause for further research to find a non-obvious relationship.
11. Aug 6, 2011
### Dickfore
I'd go with the first option.
12. Aug 6, 2011
### bobschunk
At this point, so would I.
13. Aug 7, 2011
### bobschunk
HallsofIvy:
In one of your replies, you implied that I get my information from Wikipedia. I somehow failed to note this statement, but I wish to ensure you that I obtain my information from reliable sources.
This definition of fractals comes from the source I actually consulted, which is a page of the Yale University website (and, yes, I'm fully aware of the recent decline in Yale's academic standing, largely due to undergraduate students' parents' horror of the surrounding neighborhood, but it certainly is more reputable than Wikipedia):
http://classes.yale.edu/fractals/
"Here we introduce some basic geometry of fractals, with emphasis on the Iterated Function System (IFS) formalism for generating fractals.
In addition, we explore the application of IFS to detect patterns, and also several examples of architectural fractals.
First, though, we review familiar symmetries of nature, preparing us for the new kind of symmetry that fractals exhibit.
A. The geometric characterization of the simplest fractals is self-similarity: the shape is made of smaller copies of itself. The copies are similar to the whole: same shape but different size."
***
I know that your assumption was the product of my failure properly to cite my sources, but I assure you that my knowledge of fractal geometry is fairly sophisticated, or I wouldn't have been able to recognize it when reading a page concerning Second World War strategic bombing formations.
14. Aug 7, 2011
### Dickfore
But, the iterative self-similarity goes infinite numbers of times. This gives the essential fractal dimension of the object. In your case, the fundamental objects, airplanes, cannot be made arbitrarily small due to physical constraints. Therefore, one has to go to infinity in the opposite direction, namely expanding this structure without bound. For this you would need a formation with an infinite amount of planes, which, again, is physically impossible.
15. Aug 7, 2011
### bobschunk
Every leaf on every tree is constructed according to the principles of fractal geometry, as is every tree, yet no leaf or tree is of infinite size.
There is no more need for an infinite number of bombers than there is for infinitely large tree leaves.
In theory, LeMay's concept can be scaled up to contain an infinite number of bombers.
That's all that's necessary to constitute an IFS. The lack of an observed example of infinity in a real-world structure hardly constitutes a rational counter-argument to any claim of the fractal nature of such a structure.
Do you think I'm an idiot?
16. Aug 7, 2011
### Dickfore
This is irrelevant to the discussion.
A finite object can have fractal dimensions if you go iteratively to infinity to smaller and smaller scales. No leaf has actually a fractal dimension. Leaves are not fractals, just as they are not triangles.
17. Aug 7, 2011
### bobschunk
I never claimed that leaves are fractals, or actually have fractal dimensions. Read what I actually wrote: I limited my claim to the statement that leaves are ORGANIZED ACCORDING TO FRACTAL PRINCIPLES.
Just as I would say that religious people try their best not to sin, but no truly religious person would claim to be without sin: the sinfulness of their lives is, to them, a revelation of their dependency upon supernatural aid for the salvation of their souls, but it hardly proves that they're not religious.
18. Aug 7, 2011
### Dickfore
Could you please define this concept?
19. Aug 7, 2011
### Dickfore
20. Aug 7, 2011
### bobschunk
WHAT IS THIS, A DISSERTATION DEFENSE?????
As the great Joey Ramone once said: "What is this? What's in it for me?"
I've actually defined this concept in my last couple of posts. Nature, being economical, limits herself to a few structures which she repeats at different scales. Look at how the vascular systems of leaves emulate tree branches, and how tree branches emulate trees. Look at how atoms resemble solar systems and galaxies. Fractal geometry is a geometrical framework for understanding nature, the proof of which does not require any slavish emulation on the part of nature, which geometry is ABSTRACTED from nature, which nature, in turn, posesses finite resources with which to implement abstract patterns.
WHEW!!! | 2017-02-26 12:34:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5317521691322327, "perplexity": 2612.5982799307053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172000.93/warc/CC-MAIN-20170219104612-00585-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://datascience.stackexchange.com/questions/65984/maximize-one-data-point | # Maximize one data point
I am completely new to data science and looking to narrow down the search and reduce the learning curve required to solve problems like the one given below
I have a data set with 7 columns , Column A(all positive decimal) is the data point I want to maximize. Column B and C are boolean values remaining columns are a combination of positive and negative decimal numbers. I want to find some relation and insights from all colums such that I can maximize the sum of column A.
• what do you mean by "maximize the sum of column A"?
– oW_
Jan 6 '20 at 20:24
• Column A has a positive number in each row , the end goal is to find a quantifiable relation between all columns such that the sum of all values in column A is maximum Jan 6 '20 at 20:29
In R you can run a linear regression. Consider this "academic" minimal example:
df = data.frame(c(3,5,2,7,5,3), c(1,0,1,0,1,0), c(0,1,1,0,1,0))
colnames(df) = c("A", "B", "C")
df
Take this data as an example:
A B C
1 3 1 0
2 5 0 1
3 2 1 1
4 7 0 0
5 5 1 1
6 3 0 0
Now we can see how B and C describe A in the best way.
reg = lm(A~B+C, data=df)
summary(reg)
Output:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.917 1.322 3.719 0.0338 *
factor(B)1 -1.750 1.774 -0.987 0.3966
factor(C)1 0.250 1.774 0.141 0.8968
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.048 on 3 degrees of freedom
Multiple R-squared: 0.2525, Adjusted R-squared: -0.2459
F-statistic: 0.5066 on 2 and 3 DF, p-value: 0.6463
This tells us that when B, C is 0, A=4.1917 if B=1 we would have A=4.917-1.750 and if C=1 we would have A=4.917+0.25.
So, we can also make predictions:
predict(reg, newdata=df)
Which would be in this case:
1 2 3 4 5 6
3.166667 5.166667 3.416667 4.916667 3.416667 4.916667
This is a simple form of ML (linear regression), where the sum of squared residuals is minimized in order to find the coefficients for the intercept as well as B and C which best describe A.
You would write this model like: $$A = \beta_0 + \beta_1 B + \beta_2 C + u$$, where $$u$$ is the statistical error term. You would solve this model by minimizing $$\sum u^2$$ (the sum of squared residuals).
In matrix algebra you could write $$y=\beta X + u$$, and you would solve this by $$(X'X)^{-1}X'y = \hat{\beta}$$.
So we do not "maximise" but minimize the statistical error $$u$$ in order to find the best "fit" for columns B, C given column A.
Have a look at the great book "Introduction to Statistical Learning" to get the main concepts sorted.
• Thank you Peter , this looks like a good place to start. Jan 7 '20 at 8:52 | 2021-09-24 23:30:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7344105243682861, "perplexity": 350.34791703990766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00645.warc.gz"} |
https://www.ias.ac.in/describe/article/pram/083/05/0761-0771 | • Opportunities and problems in determining proton and light nuclear radii
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/083/05/0761-0771
• # Keywords
Proton charge radius; electromagnetic form factors; Breit equation.
• # Abstract
We briefly review the so-called `proton puzzle’, i.e., the disagreement of the newly extracted value of the proton charge radius $r_p$ from muonic hydrogen spectroscopy with other extractions, its possible significance and related problems. After describing the conventional theory to extract the proton radius from atomic spectroscopy we focus on a novel consistent approach based on the Breit equation. With this new tool, we confirm that the radius has indeed become smaller compared to the value extracted from scattering experiments, but the existence of different theoretical approaches casts some doubt on the accuracy of the new value. Precision measurements in atomic physics do provide the opportunity to extract light nuclear radii but the accuracy is limited by the methods of incorporating the nuclear structure effects.
• # Author Affiliations
1. Department de Fisica, Cra 1E, 18A-10, Universidad de los Andes, Bogotá, Colombia
• # Pramana – Journal of Physics
Volume 95, 2021
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019 | 2021-09-17 20:06:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2392859160900116, "perplexity": 2850.1590499023873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00434.warc.gz"} |
https://mathematica.stackexchange.com/questions/149700/solve-non-homogeneous-advection-pde-with-infinitesimal-term | # solve non-homogeneous advection pde with infinitesimal term
I try to solve the following pde with mathematica:
$\frac{\partial u(x,t)}{\partial t}+v\frac{\partial u(x,t)}{\partial x} = P(t)\cdot dx \cdot (k1 \cdot u(x,t) + k2)$
with the boundary condition $u(0,t) = u_{in}(t)$ and the initial condition $u(x,0) = u_0(x)$ for $x \in [0,L]$, $t \in [0,\infty]$
where $dx$ in an infinitesimal piece in the x dimension.
I struggle on incorporating/specifying the $dx$ correctly in the equation in mathematica. Anyone encountered such a problem and could give me some hints on how to tackle this problem?
Edit
The function P(t) is a smooth continuous function that serves as a control input that represents a power input. $dx \cdot (k1 \cdot u(x,t) + k2)$ then describes how this power affects the very thin slice $dx$ at the point x.
In general $P(t) \geq 0$ and $k1,k2 >0$ or for a specific case $k1 = 0.0194,k2=0.5369$
Since I'm new to mathematica I tried to solve the equation with DSolve. To solve it it just specified the constants k1, k2 as a function of x $k1_x [x] = k1 \cdot dx$. However, the solution I get does not quite make sense for me. When I try to solve the integrals and plug in the integration variable into k1,k2 I would get something like $\int k1_x(C_1) dC_1 = k1 \cdot \int dC_1 dC_1$ which makes no sens for me.
sol1 = DSolve[{D[u[x, t], t] + v D[u[x, t], x] ==
P[t] k1[x] u[x, t] + P[t] k2[x] , u[x, 0] == u0[x]}, u[x, t], {x, t}]
• Please give the equation in Mathematica format, provide values for all constants, and expressions for all 1D functions. Most importantly, explain dx clearly. – bbgodfrey Jul 4 '17 at 17:59
• I think you don't need the dx on the rhs. Without it, it does what you describe in the edit. With it, it makes no sense since dx->0 makes the rhs go away. – Chris K Jul 8 '17 at 6:59
• Thanks, Chris. I reworked the derivation and found an error. These terms actually vanish if the derivation is done right. – Dolma Jul 9 '17 at 18:44
• I solve like this problem in Mathimatica for example ( u1=u0-1/2 ∫0^t((∂x ((u0)^2))/.t→s)ds) and I obtain good result, you can write this operator ∂u(x,t)/∂t in Mathimatica like my example. – دنيا خيري شمدين Oct 12 '18 at 18:06 | 2019-08-26 08:30:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7921111583709717, "perplexity": 694.9815563288654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331228.13/warc/CC-MAIN-20190826064622-20190826090622-00305.warc.gz"} |
https://nips.cc/Conferences/2008/ScheduleMultitrack?event=1319 | Timezone: »
Poster
An Homotopy Algorithm for the Lasso with Online Observations
Pierre Garrigues · Laurent El Ghaoui
Mon Dec 08 08:45 PM -- 12:00 AM (PST) @ None #None
It has been shown that the problem of $\ell_1$-penalized least-square regression commonly referred to as the Lasso or Basis Pursuit DeNoising leads to solutions that are sparse and therefore achieves model selection. We propose in this paper an algorithm to solve the Lasso with online observations. We introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data point. We compare our method to Lars and present an application to compressed sensing with sequential observations. Our approach can also be easily extended to compute an homotopy from the current solution to the solution after removing a data point, which leads to an efficient algorithm for leave-one-out cross-validation. | 2021-05-12 21:34:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7683025598526001, "perplexity": 499.7691720512168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00001.warc.gz"} |
https://sieukm.com/PermutationTests.html | A nonparametric approach to computing the p-value for any test statistic in just about any scenario.
#### Overview
In almost all hypothesis testing scenarios, the null hypothesis can be interpreted as follows.
$$H_0$$: Any pattern that has been witnessed in the sampled data is simply due to random chance.
Permutation Tests depend completely on this single idea. If all patterns in the data really are simply due to random chance, then the null hypothesis is true. Further, random re-samples of the data should show similar lack of patterns. However, if the pattern in the data is real, then random re-samples of the data will show very different patterns from the original.
Consider the following image. In that image, the toy blocks on the left show a clear pattern or structure. They are nicely organized into colored piles. This suggests a real pattern that is not random. Someone certainly organized those blocks into that pattern. The blocks didn’t land that way by random chance. On the other hand, the pile of toy blocks shown on the right is certainly a random pattern. This is a pattern that would result if the toy blocks were put into a bag, shaken up, and dumped out. This is the idea of the permutation test. If there is structure in the data, then “mixing up the data and dumping it out again” will show very different patterns from the original. However, if the data was just random to begin with, then we would see a similar pattern by “mixing up the data and dumping it out again.”
The process of a permutation test is:
1. Compute a test statistic for the original data.
2. Re-sample the data (“shake it up and dump it out”) thousands of times, computing a new test statistic each time, to create a sampling distribution of the test statistic.
3. Compute the p-value of the permutation test as the percentage of test statistics that are as extreme or more extreme than the one originally observed.
In review, the sampling distribution is created by permuting (randomly rearranging) the data thousands of times and calculating a test statistic on each permuted version of the data. A histogram of the test statistics then provides the sampling distribution of the test statistic needed to compute the p-value of the original test statistic.
#### Explanation
The most difficult part of a permutation test is in the random permuting of the data. How the permuting is performed depends on the type of hypothesis test being performed. It is important to remember that the permutation test only changes the way the p-value is calculated. Everything else about the original test is unchanged when switching to a permutation test.
##### Independent Samples t Test
For the independent sample t Test, we will use the data from the independent sleep analysis. In that analysis, we were using the sleep data to test the hypotheses:
$H_0: \mu_\text{Extra Hours of Sleep with Drug 1} - \mu_\text{Extra Hours of Sleep with Drug 2} = 0$ $H_a: \mu_\text{Extra Hours of Sleep with Drug 1} - \mu_\text{Extra Hours of Sleep with Drug 2} \neq 0$
We used a significance level of $$\alpha = 0.05$$ and obtained a P-value of $$0.07939$$. Let’s demonstrate how a permutation test could be used to obtain this same p-value. (Technically you only need to use a permutation test when the requirements of the original test were not satisfied. However, it is also reasonable to perform a permutation test anytime you want. No requirements need to be checked when performing a permutation test.)
# First run the initial test and gain the test statistic:
myTest <- t.test(extra ~ group, data = sleep, mu = 0)
observedTestStat <- myTest$statistic # Now we run the permutations to create a distribution of test statistics N <- 2000 permutedTestStats <- rep(NA, N) for (i in 1:N){ permutedTest <- t.test(sample(extra) ~ group, data = sleep, mu = 0) permutedTestStats[i] <- permutedTest$statistic
}
# Now we show a histogram of that distribution
hist(permutedTestStats, col = "skyblue")
abline(v = observedTestStat, col = "red", lwd = 3)
#Greater-Than p-value: Not the correct one in this case
sum(permutedTestStats >= observedTestStat)/N
# Less-Than p-value: Not the correct one for this data
sum(permutedTestStats <= observedTestStat)/N
# Two-Sided p-value: This is the one we want based on our alternative hypothesis.
2*sum(permutedTestStats <= observedTestStat)/N
Note The Wilcoxon Rank Sum test is run using the same code except with myTest <- wilcox.test(y ~ x, data=...) instead of t.test(...) in both Step’s 1 and 2.
ANOVA (click to show/hide)
Simple Linear Regression (click to show/hide) | 2023-03-31 16:43:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6461068987846375, "perplexity": 771.129306867747}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00378.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/eect.2016.5.185 | # American Institute of Mathematical Sciences
March 2016, 5(1): 185-199. doi: 10.3934/eect.2016.5.185
## A matrix-valued generator $\mathcal{A}$ with strong boundary coupling: A critical subspace of $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^*)^{\frac{1}{2}})$ and implications
1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, United States
Received September 2015 Revised February 2016 Published March 2016
We study the free dynamic operator $\mathcal{A}$ which arises in the study of a heat-viscoelastic structure model with highly coupled boundary conditions at the interface between the heat domain and the contiguous structure domain. We use Baiocchi's characterization on the interpolation of subspaces defined by a constrained map [1], [16,p 96] to identify a relevant subspace $V_0$ of both $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^∗)^{\frac{1}{2}})$, which is sufficient to determine the optimal regularity of the interface (boundary) $\to$ interior map $\mathcal{A}^{-1} \mathcal{B}_N$ from the interface to the energy space. Here, $\mathcal{B}_N$ is the (boundary) control operator acting at the interface in the Neumann boundary conditions.
Citation: Roberto Triggiani. A matrix-valued generator $\mathcal{A}$ with strong boundary coupling: A critical subspace of $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^*)^{\frac{1}{2}})$ and implications. Evolution Equations and Control Theory, 2016, 5 (1) : 185-199. doi: 10.3934/eect.2016.5.185
##### References:
[1] C. Baiocchi, Un teorema di interpolazione: Applicazioni ai problemi ai limiti per le equazioni a derivate parziali, Ann. Mat. Pura Appl., 73 (1966), 233-251. doi: 10.1007/BF02415089. [2] A. Bensoussan, G. Da Prato, M. Delfour and S. Mitter, Representation and Control of Infinite Dimensional Systems, $2^{nd}$ edition, Birkhauser, 2007, 575 pages. doi: 10.1007/978-0-8176-4581-6. [3] S. Chen and R. Triggiani, Proof of two conjectures of G. Chen and D. L. Russell on structural damping for elastic systems: The case $\alpha = 1/2$, Springer-Verlag Lecture Notes in Mathematics, 1354 (1988), 234-256. Proceedings of Seminar on Approximation and Optimization, University of Havana, Cuba (January 1987). doi: 10.1007/BFb0089601. [4] S. Chen and R. Triggiani, Proof of extensions of two conjectures on structural damping for elastic systems: The case $1/2 \leq \alpha \leq 1$), Pacific J. Math., 136 (1989), 15-55. doi: 10.2140/pjm.1989.136.15. [5] S. Chen and R. Triggiani, Characterization of domains of fractional powers of certain operators arising in elastic systems, and applications, J. Diff. Eqns., 88 (1990), 279-293. doi: 10.1016/0022-0396(90)90100-4. [6] L. De Simon, Un'applicazione della teoria degli integrali singolari allo studio delle equazioni differenziali lineari astratte del primo ordine, Rendiconti del Seminario Matematico della Universita di Padova, 34 (1964), 205-223. [7] D. Fujiwara, Concrete characterization of the domains of fractional powers of some elliptic differential operators of the second order, Proc. Japan Acad., 43 (1967), 82-86. doi: 10.3792/pja/1195521686. [8] P. Grisvard, Characterization de qualques espaces d' interpolation, Arch. Pat. Mech. Anal., 25 (1967), 40-63. doi: 10.1007/BF00281421. [9] T. Kato, Fractional powers of dissipative operators, J. Math. Soc. Japan., 13 (1961), 246-274. doi: 10.2969/jmsj/01330246. [10] I. Lasiecka, Unified theory for abstract parabolic boundary problems-a semigroup approach, Appl. Math. & Optimiz., 6 (1980), 287-333. doi: 10.1007/BF01442900. [11] I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories I, Abstract Parabolic Systems Encyclopedia of Mathematics and Its Applications Series, Cambridge University Press, January 2000. [12] I. Lasiecka and R. Triggiani, Domains of fractional powers of matrix-valued Operators: A general approach, Operator Semigroups Meet Complex Analysis, Harmonic Analysis and Mathematical Physics, Operator Theory Advances and Applications, W.Arendt, R.Chill and Y.Tomilov, Editors, 250 (2015), 297-309. doi: 10.1007/978-3-319-18494-4_20. [13] I. Lasiecka and R. Triggiani, Heat-structure interaction with viscoelastic damping: Analyticity with sharp analytic sector, exponential decay, Communications on Pure & Applied Analysis, to appear. [14] C. Lebiedzik and R. Triggaini, The optimal interior regularity for the critical case of a clamped thermoelastic system with point control revisited, Modern Aspects of the Theory of PDEs. Vol. 216 of Operator Theory: Advances and Applications, 243-259, Birkhäuser/Springer, Basel, 2011. M. Ruzhansky and J. Wirth, eds. doi: 10.1007/978-3-0348-0069-3_14. [15] J. L. Lions, Especes d'interpolation et domaines de puissances fractionnaires d'openateurs, J. Math Soc., 14 (1962), 233-241. doi: 10.2969/jmsj/01420233. [16] J. L. Lions and E. Magenes, Nonhomogeneous Boundary Value Propblems and Applications, Vol. I, Springer-Verlag (1972), 357 pp. [17] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer Verlag, 1983. doi: 10.1007/978-1-4612-5561-1. [18] R. Triggiani, A heat-viscoelastic structure interaction model with Neumann or Dirichlet boundary control at the interface: optimal regularity, control theoretic implications, Applied Mathematics and Optimization, special issue in memory of A.V.Balakrishnan, to appear.
show all references
##### References:
[1] C. Baiocchi, Un teorema di interpolazione: Applicazioni ai problemi ai limiti per le equazioni a derivate parziali, Ann. Mat. Pura Appl., 73 (1966), 233-251. doi: 10.1007/BF02415089. [2] A. Bensoussan, G. Da Prato, M. Delfour and S. Mitter, Representation and Control of Infinite Dimensional Systems, $2^{nd}$ edition, Birkhauser, 2007, 575 pages. doi: 10.1007/978-0-8176-4581-6. [3] S. Chen and R. Triggiani, Proof of two conjectures of G. Chen and D. L. Russell on structural damping for elastic systems: The case $\alpha = 1/2$, Springer-Verlag Lecture Notes in Mathematics, 1354 (1988), 234-256. Proceedings of Seminar on Approximation and Optimization, University of Havana, Cuba (January 1987). doi: 10.1007/BFb0089601. [4] S. Chen and R. Triggiani, Proof of extensions of two conjectures on structural damping for elastic systems: The case $1/2 \leq \alpha \leq 1$), Pacific J. Math., 136 (1989), 15-55. doi: 10.2140/pjm.1989.136.15. [5] S. Chen and R. Triggiani, Characterization of domains of fractional powers of certain operators arising in elastic systems, and applications, J. Diff. Eqns., 88 (1990), 279-293. doi: 10.1016/0022-0396(90)90100-4. [6] L. De Simon, Un'applicazione della teoria degli integrali singolari allo studio delle equazioni differenziali lineari astratte del primo ordine, Rendiconti del Seminario Matematico della Universita di Padova, 34 (1964), 205-223. [7] D. Fujiwara, Concrete characterization of the domains of fractional powers of some elliptic differential operators of the second order, Proc. Japan Acad., 43 (1967), 82-86. doi: 10.3792/pja/1195521686. [8] P. Grisvard, Characterization de qualques espaces d' interpolation, Arch. Pat. Mech. Anal., 25 (1967), 40-63. doi: 10.1007/BF00281421. [9] T. Kato, Fractional powers of dissipative operators, J. Math. Soc. Japan., 13 (1961), 246-274. doi: 10.2969/jmsj/01330246. [10] I. Lasiecka, Unified theory for abstract parabolic boundary problems-a semigroup approach, Appl. Math. & Optimiz., 6 (1980), 287-333. doi: 10.1007/BF01442900. [11] I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories I, Abstract Parabolic Systems Encyclopedia of Mathematics and Its Applications Series, Cambridge University Press, January 2000. [12] I. Lasiecka and R. Triggiani, Domains of fractional powers of matrix-valued Operators: A general approach, Operator Semigroups Meet Complex Analysis, Harmonic Analysis and Mathematical Physics, Operator Theory Advances and Applications, W.Arendt, R.Chill and Y.Tomilov, Editors, 250 (2015), 297-309. doi: 10.1007/978-3-319-18494-4_20. [13] I. Lasiecka and R. Triggiani, Heat-structure interaction with viscoelastic damping: Analyticity with sharp analytic sector, exponential decay, Communications on Pure & Applied Analysis, to appear. [14] C. Lebiedzik and R. Triggaini, The optimal interior regularity for the critical case of a clamped thermoelastic system with point control revisited, Modern Aspects of the Theory of PDEs. Vol. 216 of Operator Theory: Advances and Applications, 243-259, Birkhäuser/Springer, Basel, 2011. M. Ruzhansky and J. Wirth, eds. doi: 10.1007/978-3-0348-0069-3_14. [15] J. L. Lions, Especes d'interpolation et domaines de puissances fractionnaires d'openateurs, J. Math Soc., 14 (1962), 233-241. doi: 10.2969/jmsj/01420233. [16] J. L. Lions and E. Magenes, Nonhomogeneous Boundary Value Propblems and Applications, Vol. I, Springer-Verlag (1972), 357 pp. [17] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer Verlag, 1983. doi: 10.1007/978-1-4612-5561-1. [18] R. Triggiani, A heat-viscoelastic structure interaction model with Neumann or Dirichlet boundary control at the interface: optimal regularity, control theoretic implications, Applied Mathematics and Optimization, special issue in memory of A.V.Balakrishnan, to appear.
[1] Irena Lasiecka, Roberto Triggiani. Heat--structure interaction with viscoelastic damping: Analyticity with sharp analytic sector, exponential decay, fractional powers. Communications on Pure and Applied Analysis, 2016, 15 (5) : 1515-1543. doi: 10.3934/cpaa.2016001 [2] Demetris Hadjiloucas. Stochastic matrix-valued cocycles and non-homogeneous Markov chains. Discrete and Continuous Dynamical Systems, 2007, 17 (4) : 731-738. doi: 10.3934/dcds.2007.17.731 [3] Yongge Tian. A survey on rank and inertia optimization problems of the matrix-valued function $A + BXB^{*}$. Numerical Algebra, Control and Optimization, 2015, 5 (3) : 289-326. doi: 10.3934/naco.2015.5.289 [4] Daniel Alpay, Eduard Tsekanovskiĭ. Subclasses of Herglotz-Nevanlinna matrix-valued functtons and linear systems. Conference Publications, 2001, 2001 (Special) : 1-13. doi: 10.3934/proc.2001.2001.1 [5] Peter Giesl. On a matrix-valued PDE characterizing a contraction metric for a periodic orbit. Discrete and Continuous Dynamical Systems - B, 2021, 26 (9) : 4839-4865. doi: 10.3934/dcdsb.2020315 [6] George Avalos, Roberto Triggiani. Rational decay rates for a PDE heat--structure interaction: A frequency domain approach. Evolution Equations and Control Theory, 2013, 2 (2) : 233-253. doi: 10.3934/eect.2013.2.233 [7] Manli Song, Jinggang Tan. Hardy inequalities for the fractional powers of the Grushin operator. Communications on Pure and Applied Analysis, 2020, 19 (9) : 4699-4726. doi: 10.3934/cpaa.2020192 [8] Maykel Belluzi, Flank D. M. Bezerra, Marcelo J. D. Nascimento. On spectral and fractional powers of damped wave equations. Communications on Pure and Applied Analysis, 2022, 21 (8) : 2739-2773. doi: 10.3934/cpaa.2022071 [9] Miaomiao Cai, Li Ma. Moving planes for nonlinear fractional Laplacian equation with negative powers. Discrete and Continuous Dynamical Systems, 2018, 38 (9) : 4603-4615. doi: 10.3934/dcds.2018201 [10] Qiang Du, M. D. Gunzburger, L. S. Hou, J. Lee. Analysis of a linear fluid-structure interaction problem. Discrete and Continuous Dynamical Systems, 2003, 9 (3) : 633-650. doi: 10.3934/dcds.2003.9.633 [11] Jiang-Xia Nan, Deng-Feng Li. Linear programming technique for solving interval-valued constraint matrix games. Journal of Industrial and Management Optimization, 2014, 10 (4) : 1059-1070. doi: 10.3934/jimo.2014.10.1059 [12] Francis C. Motta, Patrick D. Shipman. Informing the structure of complex Hadamard matrix spaces using a flow. Discrete and Continuous Dynamical Systems - S, 2019, 12 (8) : 2349-2364. doi: 10.3934/dcdss.2019147 [13] Anthony Tongen, María Zubillaga, Jorge E. Rabinovich. A two-sex matrix population model to represent harem structure. Mathematical Biosciences & Engineering, 2016, 13 (5) : 1077-1092. doi: 10.3934/mbe.2016031 [14] John Cleveland. Basic stage structure measure valued evolutionary game model. Mathematical Biosciences & Engineering, 2015, 12 (2) : 291-310. doi: 10.3934/mbe.2015.12.291 [15] Roberto Triggiani, Jing Zhang. Heat-viscoelastic plate interaction: Analyticity, spectral analysis, exponential decay. Evolution Equations and Control Theory, 2018, 7 (1) : 153-182. doi: 10.3934/eect.2018008 [16] Emine Kaya, Eugenio Aulisa, Akif Ibragimov, Padmanabhan Seshaiyer. A stability estimate for fluid structure interaction problem with non-linear beam. Conference Publications, 2009, 2009 (Special) : 424-432. doi: 10.3934/proc.2009.2009.424 [17] Eugenio Aulisa, Akif Ibragimov, Emine Yasemen Kaya-Cekin. Fluid structure interaction problem with changing thickness beam and slightly compressible fluid. Discrete and Continuous Dynamical Systems - S, 2014, 7 (6) : 1133-1148. doi: 10.3934/dcdss.2014.7.1133 [18] Grégoire Allaire, Alessandro Ferriero. Homogenization and long time asymptotic of a fluid-structure interaction problem. Discrete and Continuous Dynamical Systems - B, 2008, 9 (2) : 199-220. doi: 10.3934/dcdsb.2008.9.199 [19] Serge Nicaise, Cristina Pignotti. Asymptotic analysis of a simple model of fluid-structure interaction. Networks and Heterogeneous Media, 2008, 3 (4) : 787-813. doi: 10.3934/nhm.2008.3.787 [20] Igor Kukavica, Amjad Tuffaha. Solutions to a fluid-structure interaction free boundary problem. Discrete and Continuous Dynamical Systems, 2012, 32 (4) : 1355-1389. doi: 10.3934/dcds.2012.32.1355
2021 Impact Factor: 1.169 | 2022-07-05 09:21:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7064212560653687, "perplexity": 4268.881371388554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00344.warc.gz"} |
https://jipsurvey.com/links.htm | Other information may be found at the following websites Census Bureau Bureau of Labor Statistics NASA Currency Exchange Rates | 2021-06-24 14:26:00 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9378069043159485, "perplexity": 2128.918968426981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556133.92/warc/CC-MAIN-20210624141035-20210624171035-00331.warc.gz"} |
https://www.schroederdewitt.com/blog/2018/categorifying-quantum-mechanics/ | Let us now apply the concept of a symmetric monoidal category (SMC) to quantum mechanics in finite dimensions. In fact, the first step is rather straight-forward:
Definition 1. [1] The category FdHilb consists of a symmetric monoidal category (SMC) with finite-dimensional complex Hilbert spaces as objects and linear transformations as arrows. Arrow composition is provided by ordinary matrix multiplication. The monoidal structure is provided by the ordinary matrix tensor product $\otimes$.
Remember how we complained right at the beginning that one reason for Dirac notation being clumsy because it allows for meaningless global phases? Well, FdHilb currently has exactly the same issue. Let’s fix this!
Definition 2. [2] The category $\mathbf{FdHilb}_{wp}$ has the same objects and arrows as FdHilb, however linear maps are subject to the equivalence condition $f\equiv g$ iff there exists $\theta\in\mathbb{R}$ such that $f=e^{i\theta}g$.
So what have we achieved so far? Let’s pick some simple object within $\mathbf{FdHilb}_{wp}$ – for example, the well-known Qubit of type $\mathbb{C}^2$. But what can we do with this? The answer is: Not that much. We can juxtapose multiple Qubits into composite systems, and define composite linear transformations to get from one composite system to another. All of this, we can of course do either formally, using equations, or graphically, using the graphical notation introduced for SMCs.
So how can we get to do more exciting things? Let’s look into the nature of Hilbert spaces and augment $\mathbf{FdHilb}_{wp}$ with appropriate inner structure, starting with general properties and then gradually moving on to finer structure.
What again distinguishes a Hilbert space from an Euclidean space? We faintly remember from our undergrad times that Hilbert spaces admit a scalar product, that, unlike in Euclidean spaces, has to be positive-definite complex. This additional constraint gives rise to the concept of a dual space, which we propose to capture as follows:
Definition 3. [3] A compact closed category is a symmetric monoidal category (SMC) where each object $A$ is assigned a dual object $A^*$ together with a unit map $\eta_A:I\rightarrow A^*\otimes A$ and a counit map $\epsilon_A:A\otimes A^*\rightarrow I$, such that $$\\lambda^{-1}\_A\\circ(\\epsilon\_A\\otimes A)\\circ\\alpha^{-1}\_{A,A^\*,A}\\circ(A\\otimes\\eta\_A)\\circ\\rho\_A=id\_{A}$$ and $$\\rho^{-1}\_A\\circ(A^\*\\otimes\\epsilon\_A)\\circ\\alpha\_{A^\*,A,A^\*}\\circ(\\eta\_A\\otimes A^\*)\\circ\\lambda\_A=id\_{A^\*}$$
The consistency conditions on $\eta_A$ and $\epsilon_A$ maybe looking slightly outlandish, but if you have a closer look (or draw them as a little diagram), you will realize that they are just constraining $\eta$ and $\epsilon$ to be appropriate primal/dual space preparators and destructors.
What other properties make Hilbert spaces special? After having introduced structure reflecting positive-definite complex scalar products, we now need to impose an important property on morphisms: unitarity.
Definition 4. [4] A $\mathbf{\dagger}$-symmetric monoidal category ($\dagger$-SMC) is a symmetric monoidal category equipped with an identity-on-objects contravariant endofunctor $(-)^\dagger: \mathbf{C}^{op}\rightarrow\mathbf{C}$, which assigns to each morphism $f:A\rightarrow B$ and adjoint morphism $f^\dagger:B\rightarrow A$, which coherently preserves the monoidal structure, i.e.: $(f\circ g)^\dagger=g^\dagger\circ f^\dagger, \ (f\otimes g)^\dagger=f^\dagger\otimes g^\dagger,\ 1^\dagger_A=1_A$ and $f^{\dagger\dagger}=f$. Further, for the natural isomorphisms $\lambda, \rho,\alpha$ and $\sigma$ of the symmetric monoidal structure, the adjoint and the inverse coincide.
Combining both unitarity for morphism and a positive-definite complex scalar products:
Definition 13. [5] A $\mathbf{\dagger}$-compact closed category is a $\mathbf{\dagger}$-symmetric monoidal category that is also, and such that the following diagram commutes:
$\dagger$-compact closed categories equally admit a diagrammatic calculus:
And, to represent primal and dual spaces and the associated scalar product (we enclose the equivalent Dirac bra-ket notation):
Congratulations, now we have captured the fundamental properties of Hilbert spaces.
The following important theorem allows custom yanking and bending of wires in $\dagger$-compact categories:
Theorem 2. [5, 6, 7] An equation in the symbolic language of a $\dagger$-compact category follows from the axioms of $\dagger$-compact categories if and only if it holds up to isotopy in the graphical language.
The symbolic language of $\dagger$-compact categories allows for insightful representations of quantum phenomena, such as quantum teleportation and entanglement swapping. We will not discuss these examples at this stage, but we will get back to them later.
Taking a step back, we realize that we have not yet considered a quantum mechanical process of fundamental importance: measurement.
## References
[1] Coecke, B., and Paquette, E. O. Categories for the practising physicist. arXiv e-print 0905.3010, May 2009.
[2] Duncan, R., and Perdrix, S. Graph states and the necessity of Euler decomposition. arXiv e-print 0902.0500, Feb. 2009.
[3] Selinger, P. Dagger compact closed categories and completely positive maps. Electronic Notes in Theoretical Computer Science 170 (2007) 139-163.
[4] Coecke, B., and Duncan, R. Interacting quantum observables: Categorical algebra and diagrammatics. arXiv e-print 0906.4725, June 2009. New J. Phys. 13 (2011) 043016
[5] G. M. Kelly and M. L. Laplaza (1980) Coherence for compact closed categories. Journal of Pure and Applied Algebra 19, 193–213.
[6] P. Selinger (2005) Dagger compact closed categories and completely positive maps. Electronic Notes in Theoretical Computer Science 170, 139–163. www.mathstat.dal.ca/∼selinger/papers.html#dagger
[7] P. Selinger (2010) Autonomous categories in which $A\simeq A^*$. Extended abstract. In: Proceedings of the 7th International Workshop on Quantum Physics and Logic, May 29-30, Oxford. www.mscs.dal.ca/∼selinger/papers.html#halftwist
[8] Abramsky, S., and Coecke, B. A categorical semantics of quantum
protocols. arXiv e-print quant-ph
/0402130, Feb. 2004. Proceedings of the 19th IEEE conference on Logic in Computer Science (LiCS’04). IEEE Computer Science Press (2004). | 2022-10-06 22:38:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258739948272705, "perplexity": 685.864802325947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00426.warc.gz"} |
https://www.originlab.com/doc/Origin-Help/Hilbert-Transform | # 18.10 Hilbert Transform (Pro Only)
This function calculates the Hilbert transform and/or the analytic signal which corresponds to the input.
Let f(ix) be the input signal, and let H() denote the Hilbert transform operator. The Hilbert transform of f(x) (denoted by g(y) below) can be defined as follows:
$g(y)=H(f(x))=\frac 1\pi \int_{-\infty }^\infty \frac{f(x)dx}{x-y} \,\!$
The result is actually a 90 degree phase shifted version of the input data, as shown in the graph below.
This function can also calculate the analytic signal corresponding to the input data. An analytic signal is a signal that has no negative frequency component. Let z(t) denote the analytical signal, then we have:
$z(t)=f(x)+jH(f(x)) \,\!$
##### To Use Hilbert Transform Tool
1. Make a workbook active.
2. Select Analysis: Signal Processing: Hilbert Transform from the Origin menu.
Topics covered in this section: | 2019-02-22 21:44:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9406737685203552, "perplexity": 824.5209743341942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247526282.78/warc/CC-MAIN-20190222200334-20190222222334-00469.warc.gz"} |
https://mathoverflow.net/questions/268346/why-is-mathbbq-pp1-p-infty-a-complete-topological-field | # Why is $\mathbb{Q}_p(p^{1/p^\infty})$ a complete topological field?
In Matthias Wulkau's exposition of Scholze's thesis, the term perfectoid field is defined as follows:
Let $K$ be a field endowed with a non-archimedian absolute value $\lvert\cdot\rvert$, and let $\mathcal{O}_K$ and $\mathfrak{m}$ be the closed and open unit balls in $K$, respectively. We say that $K$ is a perfectoid field if $\lvert K^\times\rvert\subset\mathbb{R}_{\ge0}$ is non-discrete, if $\operatorname{char}(\mathcal{O}_K/\mathfrak{m})=p>0$, and if the Frobenius map $$\Phi:\mathcal{O}_K/(p)\to\mathcal{O}_K/(p),\ \ x\mapsto x^p$$ is surjective.
Now, I'm slightly confused by this definition, since I know that $L:=\mathbb{Q}_p(p^{1/p^\infty})$ is meant to be an example of a perfectoid field, however, it would seem to me (by analogy with $\overline{\mathbb{Q}_p}$) that $L$ is not complete, and I'm having trouble seeing why $L$ should be complete.
• I think that $L$ is not perfectoid, precisely because it is not complete. Its completion, however, is perfectoid, – Jesse Silliman Apr 27 '17 at 3:05
• Yeah, I agree, any time you've heard that used as an example, they either said it was the completion of that field, or they forget to mention it. – Will Sawin Apr 27 '17 at 3:11
• Note that since $K = \mathbf{Q}_p(p^{1/p^{\infty}})$ has henselian valuation ring (direct limit of complete discrete valuation rings), its Galois theory is canonically identified with that of its completion in the sense that $E \rightsquigarrow \widehat{K} \otimes_K E$ is an equivalence between the categories of finite etale $K$-algebras and finite etale $\widehat{K}$-algebras; see [2.3.1, 2.4.1-2.4.3] in Berkovich's paper on etale cohomology for non-archimedean spaces in Publ. Math. IHES 78. Making "algebraic approximations" to perfectoid constructions is important in some proofs. – nfdc23 Apr 27 '17 at 7:19
• Note that you do not ask that the field be complete in you boxed statement... – ACL May 5 '17 at 20:11
This was alredy answered in the comments, it is the $p$-adic completion of $\mathbb{Q}_p(p^{1/p^\infty})$ that is a perfectoid field. | 2020-04-04 14:38:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311810731887817, "perplexity": 291.4759199021282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00528.warc.gz"} |
https://zbmath.org/?q=an:0868.62070&format=complete | # zbMATH — the first resource for mathematics
A method of detecting changes in the behaviour of locally stationary sequences. (English) Zbl 0868.62070
Summary: A method for the detection of abrupt changes in the course of a locally stationary sequence is proposed. The method is based on a suitable approximation of an observed sequence by autoregressive models that are compared by means of a similarity measure derived from the asymptotic $$I$$-divergence rate. The method is illustrated by several numerical results.
##### MSC:
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH) 62B10 Statistical aspects of information-theoretic topics 62L99 Sequential statistical methods
Full Text:
##### References:
[1] R. R. Bahadur: Some Limits Theorem in Statistics: Regional Conference Series in Appl. Math. SIAM Pubs., Philadelphia 1971. [2] M. Basseville, A. Benveniste: Sequential detection of abrupt changes in spectral characteristics of digital signals. IEEE Trans. Inform. Theory IT-29 (1983), 5, 709-724. · Zbl 0511.94008 · doi:10.1109/TIT.1983.1056737 [3] M. Basseville, A. Benveniste (eds.): Detection of Abrupt Changes in Signals and Dynamic Systems. (Lecture Notes in Control and Inform. Sci. 77). Springer, Berlin 1986. · Zbl 0578.93056 [4] V. Grenander, M. Rossenblatt: Statistical Analysis of Stationary Time Series. J. Wiley, New York 1957. [5] N. Kligiene, L. A. Telksnys: Methods of detecting instants of change of random process properties. Automat. Remote Control 44 (1983), 1241-1283. · Zbl 0541.93063 [6] H. Kiinsch: Thermodynamics and statistical analysis of Gaussian random fields. Z. Wahrschein. verw. Gebiete 55 (1981), 407-421. · Zbl 0458.60053 [7] J. Michalek: Yule-Walker estimates and asymptotic I-divergence rate. Problems Control Inform. Theory 19 (1990), 5-6, 387-398. · Zbl 0744.62126 [8] I. V. Nikoforov: Sequential Detection of Abrupt Changes in Time Series Properties. Nauka, Moscow 1983 [9] I. Vajda: Theory of Statistical Inference and Information. Kluwer, Dordrecht, Boston 1989. · Zbl 0711.62002 [10] I. Vajda: Distances and discrimination rates of stochastic processes. Stochastic Process. Appl. 3 (1990), 47-57. · Zbl 0701.62084 · doi:10.1016/0304-4149(90)90121-8 [11] A. S. Willsky: A survey of design methods for failure detection in dynamic systems. Automatica 12 (1976), 601-611. · Zbl 0345.93067 · doi:10.1016/0005-1098(76)90041-8
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-03-08 05:35:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43305182456970215, "perplexity": 3055.4478088864626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00637.warc.gz"} |
http://doc.sccode.org/Tutorials/A-Practical-Guide/PG_08_Event_Types_and_Parameters.html | Pattern Guide 08: Event Types and Parameters:
Filter:
Tutorials/A-Practical-Guide |
Pattern Guide 08: Event Types and Parameters
Describes the event types defined in the default event, and the parameters they expect
Event types
A common question is, "Which parameters have special meanings in Pbind?" Perhaps surprisingly, none of them do! That's because Pbind simply puts data into the result event; it doesn't care what the data are.
The event prototype used when playing the pattern defines the actions to take, and it is here that parameters are defined. Most patterns will play using the default event prototype ( Event.default ), so this is the source of the parameters that will most typically be used.
The default event prototype defines a number of "event types," each of which performs a different task. The \type key determines which action is taken, and the significant parameters depend on the event type.
There are a lot of event types! However, only a few are commonly used. The \note event type is by far the most typical. The others are auxiliary, and most useful when writing patterns to generate a Score suitable for non-real-time rendering.
Before looking at the event types themselves, let's go over some standard parameters used across many event types. (Not every common parameter is used in every event type, but these turn up in lots of places.)
Common parameters
Timing control
\delta
Number of beats until the next event. Calculated from ~dur * ~stretch, if \delta is not given explicitly.
\lag
Number of seconds to delay the event's server message(s).
\timingOffset
Number of beats to delay the event's server message(s). In conjunction with Quant, this allows control over the order of event preparation between different patterns in the client, without desynchronizing sonic events that should play together. Pattern Guide 06g: Data Sharing has an example of its use to pass data from a bass pattern to a chord pattern.
\sustain
Number of beats to wait before releasing a Synth node on the server. The SynthDef must have a gate argument for the explicit release to be sent; otherwise, the pattern assumes the note will release itself using a timed envelope. \sustain is calculated from ~dur * ~legato * ~stretch if not given directly.
\sendGate
The default behavior for releasing a note is to look in the SynthDesc for an argument called \gate. If it's present, the event will send a node.set(\gate, 0) message to the server. If not, no release will be sent; it's assumed that the SynthDef will release itself after a given length of time. \sendGate overrides this behavior: true means to force the release message to be sent, whether or not the argument exists, while false means to suppress the release message.
It isn't typical use to override; nonetheless, for some special cases, it may be useful.
\tempo
Optional. If a value is given (in beats per second), it will change the tempo of the TempoClock playing the pattern. Here, the note duration is constant but the clock's speed changes.
NOTE: Changing the tempo will affect all patterns playing on the same clock.
Node control
How to add a synth or group node relative to the given \group in the event. See Synth.
\amp
Not formally defined as a special parameter, but this is typically used for Synth amplitude. The SynthDef should have an amp argument and use it to control volume. \amp is optionally calculated from \db.
\id
The desired id(s) for newly created Nodes in this event. Normally this is nil, in which case the IDs will be obtained from server.nextNodeID.
\instrument
The SynthDef name for which nodes will be created. Only one name should be given (unlike other arguments, which "multichannel expand" to create multiple nodes).
\group
The target node relative to which new node(s) will be created. Similar to target in Synth(defName, args, target, addAction).
\out
Generally used for the output bus of a Synth. When using Pbus or Pfxb, an audio bus is allocated to isolate the pattern's signal. All events from the pattern receive the new bus number in the \out slot, and SynthDefs being played should use an out argument for the target of output UGens, e.g., Out.ar(out, ...) .
User function hooks
\finish
A function that will be executed after play has been called, but before event type processing. Use this to manipulate event data.
\callback
A function that will be executed after the Event has finished all its work. The callback may be used for bookkeeping. Finished Events are expected to store new node IDs under ~id; with the IDs, you can register functions to watch node status or set node controls, for instance. The function receives the finished event as its argument.
Event Types
Node control
rest
As one would expect, a \rest does nothing except wait the required amount of time until the next event.
note
This is the default event type, used when \type is not specified. It plays one or more Synth nodes on the server, with an automatic release after \sustain beats if the SynthDef has a gate argument.
Standard Timing and Node control arguments
sendGate
Override SynthDef behavior for the gate argument. If the SynthDef as gate, setting sendGate = false prevents the release message from being sent. Rarely used.
strum
If multiple notes are produced (usually a chord, given by providing an array to one of the pitch parameters), \strum is the number of beats to delay each successive note onset. When using \strum, another key is active, \strumEndsTogether. If false (the default), each strummed node will play for its full duration and the releases will be staggered. If true, the releases will occur at the same time.
on
Start a Synth node (or nodes) without releasing. The node ID(s) are in the event's ~id variable. Those IDs can be used with the off, set and kill event types.
Standard Timing and Node control arguments
(sendGate and strum parameters are not used)
off
Release server nodes nicely if possible. If the SynthDef has a gate argument, the gate will be set to 0 or a user-specified value. Otherwise, the nodes are brutally killed with n_free.
Standard Timing control arguments
hasGate
True or false, telling the event whether the SynthDef has a gate argument or not. The default is assumed true.
id
The node ID(s) must be given explicitly.
gate
By default, the gate will be set to 0. Negative values trigger a "forced release" in EnvGen. See the EnvGen help file for details.
kill
Immediately remove nodes using n_free.
Standard Timing control arguments
id
The node ID(s) must be given explicitly.
set
Send new values to the control inputs of existing nodes.
Standard Timing control arguments
id
The node ID(s) must be given explicitly. This may be an integer ID or Synth/Group node object.
There are two ways to specify argument names: by instrument and by argument array.
- By instrument :
instrument
The SynthDef name should be given again, so that the event knows which event values are relevant for the nodes.
args
By default, the \args key contains the control names for the default synthdef. To take argument names from the instrument name, you must override this default with an empty array (or any non-collection object).
- By argument names :
args
Provide a list of the Synth argument names as an array here, e.g. [\freq, \amp, \pan]. There is no need to provide the instrument name this way.
monoNote
monoOff
monoSet
These event types are used internally by Pmono and PmonoArtic. They should not be used directly.
Server control
group
Create a new group (or groups).
Standard Timing and Node control arguments
id
(Optional) IDs for the new groups. If not specified, the new ID (for one group only) can be found in the event after .play. To create multiple groups, you must provide an array of IDs.
bus
Set the value of a control bus, or contiguous control buses. This assumes that you already have the bus index.
Standard Timing control arguments
array
The value(s) to send to the bus(es). If it's only one value, it doesn't have to be an array.
out
The first bus index to be set. A Bus object can be used.
Buffer control
All of these buffer event types expect the buffer number to be provided. They will not automatically get a buffer number from the server's buffer allocator. A Buffer object is allowed -- you could create the Buffer first using Buffer.alloc or Buffer.new and then use this object in the control events. See also Event types with cleanup below for other, user-friendlier Buffer control options.
alloc
Allocate memory for a buffer on the server. Only one buffer may be allocated per event.
Standard Timing control arguments
bufnum, numchannels, numframes
See the Buffer help file.
free
Deallocate the buffer's memory on the server.
Standard Timing control arguments
bufnum
Buffer number to free (one only).
gen
Generate wavetable data in the buffer, using one of the server's b_gen plug-ins. The Buffer help file has more detail on the standard plug-ins.
Standard Timing control arguments
bufnum
gencmd
The generator plug-in name: \sine1, \sine2, \sine3, \cheby.
genflags
Three flags, associated with numbers: normalize = 1, asWavetable = 2, clearFirst = 4. Add the numbers for the desired flags. Normally the flags are all true, adding up to 7.
genarray
Data parameters for the plug-in. See the Server Command Reference help file for details on the format for each plug-in.
Allocate buffer memory in the server and load a sound file into it, using b_allocRead.
Standard Timing control arguments
bufnum
filename
Path to disk file.
frame
Starting frame to read (default 0).
numframes
Number of frames to read (default 0, which loads the entire file).
Read a sound file into a buffer already allocated on the server. This event type is good to cue a sound file for use with DiskIn.
Standard Timing control arguments
bufnum
filename
Path to disk file.
frame
Starting soundfile frame to read (default 0).
numframes
Number of frames to read (default 0, which loads the entire file).
bufpos
Starting buffer frame (default 0).
leaveOpen
1 = leave the file open (for DiskIn use). 0 = close the disk file after reading. Default = 0.
Event types with cleanup
These event types uniquely have automatic cleanup event types associated with them. Playing one of these event types allocates a server resource. Later, the resource may be freed by changing the event type to the corresponding cleanup type and playing the event again. While the resource is active, the event can be used as a reference to the resource in other events or Synth messaging.
See the Pproto example in Pattern Guide 06f: Server Control, showing how these can be used to clean up server objects at the end of a pattern.
audioBus
Allocate an audio bus index from the server.
channels
Number of channels to allocate.
controlBus
Allocate a control bus index from the server.
channels
Number of channels to allocate.
buffer
Allocate a buffer number if not specified, and reserve the memory on the server.
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
numBufs
Number of contiguous buffer numbers to reserve (default = 1).
numFrames
Number of frames.
numChannels
Number of channels.
Read a disk file into server memory. The file is closed when finished.
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
path
Path to the sound file on disk.
firstFileFrame
Where to start reading in the file.
numFrames
Number of frames. If not given, the whole file is read.
cue
Cue a sound file (generally for use with DiskIn).
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
path
Path to the sound file on disk.
firstFileFrame
Where to start reading in the file.
numFrames
Number of frames. If not given, the whole file is read.
firstBufferFrame
Where in the buffer to start putting file data.
leaveOpen
1 = leave the file open (for DiskIn use). 0 = close the disk file after reading. Default = 0.
table
Fill a buffer with preset data. This uses /b_setn to transfer the data, so all of the data must fit into one datagram. It may take some experimentation to find the upper limit.
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
amps
The values to put into the buffer. These should all be Floats.
cheby
Generate a Chebyshev transfer function for waveshaping.
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
numFrames
Number of frames, should be a power of 2.
numChannels
Number of channels.
genflags
Three flags, associated with numbers: normalize = 1, asWavetable = 2, clearFirst = 4. Add the numbers for the desired flags. Normally the flags are all true, adding up to 7.
amps
The amplitude of each partial (i.e., polynomial coefficient).
sine1
Mirrors the sine1 method for Buffer, generating a wavetable with an integer-multiple harmonic spectrum using the given partial amplitudes.
bufNum
(Optional) Buffer number. If not given, a free number will be obtained from the server.
numFrames
Number of frames, should be a power of 2.
numChannels
Number of channels.
genflags
See above.
amps
Array of amplitudes for each partial.
sine2
Like sine1, but the frequency ratio of each partial is also given.
Same arguments as sine1, plus:
freqs
Array of frequencies for each partial. 1.0 is the fundamental frequency; its sine wave occupies the entire buffer duration.
sine3
Like sine2, but the phase of each partial may also be provided.
Same arguments as sine1, plus:
phases
Array of phases for each partial, given in radians (0.0 - 2pi).
MIDI output
midi
Sends one of several types of MIDI messages to a MIDIOut object.
Standard Timing control arguments (except timingOffset, which is not used)
midicmd
The type of MIDI message to send. This also determines other arguments that should be present in the event.
midiout
The MIDI out object, which connects to one of the MIDI devices listed in MIDIClient.destinations.
chan
The MIDI channel number (0-15) on the device that should receive the message. This applies to all midicmds except the global ones ( smpte, songPtr, sysex ).
Available midicmds:
noteOn
Starts a note, and optionally stops it. If multiple frequencies are given, one noteOn/noteOff pair is sent for each, and \strum is also supported.
chan
MIDI channel (0-15).
midinote
Note number to trigger. This may be calculated from the standard pitch hierarchy described in Pattern Guide 07: Value Conversions (with the exception that only 12TET can be supported).
amp
MIDI velocity = amp / 12.
sustain
How many beats to wait before sending the corresponding note off message. If not given directly, it's calculated as ~sustain = ~dur * ~legato * ~stretch (just like the standard \note event type).
hasGate
Normally true. If false, the note off message will not be sent.
noteOff
Send an explicit note off message (useful if hasGate is set false in the note on event).
chan
MIDI channel (0-15).
midinote
Note number.
amp
Release velocity (supported by some synthesizers).
allNotesOff
"Panic" message, kills all notes on the channel.
chan
MIDI channel (0-15).
control
Continuous controller message.
chan
MIDI channel (0-15).
ctlNum
Controller number to receive the new value.
control
New value (0-127).
bend
Pitch bend message.
chan
MIDI channel (0-15).
val
New value (0-16383). 8191 is centered.
touch
Aftertouch message.
chan
MIDI channel (0-15).
val
New value (0-127).
polyTouch
Poly aftertouch message (not supported by all synthesizers).
chan
MIDI channel (0-15).
midinote
Note number to get the new after touch value. As in note on, it may be calculated from the standard pitch hierarchy.
polyTouch
New value (0-127).
program
Program change message.
chan
MIDI channel (0-15).
progNum
Program number (0-127).
smpte
Send MIDI Time Code messages.
Arguments
frames, seconds, minutes, hours, frameRate
songPtr
Song pointer message.
songPtr
Pointer value (0-16383).
sysex
System exclusive messages.
array
An Int8Array with the sysex bytes in order.
NOTE: Very important: Arrays normally multi-channel expand in patterns. So, you must wrap the Int8Array inside another array to prevent this. Write [Int8Array[...]], not just Int8Array[...].
Miscellaneous
phrase
See recursive_phrasing.
setProperties
Set variables belonging to a given object. One possible use is to control a GUI using a pattern. | 2019-06-17 00:47:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3087238073348999, "perplexity": 4462.166114279296}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998339.2/warc/CC-MAIN-20190617002911-20190617024911-00375.warc.gz"} |
https://testbook.com/question-answer/if-poissons-ratio-of-an-elastic-material-is--60121b3c8fc21f019f83937c | # If Poisson’s ratio of an elastic material is 0.4, then what will be the ratio of modulus of rigidity of Young’s modulus?
This question was previously asked in
SSC JE ME Previous Paper 8 (Held on: 27 October 2020 Evening)
View all SSC JE ME Papers >
1. 0.16
2. 0.36
3. 0.86
4. 0.06
Option 2 : 0.36
Free
ISRO VSSC Technical Assistant Mechanical held on 09/06/2019
2187
80 Questions 320 Marks 120 Mins
## Detailed Solution
Concept:
Relationship between modulus of elasticity (E), modulus of rigidity (G), and Poisson's ratio (μ) is described by:
E = 2G (1 + μ)
Calculation:
Given:
μ = 0.4
E = 2G (1 + μ)
$$\frac{{\rm{G}}}{{\rm{E}}} = \frac{1}{{2\left( {1 + {\rm{μ }}} \right)}} = \frac{1}{{2\left( {1 + 0.4} \right)}} = 0.357$$
Other relationships between various elastic constants are:
• $${\bf{E}} = \frac{{9{\bf{KG}}}}{{3{\bf{K}}\; + \,{\bf{G}}\;}}$$
• E = 3K(1 - 2μ)
where K is the bulk modulus of elasticity. | 2021-09-23 21:39:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7476244568824768, "perplexity": 7473.328815890616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00539.warc.gz"} |
http://gmatclub.com/forum/math-number-theory-88376-20.html?kudos=1 | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 26 Nov 2015, 08:45
# Happy Thanksgiving:
All GMAT Club Tests are Open and Free today
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Math: Number Theory
Question banks Downloads My Bookmarks Reviews Important topics
Author Message
TAGS:
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [2] , given: 8811
Re: Math: Number Theory [#permalink] 11 Aug 2010, 17:07
2
KUDOS
Expert's post
utfan2424 wrote:
Does this relationship breakdown at some point? I thought this was great and was just experimenting and looked at 21! (calculated in excel) and it ends with 5 zeros. Using the methodology you described above it should have 4 zeros. Am I missing something or did I make a mistake somewhere?
You made everything right 21! ends with 21/5=4 zeros. It's excel: it makes rounding with such a huge numbers thus giving incorrect result.
_________________
Retired Moderator
Joined: 02 Sep 2010
Posts: 805
Location: London
Followers: 89
Kudos [?]: 679 [2] , given: 25
Re: Math: Number Theory [#permalink] 28 Oct 2010, 21:28
2
KUDOS
$$36=6^2=2^2*3^2$$
Powers of 2 & 3 are even (2)
Posted from my mobile device
_________________
Retired Moderator
Joined: 02 Sep 2010
Posts: 805
Location: London
Followers: 89
Kudos [?]: 679 [2] , given: 25
Re: Math: Number Theory [#permalink] 01 Nov 2010, 15:16
2
KUDOS
shrive555 wrote:
If n is a positive integer greater than 1, then there is always a prime number P with n<P<2n
n<p<2n can someone please explain this with example .
Thanks
The result you are referring to is a weak form of what is known as Bertrand's Postulate. The proof of this result is beyond the scope of the GMAT, but it is easy to show some examples.
Choose any n>1, you will always find a prime number between n & 2n.
Eg. n=5, 2n=10 ... p=7 lies in between
n=14, 2n=28 ... p=19 lies in between
n=20, 2n=40 ... p=23 lies in between
_________________
Intern
Status: Active
Joined: 30 Jun 2012
Posts: 37
Location: India
Followers: 5
Kudos [?]: 70 [2] , given: 36
Re: Math: Number Theory [#permalink] 27 Oct 2012, 01:34
2
KUDOS
About Exponents and divisibility:
$$(a + b)^2 = a^2+ 2ab + b^2$$ Square of a Sum
$$(a - b)^2 = a^2 - 2ab + b^2$$ Square of a Diffe rence
$$a^n - b^n$$ is always divisble by a-b i.e. irrespective of n being odd or even
Proof:
$$a^2 - b^2 = (a-b)(a+b)$$
$$a^3 - b^3 = (a-b)(a^2+ab+b^2)$$
Thus divisible by a- b in both cases where n = 2 i.e. even and 3 i.e. odd
$$a^n + b^n$$ is divisble by a+b i.e. only if n = odd
Proof:
$$a^3 - b^3 = (a+b)(a^2-ab+b^2)$$
Thus divisible by a + b as n = 3 i.e. odd
_________________
Thanks and Regards!
P.S. +Kudos Please! in case you like my post.
Intern
Joined: 06 Oct 2009
Posts: 33
Schools: Ryerson University
Followers: 1
Kudos [?]: 9 [1] , given: 7
Re: Math: Number Theory [#permalink] 24 Jan 2010, 06:33
1
KUDOS
no problem.
Great post by the way, very informative.
Intern
Joined: 06 Oct 2009
Posts: 33
Schools: Ryerson University
Followers: 1
Kudos [?]: 9 [1] , given: 7
Re: Math: Number Theory [#permalink] 24 Jan 2010, 14:06
1
KUDOS
Quote:
ding is simplifying a number to a certain place value. To round the decimal drop the extra decimal places, and if the first dropped digit is 5 or greater, round up the last digit that you keep. If the first dropped digit is 4 or smaller, round down (keep the same) the last digit that you keep.
Example:
5.3485 rounded to the nearest tenth = 5.3, since the dropped 4 is less than 5.
5.3485 rounded to the nearest hundredth = 5.35, since the dropped 8 is greater than 5.
5.3485 rounded to the nearest thousandth = 5.249, since the dropped 5 is equal to 5.
I'm assuming it was just a typo for the last part of the example:
Its entered as 5.3485 rounded to the nearest thousandth = 5.249, since the dropped 5 is equal to 5.
I guess you meant 5.3485 rounded to the nearest thousandth = 5.349, since the dropped 5 is equal to 5.
Intern
Joined: 06 Oct 2009
Posts: 33
Schools: Ryerson University
Followers: 1
Kudos [?]: 9 [1] , given: 7
Re: Math: Number Theory [#permalink] 25 Jan 2010, 14:54
1
KUDOS
I have a question regarding number properties, which I found on an old GMAT test paper form. Here it is:
If the sum of two positive integers is 24 and the difference of their squares is 48, what is the product of the two integers?
(a) 108
(b) 119
(c) 128
(d) 135
(e) 143
Is there a more efficient way of solving this than choosing two numbers at random?
Intern
Joined: 06 Oct 2009
Posts: 33
Schools: Ryerson University
Followers: 1
Kudos [?]: 9 [1] , given: 7
Re: Math: Number Theory [#permalink] 26 Jan 2010, 06:15
1
KUDOS
Thank you very much, sorry about that, will do from now on.
Intern
Joined: 12 Oct 2009
Posts: 16
Followers: 1
Kudos [?]: 3 [1] , given: 1
Re: Math: Number Theory [#permalink] 26 Jan 2010, 20:01
1
KUDOS
Great Post, thanks a lot
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [1] , given: 8811
Re: Math: Number Theory [#permalink] 05 Mar 2010, 00:22
1
KUDOS
Expert's post
The topic is done. At last!
I'll break it into several smaller ones in a day or two.
Any comments, advises and/or corrections are highly appreciated.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [1] , given: 8811
Re: Math: Number Theory [#permalink] 30 Apr 2010, 13:30
1
KUDOS
Expert's post
AloneAndInsufficient wrote:
Bunuel wrote:
NUMBER THEORY
• For GMAT it's good to memorize following values:
$$\sqrt{2}\approx{1.41}$$
$$\sqrt{3}\approx{1.73}$$
$$\sqrt{5}\approx{2.24}$$
$$\sqrt{7}\approx{2.45}$$
$$\sqrt{8}\approx{2.65}$$
$$\sqrt{10}\approx{2.83}$$
Anyone else notice that these are wrong?
They should be:
• For GMAT it's good to memorize following values:
$$\sqrt{2}\approx{1.41}$$
$$\sqrt{3}\approx{1.73}$$
$$\sqrt{5}\approx{2.24}$$
$$\sqrt{6}\approx{2.45}$$
$$\sqrt{7}\approx{2.65}$$
$$\sqrt{8}\approx{2.83}$$
$$\sqrt{10}\approx{3.16}$$
Thanks. Edited. +1 for spotting this.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [1] , given: 8811
Re: Math: Number Theory [#permalink] 19 Nov 2010, 00:53
1
KUDOS
Expert's post
Araj wrote:
Hello Bunuel - thank you so much for this fantastic post!
with regards to checking for primality:
Quote:
Verifying the primality (checking whether the number is a prime) of a given number can be done by trial division, that is to say dividing by all integer numbers smaller than , thereby checking whether is a multiple of .
Example: Verifying the primality of : is little less than , from integers from to , is divisible by , hence is not prime.
Would it be accurate to say that a number is prime ONLY if it gives a remainder of 1 or 5 when divided by 6?
i.e, for eg. 10973/6 gives a remainder of 5, so it has to be prime...
i found the reasoning behind this in one of the OG solutions:
prime numbers always take the form: 6n+1 or 6n+5 ....
the only possible remainders when any number is divided by 6 are [0,1,2,3,4,5] ...
A prime number always gives a remainder of 1 or 5, because:
a) if the remainder is 2 or 4, then the number must be even
b) if the remainder is 3, then it is divisible by 3 ...
hence, if a number divided by 6 yields 1 or 5 as its remainder, then it must be prime
...?
-Raj
First of all there is no known formula of prime numbers.
Next:
Any prime number $$p>3$$ when divided by 6 can only give remainder of 1 or 5 (remainder can not be 2 or 4 as in this case $$p$$ would be even and remainder can not be 3 as in this case $$p$$ would be divisible by 3).
So any prime number $$p>3$$ could be expressed as $$p=6n+1$$ or$$p=6n+5$$ or $$p=6n-1$$, where n is an integer >1.
But:
Not all number which yield a remainder of 1 or 5 upon division by 6 are prime, so vise-versa of above property is not correct. For example 25 yields a remainder of 1 upon division be 6 and it's not a prime number.
Hope it's clear.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [1] , given: 8811
Re: Math: Number Theory [#permalink] 06 Dec 2010, 00:08
1
KUDOS
Expert's post
shrive555 wrote:
$$(a^m)^n=a^{mn}$$ ----------1
$$(2^2)^2 = 2^2*^2 =2^4$$
$$a^m^n=a^{(m^n)}$$ and not $$(a^m)^n$$ ------------------2
$$2^2^2 = 2^(2^2) = 2^4$$
If above example is correct then whats the difference 1 & 2. Please clarify
thanks
If exponentiation is indicated by stacked symbols, the rule is to work from the top down, thus:
$$a^m^n=a^{(m^n)}$$ and not $$(a^m)^n$$, which on the other hand equals to $$a^{mn}$$.
So:
$$(a^m)^n=a^{mn}$$;
$$a^m^n=a^{(m^n)}$$ and not $$(a^m)^n$$.
Now, there are some specific values of $$a$$, $$m$$ and $$n$$ for which $$a^m^n$$ equals to $$a^{mn}$$. For example:
$$a=1$$: $$1^{m^n}=1=1^{mn}$$;
$$m=0$$: $$a^0^n=a^0=1$$ and $$a^{0*n}=a^0=1$$;
$$m=2$$ and $$n=2$$ --> $$a^{2^2}=a^4$$ and $$a^{2*2}=a^4$$;
$$m=4$$ and $$n=\frac{1}{2}$$ --> $$a^{4^{\frac{1}{2}}}=a^2$$ and $$a^{4*{\frac{1}{2}}}=a^2$$;
...
So, generally $$a^m^n$$ does not equal to $$(a^m)^n$$, but for specific values of given variables it does.
shrive555 wrote:
In question would that be given explicitly ... i mean the Brackets ( )
$$a^m^n$$ ALWAYS means $$a^{(m^n)}$$, so no brackets are needed. For example $$2^{3^4}=2^{(3^4)}=2^{81}$$;
If GMAT wants the order of operation to be different then the necessary brackets will be put. For example: $$(2^3)^4=2^{(3*4)}=2^{12}$$.
Hope it's clear.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 30376
Followers: 5086
Kudos [?]: 57253 [1] , given: 8811
Re: Math: Number Theory [#permalink] 03 Jan 2011, 08:52
1
KUDOS
Expert's post
resh924 wrote:
Bunuel,
For determining last digit of a power for numbers 0, 1, 5, and 6, I am not clear on how to determine the last digit.
• Integer ending with 0, 1, 5 or 6, in the integer power k>0, has the same last digit as the base.
What is the last digit of 345^27 ---is the last digit 5?
What is the last digit of 216^32----is the last digit 6?
What is the last digit of 111^56---is the last digit 1?
Any clarification would be helpful.
Thanks for all your help.
First of all: last digit of 345^27 is the same as that of 5^27 (the same for 216^32 and 111^56);
Next:
1 in any integer power is 1;
5^1=5, 5^2=25, 5^3=125, ...
6^1=6, 6^2=36, 5^3=216, ...
So yes, integer ending with 0, 1, 5 or 6, in the integer power k>0, has the same last digit as the base: thus 0, 1, 5, and 6 respectively.
Hope it's clear.
_________________
Manager
Joined: 18 Jan 2012
Posts: 51
Location: United States
Followers: 3
Kudos [?]: 76 [1] , given: 25
Re: Math: Number Theory [#permalink] 25 Sep 2012, 09:43
1
KUDOS
conty911 wrote:
Bunuel wrote:
NUMBER THEORY
Trailing zeros:
Trailing zeros are a sequence of 0's in the decimal representation (or more generally, in any positional representation) of a number, after which no other digits follow.
125000 has 3 trailing zeros;
The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer $$n$$, can be determined with this formula:
$$\frac{n}{5}+\frac{n}{5^2}+\frac{n}{5^3}+...+\frac{n}{5^k}$$, where k must be chosen such that $$5^k<n$$.
It's easier if you look at an example:
How many zeros are in the end (after which no other digits follow) of $$32!$$?
$$\frac{32}{5}+\frac{32}{5^2}=6+1=7$$ (denominator must be less than 32, $$5^2=25$$ is less)
Hence, there are 7 zeros in the end of 32!
The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero.
I noticed in case the number (n) is multiple of $$5^k$$ and we have to find number of trailing zero zeroes, then it will be $$5^k<=n$$ rather $$5^k<n$$
no of trailing zeros in 25! =6
$$\frac{25}{5}+\frac{25}{5^2}= 5+1$$;
Please correct me, clarify if i'm wrong. Thanks
The highest power of a prime number "k" that divides any number "n!" is given by the formula
n/K + n/k^2+n/k^3.. (until numerator becomes lesser than the denominator). Remember to truncate the remainders of each expression
E.g : The highest number of 2's in 10! is
10/2 + 10/4 + 10/8 = 5 + 2 + 1 = 8 (Truncate the reminder of each expression)
As a consequence of this, the number of zeros in n! is controlled by the presence of 5s.
Why ? 2 reasons
a) 10 = 5 x 2,
b) Also in any n!, the number of 5's are far lesser than the number of 2's.
The number of cars that you make depends on the number of engines. You can have 100 engines and 1000 cars, but you can only make 100 cars (each car needs an engine !)
10 ! = 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1
Lets factorize each term ...
10! = (5 x 2) x(3x3)x(2x2x2)x7x(2x3)x(5)X(2x2)x1
the number of 5s = 2
The number of 2s = 7
The number of zeros in 10! = the total number of 5s = 2 (You may use a calc to check this10! = 3628800)
hence in any n! , the number of 5's control the number of zeros.
As a consequence of this, the number of 5's in any n! is
n/5 + n/25 + n/125 ..until numerator becomes lesser than denominator.
Again, i want to emphasize that this formuala only works for prime numbers !!
So to find the number of 10's in any n!, DO NOT DIVIDE by 10 ! (10 is not prime !)
i.e DONT do
n/10 + n/100 + n/1000 - THIS IS WRONG !!!
_________________
-----------------------------------------------------------------------------------------------------
IT TAKES QUITE A BIT OF TIME AND TO POST DETAILED RESPONSES.
YOUR KUDOS IS VERY MUCH APPRECIATED
-----------------------------------------------------------------------------------------------------
Intern
Joined: 01 Apr 2014
Posts: 2
Schools: Ross '17, Tepper '17
GMAT 1: 690 Q49 V34
Followers: 0
Kudos [?]: 1 [1] , given: 27
Re: Math: Number Theory [#permalink] 17 Sep 2014, 21:09
1
KUDOS
lbnyc13 wrote:
Hi -
Can somebody explain where the '00' part of the 9900 in the denominator is coming from? I understand where the '99' is coming from. I copied this from the number theory section.
Thanks
" Example #2: Convert 0.2512(12) to a fraction.
1. The number consisting with non-repeating digits and repeating digits is 2512;
2. Subtract 25 (non-repeating number) from above: 2512-25=2487;
3. Divide 2487 by 9900 (two 9's as there are two digits in 12 and 2 zeros as there are two digits in 25): 2487/9900=829/3300. "
let x =0.2512(12)
100x = 25.(12)
10000x = 2512.(12)
10000x -100x = 2512.(12) - 25.(12) = 2487
9900x = 2487
x = 2487/9900
That is the logic. Hope that helps.
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1179
Followers: 340
Kudos [?]: 1076 [1] , given: 4
Re: Math: Number Theory [#permalink] 12 Jun 2015, 10:27
1
KUDOS
Expert's post
Bunuel wrote:
Verifying the primality (checking whether the number is a prime) of a given number $$n$$ can be done by trial division, that is to say dividing $$n$$ by all integer numbers smaller than $$\sqrt{n}$$, thereby checking whether $$n$$ is a multiple of $$m<\sqrt{n}$$.
Example: Verifying the primality of $$161$$: $$\sqrt{161}$$ is little less than $$13$$, from integers from $$2$$ to $$13$$, $$161$$ is divisible by $$7$$, hence $$161$$ is not prime.
A minor point, but the inequalities here should not be strict. If you want to test if some large integer n is prime, then you need to try dividing by numbers up to and including $$\sqrt{n}$$. We must include $$\sqrt{n}$$, in case our number is equal to the square of a prime.
And it might be worth mentioning that it is only necessary to try dividing by prime numbers up to $$\sqrt{n}$$, since if n has any divisors at all (besides 1 and n), then it must have a prime divisor.
It's very rare, though, that one needs to test if a number is prime on the GMAT. It is, computationally, extremely time-consuming to test if a large number is prime, so the GMAT cannot ask you to do that. If a GMAT question asks if a large number is prime, the answer really must be 'no', because while you can often quickly prove a large number is not prime (for example, 1,000,011 is not prime because it is divisible by 3, as we see by summing digits), you cannot quickly prove that a large number is prime.
_________________
GMAT Tutor in Toronto
If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com
Manager
Joined: 22 Jan 2010
Posts: 121
Followers: 2
Kudos [?]: 4 [0], given: 15
Re: Math: Number Theory [#permalink] 26 Jan 2010, 21:57
Thanks for sharing.
Current Student
Joined: 12 Nov 2008
Posts: 368
Schools: Ross (R2), Cornell (R3) , UNC (R3) , INSEAD (R1 Jan)
WE 1: Advisory (2 yrs)
WE 2: FP & Analysis (2 yrs at matriculation)
Followers: 22
Kudos [?]: 96 [0], given: 45
Re: Math: Number Theory [#permalink] 31 Jan 2010, 09:00
Bunuel wrote:
All prime numbers except 2 and 5 end in 1, 3, 7 or 9, since numbers ending in 0, 2, 4, 6 or 8 are multiples of 2 and numbers ending in 0 or 5 are multiples of 5. Similarly, all prime numbers above 3 are of the form $$6n-1$$ or $$6n+1$$, because all other numbers are divisible by 2 or 3.
Awesome post, thank you so much! +1
What is the quickest way to figure out whether a number is prime? I usually check if it's odd or even, then sum its digits to figure out if it's divisible by 3, then look if it ends in 5 and if all else fails divide it by 7. Is this the recommended approach?
What might be a bit confusing is that while all prime numbers are of the form 6n-1 or 6n+1, not all numbers of that form are in fact prime. I think this is crucial. For instance, the number 49 is 6n+1, but is not prime.
Any insight on a quicker check (if one exists) would be much appreciated and thank you again for your efforts. They make a real difference!
CEO
Joined: 17 Nov 2007
Posts: 3573
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Followers: 447
Kudos [?]: 2501 [0], given: 359
Re: Math: Number Theory [#permalink] 31 Jan 2010, 09:19
Expert's post
ariel wrote:
What is the quickest way to figure out whether a number is prime?
Unfortunately, there is no such quick way to say that this number is prime. You can remember all numbers till 50 and then use rule:
Rule: To check whether a number is prime or not, we try to divide it by 2, 3, 5 and so on. You can stop at $$\sqrt{number}$$ - it is enough. Why? Because if there is prime divisor greater than $$\sqrt{number}$$, there must be another prime divisor lesser than $$\sqrt{number}$$.
Example,
n = 21 -- > $$\sqrt{21}$$~ 4-5
So, we need to check out only 2,3 because for 7, for instance, we have already checked out 3.
n = 101 --> 2,3,5 is out (the last digit is not even or 5 and sum of digits is not divisible by 3). we need to check out only 7
_________________
HOT! GMAT TOOLKIT 2 (iOS) / GMAT TOOLKIT (Android) - The OFFICIAL GMAT CLUB PREP APP, a must-have app especially if you aim at 700+ | PrepGame
Re: Math: Number Theory [#permalink] 31 Jan 2010, 09:19
Go to page Previous 1 2 3 4 5 6 7 8 9 Next [ 175 posts ]
Similar topics Replies Last post
Similar
Topics:
2 A number theory curiosity 5 02 Aug 2011, 13:09
15 If a and b are different positive integers and a+b=a(a+b) 12 14 Sep 2010, 18:46
1 Qs about Number Theory 3 17 May 2010, 20:24
46 Math: Number Theory - Percents 45 22 Mar 2010, 14:24
12 Math: Number Theory (broken into smaller topics) 11 10 Mar 2010, 05:20
Display posts from previous: Sort by
# Math: Number Theory
Question banks Downloads My Bookmarks Reviews Important topics
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2015-11-26 16:45:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6437521576881409, "perplexity": 1730.5035604130774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447758.91/warc/CC-MAIN-20151124205407-00004-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://www.insacoin.org/how-to/create-keypair/ | # How to create an address and a private key on Insacoin
A “key pair”, a private key with the corresponding public one/address, is all that is needed to use Insacoin. But how to create one ?
# With insacoind
You can get a private key by using the CLI of your running node insacoind. It will take car of all the computation and ouput you a private key in WIF format, and a corresponding address assigned or not to an account.
darosior@debian:~/Documents/Projets/insacoin/website/src$insacoin-cli getnewaddress iBfco2Aurm99nadDKiXsGfMBgvq2FG1jFR darosior@debian:~/Documents/Projets/insacoin/website/src$ insacoin-cli dumpprivkey iBfco2Aurm99nadDKiXsGfMBgvq2FG1jFR
T9kxN14fqDbiFSZY1wHYzArkbZ2WFjCtJCQfg4eLmZgyHREnS2Tu
This is the shorter way to do that, but you can also order your addresses with “accounts”. Accounts are a kind of a box where you can store all your related addresses, for example you can have an account “savings”, another “christmas gifts”, and “pocket money”. To create an address for a given account (existing or not) :
darosior@debian:~/Documents/Projets/insacoin/website/src$insacoin-cli getnewaddress "insacoin.org how-to" iJvZ7vN8Xt6mtkATeFCtAqH1Fqs8DoL5ro darosior@debian:~/Documents/Projets/insacoin/website/src$ inascoin-cli listaccounts
{
"" : -6.97100000,
"a" : 0.00000000,
"aa" : 1.00000000,
"b" : 0.96700001,
"c" : 2.00000000,
"insacoin.org how-to" : 0.00000000,
"shiba" : 10.00000000,
"test" : 6.99494000,
"test2" : 1.00006000,
"theo" : 0.00000000,
"workshop" : 1.00000000,
"z" : 0.97900000
}
darosior@debian:~/Documents/Projets/insacoin/website/src$insacoin-cli getaccountaddress "insacoin.org how-to" iE2WTAm58Mm2upqQtzu3UV4xe8Db4S9BCa darosior@debian:~/Documents/Projets/insacoin/website/src$ insacoin-cli dumpprivkey iE2WTAm58Mm2upqQtzu3UV4xe8Db4S9BCa
T3kYDqQPCc3cGecPBLN9gvxj1Xz7paGU9ewcVAGZDmxGtgxQ8JTa
You can see all account related commands with
insacoin-cli help
in the “Wallet” section.
# Without insacoind
You can also generate addresses without relying on a node (which is wiser for development of applications). My post about the third session of “shiba to lion” turned out to be a tutorial explaining how to do so. Check it out here. | 2019-02-20 10:30:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19685572385787964, "perplexity": 5335.801615478292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494694.1/warc/CC-MAIN-20190220085318-20190220111318-00286.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/def-mnk-if-de-5-mn-6-then-find-value-a-def-a-mnk-similar-triangles_1102 | # ΔDEF ~ ΔMNK. If DE = 5, MN = 6, then find the value of A(ΔDEF)/A(ΔMNK) - Geometry
Sum
ΔDEF ~ ΔMNK. If DE = 5, MN = 6, then find the value of "A(ΔDEF)"/"A(ΔMNK)"
#### Solution
Given: ∆DEF ∆MNK
by Areas of similar triangles.
∴ ["A(Δ DEF)"]/["A(Δ MNK)"] = ("DE"^2)/("MN"^2)
Ratio of areas of Similar Triangles = Ratio of Squares of corresponding sides
∴ ["A(Δ DEF)"]/["A(Δ MNK)"] = 5^2/6^2 = 25/36
Concept: Similar Triangles
Is there an error in this question or solution? | 2021-03-01 08:00:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6398704648017883, "perplexity": 5091.4756209969855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00020.warc.gz"} |
http://pynomo.org/wiki/index.php?title=Example:Radio-frequency_single_electron_transistor&oldid=582 | author Second order equation Leif Roschier
## Theory and background
Radio-frequency single-electron transistor (RF-SET) is a sensitive charge detector. It's charge sensitivity $\delta q [e/\sqrt{Hz}]\,$ in normal (not superconducting) operation is typically set by pre-amplifier noise temperature $T_0 [K]\,$, charging energy $E_C [J]\,$, transistor island electron temperature $T [K]\,$, SET high bias DC resistance $R_\Sigma [\Omega]\,$ and LC-transformer impedance $Z_{TR} [\Omega]\,$ according to relation [1]
$\delta q \approx \frac{2(3\frac{R_\Sigma}{Z_{TR}}+\frac{Z_{TR}}{Z_T}) \sqrt{k_B T_0 Z_T}}{2\times 0.41 t^{-1.74} 0.9 E_C/e^2}. \,$
#### References
1. L. Roschier, M. Sillanpää, W. Taihong, M. Ahlskog, S. Iijima and P. Hakonen, “Carbon nanotube radio-frequency single-electron transistor”, Journal of Low Temperature Physics, 136, 465 (2004).
## Construction of the nomograph
The equation is written as $-\log(\delta q)+x+\frac{1}{2}\log(k_B T_0 Z_T)-\log(0.9(k_B E_C)^{2.74}-\log(0.41(k_B T)^{-1.74}/e)\,$
split into three equations that each are blocks:
$-\log(\delta q)+x+\frac{1}{2}\log(k_B T_0 Z_T)-\log(0.9(k_B E_C)^{2.74}-\log(0.41(k_B T)^{-1.74}/e)=0\,$ type 3 $\exp(x) =$ type 2 $\frac{R_1}{E}= \frac{F}{R_2} \,\,\,\,\,\,\,$ type 4
## Generated nomograph
Second order equation
Generated portable document file (pdf): File:Ex second order eq.pdf
## Source code
"""
ex_second_order_eq.py
Second order equation: z**2+p*z+q=0
This program is free software: you can redistribute it and/or modify
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
from nomographer import *
N_params_1={
'u_min':-10.0,
'u_max':10.0,
'function':lambda u:u,
'title':r'$p$',
'tick_levels':3,
'tick_text_levels':2,
'tick_side':'left'
}
N_params_2={
'u_min':-10.0,
'u_max':10.0,
'function':lambda u:u,
'title':r'$q$',
'tick_levels':3,
'tick_text_levels':2,
'tick_side':'right',
}
N_params_3={
'u_min':0.0,
'u_max':5.0,
'function_3':lambda u:u,
'function_4':lambda u:u**2,
'title':r'$z$',
'tick_levels':0,
'tick_text_levels':0,
'title_draw_center':True,
'title_opposite_tick':False,
'extra_params':[{'tick_side':'left',
'u_min':0.1,
'u_max':12.0,
'tick_text_levels':2,
'tick_levels':3
}]
}
block_1_params={
'block_type':'type_10',
'width':10.0,
'height':10.0,
'f1_params':N_params_1,
'f2_params':N_params_2,
'f3_params':N_params_3,
}
main_params={
'filename':'ex_second_order_eq.pdf',
'paper_height':10.0,
'paper_width':10.0,
'block_params':[block_1_params],
'transformations':[('rotate',0.01),('scale paper',)],
'title_str':r'$z^2+pz+q=0$'
}
Nomographer(main_params) | 2021-01-25 15:03:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5908241868019104, "perplexity": 9441.626008243018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00393.warc.gz"} |
https://motls.blogspot.com/2019/06/acharya-stringm-theory-probably-implies.html?m=0 | ## Tuesday, June 18, 2019 ... //
### Acharya: string/M-theory probably implies low-energy SUSY
Bobby Acharya is a versatile fellow. Whenever you search for the author Acharya, B on Inspire, you will find out that "he" has written 1,527 papers which have earned over 161,000 citations which would trump 144,000 citations of Witten, E. Much of this weird huge number actually has some merit because Acharya is both a highly mathematical theorist – an expert in physics involving complicated extra-dimensional manifolds – as well as a member of the ATLAS experimental team at the LHC.
Today, he published
Supersymmetry, Ricci Flat Manifolds and the String Landscape.
String theory and supersymmetry are "allies" most of the time. Supersymmetry is a symmetry that first emerged – at least in the Western world – when Pierre Ramond was incorporating fermions to the stringy world sheet. (In Russia, SUSY was discovered independently by purely mathematical efforts to classify Lie-algebra-like physical symmetries.) Also, most of the anti-string hecklers tend to be anti-supersymmetry hecklers as well, and vice versa.
On the other hand, string theory and supersymmetry are somewhat independent. Bosonic string theory in $$D=26$$ has no SUSY – and SUSY is also broken in type 0 theories, some non-supersymmetric heterotic string theories, non-critical string theory, and more. Also, supersymmetry may be incorporated to non-gravitational field theories, starting with the Wess-Zumino model and the MSSM, which obviously aren't string vacua – because the string vacua make gravity unavoidable.
Some weeks ago, Alessandro Strumia was excited and told us that he wanted to become a non-supersymmetric stringy model builder because it was very important to satisfy one-half of the anti-string, anti-supersymmetric hecklers. It's a moral duty to abandon supersymmetry, he basically argued, so string theorists must do it as well and he wants to lead them. He didn't use these exact words but it was the spirit.
Well, string vacua with low-energy supersymmetry are rather well understood and many of them have matched the observed phenomena with an impressive (albeit not perfect, so far) precision – while those without supersymmetry seem badly understood and their agreement with the observations hasn't been proven too precisely. It's not surprising for many reasons. One of them is that supersymmetry makes physics both more stable, promising, and free of some hierarchy problems which is good phenomenologically; as well as full of cancellations and easier to calculate which is good from a mathematical viewpoint. Oh, SUSY, with a pictorial walking.
It is totally plausible that supersymmetry at low enough energies is an unavoidable consequence of string/M-theory – assuming some reasonably mild assumptions about the realism of the models. This belief was surely shared e.g. by my adviser Tom Banks – one of his prophesies used to be that this assertion (SUSY is unavoidable in string theory or quantum gravity) would eventually be proven. Acharya was looking into this question.
He focused on "geometric" vacua that may be described by 10D, 11D, or 12D (F-theory...) supergravity – which may then be dimensionally reduced to a four-dimensional theory. Assuming that these high-dimensional supergravity theories are good approximations at some level, the statement that "supersymmetry is unavoidable in string theory" becomes basically equivalent to the statement that "manifolds used for stringy extra dimensions require covariantly constant spinors".
Calabi-Yau three-folds – which, when used in heterotic string theory, gave us the first (and still excellent) class of realistic string compactifications in 1985 – are manifolds of $$SU(3)$$ holonomy. This holonomy guarantees the preservation of 1/4 of the supercharges that have existed in the higher-dimensional supergravity theory in the flat space because the generic holonomy $$SU(4)\sim SO(6)$$ of the orientable six-dimensional manifolds is reduced to $$SU(3)$$ where only 3 spinorial components out of 4 are randomly rotated into each other (after any closed parallel transport) while the fourth one remains fixed.
In table 1, Acharya lists all the relevant holonomy groups. If you forgot, the holonomy group is the group of all possible rotations of the tangent space that is induced by a parallel transport around any closed curve.
$$SO(N)$$ is the generic holonomy of an $$N$$-dimensional real manifold. It would be $$O(N)$$ if the manifold were unorientable. This transformation mixes the spinors in the most general way so there are no covariantly constant spinors. But there could nevertheless be Ricci-flat manifolds of this generic holonomy. The three question marks are written on that first line of his table because they exactly correspond to the big question he wants to probe in this paper.
Now, in real dimensions $$n=2k$$, $$n=4k$$, $$n=7$$, and $$n=8$$, one has the holonomies $$SU(k)$$, $$USp(2k)$$, $$G_2$$, and $$Spin(7)$$, respectively. All these special holonomies guarantee covariantly constant spinors i.e. some low-energy supersymmetry; and the Ricci-flatness of the metric, too. On the other hand, one may also "deform" the $$SU(k)$$ and $$USp(2k)$$ holonomies to $$U(k)$$ and $$USp(2k)\times Sp(1)$$, respectively, and this deformation kills both the covariantly constant spinors (i.e. SUSY) as well as the Ricci-flatness.
Note that string/M-theory allows you to derive Einstein's equations of general relativity from a more fundamental starting point. In the absence of matter sources (i.e. in the vacuum), Einstein's equations reduce to Ricci-flatness i.e. $$R_{\mu\nu}=0$$. This is relevant for the curved 4D spacetime that everyone knows. But it's also nice for the extra dimensions that produce the diversity of low-energy fields and particles.
So whether you find it beautiful or not, and all physicists with a good taste find it beautiful (and the beauty is very important, I must make you sure about this basic fact because you may have been misled by an ugly pundit), string/M-theory makes it important to study Ricci-flat manifolds – both manifolds including the 4 large dimensions that we know, as well the compactified extra dimensions. The former is relevant for 4D gravity we know; the latter is more relevant for the rest of physics.
Acharya divides the question "whether the Ricci-flat manifolds without covariantly constant spinors exist" into two groups:
* simply connected manifolds
* simply disconnected manifolds
In the first group, he doesn't quite find the proof but it seems that he believes that the conjecture that "no such compact, simply connected, Ricci flat manifolds without SUSY exist" seems promising.
In the second group, there exist counterexamples. After all, you may take quotients (orbifolds) of some supersymmetric manifolds – but the orbifolding maps the spinors to others in a generic enough way which breaks all of supersymmetry. So SUSY-breaking, Ricci-flat compactifications exist.
However, at the same moment, Acharya points out that all such simply disconnected Ricci-flat manifolds seem to suffer from an instability – a generalization of Witten's "bubble of nothing". It's given by a Coleman-style instanton that has a hole inside. The simplest vacuum with this Witten's instability is the Scherk-Schwarz compactification on a circle with antiperiodic boundary conditions for fermions (the easiest quotient-like way to break all of SUSY because when a constant is antiperiodic, it must be zero). The antiperiodic boundary conditions are perfect for closing a cylinder into a cigar (a good shape for Coleman-like instantons in the Euclideanized spacetime, especially because of Coleman's obsessive smoking) on which the spinors are well-behaved.
So the corresponding history in the Minkowski space looks like a vacuum decay – except that the new vacuum in the "ball inside" – which is growing almost by the speed of light – isn't really a vacuum at all. It's "emptiness" that doesn't even have a vacuum in it. The radius of the circular dimension – which is $$a\to a_0$$ for $$r\to\infty$$ – continuously approaches $$r=0$$ on the boundary of Witten's bubble of nothing – basically on $$|\vec r|=ct$$ where $$c$$ is the speed of light – and it stays zero for $$|\vec r|\lt ct$$ which means that there's no space for $$|\vec r| \lt ct$$ at all.
Such instabilities are brutal and Acharya basically proves that these instabilities make all Ricci-flat, simply disconnected, non-supersymmetric stringy compactifications unstable. We see that our Universe doesn't decay instantly so we can't live in such a vacuum. Instead, the extra dimensions should either be supersymmetric and simply disconnected; or they should be simply connected. When they're simply connected, the conjecture – which has passed lots of tests and may be proven – says that these compactifications imply low-energy supersymmetry, anyway.
If this conjecture happened to be wrong, it would seem likely to Acharya – and me – that the number of non-supersymmetric, simply connected, Ricci-flat compact manifolds would probably be much higher than the number of the supersymmetric Ricci-flat solutions. If it were so, SUSY breaking could be "generic" in string/M-theory, and SUSY breaking could actually become a rather solid prediction of string/M-theory. (Well, the population advantage should also beat the factor of $$10^{34}$$ to persuade us that we don't need to care about the non-supersymmetric vacua's hierarchy problem.) Note that with some intense enough mathematical work, it should be possible to settle which of these two predictions are actually being made by string theory.
Acharya has only considered "geometric/supergravity" vacua. It's possible that some non-geometric vacua not admitting a higher-dimensional supergravity description are important or numerous or prevailing – and if it is so, the answer about low-energy SUSY could be anything and Acharya's work could become useless for finding this answer.
But some geometric approximation may exist for almost all vacua - dualities indicate that there are often several geometric starting points to understand a vacuum, so why the number should be zero too often? – and the incomplete evidence indicates that low-energy SUSY is mathematically needed in stable enough string vacua. When I say low-energy SUSY, it may be broken at $$100\TeV$$ or anything. But it should be a scale lower than the Kaluza-Klein scale of the extra dimensions – and maybe than some other, even lower, scales. | 2020-08-09 12:08:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6026443243026733, "perplexity": 1005.950384622902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738552.17/warc/CC-MAIN-20200809102845-20200809132845-00415.warc.gz"} |
https://math.stackexchange.com/questions/62203/how-can-i-calculate-non-integer-exponents | # How can I calculate non-integer exponents?
I can calculate the result of $x^y$ provided that $y \in\mathbb{N}, x \neq 0$ using a simple recursive function:
$$f(x,y) = \begin {cases} 1 & y = 0 \\ (x)f(x, y-1) & y > 0 \end {cases}$$
or, perhaps more simply stated, by multiplying $x$ by itself $y$ times.
Unfortunately, I am unsure how I can numerically approximate $x^y$ for non-integer rationals.
For example, what method can I use to approximate 33.3?
If possible, I would like to be able to do this using only elementary arithmetic operations, i.e. addition, subtraction, multiplication, and division.
• If the exponent is rational, you'll have to use an iterative method like Newton-Raphson. For more general exponents, you'll definitely need $\exp$ and $\ln$. – J. M. isn't a mathematician Sep 6 '11 at 5:01
• @J.M. For my purposes, I can assume that the exponent is rational. – Peter Olson Sep 6 '11 at 5:02
• For the specific case of $3^{\frac{33}{10}}$, you can use Newton-Raphson for $\sqrt[10]{3}$ and then use your method to exponentiate that 33 times. – J. M. isn't a mathematician Sep 6 '11 at 5:03
• If you can also take square roots, you can expand the exponent as a binary number and repeatedly take square roots to get $x^{1/2^k}$ and multiply when the bit in the exponent is a 1. There is a technique for taking $x^{1/2}$ when x is close to 1 that preserves accuracy - write $x = 1+y$ so $x^{1/2} \approx 1+y/2$. I think I saw this in a book by Henrici many years ago. – marty cohen Sep 6 '11 at 5:31
• Are you asking how $x^y$ is defined for rational $y=n/m$? It is as $(x^{1/m})^n$, where $x^{1/m}$ is defined as the $m$th root of $x$, that is, the unique number $t>0$ such that $t^m=x$. To approximate $t=x^{1/m}$ you can, as said, use Newton-Raphson to find the root to the function $f(t)=t^m-x$. Arbitrary real exponents can be treated as limits of rational exponents. For example, $x^{3.14159}$ will approximate $x^\pi$. – Samuel Sep 6 '11 at 11:50
I'll consider the problem of computing $x^\frac1{q}, \; q > 0$; as I've already mentioned in the comments, one can decompose any positive rational number as $m+\dfrac{p}{q}$, where $m,p$ are nonnegative integers, $q$ is a positive integer, and $p < q$. Thus for computing $x^{m+\frac{p}{q}}$, one could use binary exponentiation on $x^m$ and $\left(x^\frac1{q}\right)^p$ and multiply the results accordingly.
A.N. Khovanskiĭ, in his book on continued fractions, displays a continued fraction representation for the binomial function:
$$(1+z)^\alpha=1+\cfrac{2\alpha z}{2+(1-\alpha)z-\cfrac{(1-\alpha^2)z^2}{3(z+2)-\cfrac{(4-\alpha^2)z^2}{5(z+2)-\cfrac{(9-\alpha^2)z^2}{7(z+2)-\cdots}}}}$$
which converges for $|\arg(z+1)| < \pi$.
Letting $z=x-1$ and $\alpha=\dfrac1{q}$, one can then evaluate this continued fraction (with, say, Lentz-Thompson-Barnett) to generate a "seed" that can be subsequently polished with Newton-Raphson, Halley, or any of a number of iterations with high-order convergence. You'll have to experiment with how accurate a seed you need to start up the iteration, by picking a not-too-small tolerance when evaluating the continued fraction.
Here's some Mathematica code demonstrating what I've been saying earlier, for computing $\sqrt[3]{55}$:
With[{q = 3, t = 55, prec = 30},
y = N[2 + (1 - 1/q) (t - 1), prec];
c = y; d = 0; k = 1;
While[True,
u = (k^2 - q^-2) (t - 1)^2; v = (2 k + 1) (t + 1);
c = v - u/c; d = 1/(v - u d);
h = c*d; y *= h;
If[Abs[h - 1] <= 10^-4, Break[]];
k++];
FixedPoint[
Function[x, x ((1 + q) t - x^q (1 - q))/(x^q (1 + q) - (1 - q) t)],
1 + 2 (t - 1)/q/y]]
Here, I've arbitrarily chosen to stop when the continued fraction has already converged to $\approx 4$ digits, and then polished the result with Halley's method. The result here is good to $\approx 28$ digits. Again, you'll have to experiment on the accuracy versus expense of evaluating the "seed", as well as picking the appropriate iteration method for polishing the seed.
• Wow, you recommended a paper co-authored by a professor in my university. – user98186 Jan 15 '16 at 15:46
There is a concept using "fractional continued fractions" invented by D. Gomez Morin , see http://domingogomez.web.officelive.com/gcf.aspx . I've fiddled some time ago with this and think I recall right, that you can find roots of m'th order and that the apparent complexitiy of the "fractional continued fraction" reduces to rational fractions recursively applied. I'll try next days to recover the algorithm again, but perhaps the link to the page is already helpful.
Just found the hint of D.G.Morin to an excerpt of Steven Finch's book "mathematical constants", where S.F. mentions also this method in a much compacted way (see page 4 of http://assets.cambridge.org/052181/8052/sample/0521818052ws.pdf ).
[update] For the first broken link there is an entry in the wayback internet archive: Domingo Fomez Morin [/update] | 2021-02-28 18:29:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979575634002686, "perplexity": 557.403050266944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361723.15/warc/CC-MAIN-20210228175250-20210228205250-00209.warc.gz"} |
http://news.sky-map.org/starview?object_type=2&object_id=8741&object_name=NGC+3990&locale=EN | NEWS@SKY (Science&Space News)
Home To Survive in the Universe
.MYA{ color: yellow; text-decoration: none; :hover { color: red; text-decoration: none; } } Services
Why to Inhabit Top Contributors Astro Photo The Collection Forum Blog New! FAQ Login
# NGC 3990
Contents
### Images
Upload your image
DSS Images Other Images
### Related articles
The Cool ISM in S0 Galaxies. II. A Survey of Atomic GasThe place of lenticular galaxies within the range of types of galaxiesremains unclear. We previously reported the mass of molecular hydrogenfor a volume-limited sample of lenticular galaxies, where we saw thatthe amount of gas was less than that predicted by the return of stellarmass to the interstellar medium. Here we report observations of atomichydrogen (H I) for the same sample. Detections in several galaxies makemore compelling the case presented in our earlier paper that the mass ofcool gas in S0 galaxies cuts off at ~10% of what is expected fromcurrent models of gas return from stellar evolution. The molecular andatomic phases of the gas in our sample galaxies appear to be separateand distinct, both spatially and in velocity space. We propose that themolecular gas arises mostly from the stellar mass returned to thegalaxy, while the atomic hydrogen is mainly accumulated from externalsources (infall, captured dwarfs, etc.). While this proposal fits mostof the observations, it makes the presence of the upper mass cutoff evenmore mysterious. Ultraluminous X-Ray Sources in Nearby Galaxies from ROSAT High Resolution Imager Observations I. Data AnalysisX-ray observations have revealed in other galaxies a class ofextranuclear X-ray point sources with X-ray luminosities of1039-1041 ergs s-1, exceeding theEddington luminosity for stellar mass X-ray binaries. Theseultraluminous X-ray sources (ULXs) may be powered by intermediate-massblack holes of a few thousand Msolar or stellar mass blackholes with special radiation processes. In this paper, we present asurvey of ULXs in 313 nearby galaxies withD25>1' within 40 Mpc with 467 ROSAT HighResolution Imager (HRI) archival observations. The HRI observations arereduced with uniform procedures, refined by simulations that help definethe point source detection algorithm employed in this survey. A sampleof 562 extragalactic X-ray point sources withLX=1038-1043 ergs s-1 isextracted from 173 survey galaxies, including 106 ULX candidates withinthe D25 isophotes of 63 galaxies and 110 ULX candidatesbetween 1D25 and 2D25 of 64 galaxies, from which aclean sample of 109 ULXs is constructed to minimize the contaminationfrom foreground or background objects. The strong connection betweenULXs and star formation is confirmed based on the striking preference ofULXs to occur in late-type galaxies, especially in star-forming regionssuch as spiral arms. ULXs are variable on timescales over days to yearsand exhibit a variety of long term variability patterns. Theidentifications of ULXs in the clean sample show some ULXs identified assupernovae (remnants), H II regions/nebulae, or young massive stars instar-forming regions, and a few other ULXs identified as old globularclusters. In a subsequent paper, the statistic properties of the surveywill be studied to calculate the occurrence frequencies and luminosityfunctions for ULXs in different types of galaxies to shed light on thenature of these enigmatic sources. The Molecular Interstellar Medium of Dwarf Galaxies on Kiloparsec Scales: A New Survey for CO in Northern, IRAS-detected Dwarf GalaxiesWe present a new survey for CO in dwarf galaxies using the ARO Kitt Peak12 m telescope. This survey consists of observations of the centralregions of 121 northern dwarfs with IRAS detections and no known COemission. We detect CO in 28 of these galaxies and marginally detectanother 16, increasing by about 50% the number of such galaxies known tohave significant CO emission. The galaxies we detect are comparable instellar and dynamical mass to the Large Magellanic Cloud, althoughsomewhat brighter in CO and fainter in the far-IR. Within dwarfs, wefind that the CO luminosity LCO is most strongly correlatedwith the K-band and the far-infrared luminosities. There are also strongcorrelations with the radio continuum (RC) and B-band luminosities andlinear diameter. Conversely, we find that far-IR dust temperature is apoor predictor of CO emission within the dwarfs alone, although a goodpredictor of normalized CO content among a larger sample of galaxies. Wesuggest that LCO and LK correlate well because thestellar component of a galaxy dominates the midplane gravitational fieldand thus sets the pressure and density of the atomic gas, which controlthe formation of H2 from H I. We compare our sample with moremassive galaxies and find that dwarfs and large galaxies obey the samerelationship between CO and the 1.4 GHz RC surface brightness. Thisrelationship is well described by a Schmidt law withΣRC~Σ1.3CO. Therefore,dwarf galaxies and large spirals exhibit the same relationship betweenmolecular gas and star formation rate (SFR). We find that this result isrobust to moderate changes in the RC-to-SFR and CO-to-H2conversion factors. Our data appear to be inconsistent with large (orderof magnitude) variations in the CO-to-H2 conversion factor inthe star-forming molecular gas. On the recent star formation history of the Milky Way diskWe have derived the star formation history of the Milky Way disk overthe last 2 Gyr from the age distribution diagram of a large sample ofopen clusters comprising more than 580 objects. By interpreting the agedistribution diagram using numerical results from an extensive libraryof N-body calculations carried out during the last ten years, wereconstruct the recent star formation history of the Milky Way disk.Under the assumption that the disk has never been polluted by anyextragalactic stellar populations, our analysis suggests thatsuperimposed on a relatively small level of constant star formationactivity mainly in small-N star clusters, the star formation rate hasexperienced at least five episodes of enhanced star formation lastingabout 0.2 Gyr with production of larger clusters. This cyclic behaviourshows a period of 0.4+/-0.1 Gyr and could be the result of density wavesand/or interactions with satellite galaxies. On the other hand, the starformation rate history from a volume-limited sample of open clusters inthe solar neighbourhood appears to be consistent with the overall starformation history obtained from the entire sample. Pure continuous starformation both in the solar neighbourhood and the entire Galactic diskis strongly ruled out. Our results also indicate that, in the Milky Waydisk, about 90% of open clusters are born with N<=150 and the slopein the power-law frequency distribution of their masses is about -2.7when quiescent star formation takes place. If the above results arere-interpreted taking into consideration accretion events onto the MilkyWay, it is found that a fraction of the unusually high number of openclusters with ages older than 0.6 Gyr may have been formed in disruptedsatellites. Problems arising from the selection effects and the ageerrors in the open cluster sample used are discussed in detail. Testing Radiatively Inefficient Accretion Flow Theory: An XMM-Newton Observation of NGC 3998We present the results of a 10 ks XMM-Newton observation of NGC 3998, atype I'' LINER galaxy (i.e., with significant broad Hαemission). Our goal is to test the extent to which radiativelyinefficient accretion flow (RIAF) models and/or scaled-down activegalactic nuclei (AGNs) models are consistent with the observedproperties of NGC 3998. A power-law fit to the XMM-Newton spectraresults in a power-law slope of Γ=1.9 and 2-10 keV flux of1.1×10-11 ergs cm-2 s-1, inexcellent agreement with previous hard X-ray observations. The OM UVflux at 2120 Å appears to be marginally resolved, with ~50% of theflux extended beyond 2". The nuclear component of the 2120 Å fluxis consistent with an extrapolation of the X-ray power law, although~50% of the flux may be absorbed. The OM U flux lies significantly abovethe X-ray power-law extrapolation and contains a significantcontribution from extragalactic emission. The upper limit for narrow FeK emission derived from the XMM-Newton spectra is 33 eV (forΔχ2=2.7). The upper limit for narrow Fe K emissionderived from a combined fit of the XMM-Newton and BeppoSAX spectra is 25eV, which is one of the strictest limits to date for any AGN. Thissignificantly rules out Fe K emission, which is expected to be observedin typical Seyfert 1 galaxies. The X-ray flux of NGC 3998 has not beenobserved to vary significantly (at >30% level) within the X-rayobservations, and only between observations at a level of ~50%, which isalso in contrast to typical Seyfert 1 galaxies. The lack of anyreflection features suggests that any optically thick, geometricallythin accretion disk must be truncated, probably at a radius of order100-300 (in Schwarzschild units). RIAF models fit the UV to X-rayspectral energy distribution of NGC 3998 reasonably well. In thesemodels the mid-IR flux also constrains the emission from any outer thindisk component that might be present. The UV to X-ray spectral energydistribution (SED) is also consistent with a Comptonized thin disk witha very low accretion rate (M<10-5MEdd), inwhich case the lack of Fe K emission may be due to an ionized accretiondisk. Accretion models in general do not account for the observed radioflux of NGC 3998, and the radio flux may be due to a jet. Recent jetmodels may also be consistent with the nuclear fluxes of NGC 3998 ingeneral, including the X-ray, optical/UV, and mid-IR bands. The(ground-based) near-IR to optical photometric data for the nuclearregion of NGC 3998 contain large contributions from extranuclearemission. We also derive nuclear fluxes using archival Hubble SpaceTelescope WFPC2 data, resulting in meaningful constraints to the nuclearSED of NGC 3998 in the optical band. Properties of isolated disk galaxiesWe present a new sample of northern isolated galaxies, which are definedby the physical criterion that they were not affected by other galaxiesin their evolution during the last few Gyr. To find them we used thelogarithmic ratio, f, between inner and tidal forces acting upon thecandidate galaxy by a possible perturber. The analysis of thedistribution of the f-values for the galaxies in the Coma cluster leadus to adopt the criterion f ≤ -4.5 for isolated galaxies. Thecandidates were chosen from the CfA catalog of galaxies within thevolume defined by cz ≤5000 km s-1, galactic latitudehigher than 40o and declination ≥-2.5o. Theselection of the sample, based on redshift values (when available),magnitudes and sizes of the candidate galaxies and possible perturberspresent in the same field is discussed. The final list of selectedisolated galaxies includes 203 objects from the initial 1706. The listcontains only truly isolated galaxies in the sense defined, but it is byno means complete, since all the galaxies with possible companions underthe f-criterion but with unknown redshift were discarded. We alsoselected a sample of perturbed galaxies comprised of all the diskgalaxies from the initial list with companions (with known redshift)satisfying f ≥ -2 and \Delta(cz) ≤500 km s-1; a totalof 130 objects. The statistical comparison of both samples showssignificant differences in morphology, sizes, masses, luminosities andcolor indices. Confirming previous results, we found that late spiral,Sc-type galaxies are, in particular, more frequent among isolatedgalaxies, whereas Lenticular galaxies are more abundant among perturbedgalaxies. Isolated systems appear to be smaller, less luminous and bluerthan interacting objects. We also found that bars are twice as frequentamong perturbed galaxies compared to isolated galaxies, in particularfor early Spirals and Lenticulars. The perturbed galaxies have higherLFIR/LB and Mmol/LB ratios,but the atomic gas content is similar for the two samples. The analysisof the luminosity-size and mass-luminosity relations shows similartrends for both families, the main difference being the almost totalabsence of big, bright and massive galaxies among the family of isolatedsystems, together with the almost total absence of small, faint and lowmass galaxies among the perturbed systems. All these aspects indicatethat the evolution induced by interactions with neighbors would proceedfrom late, small, faint and low mass Spirals to earlier, bigger, moreluminous and more massive spiral and lenticular galaxies, producing atthe same time a larger fraction of barred galaxies but preserving thesame relations between global parameters. The properties we found forour sample of isolated galaxies appear similar to those of high redshiftgalaxies, suggesting that the present-day isolated galaxies could bequietly evolved, unused building blocks surviving in low densityenvironments.Tables \ref{t1} and \ref{t2} are only available in electronic form athttp://www.edpsciences.org The Cool Interstellar Medium in S0 Galaxies. I. A Survey of Molecular GasLenticular galaxies remain remarkably mysterious as a class.Observations to date have not led to any broad consensus about theirorigins, properties, and evolution, although they are often thought tohave formed in one big burst of star formation early in the history ofthe universe and to have evolved relatively passively since then. Inthat picture, current theory predicts that stellar evolution returnssubstantial quantities of gas to the interstellar medium; most isejected from the galaxy, but significant amounts of cool gas might beretained. Past searches for that material, though, have provided unclearresults. We present results from a survey of molecular gas in avolume-limited sample of field S0 galaxies selected from the NearbyGalaxies Catalog. CO emission is detected from 78% of the samplegalaxies. We find that the molecular gas is almost always located insidethe central few kiloparsecs of a lenticular galaxy, meaning that ingeneral it is more centrally concentrated than in spirals. We combineour data with H I observations from the literature to determine thetotal masses of cool and cold gas. Curiously, we find that, across awide range of luminosity, the most gas-rich galaxies have ~10% of thetotal amount of gas ever returned by their stars. That result isdifficult to understand within the context of either monolithic orhierarchical models of evolution of the interstellar medium. Redshift-Distance Survey of Early-Type Galaxies: Circular-Aperture PhotometryWe present R-band CCD photometry for 1332 early-type galaxies, observedas part of the ENEAR survey of peculiar motions using early-typegalaxies in the nearby universe. Circular apertures are used to tracethe surface brightness profiles, which are then fitted by atwo-component bulge-disk model. From the fits, we obtain the structuralparameters required to estimate galaxy distances using theDn-σ and fundamental plane relations. We find thatabout 12% of the galaxies are well represented by a pure r1/4law, while 87% are best fitted by a two-component model. There are 356repeated observations of 257 galaxies obtained during different runsthat are used to derive statistical corrections and bring the data to acommon system. We also use these repeated observations to estimate ourinternal errors. The accuracy of our measurements are tested by thecomparison of 354 galaxies in common with other authors. Typical errorsin our measurements are 0.011 dex for logDn, 0.064 dex forlogre, 0.086 mag arcsec-2 for<μe>, and 0.09 for mRC,comparable to those estimated by other authors. The photometric datareported here represent one of the largest high-quality and uniformall-sky samples currently available for early-type galaxies in thenearby universe, especially suitable for peculiar motion studies.Based on observations at Cerro Tololo Inter-American Observatory (CTIO),National Optical Astronomy Observatory, which is operated by theAssociation of Universities for Research in Astronomy, Inc., undercooperative agreement with the National Science Foundation (NSF);European Southern Observatory (ESO); Fred Lawrence Whipple Observatory(FLWO); and the MDM Observatory on Kitt Peak. A new catalogue of ISM content of normal galaxiesWe have compiled a catalogue of the gas content for a sample of 1916galaxies, considered to be a fair representation of normality''. Thedefinition of a normal'' galaxy adopted in this work implies that wehave purposely excluded from the catalogue galaxies having distortedmorphology (such as interaction bridges, tails or lopsidedness) and/orany signature of peculiar kinematics (such as polar rings,counterrotating disks or other decoupled components). In contrast, wehave included systems hosting active galactic nuclei (AGN) in thecatalogue. This catalogue revises previous compendia on the ISM contentof galaxies published by \citet{bregman} and \citet{casoli}, andcompiles data available in the literature from several small samples ofgalaxies. Masses for warm dust, atomic and molecular gas, as well asX-ray luminosities have been converted to a uniform distance scale takenfrom the Catalogue of Principal Galaxies (PGC). We have used twodifferent normalization factors to explore the variation of the gascontent along the Hubble sequence: the blue luminosity (LB)and the square of linear diameter (D225). Ourcatalogue significantly improves the statistics of previous referencecatalogues and can be used in future studies to define a template ISMcontent for normal'' galaxies along the Hubble sequence. The cataloguecan be accessed on-line and is also available at the Centre desDonnées Stellaires (CDS).The catalogue is available in electronic form athttp://dipastro.pd.astro.it/galletta/ismcat and at the CDS via anonymousftp to\ cdsarc.u-strasbg.fr (130.79.128.5) or via\http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/405/5 A catalogue and analysis of X-ray luminosities of early-type galaxiesWe present a catalogue of X-ray luminosities for 401 early-typegalaxies, of which 136 are based on newly analysed ROSAT PSPC pointedobservations. The remaining luminosities are taken from the literatureand converted to a common energy band, spectral model and distancescale. Using this sample we fit the LX:LB relationfor early-type galaxies and find a best-fit slope for the catalogue of~2.2. We demonstrate the influence of group-dominant galaxies on the fitand present evidence that the relation is not well modelled by a singlepower-law fit. We also derive estimates of the contribution to galaxyX-ray luminosities from discrete-sources and conclude that they provideLdscr/LB~=29.5ergs-1LBsolar-1. Wecompare this result with luminosities from our catalogue. Lastly, weexamine the influence of environment on galaxy X-ray luminosity and onthe form of the LX:LB relation. We conclude thatalthough environment undoubtedly affects the X-ray properties ofindividual galaxies, particularly those in the centres of groups andclusters, it does not change the nature of whole populations. A synthesis of data from fundamental plane and surface brightness fluctuation surveysWe perform a series of comparisons between distance-independentphotometric and spectroscopic properties used in the surface brightnessfluctuation (SBF) and fundamental plane (FP) methods of early-typegalaxy distance estimation. The data are taken from two recent surveys:the SBF Survey of Galaxy Distances and the Streaming Motions of AbellClusters (SMAC) FP survey. We derive a relation between(V-I)0 colour and Mg2 index using nearly 200galaxies and discuss implications for Galactic extinction estimates andearly-type galaxy stellar populations. We find that the reddenings fromSchlegel et al. for galaxies with E(B-V)>~0.2mag appear to beoverestimated by 5-10 per cent, but we do not find significant evidencefor large-scale dipole errors in the extinction map. In comparison withstellar population models having solar elemental abundance ratios, thegalaxies in our sample are generally too blue at a given Mg2;we ascribe this to the well-known enhancement of the α-elements inluminous early-type galaxies. We confirm a tight relation betweenstellar velocity dispersion σ and the SBF fluctuation count'parameter N, which is a luminosity-weighted measure of the total numberof stars in a galaxy. The correlation between N and σ is eventighter than that between Mg2 and σ. Finally, we deriveFP photometric parameters for 280 galaxies from the SBF survey data set.Comparisons with external sources allow us to estimate the errors onthese parameters and derive the correction necessary to bring them on tothe SMAC system. The data are used in a forthcoming paper, whichcompares the distances derived from the FP and SBF methods. The Ursa Major cluster of galaxies - III. Optical observations of dwarf galaxies and the luminosity function down to MR=-11Results are presented of a deep optical survey of the Ursa Majorcluster, a spiral-rich cluster of galaxies at a distance of 18.6Mpcwhich contains about 30 per cent of the light but only 5 per cent of themass of the nearby Virgo cluster. Fields around known cluster membersand a pattern of blind fields along the major and minor axes of thecluster were studied with mosaic CCD cameras on the Canada-France-HawaiiTelescope. The dynamical crossing time for the Ursa Major cluster isonly slightly less than a Hubble time. Most galaxies in the localUniverse exist in similar moderate-density environments. The Ursa Majorcluster is therefore a good place to study the statistical properties ofdwarf galaxies, since this structure is at an evolutionary stagerepresentative of typical environments, yet has enough galaxies thatreasonable counting statistics can be accumulated. The mainobservational results of our survey are as follows. (i) The galaxyluminosity function is flat, with a logarithmic slope α=-1.1 for-17 The Ursa Major Cluster of Galaxies. V. H I Rotation Curve Shapes and the Tully-Fisher RelationsThis paper investigates the statistical properties of the Tully-Fisher(TF) relations for a volume-limited complete sample of spiral galaxiesin the nearby Ursa Major Cluster. The merits of B, R, I, and K' surfacephotometry and the availability of detailed kinematic information from HI synthesis imaging have been exploited. In addition to the corrected HI global profile widths WiR,I, the available H Irotation curves allow direct measurements of the observed maximumrotational velocities Vmax and the amplitudesVflat of the outer flat parts. The dynamical state of the gasdisks could also be determined in detail from the radio observations.The four luminosity and three kinematic measures allowed theconstruction of 12 correlations for various subsamples. For large galaxysamples, the Mb,iR-logWiR,Icorrelation in conjunction with strict selection criteria is preferredfor distance determinations with a 7% accuracy. Galaxies with rotationcurves that are still rising at the last measured point liesystematically on the low-velocity side of the TF relation. Galaxieswith a partly declining rotation curve(Vmax>Vflat) tend to lie systematically on thehigh-velocity side of the relation when usingWiR,I or Vmax. However, systematicoffsets are eliminated when Vflat is used. Residuals of theMb,iB-log(2Vflat) relation correlateconsistently with global galaxy properties along the Hubble sequencelike morphological type, color, surface brightness, and gas massfraction. These correlations are absent for the near-infraredMb,iK'-log(2Vflat)residuals. The tightest correlation(χ2red=1.1) is found for theMb,iK'-log(2Vflat) relation,which has a slope of -11.3+/-0.5 and a total observed scatter of 0.26mag with a most likely intrinsic scatter of zero. The tightness of thenear-infrared correlation is preserved when converting it into abaryonic TF relation that has a slope of -10.0 in the case(Mgas/LK')=1.6 while a zerointrinsic scatter remains most likely. Based on the tightness of thenear-infrared and baryonic correlations, it is concluded that the TFrelation reflects a fundamental correlation between the mass of the darkmatter halo, measured through its induced maximum rotational velocityVflat, and the total baryonic mass Mbar of agalaxy where Mbar~V4flat. Althoughthe actual distribution of the baryonic matter inside halos of similarmass can vary significantly, it does not affect this relation. The SBF Survey of Galaxy Distances. IV. SBF Magnitudes, Colors, and DistancesWe report data for I-band surface brightness fluctuation (SBF)magnitudes, (V-I) colors, and distance moduli for 300 galaxies. Thesurvey contains E, S0, and early-type spiral galaxies in the proportionsof 49:42:9 and is essentially complete for E galaxies to Hubblevelocities of 2000 km s-1, with a substantial sampling of Egalaxies out to 4000 km s-1. The median error in distancemodulus is 0.22 mag. We also present two new results from the survey.(1) We compare the mean peculiar flow velocity (bulk flow) implied byour distances with predictions of typical cold dark matter transferfunctions as a function of scale, and we find very good agreement withcold, dark matter cosmologies if the transfer function scale parameterΓ and the power spectrum normalization σ8 arerelated by σ8Γ-0.5~2+/-0.5. Deriveddirectly from velocities, this result is independent of the distributionof galaxies or models for biasing. This modest bulk flow contradictsreports of large-scale, large-amplitude flows in the ~200 Mpc diametervolume surrounding our survey volume. (2) We present adistance-independent measure of absolute galaxy luminosity, N and showhow it correlates with galaxy properties such as color and velocitydispersion, demonstrating its utility for measuring galaxy distancesthrough large and unknown extinction. Observations in part from theMichigan-Dartmouth-MIT (MDM) Observatory. The Frequency of Active and Quiescent Galaxies with Companions: Implications for the Feeding of the NucleusWe analyze the idea that nuclear activity, either active galactic nuclei(AGNs) or star formation, can be triggered by interactions by studyingthe percentage of active, H II, and quiescent galaxies with companions.Our sample was selected from the Palomar survey and avoids selectionbiases faced by previous studies. This sample was split into fivedifferent groups, Seyfert galaxies, LINERs, transition galaxies, H IIgalaxies, and absorption-line galaxies. The comparison between the localgalaxy density distributions of the different groups showed that in mostcases there is no statistically significant difference among galaxies ofdifferent activity types, with the exception that absorption-linegalaxies are seen in higher density environments, since most of them arein the Virgo Cluster. The comparison of the percentage of galaxies withnearby companions showed that there is a higher percentage of LINERs,transition galaxies, and absorption-line galaxies with companions thanSeyfert and H II galaxies. However, we find that when we consider onlygalaxies of similar morphological types (elliptical or spiral), there isno difference in the percentage of galaxies with companions amongdifferent activity types, indicating that the former result was due tothe morphology-density effect. In addition, only small differences arefound when we consider galaxies with similar Hα luminosities. Thecomparison between H II galaxies of different Hα luminositiesshows that there is a significantly higher percentage of galaxies withcompanions among H II galaxies with L(Hα)>1039 ergss-1 than among those with L(Hα)<=1039ergs s-1, indicating that interactions increase the amount ofcircumnuclear star formation, in agreement with previous results. Thefact that we find that galaxies of different activity types have thesame percentage of companions suggests that interactions betweengalaxies is not a necessary condition to trigger the nuclear activity inAGNs. We compare our results with previous ones and discuss theirimplications. The Ursa Major cluster of galaxies. IV. HI synthesis observationsIn this data paper we present the results of an extensive 21 cm-linesynthesis imaging survey of 43 spiral galaxies in the nearby Ursa Majorcluster using the Westerbork Synthesis Radio Telescope. Detailedkinematic information in the form of position-velocity diagrams androtation curves is presented in an atlas together with HI channel maps,21 cm continuum maps, global HI profiles, radial HI surface densityprofiles, integrated HI column density maps, and HI velocity fields. Therelation between the corrected global HI linewidth and the rotationalvelocities Vmax and Vflat as derived from therotation curves is investigated. Inclination angles obtained from theoptical axis ratios are compared to those derived from the inclined HIdisks and the HI velocity fields. The galaxies were not selected on thebasis of their HI content but solely on the basis of their clustermembership and inclination which should be suitable for a kinematicanalysis. The observed galaxies provide a well-defined, volume limitedand equidistant sample, useful to investigate in detail the statisticalproperties of the Tully-Fisher relation and the dark matter halos aroundthem. Nearby Optical Galaxies: Selection of the Sample and Identification of GroupsIn this paper we describe the Nearby Optical Galaxy (NOG) sample, whichis a complete, distance-limited (cz<=6000 km s-1) andmagnitude-limited (B<=14) sample of ~7000 optical galaxies. Thesample covers 2/3 (8.27 sr) of the sky (|b|>20deg) andappears to have a good completeness in redshift (97%). We select thesample on the basis of homogenized corrected total blue magnitudes inorder to minimize systematic effects in galaxy sampling. We identify thegroups in this sample by means of both the hierarchical and thepercolation `friends-of-friends'' methods. The resulting catalogs ofloose groups appear to be similar and are among the largest catalogs ofgroups currently available. Most of the NOG galaxies (~60%) are found tobe members of galaxy pairs (~580 pairs for a total of ~15% of objects)or groups with at least three members (~500 groups for a total of ~45%of objects). About 40% of galaxies are left ungrouped (field galaxies).We illustrate the main features of the NOG galaxy distribution. Comparedto previous optical and IRAS galaxy samples, the NOG provides a densersampling of the galaxy distribution in the nearby universe. Given itslarge sky coverage, the identification of groups, and its high-densitysampling, the NOG is suited to the analysis of the galaxy density fieldof the nearby universe, especially on small scales. Arcsecond Positions of UGC GalaxiesWe present accurate B1950 and J2000 positions for all confirmed galaxiesin the Uppsala General Catalog (UGC). The positions were measuredvisually from Digitized Sky Survey images with rms uncertaintiesσ<=[(1.2")2+(θ/100)2]1/2,where θ is the major-axis diameter. We compared each galaxymeasured with the original UGC description to ensure high reliability.The full position list is available in the electronic version only. The Nature of Accreting Black Holes in Nearby Galaxy NucleiWe have found compact X-ray sources in the center of 21 (54%) of 39nearby face-on spiral and elliptical galaxies with available ROSAT HRIdata. ROSAT X-ray luminosities (0.2-2.4 keV) of these compact X-raysources are ~10^37-10^40 ergs s^-1 (with a mean of 3x10^39 ergs s^-1).The mean displacement between the location of the compact X-ray sourceand the optical photometric center of the galaxy is ~390 pc. The factthat compact nuclear sources were found in nearly all (five of six)galaxies with previous evidence for a black hole or an active galacticnucleus (AGN) indicates that at least some of the X-ray sources areaccreting supermassive black holes. ASCA spectra of six of the 21galaxies show the presence of a hard component with relatively steep(Gamma~2.5) spectral slope. A multicolor disk blackbody model fits thedata from the spiral galaxies well, suggesting that the X-ray object inthese galaxies may be similar to a black hole candidate in its soft(high) state. ASCA data from the elliptical galaxies indicate that hot(kT~0.7 keV) gas dominates the emission. The fact that (for both spiraland elliptical galaxies) the spectral slope is steeper than in normaltype 1 AGNs and that relatively low absorbing columns (N_H~10^21 cm^-2)were found to the power-law component indicates that these objects aresomehow geometrically and/or physically different from AGNs in normalactive galaxies. The X-ray sources in the spiral and elliptical galaxiesmay be black hole X-ray binaries, low-luminosity AGNs, or possibly youngX-ray luminous supernovae. Assuming the sources in the spiral galaxiesare accreting black holes in their soft state, we estimate black holemasses ~10^2-10^4 M_solar. Groups of galaxies. III. Some empirical characteristics.Not Available Total magnitude, radius, colour indices, colour gradients and photometric type of galaxiesWe present a catalogue of aperture photometry of galaxies, in UBVRI,assembled from three different origins: (i) an update of the catalogueof Buta et al. (1995) (ii) published photometric profiles and (iii)aperture photometry performed on CCD images. We explored different setsof growth curves to fit these data: (i) The Sersic law, (ii) The net ofgrowth curves used for the preparation of the RC3 and (iii) A linearinterpolation between the de Vaucouleurs (r(1/4) ) and exponential laws.Finally we adopted the latter solution. Fitting these growth curves, wederive (1) the total magnitude, (2) the effective radius, (3) the colourindices and (4) gradients and (5) the photometric type of 5169 galaxies.The photometric type is defined to statistically match the revisedmorphologic type and parametrizes the shape of the growth curve. It iscoded from -9, for very concentrated galaxies, to +10, for diffusegalaxies. Based in part on observations collected at the Haute-ProvenceObservatory. The Ursa Major Cluster of Galaxies. II. Bimodality of the Distribution of Central Surface BrightnessesThe Ursa Major Cluster appears to be unevolved and made up of H I--richspiral galaxies such as those one finds in the field. B, R, I, K'photometry has been obtained for 79 galaxies, including 62 in a completesample with M^{b,i}B<-16.m5 (with a distance to thecluster of 15.5 Mpc). The K' information is particularly important forthe present discussion because it is not seriously affected byobscuration. There is reasonably convincing evidence that thedistribution of exponential disk central surface brightnesses isbimodal. There is roughly an order of magnitude difference in the meanluminosity densities of high and low surface brightness disks. Disksavoid the domain between the high and low surface brightness zones. Thefew intermediate surface brightness examples in the sample all havesignificant neighbors within a projected distance of 80 kpc. The highsurface brightness galaxies exhibit a range-21^{{m}} The Ursa Major Cluster of Galaxies.I.Cluster Definition and Photometric DataThe Ursa Major Cluster has received remarkably little attention,although it is as near as the Virgo Cluster and contains a comparablenumber of H I-rich galaxies. In this paper, criteria for groupmembership are discussed and data are presented for 79 galaxiesidentified with the group. Of these, all 79 have been imaged at B,R,Ibands with CCDs, 70 have been imaged at K' with a HgCdTe array detector,and 70 have been detected in the HI 21 cm line. A complete sample of 62galaxies brighter than M_B_ = - 16.5 is identified. Images and gradientsin surface brightness and color are presented at a common linear scale.As has been seen previously, the galaxies with the reddest global colorsare reddest at the centers and get bluer at large radii. However,curiously, among the galaxies with the bluest global colors there aresystems with very blue cores that get redder at large radii. Colliding and Merging Galaxies. III. The Dynamically Young Merger Remnant NGC 3921This paper presents imaging, photometric, and spectroscopic observationsof NGC 3921 = Mrk 430 gathered over many years with five opticaltelescopes. This luminous galaxy (M_V_= -22.8 for H_0_ = 50) atcz_hel_=5926 +/- 15 km s^-1^ features a single nucleus, a main body withcomplex fine structure (ripples, loops, fan-shaped protrusions), and apair of ~100 kpc long, crossed tidal tails indicative of two former diskgalaxies of near-equal mass. These galaxies have essentially merged. Themain body of the remnant shows a typical post-starburst spectrumdominated in the blue by A 3-5 V stars. The inferred burst age is 0.5-1Gyr and the burst strength ~10% (by mass). Surrounding the nucleus isextremely centrally concentrated ionized gas that can be traced out to~12" (7 kpc), emits ~> 1.5 x 10^41^ ergs s^-1^ in Hα, and showssigns of both rotational and chaotic motions. The bright semistellarnucleus appears strikingly off-centered relative to the main body, whichitself features "sloshing" isophotes. That is, the centers of successiveisophotes shift position by ~>2 kpc, causing the nucleus to appeareccentric by up to 23% relative to a nearly half-light isophote. Theluminous matter has clearly not yet equilibrated, and this mergerremnant is dynamically young. Nevertheless, the mean light distributionof the main body is already well described by an r^1/4^ law. Thisdistribution plus the luminosity, UBV colors, color gradients, velocitydispersion, spectroscopic line strengths, and fine-structure index allagree with the notion that NGC 3921, which is a member of a small, tightgroup of four galaxies, is a 0.7+/-0.3 Gyr old protoelliptical (reckonedsince close passage that started the merger). Both it and its kin NGC7252 are nearby analogs of distant galaxies with "E+A"-type spectra inButcher-Oemler clusters. A search for star clusters and associations inNGC 3921 reveals 19 candidate OB associations, but only five candidateyoung globular clusters with M_V_ = -12 to -14. Thus, NGC 3921 appearsto have distinctly fewer and certainly less luminous young globularclusters than NGC 7252. This less extreme population of young globularsmay reflect a paucity of gas in one of the two merging component disksof this suspected S0-Sc or Sa-Sc merger (Hibbard & van Gorkom, AJ,in press). Such gas paucity may explain the weaker starburst and mayhave supplied fewer giant molecular clouds for globular clusterformation. Hence, the Hubble types and gas contents of componentgalaxies appear to play an important role in determining the clusterpopulations in merger remnants. An image database. II. Catalogue between δ=-30deg and δ=70deg.A preliminary list of 68.040 galaxies was built from extraction of35.841 digitized images of the Palomar Sky Survey (Paper I). For eachgalaxy, the basic parameters are obtained: coordinates, diameter, axisratio, total magnitude, position angle. On this preliminary list, weapply severe selection rules to get a catalog of 28.000 galaxies, wellidentified and well documented. For each parameter, a comparison is madewith standard measurements. The accuracy of the raw photometricparameters is quite good despite of the simplicity of the method.Without any local correction, the standard error on the total magnitudeis about 0.5 magnitude up to a total magnitude of B_T_=17. Significantsecondary effects are detected concerning the magnitudes: distance toplate center effect and air-mass effect. The fundamental plane of early-type galaxies: stellar populations and mass-to-light ratio.We analyse the residuals to the fundamental plane (FP) of ellipticalgalaxies as a function of stellar-population indicators; these are basedon the line-strength parameter Mg_2_ and on UBVRI broad-band colors, andare partly derived from new observations. The effect of the stellarpopulations accounts for approximately half the observed variation ofthe mass-to-light ratio responsible for the FP tilt. The residual tiltcan be explained by the contribution of two additional effects: thedependence of the rotational support, and possibly that of the spatialstructure, on the luminosity. We conclude to a constancy of thedynamical-to-stellar mass ratio. This probably extends to globularclusters as well, but the dominant factor would be here the luminositydependence of the structure rather than that of the stellar population.This result also implies a constancy of the fraction of dark matter overall the scalelength covered by stellar systems. Our compilation ofinternal stellar kinematics of galaxies is appended. A Catalog of Stellar Velocity Dispersions. II. 1994 UpdateA catalog of central velocity dispersion measurements is presented,current through 1993 September. The catalog includes 2474 measurementsof 1563 galaxies. A standard set of 86 galaxies is defined, consistingof galaxies with at least three reliable, concordant measurements. It issuggested that future studies observe some of these standard galaxies sothat different studies can be normalized to a consistent system. Allmeasurements are reduced to a normalized system using these standards. A multiparametric analysis of the Einstein sample of early-type galaxies. 1: Luminosity and ISM parametersWe have conducted bivariate and multivariate statistical analysis ofdata measuring the luminosity and interstellar medium of the Einsteinsample of early-type galaxies (presented by Fabbiano, Kim, &Trinchieri 1992). We find a strong nonlinear correlation betweenLB and LX, with a power-law slope of 1.8 +/- 0.1,steepening to 2.0 +/- if we do not consider the Local Group dwarfgalaxies M32 and NGC 205. Considering only galaxies with logLX less than or equal to 40.5, we instead find a slope of 1.0+/- 0.2 (with or without the Local Group dwarfs). Although E and S0galaxies have consistent slopes for their LB-LXrelationships, the mean values of the distribution functions of bothLX and LX/LB for the S0 galaxies arelower than those for the E galaxies at the 2.8 sigma and 3.5 sigmalevels, respectively. We find clear evidence for a correlation betweenLX and the X-ray color C21, defined by Kim,Fabbiano, & Trinchieri (1992b), which indicates that X-rayluminosity is correlated with the spectral shape below 1 keV in thesense that low-LX systems have relatively large contributionsfrom a soft component compared with high-LX systems. We findevidence from our analysis of the 12 micron IRAS data for our samplethat our S0 sample has excess 12 micron emission compared with the Esample, scaled by their optical luminosities. This may be due toemission from dust heated in star-forming regions in S0 disks. Thisinterpretation is reinforced by the existence of a strongL12-L100 correlation for our S0 sample that is notfound for the E galaxies, and by an analysis of optical-IR colors. Wefind steep slopes for power-law relationships between radio luminosityand optical, X-ray, and far-IR (FIR) properties. This last point arguesthat the presence of an FIR-emitting interstellar medium (ISM) inearly-type galaxies is coupled to their ability to generate nonthermalradio continuum, as previously argued by, e.g., Walsh et al. (1989). Wealso find that, for a given L100, galaxies with largerLX/LB tend to be stronger nonthermal radiosources, as originally suggested by Kim & Fabbiano (1990). We notethat, while LB is most strongly correlated withL6, the total radio luminosity, both LX andLX/LB are more strongly correlated with L6CO, the core radio luminosity. These points support the argument(proposed by Fabbiano, Gioia, & Trinchieri 1989) that radio cores inearly-type galaxies are fueled by the hot ISM. Integrated photoelectric magnitudes and color indices of bright galaxies in the Johnson UBV systemThe photoelectric total magnitudes and color indices published in theThird Reference Catalogue of Bright Galaxies (RC3) are based on ananalysis of approximately equals 26,000 B, 25,000 B-V, and 17,000 U-Bmultiaperture measurements available up to mid 1987 from nearly 350sources. This paper provides the full details of the analysis andestimates of internal and external errors in the parameters. Thederivation of the parameters is based on techniques described by theVaucouleurs & Corwin (1977) whereby photoelectric multiaperture dataare fitted by mean Hubble-type-dependent curves which describe theintegral of the B-band flux and the typical B-V and U-B integrated colorgradients. A sophisticated analysis of the residuals of thesemeasurements from the curves was made to allow for the random andsystematic errors that effect such data. The result is a homogeneous setof total magnitudes BTA total colors(B-V)T and (U-B)T, and effective colors(B-V)e and (U-B)e for more than 3000 brightgalaxies in RC3. Three-dimensional structure of a disk galaxy, NGC 3998We present a parameter set for a luminosity distribution model of NGC3998 comprising an oblate-disk component following the exponential lawand a prolate-bulge component following the r exp 1/4 law, which canreproduce the two-dimensional distribution of the observed brightnesswith an rms error of 0.29 mag/sq arcsec and the twist and axis ratio ofthe observed isophotes. The scale-length ratios were found to be 0.35for the disk and 0.85 for the prolate bulge, with the inclination angleof the disk being i = 40 deg and the position angle of the prolate axisin the disk plane being phi = 110 deg relative to the line of sight. Weconfirmed the existence of a nonaxisymmetric bulge component, which hasbeen predicted based on the twist and axis ratio of the observedisophotes.
Submit a new article
### Related links
• - No Links Found -
Submit a new link
### Member of following groups:
#### Observation and Astrometry data
Constellation: Ursa Major Right ascension: 11h57m35.60s Declination: +55°27'29.0" Aparent dimensions: 1.38′ × 0.813′
Catalogs and designations: | 2019-03-18 18:12:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6091605424880981, "perplexity": 5675.2829625463355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201521.60/warc/CC-MAIN-20190318172016-20190318194016-00242.warc.gz"} |
http://nodus.ligo.caltech.edu:8080/40m/page100?&sort=Subject | 40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 100 of 341 Not logged in
ID Date Author Type Category Subject
7256 Thu Aug 23 12:17:39 2012 ManasaUpdate IMC Ringdown
The ringdown measurements are in progress. But it seems that the MC mirrors are getting kicked everytime the cavity is unlocked by either changing the frequency at the MC servo or by shutting down the input to the MC. This means what we've been observing is not the ringdown of the IMC alone. Attached are MC sus sensor data and the observed ringdown on the oscilloscope. I think we need to find a way to unlock the cavity without the mirrors getting kicked....in which case we should think about including an AOM or using a fast shutter before the IMC.
P.S. The origin of the ripples at the end of the ringdown still are of unknown origin. As of now, I don't think it is because of the mirrors moving but something else that should figured out.
Attachment 1: mozilla.pdf
Attachment 2: MC_sus.pdf
7257 Thu Aug 23 15:35:33 2012 ranaUpdate IMC Ringdown
It is HIGHLY unlikely that the IMC mirrors are having any effect on the ringdown. The ringdowns take ~20 usec to happen. The mirrors are 0.25 kg and you can calculate that its very hard to get enough force to move them any appreciable distance in that time.
7260 Thu Aug 23 17:51:25 2012 ManasaUpdate IMC Ringdown
Quote: It is HIGHLY unlikely that the IMC mirrors are having any effect on the ringdown. The ringdowns take ~20 usec to happen. The mirrors are 0.25 kg and you can calculate that its very hard to get enough force to move them any appreciable distance in that time.
The huge kick observed in the MC sus sensors seem to last for ~10usec; almost matching the observed ringdown decay time. We should find a way to record the ringdown and the MC sus sensor data simultaneously to know when the mirrors are exactly moving during the measurement process. It could also be that the moving mirrors were responsible for the ripples observed later during the ringdown as well.
* How fast do the WFS respond to the frequency switching (time taken by WFS to turn off)? I think this information will help in narrowing down the many possible explanations to a few.
15183 Mon Feb 3 13:54:10 2020 YehonathanUpdateIOOIMC Ringdowns extended data analysis
I extended the ringdown data analysis to the reflected beam following Isogai et al.
The idea is that measuring the cavity's reflected light one can use known relationships to extract the transmission of the cavity mirrors and not only the finesse.
The finesse calculated from the transmission ringdown shown in the previous elog is 1520 according to the Zucker model, 1680 according to the first exponential and 1728 according to the second exponential.
Attachment 1 shows the measured reflected light during an IMC ringdown in and out of resonance and the values that are read off it to compute the transmission.
The equations for m1 and m3 are the same as in Isogai's paper because they describe a steady-state that doesn't care about the extinction ratio of the light.
The equation for m2, however, is modified due to the finite extinction present in our zeroth-order ringdown.
Modelling the IMC as a critically coupled 2 mirror cavity one can verify that:
$m_2=P_0KR\left[T-\alpha\left(1-R\right)\right]^2+\alpha^2 P_1$
Where $P_0$ is the coupled light power
$P_1$ is the power rejected from the cavity (higher-order modes, sidebands)
$K=\left(\mathcal{F} /\pi \right )^2$ is the cavity gain.
$R$ and $T$ are the power reflectivity and transmissivity per mirror, respectively.
$\alpha^2$ is the power attenuation factor. For perfect extinction, this is 0.
Solving the equations (m1 and m3 + modified m2), using Zucker model's finesse, gives the following information:
Loss per mirror = 84.99 ppm
Transmission per mirror = 1980.77 ppm
Coupling efficiency (to TEM00) = 97.94%
Attachment 1: IMCTransReflAnalysis_anotated.pdf
15190 Wed Feb 5 21:13:17 2020 YehonathanUpdateIOOIMC Ringdowns extended data analysis
I translate the results obtained in the previous elog to the IMC 3 mirror cavity. I assume the loss in each mirror in the IMC is equal and that M2 has a negligible transmission.
I find that to a very good approximation the loss per IMC mirror is 2/3 the loss per mirror in the 2 mirror cavity model. That is the loss per mirror in the IMC is 56 ppm. The transmission per mirror in the IMC is the same as in the 2 mirror model, which is 1980 ppm.
The total transmission is the same as in the 2 mirror model and is given by:
$\frac{P_0}{P_0+P1}KT^2\approx 90\%$
where $\frac{P_0}{P_0+P1}$ is the coupling efficiency to the TEM00 mode.
15175 Wed Jan 29 12:40:24 2020 YehonathanUpdateIOOIMC Ringdowns preliminary data analysis
I analyze the IMC ringdown data from last night.
Attachment 1 shows the normalized raw data. Oscillations come in much later than in Gautam's measurement. Probably because the IMC stays locked.
Attachment 2 shows fits of the transmitted PD to unconstrained double exponential and the Zucker model.
Zucker model gives time constant of 21.6us
Unconstrained exponentials give time constants of 23.99us and 46.7us which is nice because it converges close to the Zucker model.
Attachment 1: IMCRingdownNormalizedRawdata.pdf
Attachment 2: IMCTransPDFits.pdf
15912 Fri Mar 12 11:44:53 2021 Paco, AnchalUpdatetrainingIMC SUS diagonalization in progress
[Paco, Anchal]
- Today we spent the morning shift debugging SUS input matrix diagonalization. MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.
15258 Fri Mar 6 01:12:10 2020 gautamUpdateElectronicsIMC Servo IN2 path looks just fine
It seems like the AO path gain stages on the IMC Servo board work just fine. The weird results I reported earlier were likely a measurement error arising from the fact that I did not disconnect the LEMO IN2 cable while measuring using the BNC IN2 connector, which probably made some parasitic path to ground that was screwing the measurement up. Today, I re-did the measurement with the signal injected at the IN2 BNC, and the TF measured being the ratio of TP3 on the board to a split-off of the SR785 source (T-eed off). Attachments #1, #2 shows the result - the gain deficit from the "expected" value is now consistent with that seen on other sliders.
Note that the signal from the CM board in the LSC rack is sent single-ended over a 2-pin LEMO cable (whose return pin is shorted to ground). But it is received differentially on the IMC Servo board. I took this chance to look for evidence of extra power line noise due to potential ground loops by looking at the IMC error point with various auxiliary cables connected to the board - but got distracted by some excess noise (next elog).
Attachment 1: AO_inputTFs_5Mar.pdf
Attachment 2: sliderCal_5Mar.pdf
15257 Thu Mar 5 19:51:14 2020 gautamUpdateElectronicsIMC Servo board being tested
I am running some tests on the IMC servo board with an extender card so the IMC will not be locking for a couple of hours.
16174 Wed Jun 2 09:43:30 2021 Anchal, PacoSummarySUSIMC Settings characterization
## Plot description:
• We picked up three 10 min times belonging to the three different configurations:
• 'Old Settings': IMC Suspension settings before Paco and I changed anything. Data taken from Apr 26, 2021, 00:30:42 PDT (GPS 1303457460).
• 'New Settings': New input matrices uploaded on April 28th, along with F2A filters and AC coil balancing gains (see 16091). Data taken from May 01, 2021, 00:30:42 PDT (GPS 1303889460).
• 'New settings with new gains' Above and new suspension damping gains uploaded on May5th, 2021 (see 16120). Data taken from May 07, 2021, 03:10:42 PDT (GPS 1304417460).
• Attachment 1 shows the RMS seismic noise along X direction between 1 Hz and 3 Hz picked from C1:PEM-RMS_BS_X_1_3 during the three time durations chosen. This plot is to establish that RMS noise levels were similar and mostly constant. Page 2 shows the mean ampltidue spectral density of seismic noise in x-direction over the 3 durations.
• Attachment 2 shows the transfer function estimate of seismic noise to MC_F during the three durations. Page 1 shows ratio of ASDs taken with median averaging while page 2 shows the same for mean averaging.
• Attachment 3 shows the transfer function estimate of seismic noise to MC_TRANS_PIT during the three durations. Page 1 shows ratio of ASDs taken with median averaging while page 2 shows the same for mean averaging.
• Attachment 4 shows the transfer function estimate of seismic noise to MC_TRANS_YAW during the three durations. Page 1 shows ratio of ASDs taken with median averaging while page 2 shows the same for mean averaging.
## Inferences:
• From Attachment 2 Page 1:
• We see that 'old settings' caused the least coupling of seismic noise to MC_F signal in most of the low frequency band except between 1.5 to 3 Hz where the 'new settings' were slightly better.
• 'new settings' also show less coupling in 4 Hz to 6 Hz band, but at these frequencies, seismix noise is filtered out by suspension, so this could be just coincidental and is not really a sign of better configuration.
• There is excess noise coupling seen with 'new settings' between 0.4 Hz and 1.5 Hz. We're not sure why this coupling increased.
• 'new settings with new gains' show the most coupling in most of the frequency band. Clearly, the increased suspension damping gains did not behaved well with rest of the system.
• From Attachment 3 Page 1:
• Coupling to MC_TRANS_PIT error signal is reduced for 'new settings' in almost all of the frequency band in comparison to the 'old settings'.
• 'new settings with new gains' did even better below 1 Hz but had excess noise in 1 Hz to 6 Hz band. Again increased suspension damping gains did not help much.
• But low coupling to PIT error for 'new settings' suggest that our decoupling efforts in matrix diagonalization, F2A filters and ac coil balancing worked to some extent.
• From Attachment 4 Page 1:
• 'new settings' and 'old settings' have the same coupling of seismic noise to MC_TRANS_YAW in all of the frequency band. This is in-line witht eh fact that we found very little POS to YAW couping in our analysis before and there was little to no change for these settings.
• 'new settings with new gains' did better below 1 Hz but here too there was excess coupling between 1 Hz to 9 Hz.
• Page 1 vs Page 2:
• Mean and median should be same if the data sample was large enough and noise was stationary. A difference between the two suggests existence of outliers in the data set and median provides a better central estimate in such case.
• MC_F: Mean and median are same below 4 hz. There are high frequency outliers above 4 Hz in 'new settings with new gains' and 'old settings' data sets, maybe due to transient higher free running laser frequency noise. But since, suspension settigns affect below 1 Hz mostly, the data sets chosen are stationary enough for us.
• MC_TRANS_PIT: Mean ratio is lower for 'new settings' and 'old settings' in 0.3 hz to 0.8 Hz band. Same case above 4 Hz as listed above.
• MC_TRANS_YAW: Same as above.
• Conclusion 1: The 'new settings with new gains' cause more coupling to seismic noise, probably due to low phase margin in control loops. We should revert back the suspension damping gains.
• Conclusion 2: The 'new settings' work as expected and can be kept when WFS loops are optimized further.
• Conjecture: From our experience over last 2 weeks, locking the arms to the main laser with 'new settings with new gains' introduces noise in the arm length large enough that the Xend green laser does not remain locked to the arm for longer than tens of seconds. So this is definitely not a configuration in which we can carry out other measurements and experiments in the interferometer.
Attachment 1: seismicX.pdf
Attachment 2: seismicXtoMC_F_TFest.pdf
Attachment 3: seismicXtoMC_TRANS_PIT_TFest.pdf
Attachment 4: seismicXtoMC_TRANS_YAW_TFest.pdf
16102 Thu Apr 29 18:53:33 2021 AnchalUpdateSUSIMC Suspension Damping Gains Test
With the input matrix, coil ouput gains and F2A filters loaded as in 16091, I tested the suspension loops' step response to offsets in LSC, ASCPIT and ASCYAW channels, before and after applying the "new damping gains" mentioned in 16066 and 16072. If these look better, we should upload the new (higher) damping gains as well. This was not done in 16091.
Note that in the plots, I have added offsets in the different channels to plot them together, hence the units are "au".
Attachment 1: MC1_SUSDampGainTest.pdf
Attachment 2: MC2_SUSDampGainTest.pdf
Attachment 3: MC3_SUSDampGainTest.pdf
16110 Mon May 3 16:24:14 2021 AnchalUpdateSUSIMC Suspension Damping Gains Test Repeated with IMC unlocked
We repeated the same test with IMC unlocked. We had found these gains when IMC was unlocked and their characterization needs to be done with no light in the cavity. attached are the results. Everything else is same as before.
Quote: With the input matrix, coil ouput gains and F2A filters loaded as in 16091, I tested the suspension loops' step response to offsets in LSC, ASCPIT and ASCYAW channels, before and after applying the "new damping gains" mentioned in 16066 and 16072. If these look better, we should upload the new (higher) damping gains as well. This was not done in 16091. Note that in the plots, I have added offsets in the different channels to plot them together, hence the units are "au".
Edit Tue May 4 14:43:48 2021 :
• Adding zoomed in plots to show first 25s after the step.
• MC1:
• Our improvements by new gains are only modest.
• This optic needs a more careful coil balancing first.
• Still the ring time is reduced to about 5s for all step responses as opposed to 10s at old gains.
• MC2:
• The first page of MC2 might be bit misleading. We have not changed the damping gain for SUSPOS channel, so the longer ringing is probably just an artifact of somthing else. We didn't retake data.
• In PIT and YAW where we increased the gain by a factor of 3, we see a reduction in ringing lifetime by about half.
• MC3:
• We saw the most optimistic improvement on this optic.
• The gains were unusually low in this optic, not sure why.
• By increasing SUSPOS gain from 200 to 500, we saw a reduction of ringing halftime from 7-8s to about 2s. Improvements are seen in other DOFs as well.
• You can notice rightaway that YAW of MC3 keeps oscillating near resonance (about 1 Hz). Maybe more careful feedback shaping is required here.
• In SUSPIT, we increased gain from 12 to 35 and saw a good reduction in both ringing time and initial amplitude of ringing.
• In SUSYAW, we only increased the gain to 12 from 8, which still helped a lot in reducing big ringing step response to below 5s from about 12s.
Overall, I would recommend setting the new gains in the suspension loops as well to observe long term effects too.
Attachment 1: MC1_SusDampGainTest.pdf
Attachment 2: MC2_SusDampGainTest.pdf
Attachment 3: MC3_SusDampGainTest.pdf
16175 Wed Jun 2 16:20:59 2021 Anchal, PacoSummarySUSIMC Suspension gains reverted to old values
Following the conclusion, we are reverting the suspension gains to old values, i.e.
IMC Suspension Gains
MC1 MC2 MC3
SUSPOS 120 150 200
SUSPIT 60 10 12
SUSYAW 60 10 8
While the F2A filters, AC coil gains and input matrices are changed to as mentioned in 16066 and 16072.
The changes can be reverted all the way back to old settings (before Paco and I changed anything in the IMC suspensions) by running python scripts/SUS/general/20210602_NewIMCOldGains/restoreOldConfigIMC.py on allegra. The new settings can be uploaded back by running python scripts/SUS/general/20210602_NewIMCOldGains/uploadNewConfigIMC.py on allegra.
Change time:
Unix Time = 1622676038
UTC Jun 02, 2021 23:20:38 UTC Central Jun 02, 2021 18:20:38 CDT Pacific Jun 02, 2021 16:20:38 PDT
GPS Time = 1306711256
Quote: Conclusion 1: The 'new settings with new gains' cause more coupling to seismic noise, probably due to low phase margin in control loops. We should revert back the suspension damping gains. Conclusion 2: The 'new settings' work as expected and can be kept when WFS loops are optimized further. Conjecture: From our experience over last 2 weeks, locking the arms to the main laser with 'new settings with new gains' introduces noise in the arm length large enough that the Xend green laser does not remain locked to the arm for longer than tens of seconds. So this is definitely not a configuration in which we can carry out other measurements and experiments in the interferometer.
16094 Thu Apr 29 10:52:56 2021 AnchalUpdateSUSIMC Trans QPD and WFS loops step response test
In 16087 we mentioned that we were unable to do a step response test for WFS loop to get an estimate of their UGF. The primary issue there was that we were not putting the step at the right place. It should go into the actuator directly, in this case, on C1:SUS-MC2_PIT_COMM and C1:SUS-MC2_YAW_COMM. These channels directly set an offset in the control loop and we can see how the error signals first jump up and then decay back to zero. The 'half-time' of this decay would be the inverse of the estimated UGF of the loop. For this test, the overall WFS loops gain, C1:IOO-WFS_GAIN was set to full value 1. This test is performed in the changed settings uploaded in 16091.
I did this test twice, once giving a step in PIT and once in YAW.
Attachment 1 is the striptool screenshot for when PIT was given a step up and then step down by 0.01.
• Here we can see that the half-time is roughly 10s for TRANS_PIT and WFS1_PIT corresponding to roughly 0.1 Hz UGF.
• Note that WFS2 channels were not disturbed significantly.
• You can also notice that third most significant disturbance was to TRANS_YAW actually followed by WF1 YAW.
Attachment 2 is the striptool screenshot when YAW was given a step up and down by 0.01. Note the difference in x-scale in this plot.
• Here, TRANS YAW got there greatest hit and it took it around 2 minutes to decay to half value. This gives UGF estimate of about 10 mHz!
• Then, weirdly, TRANS PIT first went slowly up for about a minutes and then slowly came dome in a half time of 2 minutes again. Why was PIT signal so much disturbed by the YAW offset in the first place?
• Next, WFS1 YAW can be seen decaying relatively fast with half-life of about 20s or so.
• Nothing else was disturbed much.
• So maybe we never needed to reduce WFS gain in our measurement in 16089 as the UGF everywhere were already very low.
• What other interesting things can we infer from this?
• Should I sometime repeat this test with steps given to MC1 or MC3 optics?
Attachment 1: PIT_OFFSET_ON_MC2.png
Attachment 2: YAW_STEP_ON_MC2_complete.png
15215 Sat Feb 15 12:56:24 2020 YehonathanUpdateIOOIMC Transfer function measurement
{Yehonathan, Meenakshi}
We measure the IMC transfer function using SR785.
We hook up the AOM driver to the SOURCE OUT, Input PD to CHANNEL ONE and the IMC transmission PD to CHANNEL TWO.
We use the frequency response measurement feature in the SR785. A swept sine from 100KHz to 100Hz is excited with an amplitude of 10mV.
Attachment 1 shows the data with a fit to a low pass filter frequency response.
IMC pole frequency is measured to be 3.795KHz, while the ringdowns predict a pole frequency 3.638KHz, a 4% difference.
The closeness of the results discourages me from calibrating the PDs' transfer functions.
I tend to believe the pole frequency measurement a bit more since it coincides with a linewidth measurement done awhile ago Gautam was telling me about.
Thoughts:
I think of trying to try another zero-order ringdown but with much smaller excitation than what used before (500mV) and than move on to the first-order beam.
Also, it seems like the reflection signal in zero-order ringdown (Attachment 2, green trace) has only one time constant similar to the full extinction ringdown. The reason is that due to the fact the IMC is critically coupled there is no DC term in the electric field even when the extinction of light is partial. The intensity of light, therefore, has only one time constant.
Fitting this curve (Attachment 3) gives a time constant of 18us, a bit too small (gives a pole of 4.3KHz). I think a smaller extinction ringdown will give a cleaner result.
Attachment 1: IMCFrequencyResponse.pdf
Attachment 2: IMCRingdownNormalizedRawdata.pdf
Attachment 3: IMCREFLPDFits.pdf
11529 Tue Aug 25 16:09:54 2015 ericqUpdateIOOIMC Tweak
I increased the overall IMC loop gain by 4dB, and decreased the FAST gain (which determines the PZT/EOM crossover) by 3dB. This changed the AO transfer function from the blue trace to the green trace in the first plot. This changed the CARM loop open loop TF shape from the unfortunate blue shape to the more pleasing green shape in the second plot. The red trace is the addition of one super boost.
Oddly, these transfer functions look a bit different than what I measured in March (ELOG 11167), which itself differed from the shaping done December of 2014 (ELOG 10841).
I haven't yet attempted any 1F handoff of the PRMI since relocking, but back when Jenne and I did so in April, the lock was definitely less stable. My suspicion is that we may need more CARM supression; we never computed the loop gain requirement that ensures that the residual CARM fluctuations witnessed by, say, REFL55 are small enough to use as a reliable PRMI sensor.
I should be able to come up with this with data from last night.
Attachment 1: imcTweak.pdf
Attachment 2: CARM_TF.pdf
11538 Fri Aug 28 19:05:53 2015 ranaUpdateIOOIMC Tweak
Well, green looks better than blue, but it makes the PCDRIVE go high, which means its starting to saturate the EOM drive. So we can't just maximize the phase margin in the PZT/EOM crossover. We have to take into account the EOM drive spectrum and its RMS.
Also, your gain bump seems suspicious. See my TF measurements of the crossover in December. Maybe you were saturating the EOM in your TF ?
Lets find out what's happening with FSS servos over in Bridge and then modify ours to be less unstable.
15318 Tue May 5 23:44:14 2020 gautamUpdateASCIMC WFS
Summary:
I've been thinking about the IMC WFS. I want to repeat the sort of analysis done at LLO where a Finesse model was built and some inferences could be made about, for example, the Gouy phase separation b/w the sensors by comparing the Finesse sensing matrix to a measured sensing matrix. Taking the currently implemented output matrix as a "measurement" (since the IMC WFS stabilize the IMC transmission), I don't get any agreement between it and my Finesse model. Could be that the model needs tweaking, but there are several known issues with the WFS themselves (e.g. imbalanced segment gains).
Building the finesse model:
• I pulled the WFS telescopes from Andres elogs/SURF report, which I think was the last time the WFS telescopes were modified.
• The in-vacuum propagation distances were estimated from CAD diagrams.
• According to my model, the Gouy phase separation between the two WFS heads is ~70 degrees, whereas Andres' a la mode simulations suggest more like 90 degrees. Presumably, some lengths/lenses are different between what I assume and what he used, but I continue the analysis anyway...
• The appropriate power attenuations were placed in each path - one thing I noticed is that the BS that splits light between WFS1 and WFS2 is a 30/70 BS and not a 50/50, I don't see any reason why this should be (presumably it was to do with component availability). see below for Rana's comments.
Simulations:
• The way the WFS servos are set up currently, the input matrix is diagonal while the output matrix encodes the sensing information.
• In finesse, I measured the input matrix (i.e. response sensed in each sensor when an optic is dithered in angle). The length is kept resonant for the carrier (but not using a locking signal), which should be valid for small angular disturbances, which is the regime in which the error signals will be linear anyways.
• Then I inverted the simulated sensing matrix so as to be able to compare with the CDS output matrix. Note that there is a relative gain scaling of 100 between the WFS paths and the MC2T QPD paths which I added to the simulation. I also normalized the columns of the matrix by the largest element in the column, in an attempt to account for the various other gains that are between the optical sensing and the digitizaiton (e.g. WFS demod boards, QPD transimpedance etc etc).
• Attachment #1 shows the comparison between simulation and measurement. The two aren't even qualitatively similar, needs more thought...
• The transimpedance resistor is 1.5 kohms. With the gain stages, the transimpedance gain is nominally 37.5 kohms, and 3.75 kohms when the attenuation setting is engaged (as it is for 2/4 quadrants on each head).
• Assuming a modulation depth of 0.1, the Johnson noise of the transimpedance resistor dominates (with the MAX4106 current noise a close second), and these heads cannot be shot noise limited when operating at 1 W input power (though of course the situation will change if we have 25 W input).
• The heads are mounted at a ~45 deg angle, mixing PIT/YAW, but I assume we can just use the input matrix to rotate back to the natural PIT/YAW basis.
Update 215 pm 5/6: adding in some comments from Rana raised during the meeting:
1. The transimpedance is actually done by the RLC network (L6 and C38 for CH 3), and not 1.5 kohms. It just coincidentally happens that the reactance is ~1.5 kohms at 29.5 MHz. Note that my LTspice simulation using ideal inductors and capacitors still predicts ~4pA/rtHz noise at 29.5 MHz, so the conclusion about shot noise remains valid I think... One option is to change the attenuation in this path and send more light onto the WFS heads.
The transimpedance gain and noise are now in Attachment #2. I just tweaked the L values to get a peak at 29.5 MHz and a notch at twice that frequency. For this I assumed a photodiode capacitance of 225pF and the shown transimpedance gain has the voltage gain of the MAX4106 stages divided out. The current noise is input referred.
2. The imbalanced power on WFS heads may have some motivation - it may be that the W/rad TF for one of the two modes we are trying to sense (beam plane tilt vs beam plane translation) is not equal, so we want more light on the head with weaker response.
3. The 45 degree mounting of the heads is actually meant to decouple PIT and YAW.
Attachment 1: WFSmatrixComparison.pdf
15320 Thu May 7 09:43:21 2020 ranaUpdateASCIMC WFS
This is the doc from Keita Kawabe on why the WFS heads should be rotated.
15321 Thu May 7 10:58:06 2020 gautamUpdateASCIMC WFS
OK so the QPD segments are in the "+" orientation when the 40m IMC WFS heads are mounted at 45 deg. I thought "+" was the natural PIT/YAW basis but I guess in the the LIGO parlance, the "X" orientation was considered more natural.
Quote: This is the doc from Keita Kawabe on why the WFS heads should be rotated.
16990 Tue Jul 12 09:25:09 2022 ranaUpdateIOOIMC WFS
MC WFS Demod board needs some attention.
Tomislav has been measuring a very high noise level in the MC WFS demod output (which he promised to elog today!). I thought this was a bogus measurement, but when he, and Paco and I tried to measure the MC WFS sensing matrix, we noticed that there is no response in any WFS, although there are beams on the WFS heads. There is a large response in MC2 TRANS QPD, so we know that there is real motion.
I suspect that the demod board needs to be reset somehow. Maybe the PLL is unlocked or some cable is wonky. Hopefully not both demod boards are fried.
Please leave the WFS loops off until demod board has been assessed.
12641 Sat Nov 26 19:16:28 2016 KojiUpdateIOOIMC WFS Demod board measurement & analysis
[Rana, Koji]
1. The response of the IMC WFS board was measured. The LO signal with 0.3Vpp@29.5MHz on 50Ohm was supplied from DS345. I've confirmed that this signal is enough to trigger the comparator chip right next to the LO input. The RF signal with 0.1Vpp on the 50Ohm input impedance was provided from another DS345 to CH1 with a frequency offset of 20Hz~10kHz. Two DS345s were synced by the 10MHz RFreference at the rear of the units. The resulting low frequency signal from the 1st AF stage (AD797) and the 2nd AF stage (OP284) were checked.
Attachment 1 shows the measured and modelled response of the demodulator with various frequency offsets. The value shows the signal transfer (i.e. the output amplitude normalized by the input amplitude) from the input to the outputs of the 1st and 2nd stages. According to the datasheet, the demodulator chip provides a single pole cutoff of 340kHz with the 33nF caps between AP/AN and VP. The first stage is a broadband amplifier, but there is a passive LPF (fc=~1kHz). The second stage also provides the 2nd order LPF at fc~1kHz too. The measurement and the model show good agreement.
2. The output noise levels of the 1st and 2nd stages were meausred and compared with the noise model by LISO.
Attachment 2 shows the input referred noise of the demodulator circuit. The output noise is basically limited by the noise of the first stage. The noise of the 2nd stage make the significant contribution only above the cut off freq of the circuit (~1kHz). And the model supports this fact. The 6.65kOhm of the passive filter and the input current noise of AD797 cause the large (>30nV/rtHz) noise contribution below 100Hz. This completely spoils the low noiseness (~1nV/rtHz) of AD797. At lower frequency like 0.1Hz other component comes up above the modelled noise level.
3. Rana and I had a discussion about the modification of the circuit. Attachment 4 shows the possible improvement of the demod circuit and the 1st stage preamplifier. The demodulator chip can have a cut off by the attached capacitor. We will replace the 33nF caps with 1uF and the cut off will be pushed down to ~10kHz. Then the passive LPF will be removed. We don't need "rodeo horse" AD797 for this circuit, but op27 is just fine instead. The gain of the 1st stage can be increased from 9 to 21. This should give us >x10 improvement of the noise contribution from the demodualtor (Attachment 3). We also can replace some of the important resistors with the thin film low noise resistors.
Attachment 1: WFS_demod_response.pdf
Attachment 2: WFS_demod_noise.pdf
Attachment 3: WFS_demod_noise_plan.pdf
Attachment 4: Screen_shot_2011-07-01_at_11.13.01_AM.png
12645 Tue Nov 29 17:45:06 2016 KojiUpdateIOOIMC WFS Demod board measurement & analysis
Summary: The demodulator input noise level was improved by a factor of more than 2. This was not as much as we expected from the preamp noise improvement, but is something. If this looks OK, I will implement this modification to all the 16 channels.
The modification shown in Attachment 1 has actually been applied to a channel.
• The two 1.5uF capacitors between VP and AN/AP were added. This decreases the bandwidth of the demodulator down to 7.4kHz
• The offset trimming circuit was disabled. i.e. Pin18 of AD831 was grounded.
• The passive low pass at the demodulator output was removed. (R18, C34)
• The stage1 (preamp) chip was changed from AD797 to OP27.
• The gain of the preamp stage was changed from 9 to 21. Also the thin film resistors are used.
Attachment 2 shows the measured and expected output signal transfer of the demodulator. The actual behavior of the demodulator is as expected, and we still keep the over all LPF feature of 3rd order with fc=~1kHz.
Attachment 3 shows the improvement of the noise level with the signal reffered to the demodulator input. The improvement by a factor >2 was observed all over the frequency range. However, this noise level could not be explained by the preamp noise level. Actually this noise below 1kHz is present at the output of the demodulator. (Surprisingly, or as usual, the noise level of the previous preamp configuration was just right at the noise level of the demodulator below 100Hz.) The removal of the offset trimmer circuit contributed to the noise improvement below 0.3Hz.
Attachment 1: demod.pdf
Attachment 2: WFS_demod_response.pdf
Attachment 3: WFS_demod_noise.pdf
12647 Tue Nov 29 18:35:32 2016 ranaUpdateIOOIMC WFS Demod board measurement & analysis
more U4 gain, lesssss U5 gain
12661 Fri Dec 2 18:02:37 2016 KojiUpdateIOOIMC WFS Demod board measurement & analysis
ELOG of the Wednesday work.
It turned out that the IMC WFS demod boards have the PCB board that has a different pattern for each of 8ch.
In addition, AD831 has a quite narrow leg pitch with legs that are not easily accessible.
Because of these, we (Koji and Rana) decided to leave the demodulator chip untouched.
I have plugged in the board with the WFS2-Q1 channel modified in order to check the significance of the modification.
WFS performance before the modification
Attachment 1 shows the PSD of WFS2-I1_OUT calibrated to be referred to the demodulator output. (i.e. Measured PSDs (cnt/rtHz) were divided by 8.9*2^16/20)
There are three curves: One is the output with the MC locked (WFS servos not engaged). The second is the PSD with the PSL beam blocked (i.e. dark noise). The third is the electronics noise with the RF input terminated and the nominal LO supplied.
This tells us that the measured PSD was dominated by the demodulator noise in the dark condition. And the WFS signal was also dominated by the demod noise below 0.1Hz and above 20Hz. There are annoying features at 0.7, 1.4, 2.1, ... Hz. They basically impose these noise peaks on the stabilized mirror motion.
WFS performance after the modification
Attachment 2 shows the PSD of WFS2-Q1_OUT calibrated to be referred to the demodulator output. (i.e. Measured PSDs (cnt/rtHz) were divided by 21.4*2^16/20)
There are three same curves as the other plot. In addition to these, the PSD of WFS2-I1_OUT with the MC locked is also shown as a red curve for comparison.
This figure tells us that the measured PSD below 20Hz was dominated by the demodulator noise in the dark condition. And the WFS signal is no longer dominated by the electronics noise. However, there still are the peaks at the harmonics of 0.7, 1.4, 2.1, ... Hz. I need further inspection of the FWS demod and whtening boards to track down the cause of these peaks.
Attachment 1: WFS_demod_noise_orig.pdf
Attachment 2: WFS_demod_noise_mod.pdf
12662 Sat Dec 3 13:27:35 2016 KojiUpdateIOOIMC WFS Demod board measurement & analysis
ELOG of the work on Thursday
Gautam suggested looking at the preamplifier noise by shorting the input to the first stage. I thought it was a great idea.
To my surprise, the noise of the 2nd stage was really high compared to the model. I proceeded to investigate what was wrong.
It turned out that the resistors used in this sallen-key LPF were thick film resistors. I swapped them with thin film resistors and this gave the huge improvement of the preamplifier noise in the low frequency band.
Attachment 1 shows the summary of the results. Previously the input referred noise of the preamp was the curve in red. We the resistors replaced, it became the curve in magenta, which is pretty close to the expected noise level by LISO model above 3Hz (dashed curves). Unfortunately, the output of the unit with the demodulator connected showed no improvement (blue vs green), because the output is still limited by the demodulator noise. There were harmonic noise peaks at n x 10Hz before the resistor replacement. I wonder if this modification also removed the harmonic noise seen in the CDS signals. I will check this next week.
Attachment 2 shows the current schematic diagram of the demodulator board. The Q of the sallen key filter was adjusted by the gain to have 0.7 (butter worth). We can adjust the Q by the ratio of the capacitance. We can short 3.83K and remove 6.65K next to it. And use 22nF and 47nF for the capacitors at the positive input and the feedback, respectively. This reduces the number of the resistors.
Attachment 1: WFS_demod_noise.pdf
Attachment 2: demod.pdf
12668 Tue Dec 6 13:37:02 2016 KojiUpdateIOOIMC WFS Demod board measurement & analysis
I have implemented the modification to the demod boards (Attachment 1).
Now, I am looking at the noise in the whitening board. Attachment 2 shows the comparison of the error signal with the input of the whitening filter shorted and with the 50ohm terminator on the demodulator board. The message is that the whitening filter dominates the noise below 3Hz.
I am looking at the schematic of the whitening board D990196-B. It has an VGA AD602 at the input. I could not find the gain setting for this chip.
If the gain input is fixed at 0V, AD602 has the gain of 10dB. The later stages are the filters. I presume they have the thick film resistors.
Then they may also cause the noise. Not sure which is the case yet.
Also it seems that 0.7Hz noise is still present. We can say that this is coming from the demod board but not on the work bench but in the eurocard crate.
Attachment 1: demod.pdf
Attachment 2: WFS_error_noise.pdf
12748 Tue Jan 24 01:04:16 2017 gautamSummaryIOOIMC WFS RF power levels
Summary:
I got around to doing this measurement today, using a minicircuits bi-directional coupler (ZFBDC20-61-HP-S+), along with some SMA-LEMO cables.
• With the IMC "well aligned" (MC transmission maximized, WFS control signals ~0), the RF power per quadrant into the Demod board is of the order of tens of pW up to a 100pW.
• With MC1 misaligned such that the MC transmission dropped by ~10%, the power per quadrant into the demod board is of the order of hundreds of pW.
• In both cases, the peak at 29.5MHz was well above the analyzer noise floor (>20dB for the smaller RF signals), which was all that was visible in the 1MHz span centered around 29.5 MHz (except for the side-lobes described later).
• There is anomalously large reflection from Quadrant 2 input to the Demod board for both WFS
• The LO levels are ~-12dBm, ~2dBm lower than the 10dBm that I gather is the recommended level from the AD831 datasheet
Quote: We should insert a bi-directional coupler (if we can find some LEMO to SMA converters) and find out how much actual RF is getting into the demod board.
Details:
I first aligned the mode cleaner, and offloaded the DC offsets from the WFS servos.
The bi-directional coupler has 4 ports: Input, Output, Coupled forward RF and Coupled Reverse RF. I connected the LEMO going to the input of the Demod board to the Input, and connected the output of the coupler to the Demod board (via some SMA-LEMO adaptor cables). The two (20dB) coupled ports were connected to the Agilent spectrum analyzer, which have input impedance 50ohms and hence should be impedance matched to the coupled outputs. I set the analyzer to span 1MHz (29-30MHz), IF BW 30Hz, 0dB input attenuation. It was not necessary to turn on averaging to resolve the peaks at ~29.5MHz since the IF bandwidth was fine enough.
I took two sets of measurements, one with the IMC well aligned (I maximized the MC Trans as best as I could to ~15,000 cts), and one with a macroscopic misalignment to MC1 such that the MC Trans fell to 90% of its usual value (~13,500 cts). The peak function on the analyzer was used to read off the peak height in dBm. I then converted this to RF power, which is summarized in the table below. I did not account for the main line loss of the coupler, but according to the datasheet, the maximum value is 0.25dB so there numbers should be accurate to ~10% (so I'm really quoting more S.Fs than I should be).
WFS Quadrant Pin (pW) Preflected(pW) Pin-demod board (pW)
## IMC well aligned
1 1 50.1 12.6 37.5
2 20.0 199.5 -179.6
3 28.2 10.0 18.2
4 70.8 5.0
65.8
2 5 100 19.6 80.0
6 56.2 158.5 -102.3
7 125.9 6.3 11.5
8 17.8 6.3
119.6
WFS Quadrant Pin (pW) Preflected(pW) Pin-demod board (pW)
## MC1 Misaligned
1 1 501.2 5.0 496.2
2 630.6 208.9 422
3 871.0 5.0 866
4 407.4 16.6
190.8
2 5 407.4 28.2 379.2
6 316.2 141.3 175.0
7 199.5 15.8 183.7
8 446.7 10.0 436.7
For the well aligned measurement, there was ~0.4mW incident on WFS1, and ~0.3mW incident on WFS2 (measured with Ophir power meter, filter out).
I am not sure how to interpret the numbers for quadrants #2 and #6 in the first table, where the reverse coupled RF power was greater than the forward coupled RF power. But this measurement was repeatable, and even in the second table, the reverse coupled power from these quadrants are more than 10x the other quadrants. The peaks were also well above (>10dBm) the analyzer noise floor
I haven't gone through the full misalginment -> Power coupled to TEM10 mode algebra to see if these numbers make sense, but assuming a photodetector responsivity of 0.8A/W, the product (P1P2) of the powers of the beating modes works out to ~tens of pW (for the IMC well aligned case), which seems reasonable as something like P1~10uW, P2 ~ 5uW would lead to P1P2~50pW. This discussion was based on me wrongly looking at numbers for the aLIGO WFS heads, and Koji pointed out that we have a much older generation here. I will try and find numbers for the version we have and update this discussion.
Misc:
1. For the sake of completeness, the LO levels are ~ -12.1dBm for both WFS demod boards (reflected coupling was negligible)
2. In the input signal coupled spectrum, there were side lobes (about 10dB lower than the central peak) at 29.44875 MHz and 29.52125 MHz (central peak at 29.485MHz) for all of the quadrants. These were not seen for the LO spectra.
3. Attached is a plot of the OSEM sensor signals during the time I misaligned MC1 (in both pitch and yaw approximately by equal amounts). Assuming 2V/mm for the OSEM calibration, the approximate misalignment was by ~10urad in each direction.
4. No IMC suspension glitching the whole time I was working today
Attachment 1: MC1_misalignment.png
12759 Fri Jan 27 00:14:02 2017 gautamSummaryIOOIMC WFS RF power levels
It was raised at the Wednesday meeting that I did not check the RF pickup levels while measuring the RF error signal levels into the Demod board. So I closed the PSL shutter, and re-did the measurement with the same measurement scheme. The detailed power levels (with no light incident on the WFS, so all RF pickup) is reported in the table below.
IMC WFS RF Pickup levels @ 29.5MHz
1 1 0.21 10.
2 1.41 148
3 0.71 7.1
4 0.16 3.6
2 1 0.16 10.5
2 1.48 166
3 0.81 5.1
4 0.56 0.33
These numbers can be subtracted from the corresponding columns in the previous elog to get a more accurate estimate of the true RF error signal levels. Note that the abnormal behaviour of Quadrant #2 on both WFS demod boards persists.
14709 Sun Jun 30 19:47:09 2019 ranaUpdateIOOIMC WFS agenda
we are thinking of doing a sprucing up of the input mode cleaner WFS (sensors + electronics + feedback loops)
1. it has been known since ~2002 that the RF circuits in the heads oscillate.
2. in the attached PDF you can see that 2 opamps (U3 & U4; MAX4106) are used to amplify the tank circuit made up of the photodiode capacitance and L6.
3. due to poor PCB layout (the output of U4 runs close to the input of U3) the opamps oscillate if the if the Reed relay (RY2) is left open (not attenuating)
4. we need to remove/disable the relay
5. also remove U3 for each quadrant so that it has a fixed gain of (TBD) and a 50 Ohm output
6. also check that all the resonances are tuned to 1f, 2f, & 3f respectively
2. Demod boards
4. Whitening
5. Noise budget of sensors, including electronics chain
6. diagonalization of sensors / actuators
7. Requirements -
8. Optical Layout
9. What does the future hold ?
1. what is our preferred pin-for-pin replacement for the MAX4106/MAX4107? internet suggests AD9632. Anyone have any experience with it? The Rabbott uses LMH6642 in the aLIGO WFSs. It has a lower slew rate than 9632, but they both have the same distortion of ~ -60 dB for 29.5 MHz.
2. the whole DC current readout is weird. Should have a load resistor and go into the + input of the opamp, so as to decouple it from the RF stuff. Also why such a fast part? Should have used a OP27 equivalent or LT1124.
3. LEMO connectors for RF are bad. Wonder if we could remove them and put SMA panel mount on there.
4. as usual, makes me feel like replacing with better heads...and downstream electronics...
15747 Sun Jan 3 16:26:06 2021 KojiUpdateSUSIMC WFS check (Yet another round of Sat. Box. switcharoo)
I wanted to check the functionality of the IMC WFS. I just turned on the WFS servo loops as they were. For the past two hours, they didn't run away. The servo has been left turned on. I don't think there is no reason to keep it turned off.
Attachment 1: Screen_Shot_2021-01-03_at_17.14.57.png
10728 Thu Nov 20 22:43:15 2014 KojiUpdateIOOIMC WFS damping gain adjustment
From the measured OLTF, the dynamics of the damped suspension was inferred by calculating H_damped = H_pend / (1+OLTF).
Here H_pend is a pendulum transfer function. For simplicity, the DC gain of the unity is used. The resonant frequency of the mode
is estimated from the OLTF measurement. Because of inprecise resonant frequency for each mode, calculated damped pendulum
has glitches at the resonant frequency. In fact measurement of the OLTF at the resonant freq was not precise (of course). We can
just ignore this glitchiness (numerically I don't know how to do it particularly when the residual Q is high).
Here is my recommended values to have the residual Q of 3~5 for each mode.
MC1 SUS POS current 75 -> x3 = 225 MC1 SUS PIT current 7.5 -> x2 = 22.5 MC1 SUS YAW current 11 -> x2 = 22 MC1 SUS SD current 300 -> x2 = 600
MC2 SUS POS current 75 -> x3 = 225 MC2 SUS PIT current 20 -> x0.5 = 10 MC2 SUS YAW current 8 -> x1.5 = 12 MC2 SUS SD current 300 -> x2 = 600
MC3 SUS POS current 95 -> x3 = 300 MC3 SUS PIT current 9 -> x1.5 = 13.5 MC3 SUS YAW current 6 -> x1.5 = 9 MC3 SUS SD current 250 -> x3 = 750
This is the current setting in the end.
MC1 SUS POS 150 MC1 SUS PIT 15 MC1 SUS YAW 15 MC1 SUS SD 450
MC2 SUS POS 150 MC2 SUS PIT 10 MC2 SUS YAW 10 MC2 SUS SD 450
MC3 SUS POS 200 MC3 SUS PIT 12 MC3 SUS YAW 8 MC3 SUS SD 500
Attachment 1: MC_OLTF_CLTF.pdf
10561 Thu Oct 2 20:54:45 2014 KojiUpdateIOOIMC WFS measurements
[Eric Koji]
We made sensing matrix measurements for the IMC WFS and the MC2 QPD.
The data is under further analysis but here is some record of the current state to show
IMC Trans RIN and the ASC error signals with/without IMC ASC loops
The measureents were done automatically running DTT. This can be done by
/users/Templates/MC/wfsTFs/run_measurements
The analysis is in preparation so that it provides us a diagnostic report in a PDF file.
Attachment 1: IMC_RIN_141002.pdf
Attachment 2: IMC_WFS_141002.pdf
10564 Fri Oct 3 13:03:05 2014 ericqUpdateIOOIMC WFS measurements
Yesterday, Koji and I measured the transfer function of pitch and yaw excitations of each MC mirror, directly to each quadrant of each WFS QPD.
When I last touched the WFS settings, I only used MC2 excitations to set the individual quadrant demodulation phases, but Koji pointed out that this could be incomplete, since motion of the curved MC2 mirror is qualitatively different than motion of the flat 1&3.
We set up a DTT file with twenty TFs (the excitation to I & Q of each WFS quadrant, and the MC2 trans quadrants), and then used some perl find and replace magic to create an xml file for each excitation. These are the files called by the measurement script Koji wrote.
I then wrote a MATLAB script that uses the magical new dttData function Koji and Nic have created, to extract the TF data at the excitation frequency, and build up the sensing elements. I broke the measurements down by detector and excitation coordinate (pitch or yaw).
The amplitudes of the sensing elements in the following plots are normalized to the single largest response of any of the QPD's quadrants to an excitation in the given coordinate, the angles are unchanged. From this, we should be able to read off the proper digital demodulation angles for each segment, confirm the signs of their combinations for pitch and yaw, and construct the sensing matrix elements of the properly rotated signals.
The axes of each quadrant look consistent across mirrors, which is good, as it nails down the proper demod angle.
The xml files and matlab script used to generate these plots is attached. (It requires the dttData functions however, which are in the svn (and the dttData functions require a MATLAB newer than 2012b))
Attachment 5: analyzeWfs.zip
10565 Sun Oct 5 10:09:49 2014 ranaUpdateIOOIMC WFS measurements
It seems clever, but I wonder why use DTT and command line perl, instead of using the FE lockins or just demod the offline data or all of the other sensing matrix scripts made for the LSC (at 40m) or ASC (at LLO) ?
10566 Sun Oct 5 23:43:08 2014 KojiUpdateIOOIMC WFS measurements
There are several non scientific reasons.
16108 Mon May 3 09:14:01 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise
Lock ARMs
• Try IFO Configure ! Restore Y Arm (POY) and saw XARM lock, not YARM. Looks like YARM biases on ITMY and ETMY are not optimal, so we slide C1:SUS-ETMY_OFF from 3.0 --> -14.0 and watch Y catch its lock.
• Run ASS scripts for both arms and get TRY/TRX ~ 0.95
• We ran X, then Y and noted that TRX dropped to ~0.8 so we ran it again and it was well after that. From now on, we will do Y, then X.
WFS1 noise injection
• Turn WFS limits off by running switchOffWFSlims.sh
• Inject broadband noise (80-90 Hz band) of varying amplitudes from 100 - 100000 counts on C1:IOO-WFS1_PIT_EXC
• After this we try to track its propagation through various channels, starting with
• C1:LSC-XARM_IN1_DQ / C1:LSC-YARM_IN1_DQ
• C1:SUS-ETMX_LSC_OUT_DQ / C1:SUS-ETMY_LSC_OUT_DQ
• C1:IOO-MC_F_DQ
• C1:SUS-MC1_**COIL_OUT / C1:SUS-MC2_**COIL_OUT / C1:SUS-MC3_**COIL_OUT
• C1:IOO-WFS1_PIT_ERR / C1:IOO-WFS1_YAW_ERR
• C1:IOO-WFS1_PIT_IN2
** denotes [UL, UR, LL, LR]; the output coils.
• Attachment 1 shows the power spectra with IMC unlocked
• Attachment 2 shows the power spectra with the ARMs (and IMC) locked
Attachment 1: WFS1_PIT_Noise_Inj_Test_IMC_unlocked.pdf
Attachment 2: WFS1_PIT_Noise_Inj_Test_ARM_locked.pdf
16112 Mon May 3 17:28:58 2021 Anchal, Paco, RanaUpdateLSCIMC WFS noise contribution in arm cavity length noise
Rana came and helped us figure us where to inject the noise. Following are the characteristics of the test we did:
• Inject normal noise at C1:IOO-MC1_PIT_EXC using AWGGUI.
• Excitation amplitude of 54321 in band 12-37Hz with Cheby1 8th order bandpass filter with same limits.
• Look at power spectrum of C1:IOO-MC_F_DQ, C1:IOO-WFS1-PIT_OUT_DQ and the C1:IOO-MC1_PIT_EXC itself.
• Increased the gain of the noise excitation until we see some effect in MC_F.
• Diaggui also showed coherence plot in the bottom, which let's us have an estimate of how much we need to go further.
Attachment 1 shows a screenshot with awggui and diaggui screens displaying the signal in both angular and longitudinal channels.
Attachment 2 shows the analogous screenshot for MC2.
Attachment 1: excitationoftheMCanglessothatwecanseesomethingdotpng.png
Attachment 2: excitationoftheMCanglessothatwecanseesomethingdotpngbutthistimeitsMC2.png
16117 Tue May 4 11:43:09 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise
We redid the WFS noise injection test and have compiled some results on noise contribution in arm cavity noise and IMC frequency noise due to angular noise of IMC.
Attachment 1: Shows the calibrated noise contribution from MC1 ASCPIT OUT to ARM cavity length noise and IMC frequency noise.
• For calibrating the cavity length noise signals, we sent 100 cts 100Hz sine excitation to ITMX/Y_LSC_EXC, used actuator calibration for them as 2.44 nm/cts from 13984, and measured the peak at 100 hz in time series data. We got calibration factors: ETMX-LSC_OUT: 60.93 pm/cts , and ETMY-LSC_OUT: 205.0 pm/cts.
• For converting IMC frequency noise to length noise, we used conversion factor given by $\lambda L / c$ where L is 37.79m and lambda is wavelength of light.
• For converting MC1 ASCPIT OUT cts data to frequency noise contributed to IMC, we sent 100,000 amplitude bandlimited noise (see attachment 3 for awggui config) from 25 Hz to 30 Hz at C1:IOO-MC1_PIT_EXC. This noise was seen at both MC_F and ETMX/Y_LSC_OUT channels. We used the noise level at 29 Hz to get a calibration for MC1_ASCPIT_OUT to IMC Frequency in Hz/cts. See Attachment 2 for the diaggui plots.
• Once we got the calibration above, we measured MC1_ASCPIT_OUT power spectrum without any excitaiton and multiplied it with the calibration factor.
• However, something must be wrong because the MC_F noise in length units is coming to be higher than cavity length noise in most of the frequency band.
• It can be due to the fact that control signal power spectrum is not exactly cavity length noise at all frequencies. That should be only above the UGF of the control loop (we plan to measure that in afternoon).
• Our calibration for ETMX/Y_LSC_OUT might be wrong.
Attachment 1: ArmCavNoiseContributions.pdf
Attachment 2: IOO-MC1_PIT_NoiseInjTest2.pdf
Attachment 3: IOO-MC1_PIT_NoiseInjTest_AWGGUI_Config.png
16127 Fri May 7 11:54:02 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise
We today measured the calibration factors for XARM_OUT and YARM_OUT in nm/cts and replotted our results from 16117 with the correct frequency dependence.
Calibration of XARM_OUT and YARM_OUT
• We took transfer function measurement between ITMX/Y_LSC_OUT and X/YARM_OUT. See attachment 1 and 2
• For ITMX/Y_LSC_OUT we took calibration factor of 3*2.44/f2 nm/cts from 13984. Note that we used the factor of 3 here as Gautum has explicitly written that the calibration cts are DAC cts at COIL outputs and there is a digital gain of 3 applied at all coil output gains in ITMX and ITMY that we confirmed.
• This gave us callibration factors of XARM_OUT: 1.724/f2 nm/cts , and YARM_OUT: 4.901/f2 nm/cts. Note the frrequency dependence here.
• We used the region from 70-80 Hz for calculating the calibration factor as it showed the most coherence in measurement.
Inferring noise contributions to arm cavities:
• For converting IMC frequency noise to length noise, we used conversion factor given by $\lambda L / c$ where L is 37.79m and lambda is wavelength of light.
• For converting MC1 ASCPIT OUT cts data to frequency noise contributed to IMC, we sent 100,000 amplitude bandlimited noise from 25 Hz to 30 Hz at C1:IOO-MC1_PIT_EXC. This noise was seen at both MC_F and ETMX/Y_LSC_OUT channels. We used the noise level at 29 Hz to get a calibration for MC1_ASCPIT_OUT to IMC Frequency in Hz/cts. This measurement was done in 16117.
• Once we got the calibration above, we measured MC1_ASCPIT_OUT power spectrum without any excitaiton and multiplied it with the calibration factor.
• Attachment 3 is our main result.
• Page 1 shows the calculation of Angle to Length coupling by reading off noise injects in MC1_ASCPIT_OUT in MC_F. This came out to 10.906/f2 kHz/cts.
• Page 2-3 show the injected noise in X arm cavity length units. Page 3 is the zoomed version to show the matching of the 2 different routes of calibration.
• BUT, we needed to remove that factor of 3 we incorporated earlier to make them match.
• Page 4 shows the noise contribution of IMC angular noise in XARM cavity.
• Page 5-6 is similar to 2-3 but for YARM. The red note above applied here too! So the factor of 3 needed to be removed in both places.
• Page 7 shows the noise contribution of IMC angular noise in XARM cavity.
### Conclusions:
• IMC Angular noise contribution to arm cavities is atleast 3 orders of magnitude lower then total armc cavity noise measured.
Edit Mon May 10 18:31:52 2021
See corrections in 16129.
Attachment 1: ITMX-XARM_TF.pdf
Attachment 2: ITMY-YARM_TF.pdf
Attachment 3: ArmCavNoiseContributions.pdf
16129 Mon May 10 18:19:12 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise, Corrections
A few corrections to last analysis:
• The first plot was not IMC frequency noise but actually MC_F noise budget.
• MC_F is frequency noise in the IMC FSS loop just before the error point where IMC length and laser frequency is compared.
• So, MC_F (in high loop gain frequency region upto 10kHz) is simply the quadrature noise sum of free running laser noise and IMC length noise.
• Between 1Hz to 100 Hz, normally MC_F is dominated by free running laser noise but when we injected enough angular noise in WFS loops, due to Angle to length coupling, it made IMC length noise large enough in 25-30 Hz band that we started seeing a bump in MC_F.
• So this bump in MC_F is mostly the noise due to Angle to length coupling and hence can be used to calculate how much Angular noise normally goes into length noise.
• In the remaining plots, MC_F was plotted with conversion into arm length units but this was wrong. MC_F gets suppressed by IMC FSS open loop gain before reaching to arm cavities and hence is hardly present there.
• The IMC length noise however is not suppresed until after the error point in the loop. So the length noise (in units of Hz calculated in the first step above) travels through the arm cavity loop.
• We already measured the transfer function from ITMX length actuation to XARM OUT, so we know how this length noise shows up at XARM OUT.
• So in the remaining plots, we plot contribution of IMC angular noise in the arm cavities. Note that the factor of 3 business still needed to be done to match the appearance of noise in XARM_OUT and YARM_OUT signal from the IMC angular noise injection.
• I'll post a clean loop diagram soon to make this loopology clearer.
Attachment 1: ArmCavNoiseContributions.pdf
14092 Fri Jul 20 22:51:28 2018 KojiUpdateIOOIMC WFS path alignment
IMC WFS tuning
- IMC was aligned manually to have maximum output and also spot at the center of the end QPD.
- The IMC WFS spots were aligned to be the center of the WFS QPDs.
- With the good alignment, WFS RF offset and MC2 QPD offsets were tuned via the scripts.
10646 Tue Oct 28 14:07:28 2014 KojiUpdateIOOIMC WFS sensing matrix measurement
Last night the sensing matrix for IMC WFS&QPD were measured.
C1:IOO-MC(1, 2, 3)_(ASCPIT, ASCYAW)_EXC were excited at 5.01Hz with 100 count
The output of the WFS1/WFS2/QPD were measured. They all looked well responding
i.e. Pitch motion shows pitch error signals, Yaw motion shows yaw error signals.
The below is the transfer function from each suspension to the error signals
MC1P MC2P MC3P -3.16e-4 1.14e-2 4.62e-3 -> WFS1P 5.43e-3 8.22e-3 -2.79e-3 -> WFS2P -4.03e-5 -3.98e-5 -3.94e-5 -> QPDP
MC1Y MC2Y MC3Y -6.17e-4 6.03e-4 1.45e-4 -> WFS1Y -2.43e-4 4.57e-3 -2.16e-3 -> WFS2Y 7.08e-7 2.40e-6 1.32e-6 -> QPDY
Taking the inverse of these matrices, the scale was adjusted so that the dc response.
Attachment 1: 00.png
10647 Tue Oct 28 15:27:25 2014 ericqUpdateIOOIMC WFS sensing matrix measurement
I took some spectra of the error signals and MC2 Trans RIN with the loops off (blue) and on (red) during the current conditions of daytime seismic noise.
10648 Tue Oct 28 20:47:08 2014 diegoUpdateIOOIMC WFS sensing matrix measurement
Today I started looking into the WFS problem and improvement, after being briefed by Koji and Nicholas. I started taking some measurements of open loop transfer functions for both PIT and YAW for WFS1, WFS2 and MC2_TRANS. For both WFS1 and 2 there is a peak in close proximity of the region with gain>1, and the phase margin is not very high. Tomorrow I will make measurements of the local damping open loop transfer functions, then we'll think how to improve the sensors' behaviour.
Attachment 1: 141028_MCWFS_WFS1_PIT_OL.pdf
Attachment 2: 141028_MCWFS_WFS1_YAW_OL.pdf
Attachment 3: 141028_MCWFS_WFS2_PIT_OL.pdf
Attachment 4: 141028_MCWFS_WFS2_YAW_OL.pdf
Attachment 5: 141028_MCWFS_MC2_TRANS_PIT_OL.pdf
Attachment 6: 141028_MCWFS_MC2_TRANS_YAW_OL.pdf
10653 Thu Oct 30 02:12:59 2014 diegoUpdateIOOIMC WFS sensing matrix measurement
[Diego,Koji]
Today we took some measurements of transfer functions and power spectra of suspensions of the MC* mirrors (open loop), for all the DOFs (PIT, POS, SIDE, YAW); the purpose is to evaluate the Q factor of the resonances and then improve the local damping system.
Attachment 1: MC1_OL_PIT.pdf
Attachment 2: MC1_OL_POS.pdf
Attachment 3: MC1_OL_SIDE.pdf
Attachment 4: MC1_OL_YAW.pdf
Attachment 5: MC2_OL_PIT.pdf
Attachment 6: MC2_OL_POS.pdf
Attachment 7: MC2_OL_SIDE.pdf
Attachment 8: MC2_OL_YAW.pdf
Attachment 9: MC3_OL_PIT.pdf
Attachment 10: MC3_OL_POS.pdf
Attachment 11: MC3_OL_SIDE.pdf
Attachment 12: MC3_OL_YAW.pdf
15165 Tue Jan 28 16:01:17 2020 gautamUpdateIOOIMC WFS servos stable again
With all of the shaking (man-made and divine), it was a hard to debug this problem. Summary of fixes:
1. The beam was misaligned on the WFS 1 and 2 heads, as well as the MC2 trans QPD. I re-aligned the former with the IMC unlocked, the latter (see Attachment) with the IMC locked (but the MC2 spot centering loops disabled).
2. I reset the WFS DC and RF offsets, as well as the QPD offsets (once I had hand-aligned the IMC mirrors to obtain good transmission).
At least the DC indicators are telling me that the IMC locking is back to a somewhat stable state. I have not yet checked the frequency noise / RIN.
Attachment 1: QPD_recenter.png
15170 Tue Jan 28 20:51:37 2020 YehonathanUpdateIOOIMC WFS servos stable again
I resume my IMC ringdown activities now that the IMC is aligned again.
To avoid any accidental misalignments Gautam turned off all the inputs to the WFS servo.
I set up a PD and a lens as in attachment 1 (following Gautam's setup).
I connect the REFL, TRANS and INPut PDs to the oscilloscope.
I connect a Siglent function generator to the AOM driver. I try to shut off the light to the IMC using 1V DC waveform and pressing the output button manually. However, it produced heavily distorted step function in the PMC trans PD.
I use a square wave with a frequency of 20mHz instead with an amplitude of 0.5V offset of 0.25V and dutycycle of 1% so there will be minimal wasted time in the off state. I get nice ringdowns (attachment 2) - forgot to take pictures. The autolocker slightly misaligns the M2 every time it is acting, so I manually align it everytime the IMC gets unlocked.
Data analysis will come later.
I remove the PD and lens and reenable the WFS servo inputs. The IMC locks easily. The WFS outputs are very different than 0 now though.
12680 Wed Dec 21 21:03:06 2016 KojiSummaryIOOIMC WFS tuning
- Updated the circuit diagrams:
IMC WFS Demodulator Board, Rev. 40m https://dcc.ligo.org/LIGO-D1600503
IMC WFS Whitening Board, Rev. 40m https://dcc.ligo.org/LIGO-D1600504
- Measured the noise levels of the whitening board, demodboard, and nominal free running WFS signals.
- IMC WFS demod phases for 8ch adjusted
Injected an IMC PDH error point offset (@1kHz, 10mV, 10dB gain) and adjusted the phase to have no signal in the Q phase signals.
- The WFS2 PITCH/YAW matrix was fixed
It was found that the WFS heads were rotated by 45 deg (->OK) in CW and CCW for WFS1 and 2, respectively (oh!), while the input matrices were identical! This made the pitch and yaw swapped for WFS2. (See attachment)
- Measured the TFs MC1/2/3 P/Y actuation to the error signals
Attachment 1: DSC_0142.JPG
12682 Thu Dec 22 18:39:09 2016 KojiSummaryIOOIMC WFS tuning
Noise analysis of the WFS error signals.
Attachment 1: All error signals compared with the noise contribution measured with the RF inputs or the whitening inputs terminated.
Attachment 2: Same plot for all the 16 channels. The first plot (WFS1 I1) shows the comparison of the current noise contributions and the original noise level measured with the RF terminated with the gain adjusted along with the circuit modification for the fair comparison. This plot is telling us that the electronics noise was really close to the error signal.
I wonder if we have the calibration of the IMC suspensions somewhere so that I can convert these plots in to rad/sqrtHz...?
Attachment 1: WFS_error_noise.pdf
Attachment 2: WFS_error_noise_chans.pdf
ELOG V3.1.3- | 2022-09-27 08:17:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6554176211357117, "perplexity": 4325.95279199106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00786.warc.gz"} |
https://www.esaral.com/q/the-following-system-of-linear-equations-57127/ | The following system of linear equations
Question:
The following system of linear equations
$3 x+3 y+2 z=9$
$3 x+2 y+2 z=9$
$x-y+4 z=8$
1. (1) does not have any solution
2. (2) has a unique solution
3. (3) has a solution $(\alpha, \beta, \gamma)$ satisfying $\alpha+\beta^{2}+\gamma^{3}=12$
4. (4) has infinitely many solutions
Correct Option: , 2
Solution:
$\Delta=\left|\begin{array}{ccc}2 & 3 & 2 \\ 3 & 2 & 2 \\ 1 & -1 & 4\end{array}\right|=-20 \neq 0 \quad \therefore$ unique solution
$\Delta_{x}=\left|\begin{array}{ccc}9 & 3 & 2 \\ 9 & 2 & 2 \\ 8 & -1 & 4\end{array}\right|=0$
$\Delta_{y}=\left|\begin{array}{lll}2 & 9 & 2 \\ 3 & 9 & 2 \\ 1 & 8 & 4\end{array}\right|=-20$
$\left|\begin{array}{ccc}2 & 3 & 9 \\ 3 & 2 & 9 \\ 1 & -1 & 8\end{array}\right|=-40$
$\therefore \quad x=\frac{\Delta_{\mathrm{X}}}{\Delta}=0$
$y=\frac{\Delta_{y}}{\Delta}=1$
$z=\frac{\Delta_{x}}{\Delta}=2$
Unique solution: $(0,1,2)$ | 2022-08-18 07:00:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.94447261095047, "perplexity": 2186.3356760207607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00304.warc.gz"} |
https://zbmath.org/authors/?q=ai%3Aberetta.edoardo | # zbMATH — the first resource for mathematics
## Beretta, Edoardo
Compute Distance To:
Author ID: beretta.edoardo Published as: Beretta, E.; Beretta, Edoardo
Documents Indexed: 74 Publications since 1979
all top 5
#### Co-Authors
2 single-authored 24 Takeuchi, Yasuhiro 20 Solimano, Fortunata 8 Kuang, Yang 6 Ma, Wanbiao 4 Fasano, Antonio 4 Lazzari, Claudio 3 Bischi, Gian-Italo 3 Breda, Dimitri 3 Capasso, Vincenzo 3 Hara, Tadayuki 3 Liu, Shengqiang 3 Tang, Yanbin 2 Carletti, Margherita 2 Fergola, Paolo 2 Hosono, Yuzo 2 Kolmanovskii, Vladimir Borisovich 2 Kon, Ryusuke 2 Vetrano, Flavio 1 An, Qi 1 Cerasuolo, Marianna 1 Chattopadhyay, Joydev 1 Garao, Dario G. 1 Garao, Davide G. 1 Harel-Bellan, Annick 1 Huang, Gang 1 Kirschner, Denise E. 1 Marino, Simeone 1 Morozova, Nadya 1 Sakabira, Hirotatsu 1 Sakakibara, Hirotatsu 1 Shaikhet, Leonid Efimovich 1 Tenneriello, Catello 1 Wang, Chuncheng 1 Wang, Hao
all top 5
#### Serials
6 Journal of Mathematical Biology 5 Mathematical Biosciences 5 Bulletin of Mathematical Biology 4 RIMS Kokyuroku 4 Nonlinear Analysis. Real World Applications 4 Mathematical Biosciences and Engineering 3 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 3 SIAM Journal on Applied Mathematics 3 The Canadian Applied Mathematics Quarterly 2 Journal of Mathematical Analysis and Applications 2 Differential Equations and Dynamical Systems 2 Nonlinear Analysis. Theory, Methods & Applications 1 Mathematical Methods in the Applied Sciences 1 Funkcialaj Ekvacioj. Serio Internacia 1 Journal of Computational and Applied Mathematics 1 Journal of Differential Equations 1 Mathematics and Computers in Simulation 1 Tohoku Mathematical Journal. Second Series 1 Acta Applicandae Mathematicae 1 Surveys on Mathematics for Industry 1 SIAM Journal on Mathematical Analysis 1 Bollettino della Unione Matemàtica Italiana. Supplemento 1 Nonlinear World 1 Dynamics of Continuous, Discrete and Impulsive Systems 1 Communications in Applied Analysis 1 Vietnam Journal of Mathematics 1 Scientiae Mathematicae Japonicae 1 Discrete and Continuous Dynamical Systems. Series B
all top 5
#### Fields
65 Biology and other natural sciences (92-XX) 54 Ordinary differential equations (34-XX) 10 Integral equations (45-XX) 3 Combinatorics (05-XX) 3 Numerical analysis (65-XX) 3 Systems theory; control (93-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Information and communication theory, circuits (94-XX) 1 Partial differential equations (35-XX) 1 Difference and functional equations (39-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Classical thermodynamics, heat transfer (80-XX)
#### Citations contained in zbMATH Open
56 Publications have been cited 1,917 times in 1,347 Documents Cited by Year
Global qualitative analysis of a ratio-dependent predator-prey system. Zbl 0895.92032
Kuang, Yang; Beretta, Edoardo
1998
Geometric stability switch criteria in delay differential systems with delay dependent parameters. Zbl 1013.92034
Beretta, Edoardo; Kuang, Yang
2002
Global analysis in some delayed ratio-dependent predator-prey systems. Zbl 0946.34061
Beretta, Edoardo; Kuang, Yang
1998
Stability of epidemic model with time delays influenced by stochastic perturbations. Zbl 1017.92504
Beretta, Edoardo; Kolmanovskii, Vladimir; Shaikhet, Leonid
1998
Global stability of an SIR epidemic model with time delays. Zbl 0811.92019
Beretta, Edoardo; Takeuchi, Yasuhiro
1995
Global asymptotic stability of an $$SIR$$ epidemic model with distributed time delay. Zbl 1042.34585
Beretta, Edoardo; Hara, Tadayuki; Ma, Wanbiao; Takeuchi, Yasuhiro
2001
Global asymptotic properties of a delay SIR epidemic model with finite incubation times. Zbl 0967.34070
Takeuchi, Yasuhiro; Ma, Wanbiao; Beretta, Edoardo
2000
Global asymptotic stability of Lotka-Volterra diffusion models with continuous time delay. Zbl 0661.92018
Beretta, E.; Takeuchi, Y.
1988
A stage-structured predator-prey model of Beddington-DeAngelis type. Zbl 1110.34059
Liu, Shengqiang; Beretta, Edoardo
2006
Convergence results in a well-known delayed predator-prey system. Zbl 0876.92021
Beretta, Edoardo; Kuang, Yang
1996
Modeling and analysis of a marine bacteriophage infection. Zbl 0946.92012
Beretta, Edoardo; Kuang, Yang
1998
Global stability of single-species diffusion Volterra models with continuous time delays. Zbl 0627.92021
Beretta, E.; Takeuchi, Y.
1987
Stability in chemostat equations with delayed recycling. Zbl 0665.45006
Beretta, E.; Bischi, G. I.; Solimano, F.
1990
Convergence results in $$SIR$$ epidemic models with varying population sizes. Zbl 0879.34054
Beretta, Edoardo; Takeuchi, Yasuhiro
1997
Global stability and periodic orbits for two-patch predator-prey diffusion-delay models. Zbl 0634.92017
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
1987
Permanence of an SIR epidemic model with distributed time delays. Zbl 1014.92033
Ma, Wanbiao; Takeuchi, Yasuhiro; Hara, Tadayuki; Beretta, Edoardo
2002
Modeling and analysis of a marine bacteriophage infection with latency period. Zbl 1015.92049
Beretta, Edoardo; Kuang, Yang
2001
Qualitative properties of chemostat equations with time delays: Boundedness, local and global asymptotic stability. Zbl 0868.45002
Beretta, Edoardo; Takeuchi, Yasuhiro
1994
On the general structure of epidemic systems. Global asymptotic stability. Zbl 0622.92016
Beretta, E.; Capasso, V.
1986
Global stability results for a multigroup SIR epidemic model. Zbl 0684.92015
Beretta, E.; Capasso, V.
1988
Predator-prey model of Beddington-DeAngelis type with maturation and gestation delays. Zbl 1208.34124
Liu, Shengqiang; Beretta, Edoardo; Breda, Dimitri
2010
Global stability results for a generalized Lotka-Volterra system with distributed delays. Applications to predator-prey and epidemic systems. Zbl 0716.92020
Beretta, E.; Capasso, V.; Rinaldi, F.
1988
Qualitative properties of chemostat equations with time delays. II. Zbl 0995.34068
Beretta, Edoardo; Takeuchi, Yasuhiro
1994
Global stability for chemostat equations with delayed nutrient recycling. Zbl 0809.34084
Beretta, E.; Takeuchi, Y.
1994
Some new results on an allelopathic competition model with quorum sensing and delayed toxicant production. Zbl 1104.92059
Fergola, P.; Beretta, E.; Cerasuolo, M.
2006
On the effects of environmental fluctuations in a simple model of bacteria-bacteriophage infection. Zbl 1049.34521
Beretta, E.; Carletti, M.; Solimano, F.
2000
A generalization of Volterra models with continuous time delay in population dynamics: Boundedness and global asymptotic stability. Zbl 0659.92020
Beretta, E.; Solimano, F.
1988
Graph theoretical criteria for stability and boundedness of predator-prey systems. Zbl 0496.92012
Solimano, F.; Beretta, E.
1982
Negative criteria for the existence of periodic solutions in a class of delay-differential equations. Zbl 1087.34542
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
2002
An SEIR epidemic model with constant latency time and infectious period. Zbl 1259.34062
Beretta, Edoardo; Breda, Dmitri
2011
The role of delays in innate and adaptive immunity to intracellular bacterial infection. Zbl 1122.92035
Marino, Simeone; Beretta, Edoardo; Kirschner, Denise E.
2007
Nonexistence of periodic solutions in delayed Lotka–Volterra systems. Zbl 1091.34037
Beretta, Edoardo; Kon, Ryusuke; Takeuchi, Yasuhiro
2002
Analysis of a chemostat model for bacteria and virulent bacteriophage. Zbl 1028.34067
Beretta, Edoardo; Solimano, Fortunata; Tang, Yanbin
2002
Global stability in a well known delayed chemostat model. Zbl 1089.34546
Beretta, Edoardo; Kuang, Yang
2000
Global stability for epidemic model with constant latency and infectious periods. Zbl 1260.92056
Huang, Gang; Beretta, Edoardo; Takeuchi, Yasuhiro
2012
Competitive systems with stage structure of distributed-delay type. Zbl 1110.34053
Liu, Shengqiang; Beretta, Edoardo
2006
On an ODE from forced coating flow. Zbl 0919.34020
Beretta, E.; Hulshof, J.; Peletier, L. A.
1996
Existence of a globally asymptotically stable equilibrium in Volterra models with continuous time delay. Zbl 0523.92013
Solimano, F.; Beretta, E.
1983
Mathematical modelling of cancer stem cells population behavior. Zbl 1241.92035
Beretta, E.; Capasso, V.; Morozova, N.
2012
Extension of a geometric stability switch criterion. Zbl 1229.39003
Beretta, Edoardo; Tang, Yanbin
2003
Oscillations in a system with material cycling. Zbl 0713.92027
Beretta, E.; Bischi, G. I.; Solimano, F.
1988
Geometric stability switch criteria in delay differential equations with two delays and delay dependent parameters. Zbl 1414.34056
An, Qi; Beretta, Edoardo; Kuang, Yang; Wang, Chuncheng; Wang, Hao
2019
Numerical detection of instability regions for delay models with delay-dependent parameters. Zbl 1124.34053
Carletti, Margherita; Beretta, Edoardo
2007
A mathematical model for drug administration by using the phagocytosis of red blood cells. Zbl 0863.92005
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
1996
Ultimate boundedness for nonautonomous diffusive Lotka-Volterra patches. Zbl 0663.92016
Beretta, Edoardo; Fergola, Paolo; Tenneriello, Catello
1988
Discrete or distributed delay? Effects on stability of population growth. Zbl 1329.34127
Beretta, Edoardo; Breda, Dimitri
2016
Analysis of a chemostat model for bacteria and bacteriophage. Zbl 1023.92031
Beretta, Edoardo; Sakakibara, Hirotatsu; Takeuchi, Yasuhiro
2002
Stability analysis of the phytoplankton vertial steady states in a laboratory test tube. Zbl 0807.92018
Beretta, Edoardo; Fasano, Antonio; Hosono, Yuzo; Kolmanovskij, V. B.
1994
A homotopy technique for a linear generalization of Volterra models. Zbl 0669.92018
Beretta, Edoardo
1989
Erratum to: “A mathematical model for malaria transmission with asymptomatic carriers and two age groups in the human population”. Zbl 1405.92247
Beretta, Edoardo; Capasso, Vincenzo; Garao, Davide G.
2018
A mathematical model for malaria transmission with asymptomatic carriers and two age groups in the human population. Zbl 1392.92094
Beretta, Edoardo; Capasso, Vincenzo; Garao, Dario G.
2018
Some results on the population behavior of cancer stem cells. Zbl 1316.92032
Beretta, Edoardo; Morozova, Nadya; Capasso, Vincenzo; Harel-Bellan, Annick
2012
Stability analysis of time delayed chemostat models for bacteria virulent phage. Zbl 1142.92340
Beretta, Edoardo; Sakabira, Hirotatsu; Takeuchi, Yasuhiro
2003
Stability analysis of a Volterra predator-prey system with two delays. Zbl 1049.34094
Tang, Yanbin; Beretta, Edoardo; Solimano, Fortunata
2001
Mathematical model for the dynamics of a phytoplankton population. Zbl 0742.92021
Beretta, E.; Fasano, A.
1991
Some results about nonlinear chemical systems represented by trees and cycles. Zbl 0405.92022
Beretta, E.; Vetrano, F.; Solimano, F.; Lazzari, C.
1979
Geometric stability switch criteria in delay differential equations with two delays and delay dependent parameters. Zbl 1414.34056
An, Qi; Beretta, Edoardo; Kuang, Yang; Wang, Chuncheng; Wang, Hao
2019
Erratum to: “A mathematical model for malaria transmission with asymptomatic carriers and two age groups in the human population”. Zbl 1405.92247
Beretta, Edoardo; Capasso, Vincenzo; Garao, Davide G.
2018
A mathematical model for malaria transmission with asymptomatic carriers and two age groups in the human population. Zbl 1392.92094
Beretta, Edoardo; Capasso, Vincenzo; Garao, Dario G.
2018
Discrete or distributed delay? Effects on stability of population growth. Zbl 1329.34127
Beretta, Edoardo; Breda, Dimitri
2016
Global stability for epidemic model with constant latency and infectious periods. Zbl 1260.92056
Huang, Gang; Beretta, Edoardo; Takeuchi, Yasuhiro
2012
Mathematical modelling of cancer stem cells population behavior. Zbl 1241.92035
Beretta, E.; Capasso, V.; Morozova, N.
2012
Some results on the population behavior of cancer stem cells. Zbl 1316.92032
Beretta, Edoardo; Morozova, Nadya; Capasso, Vincenzo; Harel-Bellan, Annick
2012
An SEIR epidemic model with constant latency time and infectious period. Zbl 1259.34062
Beretta, Edoardo; Breda, Dmitri
2011
Predator-prey model of Beddington-DeAngelis type with maturation and gestation delays. Zbl 1208.34124
Liu, Shengqiang; Beretta, Edoardo; Breda, Dimitri
2010
The role of delays in innate and adaptive immunity to intracellular bacterial infection. Zbl 1122.92035
Marino, Simeone; Beretta, Edoardo; Kirschner, Denise E.
2007
Numerical detection of instability regions for delay models with delay-dependent parameters. Zbl 1124.34053
Carletti, Margherita; Beretta, Edoardo
2007
A stage-structured predator-prey model of Beddington-DeAngelis type. Zbl 1110.34059
Liu, Shengqiang; Beretta, Edoardo
2006
Some new results on an allelopathic competition model with quorum sensing and delayed toxicant production. Zbl 1104.92059
Fergola, P.; Beretta, E.; Cerasuolo, M.
2006
Competitive systems with stage structure of distributed-delay type. Zbl 1110.34053
Liu, Shengqiang; Beretta, Edoardo
2006
Extension of a geometric stability switch criterion. Zbl 1229.39003
Beretta, Edoardo; Tang, Yanbin
2003
Stability analysis of time delayed chemostat models for bacteria virulent phage. Zbl 1142.92340
Beretta, Edoardo; Sakabira, Hirotatsu; Takeuchi, Yasuhiro
2003
Geometric stability switch criteria in delay differential systems with delay dependent parameters. Zbl 1013.92034
Beretta, Edoardo; Kuang, Yang
2002
Permanence of an SIR epidemic model with distributed time delays. Zbl 1014.92033
Ma, Wanbiao; Takeuchi, Yasuhiro; Hara, Tadayuki; Beretta, Edoardo
2002
Negative criteria for the existence of periodic solutions in a class of delay-differential equations. Zbl 1087.34542
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
2002
Nonexistence of periodic solutions in delayed Lotka–Volterra systems. Zbl 1091.34037
Beretta, Edoardo; Kon, Ryusuke; Takeuchi, Yasuhiro
2002
Analysis of a chemostat model for bacteria and virulent bacteriophage. Zbl 1028.34067
Beretta, Edoardo; Solimano, Fortunata; Tang, Yanbin
2002
Analysis of a chemostat model for bacteria and bacteriophage. Zbl 1023.92031
Beretta, Edoardo; Sakakibara, Hirotatsu; Takeuchi, Yasuhiro
2002
Global asymptotic stability of an $$SIR$$ epidemic model with distributed time delay. Zbl 1042.34585
Beretta, Edoardo; Hara, Tadayuki; Ma, Wanbiao; Takeuchi, Yasuhiro
2001
Modeling and analysis of a marine bacteriophage infection with latency period. Zbl 1015.92049
Beretta, Edoardo; Kuang, Yang
2001
Stability analysis of a Volterra predator-prey system with two delays. Zbl 1049.34094
Tang, Yanbin; Beretta, Edoardo; Solimano, Fortunata
2001
Global asymptotic properties of a delay SIR epidemic model with finite incubation times. Zbl 0967.34070
Takeuchi, Yasuhiro; Ma, Wanbiao; Beretta, Edoardo
2000
On the effects of environmental fluctuations in a simple model of bacteria-bacteriophage infection. Zbl 1049.34521
Beretta, E.; Carletti, M.; Solimano, F.
2000
Global stability in a well known delayed chemostat model. Zbl 1089.34546
Beretta, Edoardo; Kuang, Yang
2000
Global qualitative analysis of a ratio-dependent predator-prey system. Zbl 0895.92032
Kuang, Yang; Beretta, Edoardo
1998
Global analysis in some delayed ratio-dependent predator-prey systems. Zbl 0946.34061
Beretta, Edoardo; Kuang, Yang
1998
Stability of epidemic model with time delays influenced by stochastic perturbations. Zbl 1017.92504
Beretta, Edoardo; Kolmanovskii, Vladimir; Shaikhet, Leonid
1998
Modeling and analysis of a marine bacteriophage infection. Zbl 0946.92012
Beretta, Edoardo; Kuang, Yang
1998
Convergence results in $$SIR$$ epidemic models with varying population sizes. Zbl 0879.34054
Beretta, Edoardo; Takeuchi, Yasuhiro
1997
Convergence results in a well-known delayed predator-prey system. Zbl 0876.92021
Beretta, Edoardo; Kuang, Yang
1996
On an ODE from forced coating flow. Zbl 0919.34020
Beretta, E.; Hulshof, J.; Peletier, L. A.
1996
A mathematical model for drug administration by using the phagocytosis of red blood cells. Zbl 0863.92005
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
1996
Global stability of an SIR epidemic model with time delays. Zbl 0811.92019
Beretta, Edoardo; Takeuchi, Yasuhiro
1995
Qualitative properties of chemostat equations with time delays: Boundedness, local and global asymptotic stability. Zbl 0868.45002
Beretta, Edoardo; Takeuchi, Yasuhiro
1994
Qualitative properties of chemostat equations with time delays. II. Zbl 0995.34068
Beretta, Edoardo; Takeuchi, Yasuhiro
1994
Global stability for chemostat equations with delayed nutrient recycling. Zbl 0809.34084
Beretta, E.; Takeuchi, Y.
1994
Stability analysis of the phytoplankton vertial steady states in a laboratory test tube. Zbl 0807.92018
Beretta, Edoardo; Fasano, Antonio; Hosono, Yuzo; Kolmanovskij, V. B.
1994
Mathematical model for the dynamics of a phytoplankton population. Zbl 0742.92021
Beretta, E.; Fasano, A.
1991
Stability in chemostat equations with delayed recycling. Zbl 0665.45006
Beretta, E.; Bischi, G. I.; Solimano, F.
1990
A homotopy technique for a linear generalization of Volterra models. Zbl 0669.92018
Beretta, Edoardo
1989
Global asymptotic stability of Lotka-Volterra diffusion models with continuous time delay. Zbl 0661.92018
Beretta, E.; Takeuchi, Y.
1988
Global stability results for a multigroup SIR epidemic model. Zbl 0684.92015
Beretta, E.; Capasso, V.
1988
Global stability results for a generalized Lotka-Volterra system with distributed delays. Applications to predator-prey and epidemic systems. Zbl 0716.92020
Beretta, E.; Capasso, V.; Rinaldi, F.
1988
A generalization of Volterra models with continuous time delay in population dynamics: Boundedness and global asymptotic stability. Zbl 0659.92020
Beretta, E.; Solimano, F.
1988
Oscillations in a system with material cycling. Zbl 0713.92027
Beretta, E.; Bischi, G. I.; Solimano, F.
1988
Ultimate boundedness for nonautonomous diffusive Lotka-Volterra patches. Zbl 0663.92016
Beretta, Edoardo; Fergola, Paolo; Tenneriello, Catello
1988
Global stability of single-species diffusion Volterra models with continuous time delays. Zbl 0627.92021
Beretta, E.; Takeuchi, Y.
1987
Global stability and periodic orbits for two-patch predator-prey diffusion-delay models. Zbl 0634.92017
Beretta, Edoardo; Solimano, Fortunata; Takeuchi, Yasuhiro
1987
On the general structure of epidemic systems. Global asymptotic stability. Zbl 0622.92016
Beretta, E.; Capasso, V.
1986
Existence of a globally asymptotically stable equilibrium in Volterra models with continuous time delay. Zbl 0523.92013
Solimano, F.; Beretta, E.
1983
Graph theoretical criteria for stability and boundedness of predator-prey systems. Zbl 0496.92012
Solimano, F.; Beretta, E.
1982
Some results about nonlinear chemical systems represented by trees and cycles. Zbl 0405.92022
Beretta, E.; Vetrano, F.; Solimano, F.; Lazzari, C.
1979
all top 5
#### Cited by 1,782 Authors
42 Chen, Lansun 35 Jiang, Daqing 35 Takeuchi, Yasuhiro 35 Xu, Rui 33 Teng, Zhi-dong 26 Wei, Junjie 21 Beretta, Edoardo 19 Chattopadhyay, Joydev 16 Ma, Wanbiao 16 Meng, Xinzhu 16 Song, Xinyu 16 Yuan, Sanling 15 Cui, Jingan 15 Ji, Chunyan 15 Kuang, Yang 15 Shaikhet, Leonid Efimovich 15 Wang, Weiming 14 Liu, Shengqiang 14 Ruan, Shigui 14 Wang, Wendi 14 Zhang, Long 13 Chaplain, Mark A. J. 13 Huo, Hai-Feng 13 Lahrouz, Aadil 13 Ma, Zhien 12 Ahn, Inkyung 12 Jiao, Jianjun 12 Kar, Tapan Kumar 12 Li, Xuezhi 11 Jin, Zhen 11 Samanta, Guru Prasad 11 Song, Yongli 11 Zhou, Xueyong 10 Banerjee, Malay 10 Davidson, Fordyce A. 10 Jiang, Zhichao 10 Li, Wan-Tong 10 Wang, Ke 9 Cai, Liming 9 Enatsu, Yoichi 9 Liu, Zijian 9 Samanta, Sudip K. 9 Settati, Adel 9 Shi, Ningzhong 9 Zou, Xingfu 8 Cai, Shaohong 8 Cai, Yongli 8 Ding, Xiaoquan 8 Fan, Yonghong 8 Han, Maoan 8 Jiang, Weihua 8 Kumar Upadhyay, Ranjit 8 Muroya, Yoshiaki 8 Nakata, Yukihiko 8 Pal, Samaresh 8 Shu, Hongying 8 Tang, Sanyi 8 Venturino, Ezio 8 Wang, Lin 8 Zhang, Juan 8 Zhang, Tailei 8 Zhang, Tonghua 8 Zhang, Tongqian 7 Bairagi, Nandadulal 7 Buonomo, Bruno 7 Cao, Jinde 7 Fergola, Paolo 7 Gakkhar, Sunita 7 Gan, Qintao 7 Ko, Wonlyul 7 Li, Michael Yi 7 Liu, Qun 7 Maiti, Alakes 7 Rao, Vadrevu Sree Hari 7 Shi, Xiangyun 7 Solimano, Fortunata 7 Wang, Jinliang 7 Xia, Yonghui 7 Xu, Changjin 7 Zhang, Fengqin 7 Zhang, Zhonghua 7 Zhao, Min 7 Zhong, Shou-Ming 6 Abbas, Syed 6 Fan, Meng 6 Garira, Winston S. 6 Guin, Lakshmi Narayan 6 Hayat, Tasawar 6 Huang, Lihong 6 Koshmanenko, Volodymyr Dmytrovych 6 Lin, Zhigui 6 Liu, Gui Rong 6 Liu, Junli 6 Mukandavire, Zindoga 6 Raja Sekhara Rao, P. 6 She, Zhikun 6 Smith, Hal Leslie 6 Sun, Guiquan 6 van den Driessche, Pauline 6 Wanduku, Divine ...and 1,682 more Authors
all top 5
#### Cited in 164 Serials
112 Applied Mathematics and Computation 92 Nonlinear Analysis. Real World Applications 78 Journal of Mathematical Analysis and Applications 60 Chaos, Solitons and Fractals 48 International Journal of Biomathematics 46 Advances in Difference Equations 44 Mathematical Biosciences 44 Abstract and Applied Analysis 38 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 33 Discrete Dynamics in Nature and Society 32 Nonlinear Dynamics 31 Applied Mathematical Modelling 30 Communications in Nonlinear Science and Numerical Simulation 30 Journal of Applied Mathematics and Computing 26 Journal of Mathematical Biology 26 Journal of Computational and Applied Mathematics 24 Bulletin of Mathematical Biology 24 Discrete and Continuous Dynamical Systems. Series B 23 Journal of Theoretical Biology 22 Computers & Mathematics with Applications 19 Mathematical Methods in the Applied Sciences 19 Mathematical and Computer Modelling 19 Journal of Biological Systems 19 Mathematical Biosciences and Engineering 16 Journal of Biological Dynamics 13 Journal of the Franklin Institute 13 Journal of Differential Equations 13 Journal of Applied Analysis and Computation 11 Applied Mathematics Letters 10 Applicable Analysis 10 Physica A 10 Mathematics and Computers in Simulation 10 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 9 Chaos 9 Computational & Mathematical Methods in Medicine 8 Complexity 8 Mathematical Problems in Engineering 8 Differential Equations and Dynamical Systems 8 Journal of Applied Mathematics 7 Stochastic Analysis and Applications 7 Nonlinear Analysis. Theory, Methods & Applications 6 Rocky Mountain Journal of Mathematics 6 Acta Mathematicae Applicatae Sinica. English Series 5 International Journal of Mathematics and Mathematical Sciences 5 Communications on Pure and Applied Analysis 4 Acta Applicandae Mathematicae 4 Physica D 4 Journal of Dynamics and Differential Equations 4 Computational and Applied Mathematics 4 The ANZIAM Journal 4 Journal of Systems Science and Complexity 4 Nonlinear Analysis. Hybrid Systems 4 Mathematical Modelling of Natural Phenomena 4 Journal of Nonlinear Science and Applications 4 International Journal of Differential Equations 3 Nonlinearity 3 Ricerche di Matematica 3 Theoretical Population Biology 3 Applied Mathematics and Mechanics. (English Edition) 3 Japan Journal of Industrial and Applied Mathematics 3 Applications of Mathematics 3 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 3 SIAM Journal on Applied Mathematics 3 Journal of Mathematical Chemistry 3 Natural Resource Modeling 3 Nonlinear Analysis. Modelling and Control 3 Dynamical Systems 3 Advances in Complex Systems 3 Stochastics and Dynamics 3 Nonlinear Oscillations 3 Discrete and Continuous Dynamical Systems. Series S 3 International Journal of Applied and Computational Mathematics 3 Cogent Mathematics 2 Automatica 2 Proceedings of the American Mathematical Society 2 Quarterly of Applied Mathematics 2 Statistics & Probability Letters 2 Applied Numerical Mathematics 2 European Journal of Applied Mathematics 2 Numerical Algorithms 2 International Journal of Computer Mathematics 2 Linear Algebra and its Applications 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 Journal of Nonlinear Science 2 Applied Mathematics. Series B (English Edition) 2 Journal of the Egyptian Mathematical Society 2 Journal of Difference Equations and Applications 2 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 2 Vietnam Journal of Mathematics 2 International Journal of Nonlinear Sciences and Numerical Simulation 2 International Journal of Modern Physics C 2 SIAM Journal on Applied Dynamical Systems 2 Boundary Value Problems 2 Acta Mechanica Sinica 2 Applications and Applied Mathematics 2 Asian-European Journal of Mathematics 2 Science China. Mathematics 2 Afrika Matematika 2 Nonautonomous Dynamical Systems 1 International Journal of Modern Physics B ...and 64 more Serials
all top 5
#### Cited in 28 Fields
1,210 Biology and other natural sciences (92-XX) 900 Ordinary differential equations (34-XX) 206 Dynamical systems and ergodic theory (37-XX) 145 Partial differential equations (35-XX) 105 Probability theory and stochastic processes (60-XX) 98 Numerical analysis (65-XX) 80 Systems theory; control (93-XX) 38 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 32 Difference and functional equations (39-XX) 24 Integral equations (45-XX) 24 Operator theory (47-XX) 16 Calculus of variations and optimal control; optimization (49-XX) 7 Combinatorics (05-XX) 7 Computer science (68-XX) 7 Fluid mechanics (76-XX) 6 Mechanics of particles and systems (70-XX) 5 Statistics (62-XX) 4 Functional analysis (46-XX) 4 Operations research, mathematical programming (90-XX) 3 Statistical mechanics, structure of matter (82-XX) 1 General and overarching topics; collections (00-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Real functions (26-XX) 1 Measure and integration (28-XX) 1 Integral transforms, operational calculus (44-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Quantum theory (81-XX) | 2021-05-08 08:05:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47686219215393066, "perplexity": 13644.65664206674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988850.21/warc/CC-MAIN-20210508061546-20210508091546-00297.warc.gz"} |
https://tenpy.johannes-hauschild.de/viewtopic.php?f=7&t=99&p=355 | 2D DMRG with boundary conditions and lattice index
How do I use this algorithm? What does that parameter do?
steven_tao
Posts: 5
Joined: 11 Mar 2020, 01:07
2D DMRG with boundary conditions and lattice index
Hi everyone:
(1) I want to simulate a self-defined 2D lattice with Finite lattice sizes along both the x and y directions. I am not sure how to define the boundary conditions. Is it the right way to define the boundary condition like this:
Code: Select all
class Mylattice(lattice.Lattice):
def __init__(self, Lx, Ly, siteA, **kwargs)
......
super().__init__([Lx, Ly], [siteA], **kwargs)
class Mymodel(CouplingMPOModel):
def init_lattice(self, model_params):
site = self.init_sites(model_params)
bc_MPS = get_parameter(model_params, 'bc_MPS', 'finite', self.name)
bc_y = get_parameter(model_params, 'bc_y', 'finite', self.name)
order = get_parameter(model_params, 'order', 'default', self.name)
bc_x = 'periodic' if bc_MPS == 'infinite' else 'open'
bc_y = 'periodic' if bc_y == 'cylinder' else 'open'
bc = [bc_x, bc_y]
lat = Mylattice(Lx, Ly, site, order=order, bc=bc, bc_MPS=bc_MPS) # Set Boundary Conditions.
(2) For the 2D case, we calculate the each site occupation: A = psi.expectation_value('N'). The expectation value A is a 1D list. Whether the list index corresponds to the site index of the lattice indicated by the function ''ax.plot_site()" (For example, the following figure)? E.g., the 11th element A[10] is the occupation at lattice site 10 indicated by the figure.
Johannes
Posts: 135
Joined: 21 Jul 2018, 12:52
Location: UC Berkeley
Re: 2D DMRG with boundary conditions and lattice index
(1): Looks good to me. In fact, that's code taken from CouplingMPOModel.init_lattice() for the case of a 2D lattice.
(2): Yes.
You might want to take a look at mps2lat_values in that case.
Use it like this:
Code: Select all
exp_vals_mps = psi.expectation_value("N")
exp_vals_lat = model.lat.mps2lat_values(exp_vals_mps)
While the 1D array exp_vals_mps is indexed by MPS indices i,
exp_vals_lat will be indexed by lattice indices x, y, u.
In the example of your question, you can verify this:
Code: Select all
assert exp_vals_mps[10] == exp_vals_lat[0, 3,1]
steven_tao
Posts: 5
Joined: 11 Mar 2020, 01:07
Re: 2D DMRG with boundary conditions and lattice index
Hi Johannes, thank you very much. I got it. | 2020-04-06 17:38:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6078482866287231, "perplexity": 8206.32820068152}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371656216.67/warc/CC-MAIN-20200406164846-20200406195346-00550.warc.gz"} |
https://www.esaral.com/q/show-that-moment-of-inertia-of-a-solid-body-55199 | # Show that moment of inertia of a solid body
Question:
Show that moment of inertia of a solid body of any shape changes with temperature as $I=I_{0}(1+2 \alpha \theta)$ where $I_{0}$ is the moment of inertia at $0^{\circ} \mathrm{C}$ and $\alpha$ is the coefficient of linear expansion of the solid.
Solution: | 2023-02-02 15:44:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.563086748123169, "perplexity": 69.16610841308972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500028.12/warc/CC-MAIN-20230202133541-20230202163541-00687.warc.gz"} |
https://baski.me/publication/bbjmfmv-bkcore-2015/ | The minimum spanning $k$-core problem with bounded CVaR under probabilistic edge failures
Abstract
This article introduces the minimum spanning $k$-core problem that seeks to find a spanning subgraph with minimum degree at least $k$ (also known as a $k$-core) that minimizes the total cost of the edges in the subgraph. The concept of $k$-cores was introduced in social network analysis to identify denser portions of a social network. We exploit the graph-theoretic properties of this model to introduce a new approach to survivable inter-hub network design via spanning $k$-cores that preserves connectivity and diameter under limited edge failures. The deterministic version of the problem is polynomial-time solvable due to its equivalence to generalized graph matching. We propose two conditional value-at-risk (CVaR) constrained optimization models to obtain risk-averse solutions for the minimum spanning $k$-core problem under probabilistic edge failures. We present polyhedral reformulations of the convex piecewise linear loss functions used in these models that enable Benders-like decomposition approaches. A decomposition and branch-and-cut approach is then developed to solve the scenario-based approximation of the CVaR-constrained minimum spanning $k$-core problem for the aforementioned loss functions. The computational performance of the algorithm is investigated via numerical experiments.
Publication
INFORMS Journal on Computing | 2023-01-27 07:08:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6056140661239624, "perplexity": 562.973926227919}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00439.warc.gz"} |
https://staff.najah.edu/en/publications/9952/ | ##### Chebychev Subspaces of Orlicz Function Space
Publication Type
Original research
Authors
In this paper it is proved that G is a Chebychev subspace of a Banach space X if and
only if $L^{\phi}(G)$ is a Chebyshev subspace of $L^{\phi}(X)$ where $L^{\phi}(X)$is an Orlicz function
space with Luxemburg norm.
Journal
Title
Annals of pure and applied mathematics
Publisher
house of scientific research
Publisher Country
India
Publication Type
Online only
Volume
--
Year
2019
Pages
-- | 2019-11-14 01:56:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8013685345649719, "perplexity": 3133.1486813175125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00329.warc.gz"} |
http://elykya.nl/ip2ndkp/ced2bd-southern-hemisphere-tropical-cyclones | {\displaystyle W_{in}} The dominant source is the input of heat at the surface, primarily due to evaporation. A Although the record shows a distinct increase in the number and strength of intense hurricanes, therefore, experts regard the early data as suspect. [180] Observations have shown little change in the overall frequency of tropical cyclones worldwide. where {\displaystyle {\frac {T_{s}}{T_{o}}}} [12], Due to surface friction, the inflow only partially conserves angular momentum. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's Eastern Shore during the 2005 hurricane season. k d p Mathematically, this has the effect of replacing . [88], In the 1960s and 1970s, the United States government attempted to weaken hurricanes through Project Stormfury by seeding selected storms with silver iodide. This millennial-scale variability has been attributed to long-term shifts in the position of the Azores High,[172] which may also be linked to changes in the strength of the North Atlantic oscillation.[173]. [87] A cyclone can also merge with another area of low pressure, becoming a larger area of low pressure. The toxins are very harmful to the people and animals in the area, as well as the environment around them. is given by, where P {\displaystyle v_{p}} [152][153] In March 2004, Cyclone Gafilo struck northeastern Madagascar as a powerful cyclone, killing 74, affecting more than 200,000 and becoming the worst cyclone to affect the nation for more than 20 years. [72], Though a tropical cyclone typically moves from east to west in the tropics, its track may shift poleward and eastward either as it moves west of the subtropical ridge axis or else if it interacts with the mid-latitude flow, such as the jet stream or an extratropical cyclone. Image credit: NOAA/NESDIS. [161], Hurricane John is the longest-lasting tropical cyclone on record, lasting 31 days in 1994. {\displaystyle C_{k}} If storms are significantly sheared, use of wind speed measurements at a lower altitude, such as at the 70 kPa pressure surface (3,000 metres or 9,800 feet above sea level) will produce better predictions. s : {\displaystyle T_{s}} ∗ t o Climatologically, tropical cyclones are steered primarily westward by the east-to-west trade winds on the equatorial side of the subtropical ridge—a persistent high-pressure area over the world's subtropical oceans. {\displaystyle r} [171][172] Few major hurricanes struck the Gulf coast during 3000–1400 BC and again during the most recent millennium. Please update this article to reflect recent events or newly available information. acts to multiply the total heat input rate by the factor | [65] However, it is still possible for tropical systems to form within this boundary as Tropical Storm Vamei and Cyclone Agni did in 2001 and 2004, respectively. It is induced indirectly by the storm itself, the result of a feedback between the cyclonic flow of the storm and its environment. [54] Low-latitude and low-level westerly wind bursts associated with the Madden–Julian oscillation can create favorable conditions for tropical cyclogenesis by initiating tropical disturbances. T C Google Earth image showing the track of Tropical Cyclone … NOAA uses the term "direct hit" to describe when a location (on the left side of the eye) falls within the radius of maximum winds (or twice that radius if on the right side), whether or not the hurricane's eye made landfall. / Lasted the longest data and warnings obtained from the storm as it allows them to determine a northeasterly! May involve preparations made by individuals as well as the secondary circulation is the only month in all. To El Niño years storm system with a closed, low-level circulation, Frequently... Outermost closed isobar ( ROCI ), the radius of outermost closed isobar ( ROCI ), of! ( in-up-out-down ) part of the tropical cyclone is a product developed as of... Feedback between the Gulf of Mexico coast and the radius of maximum wind speed that a storm can not.! Circulation in the North Indian basin, it does assign suffix as high category... Air subsides and warms at the centers of tropical and Southern Hemisphere cyclone systems develop What. In future projections ) Frequently Asked Questions: What is an extra-tropical cyclone size of a tropical cyclone temporarily. 151 ] other destructive eastern Pacific hurricanes include Pauline and Kenna, both causing severe damage striking. Death toll its powerful storm surge was responsible for enhancing/dampening the number of major hurricanes Atlantic coast to... October 2008 and made landfall in Veracruz current threats cyclone Yasa, 16., particularly in intense tropical cyclones, there is zero environmental flow is known as the secondary circulation is only... Destructive eastern Pacific hurricanes include Pauline and Kenna, both four-engine turboprop cargo aircraft momentum the! With peaks in may and November of eyewall mesovortices, which form almost exclusively over tropical.! Assessed in a similar time frame to the Azores high hypothesis, an anti-phase is! Surface ( during evaporation, the radius at which the cyclone 's relative vorticity field decreases to s−1! Were assessed in a wide band of latitudes, from the storm 's location and intensity several! Difference between temperatures aloft and sea surface temperature over a large area in just a few.. Risk of disease propagation preliminary data from the various Warning centers provide current information forecasts... For future possibilities similar time frame to the top of the system may be to! Events that occur in mountainous terrain, when the difference between temperatures aloft and sea temperature! And again during the quiescent periods, a tropical cyclone motion may be idealized as atmospheric! On April 2, 2020, the radius to the people and animals in the form of eyewall cycles... But in a wide band of latitudes, from the various Warning centers Japan! All else equal, a formative tropical cyclone is usually not considered to become subtropical during its transition. Marco generated tropical storm-force winds only 37 kilometres ( 23 mi ) ) a temperature... Where a tropical cyclone, and private entities smallest storm on record, tropical cyclone, compared to inland.! hurricane '' redirects here ( ROCI ), the ocean 's surface 183 ] between 1949 2016. Its own seasonal patterns metrics commonly used to measure storm size information and to! Causing severe damage after striking Louisiana and Mississippi as a result of eyewall replacement cycles, particularly in tropical! Be spawned as a result of a tropical cyclone activity and forecasts help... Lasting 31 days in 1994 state, local, and weaken quite rapidly over land, is! The relative angular momentum that projects onto the local vertical ( i.e make landfall given above [ 187 ] example. Cyclones require 80 °F ( 27 °C ) ocean temperatures, there was a slowdown in tropical translation. Orbiting cyclonically about a point between the cyclonic circulation of the Azores would. 'S relative vorticity field decreases to 1×10−5 s−1 steered towards the Atlantic by transient weather systems, which typical! A risk to coastal communities speed is dominant ) leads to the regions where they occur during and landfall. These risks and impacts due to evaporation ) Frequently Asked Questions: do... Ocean basin can cease to have tropical characteristics in several different ways Hercules and WP-3D Orions both. Form of eyewall replacement cycles, particularly in intense tropical cyclones are areas of relatively pressure! Can form in any month of the Azores high hypothesis system of disturbed weather Atlantic tropical storms formed during 2008! Strongest when over or near water, the inflow only partially conserves angular momentum the. Month of the background environment alone ( i.e wind damage occurs where tropical... 95 ] Project cirrus even involved throwing dry ice on a worldwide scale, may is radius... Be devastating, tropical cyclones, there was a slowdown in tropical cyclone can also merge with another area low... 5° of the circulation itself with respect to the Azores high hypothesis within this region [ 168 ] in! | 2021-06-20 04:25:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5810514688491821, "perplexity": 5022.641964423393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487655418.58/warc/CC-MAIN-20210620024206-20210620054206-00202.warc.gz"} |
https://erickchacon.github.io/debian/linux/upgrade-debian-wheezy-to-debian-jessie/ | # Upgrade Debian Wheezy to Debian Jessie
In this last week, I updated my RStudio in Debian Wheezy and it turned out that it needed a more recent version of the package lib6. A reliable solution was to upgrade my system to Jessie, the current stable distribution of Debian. Its latest update, Debian 8.1, was released on 6th of June, 2015.
For this reason, I share, in this post, the steps I followed for upgrading my system keeping user configuration and the main programs I use such as R, RStudio, Matlab, Mendeley, TeXstudio and others.
0) Backup your data: This is a logical initial step before starting any change on the system.
1.1) Prepare Debian Wheezy to be upgrading: Be sure that your current system does not have any problem of dependency or wrong installed packages. You can use the following commands for that purpose:
1.2) Update the repositories list: Packages for Debian Jessie is downloaded from these repositories. One way to update this list is to modify the file /etc/apt/sources.list, I use gedit for that:
In my case, I put the following repositories:
Another option is to change the wheezy word by jessie word automatically with the sed function.
1.3) Update the packages of Debian 8.1 Jessie.
During the upgrade it will ask if you want to restart manually or automatically some currently running services. It is suggested to make it manually.
Furthermore, after upgrading the distribution, I had to choose the device where grub should be installed. If it this your case, you should select the /dev/sda device if your pc has only one disk (use spacebar to choose the device). Otherwise, use the following link http://askubuntu.com/questions/23418/what-do-i-select-for-grub-install-devices-after-an-update.
1.5) Finally, reboot your computer to get the Debian Jessie system and enjoy.
2) Useful links to install programs:
2.1) Software R: http://cran.r-project.org/bin/linux/debian/
2.3) Texstudio: http://packages.debian.org/jessie/texstudio
2.4) Mendeley Desktop (Gestor de bibliografías): http://www.mendeley.com/
2.5) Dropbox: https://www.dropbox.com/
Tags:
Categories:
Updated: | 2017-10-17 20:31:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3018188774585724, "perplexity": 4987.879881617345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00085.warc.gz"} |
https://code.tutsplus.com/courses/build-an-app-from-scratch-with-javascript-and-the-mean-stack/lessons/creating-a-simple-data-access-layer | FREELessons: 34Length: 3.4 hours
• Overview
• Transcript
# 3.4 Creating a Simple Data Access Layer
With a simple schema defined, you'll need a way to get data into and out of the database. While you can use the Schema objects directly, sometimes it pays to create at least a simple abstraction layer, and in this lesson, you'll do just that. That way, all of your requests that go through the database will be funneled through the same functions to help reduce the amount of code duplication.
## 7.Conclusion1 lesson, 03:18
### 3.4 Creating a Simple Data Access Layer
One other thing that we wanna talk about doing before we start to move on to the next layer of our application, is talking a little bit about our data access layer. Now typically, a good strategy to use when building any sort of kind of n-tier application, is to condense the access to different layers of your application. Kind of into as as few number of pieces as possible, so that you can control the way that you are accessing that layer in a consistent fashion, and limit the amount of duplicated code within your application. So I'm gonna show you one interesting way that you can do that in this particular case. Now, there are many different ways that you could do this. You could create Service layers. You could create repositories. You could do all sorts of different things, but JavaScript kind of blends itself to doing some interesting things in on of itself. So, we're gonna take advantage of that along with Mongoose to be able to provide a little bit of common functionality, outside of our lawn schema here, and still be able to access this layer in a very consistent fashion. So the way that we're gonna do this, is we are gonna come down to the bottom, where we have our exports line here, and we are going to do one thing here. We are gonna create a constant called lawn. So you can kind of link together all of this assignment operations and everything is gonna continue to work the way that it did before, but now I also have this constant called lawn, so now down below here I can start to define some functions, that I can use outside of this particular file. Or expose some functionality that I would like to make available throughout my application so we can do in other places. So lets create a couple of additional functions here so I can do this a couple of different ways. Once again, JavaScript is very flexible on this way, so I can do another module.exports. And lets say in this case, I wanted to say I wanna get all lawns function. Now in this case, actually a common way to access things using Mongoose is you can do some operations and there's always a way to provide a callback function and that's a very common way to do things in JavaScript. So this is really no different, so pretty much all of our functions here are going to have some sort of callback there so. We're gonna provide a call back so that you can pass that into this particular function, and then we're gonna do some sort of operation here. Now, these all gonna be very simple processes so you could argue that you don't need to have this layer here, but could definitely add an additional functionality, some validation, all sorts of different things. So what I'm gonna do in this particular case, is I'm going to use Lawn. And then I'm gonna use the find, and the find method or find function, is going to return all of the objects that are found within the Lawn collection. And I can also pass in a call back so if this executes and completes successfully, then it's going to execute that call back. So it just gives me a way to inject some additional functionality, into this get all lawns operation. So now that I have a way to get all the lawns, I'm probably going to want to be able to get a single lawn. So let's do this again, we're gonna say module.exports, and we're gonna call this getLawn, and this is going to be equal to, and we're gonna provide two parameters this time. We wanna be able to get a specific one, and you typically do that by an ID, and we'll talk about that in just a second. And then we're also gonna provide a callback, and then this is once again gonna be fairly simplistic. We're simply going to say, lawn.findByID and we're going to pass in that id that we're going to find it by and we're also gonna give it a call back. So now what we're doing here actually is we're saying, I wanna find one of the lawns that have this specific ID. But if you look up here, we don't see an ID anywhere, and that's really okay, because by default, Mongoose and MongoDB are going to work together, and they are going to create an implicit ID, even if you're not specifying one, that is going to be a unique identifier for each one of the objects that it stores within that collection. Now, and actually more specifically, by default it's gonna name that _id, so throughout the application and other places, when we're returning back an object of type 1, I'm gonna be referring to that ID as _id if I wanna do something with it. So just if you see that, that's why we're doing it that way. So now I have a way to get all the lawns, I have a way to get a single lawn. Then what if I want to create a new lawn, so let's say module.exports.addLawn, how about that? Now in this case I'm passing in a new lawn object and a callback. And this process is going to use the new lawn.save functionality, and we're going to pass in a callback for that one. So now we have a way to add now we wanna be able to update as well so we're gonna do a module.exports.updateLawn(). And in this case we're gonna pass in a couple of different things, we're gonna pass in an idea of the one we wanna update. We're gonna pass in the updated lawn object as well as a call back. And here can now we can say lawn.findByIdAndUpdate and now I can pass in the ID, I can pass in the updated lawn object, and I can give it the callback. And then finally we're gonna wanna be able to delete as well, so .exports where I can say, deleteLawn and we can pass in the lawn that we want to delete. We can also pass in a callback. And actually you could do this a couple of different ways, you could pass in the idea as well. And then we would come in here, and could say Law.findByIdAndRemove. And then we can simply pass in the id, and we can also pass in the callback. So I was saying you could pass in the Lawn, and then use the id off of that, or you could just pass in the id. And we'll just go with that functionality there. So, let's go ahead and save this. So as you can see now, we are exporting the Lawn model. But we are also saving that into a Lawn constant and then we are exposing a number of different helper methods, or data access layer if you want. Where we can get all the Lawns, get an individual Lawn, add a Lawn, update a Lawn, and delete a Lawn. And you could obviously do this for all the different applications. And we're gonna do that in this particular course, but we're gonna do it in a different way that if we wanna remove an application, we're actually doing an update on the lawn itself. But once again, it's all about architecture, it's how you wanna do these things. So this is just one way of doing it. So now we have all of these in there. We have this data access layer. Let's come in and take a look at our terminal. Let's just make sure that everything started up successfully, and it has. So that is gonna be basically it for our database layer of our application. We have MongoDB up and running, we've configured our schema, we have our data access layer. Now it's time to move up a level, and start talking about Express and exposing some APIs to the outside world.
Back to the top | 2022-09-28 06:17:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31206589937210083, "perplexity": 465.88445451457017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00105.warc.gz"} |
https://no2147483647.wordpress.com/category/uncategorized/ | # Yet another (not) winning solution: Kaggle Flavours of Physics for finding τ → 3μ
TL,DR: this blog describes feature engineering and models without implicitly/explicitly using tau invariant mass.
This blog is for describing the “Hi from CMS” team solution of the Kaggle Flavours of Physics competition. It has the public score of 0.988420 and the private score of 0.989161 which has ranked at 29th. I thought I didn’t need to publish this ‘not winning’ solution, but I found all recent published top solutions heavily depended on invariant mass reconstruction in simulation (first place, second place), so I decided to post some discussion here about more ‘physical sounds’ features and models, plus some other work.
The github link to all code is available here. The simplified version was also put on Kaggle script here during the competition (thanks for many forks). All code were credited to the team members of “Hi from CMS”: phunter, dlp78, Michael Broughton and littleboat.
Unlike the last competition of Higgs search where a single xgboost classifier was used from one year ago, this solution explored more on: physical sounds feature engineering, a specific neural network model, as well as the model ensemble method for Random Forest, xgboost, UGradient and neural network, with a (fancy) automatic model selection under the agreement test constraint.
1. Understanding the constraints: KS test, CvM test, invariant mass and SPDhits
This competition had two tests: agreement test and correlation test, which required the classifier be some robust against not-well simulated variable like SPDhits and independence to the tau invariant mass. The general ideas was not using SPDhits or tau invariant mass, but for different reasons.
SPDhits: not well simulated
SPDhits was the number of hits in the SPD calorimeter, and this variable was not well simulated in the training data because of the limited understanding of LHCb SPD calorimeter and LHC bunch intensity. In the real life analysis, SPDhits variable needed additional calibration for simulation vs collision data, and the SPDhits distribution in the control samples looked very different from that in the simulation. For passing KS test, SPDhits feature was not suggested.
A tricky but not-used feature engineering for ‘weakening SPDhits‘ was binning SPDhits: a vector of feature of SPDhits on if SPDhits<10, <20, <30, … , <600 etc. Some experiment showed bins up to <150 could effectively help on up to +0.0001 AUC score without hurting the KS score, but it expanded the feature spaces too much and caused some confusions to the classifier, so I decided not to use these features, although it had some physics meanings that LHCb analysis with SPDhits were usually binned with SPDhits<100, SPDhits<200 etc for different track multiplicity.
In the later model selection for the final submission, a combination of models with and without SPDhits were used for an extra +0.0005 AUC score boost (discussed later in this blog). In my opinion, this ensemble was fine for physical sound because SPDhits calibration should be considered as systematic errors.
Invariant mass: it is not a good idea for physical sounds.
Tau invariant mass could be (easily) reconstructed using energy conservation and basic kinematics provided in the dataset, and may winning solutions were using it, which worried me. The detailed discussion on tau invariant mass was here (Kaggle forum link).
My thought on tau invariant mass and correlation test was that: it was not a good idea of reconstructing tau invariant mass from simulation and having this correlation test. In the real life analysis, a robust classifier for search for particles should not depend on the parent particle mass much:
1. if this particle mass was unknown like the Higgs boson, there was an look-elsewhere effect where the invariant mass could bias the analysis;
2. if this particle mass was known like the tau particle, the simulation with detector was not perfect thus the mass reconstruction was too good to be true.
The real life M_3body was reconstructed using likelihood methods, but repeating this method for this competition was impossible with the provided dataset. Using reconstructed tau invariant mass in the classifier was strongly biased to the simulation, and feature engineering with the tau invariant mass in simulation would probably have artificial dependence on the mass and might make the classifier useless in the real life analysis.
Surely, models with and without invariant mass could be ensembled for balancing CvM score, but I was afraid it was against the original idea of physical sounds, because the bias of invariant mass was physical, instead of systematic errors.
Using invariant mass feature was not a good idea in my opinion, so I had the following feature engineering without invariant mass. All these following feature engineering had very small CvM score which meant they had little dependency on the tau invariant mass, and the model didn’t create artificial dependency on mass.
2. Physical sounds feature engineering: what are good and what are bad.
Kinematic features.
It is a 3-body decay where Ds and tau has similar invariant mass, thus tau is almost at rest in the Ds rest reference frame, so the 3 children particles are almost symmetric in the Ds rest frame, thus individual particle’s kinematic feature may not be important, but some separations can be seen by pair-wise features. However, unfortunately we didn’t have phi and charge info for each particle, thus pair-wise mass reconstruction was not available, nor background veto by mu-mu mass 😦
Since tau is boosted and Ds and tau has similar invariant mass, tau’s IP and open angle (dira) should be small as expected, and tau has its lifetime thus it could fly for some distance. In the HEP analysis, usually IP significance was used, and fortunately it could be calculated as flight_dist_sig by dividing FlightDistance by FlightDistanceError.
The distance from PV to the secondary vertex could be approximately calculated too by IP/(1-dira) where dira was the cosine value of tau open angle, and (1-dira) was the approximation when the open angle was very small (phi = sin(phi) = tan(phi) if phi is very small). A trick was that, dira was almost 1, thus a very small float number was added, and the feature was inv-log transformed.
Some features from Stepan Obraztsov (another CMS physicist, yeah!) were also included in my public script. They were:
1. NEW_FD_SUMP which was FlightDistance divided by the summation of daughter particle momenta. It is considered as the total flight time in the lab reference frame.
2. NEW5_lt which was Tau lifetime times the averaged daughter particle IP. It could be interesting to understand the effect of this feature.
3. p_track_Chi2Dof_MAX which was the maximum of track chi2 quality. It is as in the selection features about the track quality.
Track/Vertex selection features.
The primary reason of using selection features was for distinguishing from the Ds-> eta mu nu background: eta could fly for a short distance and decay to 2 muons while tau-> 3 muons decay was immediate, which made the reconstructed Ds 3 muon vertex not as good as tau-> 3 muons. Thus, the track quality and vertex reconstruction quality (VertexChi2) should be used.
CDF, DCA features were other selection features from some TMVA analyses, but xgboost and other models didn’t nicely picked them up and the feature importance for these individual features were minimal, which meant they needed to be dropped (aka feature selection), otherwise they would confuse xgboost-like decision tree classifiers. After dropping them out, xgboost improved significantly.
However, there was some hope: this LHCb thesis about lepton flavour violation search (LINK) provided some good ideas about using these selection features, and a good one from this thesis was, max and min values of these selection features could help classification. It was understandable that, if one of these 3 tracks had bad or good quality, the pair-wise and 3 body reconstruction could be heavily effected.
An additional list of good selection features are (approximately +0.0003 AUC) :
1. Min value of IsoBDT
2. Max value of DCA
3. Min value of isolation from a to f (which are pair-wise track isolation of the 3 tracks)
4. Max value of track chi2 quality.
5. Square summation of track chi2 quality: surprisedly good feature, maybe the tau vertex selection needed to consider multiple track quality instead of the max.
Pair-wise track IP feature
Although there was no phi nor charge information of each daughter track for pair-wise parent particle reconstruction, I had two features which improved the score and had good feature importance: The ratio of Tau IP vs p0p2 pair IP and Tau IP vs p1p2 pair IP.
It was interesting of understanding this: the signal process was tau -> Z’ mu and Z’->2 muon immediately, while the background was Ds->eta mu nu where a small missing energy nu was present, thus the pair-wise IP from the signal was surely large by comparing with tau IP, but the missing energy from background confused this comparison. The in-depth reason might need more information about the process. In the experiment, these two ratio values gave good separation and good feature importance.
Other experimented but not used features:
1. IP error of each particle.
2. Pz of each particle plus combinations.
3. Pair-wise open angle in eta.
Feature selection using trees
Decision tree classifiers didn’t like confusing features, e.g. some features with very small separation for the positive and negative, so feature selection was needed. I only had time for a very simple feature selection by giving all these features to a random forest classifier for optimizing AUC, and dropped features with low feature importance. This tree-based selection gave consistent results as from human mind: individual momentum feature and individual quality selection features were dropped. It was also interesting that, isolationb and isolationc were more less important than the other 4 pair-wise track isolation variables. Keeping only the other 4 track isolation variables helped AUC, the reason behind this may need some in-depth analysis.
3. The neural network model
More details to be updated here from Michael. Michael had a two-layer neural network using all these features, and the code was self-explained, please check the code.
The final model includes this neural network model with SPDhits. The model itself with SPDhits had very low CvM score but disaster KS score (almost 0.3 and surely didn’t pass the KS test <0.09), but ensemble this model with other conservative models with a very low weight gave +0.0005 AUC score without failure in KS test.
4. “Weakening” decision tree models under the constraints
With good features, models have easier jobs. UGrad, Random Forest classifiers have good performance with these feature plus GridSearch for parameters, and their KS scores were around 0.06 which could easily pass. Since no invariant mass features were used, CvM score was round 0.0008 which had no problem either.
Xgboost is an aggressive and strong classifer, where it might give bad KS test score even without SPDhits features, e.g. 0.12, so weakening this model was needed for passing KS test. There were two tricks in xgboost parameters: large eta value for controlling convergence, and large colsample_bytree value plus small subsample by adding randomness, as mentioned in the xgboost document. The good combination was about:
• eta = 0.2
• colsample_bytree=13
• subsample = 0.7
5. CV-based ensemble and automatic model selection
Littleboat had a cool randomized automatic model selection by the CV score. The idea was that, each model had some variance, thus the best ensemble could be summing up similar models with small parameter changes, e.g. different seeds, slightly different eta etc. The CV score was used for the ensemble: most of the time, CV scores were more reliable than the public leaderboard scores.
Each model had two versions: high score (with SPDhits) and low score (all other features, no SPDhits). The same parameters are used for both two versions of each model, and surely the high score one doesn’t pass KS test for most time. After generating a list of model candidates, the script randomly pickup some models with good CV scores, searched for best weight to average them and evaluated the final CV score, as well as the KS score and CvM score. The ensemble method was weighted average, and the weight was brute-force searched after selecting models for passing KS test. If the result could pass the tests, it was saved as an candidate for submission.
After these selection, a manual process of weighted averaging with the neural network (with SPDhits) model was used. The weight was around 0.04 for the neural network model, which helped the final score by +0.0005 AUC. The final two submissions were weighted average of Xgboost and UGrad, plus a small weight of neural network with SPDhits.
6. Lesson learned
1. Kaggle competition really needs time. Thanks to my wife for her support.
2. Team-up is great: I know some amazing data scientists from this competition and have learned much from them.
3. Ensemble is needed, but it is not everything: feature engineering firstly, CV for each model, and ensemble them.
4. Data leakage: every kaggle competition has this risk, and dealing with it is a good lesson for both admins and competitors. Exploiting it for winning? blaming it? having fun without using it? Any of them is OK, no one is perfect and no one can hide from statistics.
5. When in doubt, use xgboost! Do you know xgboost can be installed by “pip install xgboost”. Submit an issue if you can’t install it.
7. One more thing for HEP: UGrad and xgboost
As learned from other high rank solution from the forum, UGrad itself was a strong classifier with BinFlatnessLossFunction, and a simple ensemble of UGrad models could easily go 0.99+. But UGrad used single thread and was very slow, so I didn’t experiment more about it. I have submitted a wish-list issue on xgboost github repo for implementing a xgboost loss function which can take BinFlatnessLossFunction as in UGrad and using multi-thread xgboost for speed up, so HEP people can be have both BinFlatnessLossFunction and a fast gradient boosting classifier. It is do-able as Tianqi (author of xgboost) has described in the issue, and hope I (or someone) can find a time working on it. I hope it can help HEP research.
Advertisement time: DMLC is a great toolbox for machine learning. xgboost is part of it, and DMLC has a recent new tool for deep learning MXnet: it has nice design, it can implement LSTM in 25 lines of python code, and it can train ImageNet on 4 GTX 980 card in 8.5 days, go star it!
# Winning solution of Kaggle Higgs competition: what a single model can do?
This blog is for describing the winning solution of the Kaggle Higgs competition. It has the public score of 3.75+ and the private score of 3.73+ which has ranked at 26th. This solution uses a single classifier with some feature work from basic high-school physics plus a few advanced but calculable physical features.
Github link to the source code.
1. The model
I choose XGBoost which is a parallel implementation of gradient boosting tree. It is a great piece of work: parallel, fast, efficient and tune-able parameters.
The parameter tuning-up is simple. I know GBM can have a good learning result by providing a small shrinkage eta value. According to a simply brute-force grid search for the parameters and the cross-validation score, I choose :
• eta = 0.01 (small shrinkage)
• max_depth = 9 (default max_depth=6 which is not deep enough)
• sub_sample = 0.9 (giving some randomness for preventing overfitting)
• num_rounds = 3000 (because of slow shrinkage, many trees are needed)
and leave other parameters by default. This parameter works good, however, since I only have limited knowledge of GBM, this model is not very optimal for:
• shrinkage too slow and training time too long, so a training takes about 30 minutes and cross-validation takes longer time. On the forum, some faster parameters are published which can limit the training time to less than 5 min.
• sub_sample parameter gives some randomness, especially when complicated non-linear features are involved, and the submission AMS score is not stable and not 100% reproducible. In the feature reduction section, I have a short discussion.
The cross validation is done as usually. A reminder about the AMS is that, from the AMS equation, one can simplify it to
${\rm AMS} = \frac{S}{\sqrt{B}}$
which means, if we evenly partition the training set for K-fold cross validation where the S and B have applied the factor of $(K-1)/K$, the AMS score from CV is artificially lowered by approximately $\sqrt{(K-1)/K}$. The same with estimating the test AMS: the weight should be scaled by the number of test samples.
XGboost also supports customized loss function. People have some discussions in this thread and I have some tries. I haven’t applied this customized function for my submission.
2. The features
The key for this solution is the feature engineering. With the basic physics features, I can reach the public leaderboard score around 3.71, and the other advanced physics features push it to public 3.75 and private 3.73.
2.1 General idea for feature engineering
Since the signal (positive) samples are Higgs to tau-tau events where two taus coming from the same Higgs boson, while the background (negative) samples are tau-tau-look-like events where particles have no correlation with the Higgs boson, I have the general idea that, in the signal, these particles should have their kinematic correlations with Higgs, while the background particles don’t. This kinematic correlation can be represented as:
• Simply as the open angles (see part 2.2)
• Complicated as CAKE features (see this link in the Higgs forum, this feature is not developed by me nor being used in the submission/solution)
2.2 Basic physics features
Because of the general idea of finding the “correlation” in the kinematic system, the correlation between each pair of particles can be useful. The possible features are:
• The open angles in the transverse (phi) plane and the longitudinal plan (eta angle). The reminder is that, the phi angle difference must be mapped into ±pi, and the absolute values of these angles work better for xgboost.
• Some careful study on the open angle can find that, tau-MET open angles in phi direction is somehow useless. It is understandable that in tau-l-nu case the identified tau angle has no much useful correlation with nu angle.
• Cartesian of each momentum value (px, py, pz): it works with xgboost.
• The momentum correlation in the longitudinal direction (Z direction), for example, jet momentum in Z direction vs tau-lepton momentum in Z direction is important. This momentum in Z direction can be calculated using the pt (transverse momentum) and the eta angle.
• The longitudinal eta open angle for the leading jet and the subleading jet: the physics reason is from the jet production mechanism from tau, but it is easy to be noticed when plotting PRI_jet_leading_eta – PRI_jet_subleading_eta without physics knowledge.
• The transverse momentum ratio of tau-lep, jet-jet, MET to the total transverse momentum.
• The min and max PT: this idea comes from the traditional cut-based analysis where different physics channel, e.g. 2-jet VBF vs 1 jet jet suppression to lepton, there are minimal and maximal PT cut. In this approach, I give them as features instead of splitting the model, and xgboost picks them up in a nice way.
Overlapping the feature distribution for the signal and the background is a common technique for visualizing features in the experimental high energy physics, for example, the jet-lepton eta open angle distribution for the positive and negative samples can be visualized as following:
Jet-lepton open angle distributions
If one reads the CMS/ATLAS Higgs to tau-tau analysis paper, one can have some advanced features for eliminating particular background. For example, this CMS talk has covered the fundamental points of Higgs to tau-tau search.
In Higgs to tau tau search, one of the most important background is the Z particle where Z can decay into lepton pairs (l-l, tau-tau) and mimic the Higgs signal. Moreover, considering the known Higgs mass is 126 GeV which is close to Z mass 91 GeV, the tau-tau kinematics can be very similar in Z and Higgs.
The common technique for eliminating this background is reconstructing Z invariant mass peak which is around 91 GeV. The provided ‘PRI’ features only have tau momentum and lepton momentum, from which we can’t have precise reconstruction of Z particle invariant, however, this invariant mass idea gives some hint that, the pseudo transverse invariant mass can be a good feature where the transverse invariant mass distribution can be :
Tau-lepton pseduo invariant mass
QCD and W+jet background are also important where lepton/tau can be mis-identified as jet, however, these two have much lower cross-section so the feature work on these two background are not very important. Tau-jet pseudo invariant mass features are useful too.
Some other not important features are:
• For tau-tau channel, the di-tau momentum summation can be roughly estimated by tau-lep-jet combination although it is far from truth.
• For 2-jets VBF channel, the jet-jet invariant mass should be high for signal.
• I know some teams (e.g. Lubos’ team) are using SVFIT, which is the CMS counter-partner of ALTAS’ MMC method (the DER_invariant_MMC feature). SVFIT has more variables and more considerable features than MMC, so SVFIT can do better Higgs invariant mass construction. Deriving SVFIT feature using the current provided features is very hard, so I haven’t done it.
• CAKE-A feature is basically the likelihood if a mass peak belongs to Z mass or Higgs mass. CAKE team claims that, it is a very complicated feature and some positive reports exist in the leaderboard, so ATLAS should investigate this feature for their Higgs search model.
2.4 Feature selection
Gradient boosting tree method is generally sensitive to confusing samples. In this particular problem of Higgs to tau-tau, many background events can have very similar features to the signal ones, thus, a good understanding of features and feature selection can help reducing confusions for better model building.
The feature selection uses some logic and/or domain knowledge, and I haven’t applied any automatic feature selection techniques, e.g. PCA. The reason is that, most auto feature selection methods are designed for capturing the max errors while this competition is for maximizing the AMS score, so I am afraid some feature selection can accidentally drop some important features.
Because of the proton-proton collision in the longitudinal direction, the transverse direction is symmetric, thus the raw phi values of these particles are mostly useless and should be removed for less confusions for the boosting process. It is also the reason why ATLAS and CMS detectors are cylinders.
The tau and lep raw eta values don’t help much too. The reason behind it is the Higgs to tau tau production mechanism, but one can easily see it from the symmetric distributions of these variables.
Jets raw eta values are important. The physics reason behind it is from the jet production mechanism where the jets from the signal should be more centralized, but one can easily find it by overlapping the distributions of PRI_jet_(sub)leading_eta for the signal and the background without physics knowledge.
2.5 Discussions
More advanced features or not? It is a question. I only keep physically meaningful features for the final submission. I have experimented some other tricky features, e.g. the weighted transverse invariant mass with PT ratio, and some of them help scoring on the public LB. However, it doesn’t show significant improvement in my CV score. To be safe, I spend the last 3 days before the deadline removing these ‘tricky’ features, and keeping only the basic physics feature (linear combinations) as well as these pseudo invariant mass features, in order not to overfit the public LB. After checking the private LB scores, I find some of them can help, but only a little. @Raising Spirit on the forum has posted a feature which is DER_mass_MMC*DER_pt_ratio_lep_tau/DER_sum_pt and Lubos has a nice comment if it is good idea or not.
CAKE features effect. I have used 3 test submissions for testing CAKE-A and CAKE-B feature. With CAKE A+B, both of my public and private LB submission score drops around 0.01; with CAKE-A, my model score has almost no change (reduce 0.0001); with CAKE-B, my model score improves 0.01. I think it is because CAKE-A feature may have strong correlations with my current feature, while CAKE-B is essentially the MT2 variable in physics which can help for the hadronic (jet) final state. I haven’t include these submissions in my final scoring ones, but thanks to CAKE team for providing these features.
3. Conclusion and lessons learned
What I haven’t used:
• loss function using AMS score: In these two posts (post 1 and post 2), they proposed the AMS loss function. XGboost has a good interface for these customized loss function, but I just didn’t have chance to tune up the parameters.
• Tricky non-linear non-physical features.
• Vector PT summations of tau-lep, lep-tau and other particle pairs, and their open angles with other particles. They are physically meaningful, but my model doesn’t pick them up 😦
• Categorizing the jet multiplicity (PRI_jet_num). Usually this categorizing technique works better since it increase the feature dimension for better separation, but not for this time, maybe because of my model parameters.
• Split models by PRI_jet_num. In the common Higgs analysis, the analysis is divided into different num-of-jets categories, e.g. 0-jet, 1-jet, 2-jets, because each partition can have different physical meanings in the Higgs production. XGboost has caught this partition nicely with features, and it handles the missing values in a nice way.
Lesson learned
• Automate the process: script for filing CV, script for the full workflow of training + testing, script for parameter scan, library of adding features so CV and training can have consistent features. It can save very much time.
• Discuss with the team members and check the forum.
• Renting a cloud is easier than buying a machine.
• I show learn ensembling classifiers for better score in future.
• Spend some time: studying the features and the model needs some time. I paused my submission for 2 months for preparing for my O1-A visa application (I got no luck in this year’s H1B lottery, so I had to write ‘a lot of a lot’ for this visa instead) and only fully resumed it about 2 weeks before deadline when my visa was approved, so my VM instance has run like crazy for feature work, model tuning and CV since then while I sleep or during daily work. Fortunately, these work (plus electricity bills to Google) has good payback on the rank.
4. Acknowledgment
I want to thank @Tianqi Chen (the author of xgboost), @yr, @Bing Xu, @Luboš Motl for their nice work and the discussions with them. I also want to thank to Google Cloud Engine for providing 500\$ free credit for using their cloud computer, so I don’t have to buy my own computer but just rent a 16-core VM.
5. Suggestions to CMS, ATLAS, ROOT and Kaggle
To CMS and ATLAS: In the physics analyses, ML was called ‘multi-variate analysis’ (MVA, so ROOT’s ML package is called TMVA) where features were ‘variates’. This method of selecting variates (features) was from the traditional cut-based analysis where each variate must be kind of strong physically meaningful so they were explainable, for example, in CMS’s tau-tau analysis, we had 2-jet VBF channel where tau-tau required jet-jet momentum was greater than 110 GeV etc. Using this cut-based feature selection idea in the MVA technique gave some limits of feature work where features were carefully selected by physicists using their experience and knowledge, which was good but very limited. In this competition, I came up some intuitive ‘magic’ features using my physics intuition, and the model + feature selection techniques in ML helped removing non-sense ones as well as finding new good features. So, my suggestion to CMS and ATLAS on the feature work is that, we should introduce more ML techniques for helping feature/variate engineering work and use machine’s power for finding more good features.
To ROOT TMVA: XGboost is a great idea for parallelizing the GBM learning. ROOT’s current TMVA is using single thread which is slow, and ROOT should have some similar idea of xgboost into the next version of TMVA.
To Kaggle: it might be a good idea of having some sponsored computing credits from some cloud computing providers and giving them to the competitors, e.g. Google Cloud and Amazon AWS. It can remove the obstacles of computing resources for competitors, and also a good advertisement for these cloud computing providers.
6. Our background
My teammate @dlp78 and I are both data scientists and physicists. We used to work on the CMS experiment. He worked on Higgs -> ZZ and Higgs-> bb discovery channel, and I worked on Higgs->gamma gamma discovery channel and some supersymmetry/exotics particle search. My PhD dissertation is search for long-lived particle decay into photons, in which the method is inspired by the tau discovery method, and my advisor is one of the scientists who discovered the tau particle (well, she also discovered many others, for example: Psi, D0, Tau, jets, Higgs). I have my Linkedin page linked on my Kaggle profile page, and I would like to link to you great Kaggle competitors.
# The best photography spots in San Francisco: what can data tell you?
In yesterday morning time, I was reading Danny Dong‘s wedding photos and amazed by his captures in the Golden Gate Bridge, City Hall in San Francisco. And I got this question: what the best photography spots are in San Francisco, besides these two places? If we know this, photographers do not waste time of searching for a good spot. To answer this question, I needed some data.
Fortunately, in SF data, I found this data set of Film Locations in San Francisco. There were about 200 movies filmed in San Francisco, from 1950s to present. In my mind, if professionals took photos at some spots, these spots were great photography spots.
Just the addresses themselves were not enough, I wanted to put them on to the map. Google map has the Geo service which can translate one address into a latitude/longitude location, for example:
Metro (1997) filmed at Bush Street at Jones, can be translated into “Bush Street & Jones Street, San Francisco, CA 94109, USA” and the location is 37.7895596 -122.4137276
With python, this is easy. I installed geopy (in pip), and used this simple piece of code:
from geopy import geocoders
....
location+=', San Francisco, CA'
#I changed the source code of geopy and it returned multiple locations if ambiguity.
place, (lat, lng) = g.geocode(location)[0]
We can use google fusion table to draw these locations on the map. It looks like this:
Using the filming location from 200+ famous movies filmed in SF, we can extract the best photography locations from these movie professionals.
One can click this link for the interactive map and go to the location (with streetview!): https://www.google.com/fusiontables/embedviz?q=select+col2+from+1nYwvzx2bNvANTwV03mQ-gge0imwW1iSz7WIxveg&viz=MAP&h=false&lat=37.801855527505346&lng=-122.43532960000005&t=1&z=13&l=col2&y=2&tmplt=2&hml=GEOCODABLE
However, SF is so beautiful that too many locations to read. A heat map works better. I used myheatmap and generated this hot spot map for photography locations. It looks like that the pier area is super hot: quite consistent with my opinion.
Using the filming location from 200+ famous movies filmed in SF, we can extract the best photography locations from these movie professionals.
We can click this link and find out the details http://www.myheatmap.com/maps/u6o5WNbTIrM= (one may need to change the ‘decay’ to a smaller value on the right top of the interactive map).
In short, we now know what the best photography spots are, from data. Enjoy photography in San Francisco.
# A data scientist’s way to solve the 8809=6 problem
Last time, one of my colleagues posted this question. It is related to feature extraction of machine learning.
The question starts simple:
8809=6 7111=0 2172=0 6666=4 1111=0 3213=0 7662=2 9313=1 0000=4 2222=0 3333=0 5555=0 8193=3 8096=5 7777=0 9999=4 7756=1 6855=3 9881=5 5531=0
So, 2581=?
The answer itself was easy: number of circles. For these kind of numeric puzzles, I think there should be a general solution that solves these puzzles automatically.
By checking the pattern of the equation list, one can guess that, the answer can be the summation of these patterns, so one can list the digits by its frequency, and this problem becomes a linear regression for a 10-dimension (x0-x9) space.
For example, 8809=6 gives 100000021 = 6 by listing the frequency of each digits, where 1,2,1 are parameters for ‘0’, ‘8’ and ‘9’, which means the weight summation of each digits is 6. The regression function can be dot product for a linear function and tells us why it is 6 for 8+8+0+9.
Thus, one can solve it by linear regression. The regression function can be:
def __residual(params, xdata, ydata):#guess the function is cosine dot
return (ydata - numpy.dot(xdata, params))
And one can use numpy’s least square for the linear regression:
leastsq(__residual, x0, args=(xdata, ydata))
The full source code in python can be found at https://github.com/phunterlau/8809-6
Some discussions:
The output:
(array([ 1.00000000e+00, 4.75790411e-24, 4.75709632e-24, 4.75588463e-24, 1.00000000e+00, 4.75951970e-24, 1.00000000e+00, 4.75790411e-24, 2.00000000e+00, 1.00000000e+00]), 2)
where each value in the array means the correlations between each digit and its’ weight. One can discover that, ‘0’,’4′,’6′,’9′ have weight of 1, and ‘8’ has the weight of 2.
However, there was a bug in the code: the initial value:
x0=numpy.array([1,1,1,1,1,1,1,1,1,1])#initial guess
The initial guess was weight 1 for each digit. After checking with the input, one can find ‘4’ does not exist, so one should take off ‘4’ as weight 1.
2. Why so complicated?
Why it is associated with the topic of ‘feature extraction’ in machine learning?
If one is smart enough, one can tell that, the pattern is actually the number of circles in each digit, so ‘8’ has two circles and its weight is 2.
However, no one can be smart at all time. Some patterns are hidden. The procedure in the example is kind of latent feature extraction: one does not have to know the pattern is about the number of circles in each digit, one can still get that ‘8’ has weight of 2 by using this automatic code.
# Some interview questions for data scientists
Last time, one of my friends asked me for some interview questions to test the candidates of data scientist jobs. I think it is good to share the questions. Later on, I may (if I got some free time) post some detailed solutions and discussions on them.
1-D Facebook (difficulty: medium, problem solving)
We are living in a 3-D world, with x, y and z coordinates. In an 1D world, there is only the x coordinate, and people can only see left and right. There is a startup social network company, we can call it ‘1-D-facebook.com’ , and it wants to build ‘find your friends’ program for 1-D. In 1-D world, people has no size but a dot (no worry about diet 🙂 ) It has a list of 1M people’s location info, represented as an 1-D float array of 1M length, unsorted . Now, giving a people at array position index N, please find the closest two (left and right) friends.This position array can be very long (but fit in the memory), and unsorted, so sorting and search can be OK, but not yet the optimal solution. preprocess is OK, no space limit.
Reference: this problem was from some real-life cases, for example, it is the final step of amazon’s collaborative filter about ‘people bought this also bought’ after calculated all probability combinations (here this problem uses distance, and amazon using division). Another example is the spelling corrector which needs to find the closest-spelling words from a big dictionary where the distance is defined by edit-distance. A real spelling corrector is much more tricky on the data structure. Here I just simplified the edit-distance to position difference. PS: google’s spelling correction using bayesian models instead of edit-distance.
2-D Facebook (difficulty: hard, problem solving + algorithm)
Since you have solved 1-D Facebook, a 2-D world Facebook company, ‘flat-land.com’, wants to hire you and make the same program for 2-D people, where each people, with position x and y, can see left/right and up-down, and then find 4 closest friends. Surely, it does not have to be up-friend, down-friend etc, any direction is OK.
Reference: this is a real Facebook problem (not interview, no time limit), called ‘small world’ from Facebook Puzzle (this webpage is taken down by Facebook).
DVD Auto-classification (difficulty: medium, problem solving + machine learning)
A DVD-rent company wants to beat Netflix, so they want to build a smart machine learning algorithm to auto-classify DVD movies rather than manually labeling all movies. Fortunately, they only host romantic movies and action movies, so things are easier than those in Netflix. They observed one thing that, in romantic movies, people kiss a lot; in action movies, people fight a lot, but romantic movies can have fights too. Can you use this information to build a classifier which can tell if a new movie is action or romantic?
Reference: From the book ‘Machine Learning in action’. It is also from a real-life problem from my current project, but I simplified it to numerical features.
Super long feature similarity (difficulty: medium-hard, programming+machine learning)
Some machine learning models produced list of features for two soft drinks, for example, value of the content of sugar etc. One wants to compare the similarity of these two drinks using machine learning, how? (Interviewee should answer cosine similarity or dot-product or some other distance functions to compare two feature vectors).
Let’s take cosine similarity for example. Now, the real situation is that, there are millions of features from machine learning models, and some drinks may miss many features, in the other words, the feature is very sparse. So, if we want to compare two drinks with sparse features, where one drink can have many features that the other drink does not have. Do we really need to multiple each feature for these many zero values?
Calculate square-root of integer N. (difficulty: medium-hard. Numerical methods and programming)
This question can have some variations:
• (easy) How to tell if an integer N is a perfect square number (N=k*k where k is an integer).
• (medium) Given a very large integer N, and the number m where m*m<=N and (m+1)*(m+1)>N.
• (hard, needs hint) How to determine a number is a Fibonacci number? The hint should be given by the interviewer: a Fibonacci number can be represented either in 5*N**2+4 or 5*N**2 -4, so simply to test if this number plus/minus 4 divided by 5 is a perfect square number.
• (medium-hard) How to determine a number is summation of two perfect square numbers?
What if N is very large, and one can not build a table of square numbers?
Essay-copying (difficulty: medium-hard, NLP, machine learning, modeling)
In the final test of the university, the professor received 200 essays from students, about 1000 words each. Badly, he found some students were copying other people’s essays. But these students were smart: they did not copy the entire essay, maybe change words in some sentences, may copy from 2-3 other persons (but surely, they do not copy from all the other 200 students, no enough time 🙂 ). Please build a machine learning system to help professor find these bad students.
Reference: clustering using nature language processing is very important in the real life. This is an example.
# Check if a number is a Fibonacci number: speed comparison for two Quora answers in python
What is the most efficient algorithm to check if a number is a Fibonacci Number? The top two answers are both very good and efficient.
Background, what is Fibonacci number? wikipedia
Anders gave a nice matrix exponentiation algorithm in Haskell while John posted his Java solution using square numbers. There was an argument which one was faster, so I made this comparison using python.
First of all, John needed a faster algorithm to determine if a number is a perfect square number. He used a loop from 2 to sqrt(N), and it was surely not an efficient way.
A better way to do it is the newton’s method of integer square number (wikipedia), in short, it equals using newton’s method to solve x*x-n = 0. In python, it can be easily solved by this (original code from this link):
def isqrt(x):
if x < 0:
raise ValueError('square root not defined for negative numbers')
n = int(x)
if n == 0:
return 0
a, b = divmod(n.bit_length(), 2)
x = 2**(a+b)
while True:
y = (x + n//x)//2
if y >= x:
return x
x = y
And then, John’s method is basically testing if 5*N*N+4 or 5*N*N-4 is a perfect square number. If the answer is yes for either one, this number is a Fibonacci number.
Actually the square root algorithm can have more optimizations using the 64 bit magic number 0x5fe6eb50c7b537a9 (the DOOM trick), please check wikipedia for more interesting details. To be platform independent, here I just used the original newton’s method.
Secondly, Anders code was in Haskell, so I rewrote them into Python for a fair comparison.
def fibPlus((a,b),(c,d)):
bd = b*d
return (bd-(b-a)*(d-c), a*c+bd)
def unFib((a,b),n):
if n<a:
return (0,0,1)
else:
(k,c,d) = unFib(fibPlus((a,b),(a,b)),n)
(e,f) = fibPlus((a,b),(c,d))
if n<e: return (2*k, c, d)
else:
return (2*k+1,e,f)
def isfib(n):
(k,a,b) = unFib((1,1),n)
return n==a
The full source code can be found on my github https://github.com/phunterlau/checkFibonacci
To test these two algorithm, I downloaded the first 500 Fibonacci numbers from http://planetmath.org/listoffibonaccinumbers and ran 100 times for each algorithm on this list of number. The result is interesting: python optimization makes difference. Unit is in second.
If run in python 2.7, John’s method won for 10% time:
python is_fib.py Anders method: 1.52931690216 John method: 1.36000704765
If run in pypy 1.9, Anders method is highly optimized for 2x speed:
pypy is_fib.py Anders method: 0.799499988556 John method: 2.0126721859
To conclude, both of these two algorithms are very good.
One another question to follow this:
Given a number N, if N is not a Fibonacci number, print out the largest Fibonacci smaller than N.
Hint: John’s method.
# [challenge accepted] palindromic prime number of 5 digits
Thanks to the first comment in the last post which directed me to a math problem (http://contestcoding.wordpress.com/2013/03/22/palindromic-primes/) . From a scientist point of view, the answer itself, 98689, is just simple, but the way to answer it is kind of interesting. Let’s have a look at it step by step how a scientist solved it.
1. How to generate a list of palindromic number?
‘A palindromic number is a number that reads the same backwards as forwards (1991 for example).’ One can iterate in all numbers and check if it is palindromic:
str(num)==str(num)[::-1]
, or, a smarter way to generate them [1]:
from itertools import product
return [n*'%s'%tuple(list(i)+list(i[n*(n-1)/2%(n-1)-1::-1])) for i in product(*([range(1,10)]+[range(0,10)]*((n+1)/2-1)))]
2. Now we have a list of palindromic number, how do we determine it is a prime or not?
Simple. The is_prime() function from textbooks should work with no problem. One can loop from 2 to the square root of N and see if it can divide N. Let me introduce you a better way, Miller Rabin primality test, a pro’s way.
In general, it is a faster and more efficient way to test if a number is a prime. Some other methods can be AKS primality test (wikipedia) and so on. It is good to know them 🙂
There is a nice implementation of Miller Rabin in python here, one can have a look at this beautiful algorithm.
So, we can build a loop from the end of the list of palindromic number that we generated at step 1, and test each of them using Miller Rabin primality test, we can easily find the number is 98689.
• The problem asked about 5 digits palindromic number. Also, these kind of numbers exist for 3 digits, 7 digits, 9 digits and so on, but never exist for 4 digits, 6 digits. Can you guess why? hint: some thing about number 11.
• There is a nice website http://oeis.org/A114835 for the set of palindrome prime numbers. There are some other interesting sets of numbers.
• Do you know the largest 11 digits palindromic number is 99999199999, 13 digits 9999987899999?
Source code:
Actually the function palindromeNum() can be optimized by generator and yield, which does real time calculation rather than unnecessarily generating the whole list.
from itertools import product
import miller_rabin
return [n*'%s'%tuple(list(i)+list(i[n*(n-1)/2%(n-1)-1::-1])) for i in product(*([range(1,10)]+[range(0,10)]*((n+1)/2-1)))]
if miller_rabin.miller_rabin(int(i)):
print i
break
# A scientist’s ‘attitude’
Source code and data file can be found at https://github.com/phunterlau/attitude info me before use it, please.
You might have heard of this piece of junk about ‘attitude’ for many times, now a real scientist is showing you the secret behind this.
First of all, it is like this:
A small truth to make our Life 100% successful.. ……..
If A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Is equal to 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26
Then H+A+R+D+W+O+ R+K = 8+1+18+4+23+ 15+18+11 = 98%
K+N+O+W+L+E+ D+G+E = 11+14+15+23+ 12+5+4+7+ 5 = 96%
L+O+V+E=12+15+ 22+5=54%
L+U+C+K = 12+21+3+11 = 47%
(None of them makes 100%)
………… ……… ……… .
Then what makes 100%
Is it Money? ….. No!!!!!
Every problem has a solution, only if we perhaps change our “ATTITUDE”.
It is OUR ATTITUDE towards Life and Work that makes
OUR Life 100% Successful..
A+T+T+I+T+U+ D+E = 1+20+20+9+20+ 21+4+5 = 100%
Well, it might be true. But, is ‘attitude’ the only word which can make 100%? Let’s have a look.
What you need is python and a list of English words. I have done some google search and found this nice site of a list of 109582 English words: http://www-01.sil.org/linguistics/wordlists/english/ .
Open this list with python, and type in this piece of code:
print ‘,’.join([x.strip() for x in words_f.readlines() if sum([ord(i)-96 for i in x.strip()])==100])
We can easily get 1297 English words which can make 100%. Beside ‘attitude’, we also have, for example, :
alienation,inefficient,infernos……
This a scientist’s ‘attitude’: never trust 100%. We have something in the life much more important than just 100%, just like ‘love’=54%. | 2017-08-22 11:10:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4992799758911133, "perplexity": 2582.209439024151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00346.warc.gz"} |
https://plus.google.com/+SureshGovindarajan | Profile
Suresh Govindarajan
Works at Indian Institute of Technology Madras
Attended Atomic Energy Central School, Hyderabad
Lives in Chennai, India
311 followers|75,739 views
Stream
Suresh Govindarajan
Shared publicly -
My latest paper.
Abstract: We study a one-parameter family ($\ell=1,2,3,\ldots$) of configurations that are square-ice analogues of plane partitions. Using an algorithm due to Bratley and McKay, we carry out exact enumerations in order to study their asymptotic behaviour and establish, via Monte Carlo ...
3
Suresh Govindarajan
Shared publicly -
The following notes are a transcription and a translation into English of lecture notes (192 pages!) of de Bruijn's course on combinatorics, originally given in the 70s and 80s:
http://goo.gl/ZXO5AV
you may be surprised where they are hosted. From the introduction:
Combinatorics is mainly about counting. The nice thing about counting is that you learn something about the thing that you are counting. While counting, you notice that your knowledge about the subject is not sufficiently precise, and so counting becomes an educational activity. When two sets have the same number of elements, you try to understand why that is the case by establishing some natural bijection between the two.
N. G. de Bruijn recalls that sometime around 1975, he counted the number of a certain type of logics with three variables. There were a lot of them; some number with 14 digits. A little while later he saw an article that was about something completely different, but it concluded that the number of those objects was exactly that same number with 14 digits. He got curious and read the article very carefully. Indeed, if you thought deeply about it you could see that you could interpret those objects also as logics.
(via darij grinberg on MathOverflow)
1
Suresh Govindarajan
Shared publicly -
Meet Ravi Jagadeesan, an American of Indian origin. He is a gold medalist at the International Mathematical Olympiad in 2012 and is currently a student at Harvard. He just put a paper on the math arXiv: http://arxiv.org/abs/1601.05070
1
Suresh Govindarajan
Shared publicly -
Kaveri Trail Marathon 2015: I ran the HM in 2:01:20.
1
1
Thanks Gautam da. Things are going fine. Thanks. It was a very smooth run even though I missed out on going under 2h.
Suresh Govindarajan
Shared publicly -
DRHM 2015 Half Marathon Finish (2:05:12).
4
Fine, wonderful, keep it up. All the best. Appa - ivg
Suresh Govindarajan
Shared publicly -
John Bardeen, the leading condensed matter theorist of his day, was quite wrong when he dismissed a startling prediction by the unknown Brian Josephson.
1
In his circles
109 people
Have him in circles
311 people
Suresh Govindarajan
Shared publicly -
The detection of gravitational waves:
1
The gravitational chirp that was heard! Bird watchers can identify birds from their chirps and the bottom two figures show that scientists can do that for gravitational chirps.
Suresh Govindarajan
Shared publicly -
There is going to be an announcement on Feb 11 (http://www.ligo.org/news/media-advisory.php) on the direct detection of gravitational waves by LIGO. Rumour has it that they have seen the merger of two large black holes into one large black hole plus a huge amount of energy released via gravitational waves. Indirect evidence for gravitational waves was first shown by Hulse and Taylor in 1974 for which they received the Nobel prize in 1993 (http://www.nobelprize.org/nobel_prizes/physics/laureates/1993/). This is big news and opens the door into a an era of "gravitational telescopes".
LIGO Scientific Collaboration (LSC) seeks to detect gravitational waves and use them for exploration of fundamentals of science.
1
It is official. Gravitational Waves have been detected. http://www.aps.org/publications/apsnews/updates/gwaves.cfm
Suresh Govindarajan
Shared publicly -
This year I have added a new twist to my annual run at The Wipro Chennai Marathon . I am running to raise money for Vidya Sagar. The precise amount that you wish to donate is not relevant. I just would like a lot of people to donate. Click on this link and you will be directed to a page where you can donate.
The Spastics Society of India was born in March 1985 in a garage in Chennai. It was started by Mrs. Poonam Natarajan, the mother of a child with profound disability, since there were no services available for this group. It was renamed ‘Vidya Sagar’ in 1998. Today Vidya Sagar is housed in a building that is designed in a manner that is totally barrier-free, and accessible to persons with disabilities, on land leased by the Government of Tamil ...
1
1
Try to donate without PAN card number. It is just means you don't get the IT deduction, I think. Let me know if it works.
Suresh Govindarajan
Shared publicly -
1
Suresh Govindarajan
Shared publicly -
(Infinite dimensional) Linear Algebra for the mathematically oriented physicist. I have been on the lookout for a book that has a nice introduction to Hilbert spaces -- chapter one does that.
LINEAR MATHEMATICS IN INFINITE DIMENSIONS Signals Boundary Value Problems and Special Functions
1
Suresh Govindarajan
Shared publicly -
DRHM result.
1
People
In his circles
109 people
Have him in circles
311 people
Work
Occupation
Scientist
Employment
• Indian Institute of Technology Madras
Scientist, present
Places
Currently
Chennai, India
Previously | 2016-08-30 11:10:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4840502440929413, "perplexity": 2074.2248534996124}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982976017.72/warc/CC-MAIN-20160823200936-00150-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/107050/what-is-the-smallest-particle-exhibiting-gravitational-properties | # What is the smallest particle exhibiting gravitational properties?
I've long been taught that all matter having mass, possesses attractive forces somewhat akin to gravity. As such, imagine we can 'teleport' a gravitonic detection device that can accurately measure the gravitational forces regardless of strength present at each location, it can detect all gravitational forces ordered by direction of pull within a given radius of up to let say [through a range of settings/switches of 0.001 nanometer to 100,000 km. We teleport these detectors equi-distantly apart in the earth, from troposphere to molten mantle to core's center of mass. measure and plot all the forces detected. Supposedly, there is no gravity at earths center of mass...that means gravity can be concentrated and nullified. Logic tends to indicate 'no gravity=no weight. Therefore why does all reference material say massive objects have tremendously large core pressures and squeeze atoms to extremely dense masses. If gravity is nullified, what is source of this crushing, without usurping all gravitational common sense?
• Could you try to state your question a little bit clearer? It seems to me you are asking why there is a high pressure at the core of, say, the earth although the gravitational potential vanishes at the center? But then again there seems to be no connection to the title you chose for your question? – André Apr 5 '14 at 23:16
• I think what user43994 is getting at is that while there is no gravity at the center of the Earth, there is extremely high pressure at the center of the Earth, and he's confused as to why no gravity doesn't imply no pressure. – DumpsterDoofus Apr 5 '14 at 23:37
• Related: physics.stackexchange.com/q/2481/2451 and links therein. – Qmechanic Apr 6 '14 at 0:05
• It isn't that there is no gravity, in fact there is more than when you are on the surface but it is all pulling outwards and in all directions and with the same magnitude an any direction and the force sums to zero. Any tiny deviation from the exact center should show on a gradiometer. This assumes a perfect sphere with perfect spherical distribution of mass. Hmmm. Can you have local minimums in the real Earth? Geophysics theorists must have worked out the implications. Curious people would like to know. – C. Towne Springer Apr 6 '14 at 3:16
As best I can tell, your question can be paraphrased as:
If there's zero gravity at the center of the Earth, why is pressure not also zero? Pressure is caused by weight, so if you're weightless at the center, shouldn't it also be pressureless?
Let's calculate the pressure.
Model the Earth as an incompressible material with density $\rho$ and radial pressure $p(r)$ for $0\leq r\leq R$, where $R$ is the radius of the Earth. The force per unit volume due to pressure is given by $$F_p=-\frac{\partial}{\partial r}p(r).$$
Meanwhile, the force per unit volume due to gravity at a depth $r$ is given by Newton's law: $$F_g=\frac{G\rho\left(\frac{4}{3}\pi r^3\rho\right)}{r^2}=\frac{4}{3} \pi G \rho ^2 r.$$ Setting $F_p=F_g$ and integrating to find $p(r)$ along with the condition $p(R)=0$ yields $$p(r)=\frac{2}{3} \pi G \rho ^2 \left(R^2-r^2\right).$$ Note that the pressure is actually highest at the core, and that as you travel towards the surface, it decreases quadratically to zero.
In short: just because you're weightless at the core doesn't negate the fact that you've still got mass all around you that has weight, and is crushing down on you.
There will be no gravitational force at the center of the Earth because of Gauss' law for gravity: $\oint_{\partial V} \vec{g}\cdot d\vec{A} = -4\pi GM$ where $M$ is the mass enclosed in the volume $V$. The case of interest here is that of a sphere, so the left-hand side of this integral will evaluate to $4\pi r^2 g(r)$ where $r$ is the distance from the center of the sphere. You can rearrange this to give $g(r) = -GM\hat{r}/r^2$ but the point of all this is that when we get to the center of the Earth, which we can approximate as a spherical mass distribution, the sphere we're integrating over shrinks to enclose no mass, so there is no $\textit{gravitational}$ force acting there. However, if you go just a little bit outside of that center point, there will be gravitational force pulling you towards the center. Consider a spherical shell somewhere within the Earth of radius $r$. It's being pulled towards the center of the Earth by gravity, but it's not moving. So there must be another force opposing it, which we call the normal force. Now by Newton's third law, there must be a reactionary force to the Normal force pushing down on the layer just below the one we're considering. When you add up over all of the spherical shells that make up the Earth, this amounts to an enormous pressure being exerted at the center. We can calculate what this force should be by integrating up the weight of each little piece of the Earth. | 2020-04-01 10:30:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406459450721741, "perplexity": 186.09739564756524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00462.warc.gz"} |
https://docs.litespeedtech.com/lscache/lsccc/installation/ | # Installation and Configuration¶
Development on the 3rd party LSCache Purge plugin for Craft CMS has been discontinued:
Per-URL cache busting is now broken in Craft 3.5 because of changes to how template caching works. You will need to nuke the whole cache on save if you are running 3.5+
This plugin is EOL. Minor patches will be issued, but not major functionality overhauls. PR's gratefully received.
We recommend using LSCache with rewrite rules via these instructions if you are using Craft CMS v3.5 or later.
There are two parts to Craft CMS LiteSpeed caching: rewrite rules to define caching behavior, and a third party plugin to facilitate on-demand purging of cache objects.
Before installing and activating the LSCache plugin, deactivate all other full-page cache plugins.
Tip
You can still use other types of cache (like object cache, or browser cache), but only one full-page cache should be used at a time.
## Server-Level Setup¶
Note
Please see the Overview for the server-level requirements before attempting to use LSCache.
## Enable Cache for Craft CMS¶
LSCache is controlled through rewrite rules added to the .htaccess file found in your Craft CMS document root. To start, the file may look something like this:
<IfModule mod_rewrite.c>
RewriteEngine On
# Send would-be 404 requests to Craft
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$[NC] RewriteRule (.+) index.php?p=$1 [QSA,L]
</IfModule>
In order to use LSCache on your site, you must place the cache-related rewrite rules above the existing rules. The following example will cache all pages for 8 hours (28800 seconds) with the exception of any /admin URLs:
########## Begin - Litespeed cache
<IfModule LiteSpeed>
RewriteEngine On
RewriteRule .* - [E=Cache-Control:max-age=28800]
</IfModule>
########## End - Litespeed cache
Feel free to modify these rules, if necessary for your site's needs.
Examples
• If you would like to exclude some other page from cache (let's say, /mypage.php), simply add the following line to the existing rewrite conditions:
RewriteCond %{ORG_REQ_URI} !/mypage.php
• If you want to cache your site for only 4 hours, you can change the max-age. So, it would be:
RewriteRule .* - [E=Cache-Control:max-age=14400]
## Verify Your Site is Being Cached¶
Video
See a video demonstration of this topic here.
You can verify a page is being served from LSCache through the following steps:
1. From a non-logged-in browser, navigate to your site, and open the developer tools (usually, right-click > Inspect). Open the Network tab.
2. Refresh the page.
3. Click the first resource. This should be an HTML file. For example, if your page is http://example.com/webapp/, your first resource should either be something like example.com/webapp/ or webapp/.
4. You should see headings similar to these:
X-LiteSpeed-Cache: miss
X-LiteSpeed-Cache-Control:public,max-age=1800
X-LiteSpeed-Tag:B1_F,B1_
These headings mean the page had not yet been cached, but that LiteSpeed has now stored it, and it will be served from cache with the next request.
5. Reload the page and you should see X-LiteSpeed-Cache: hit in the response header. This means the page is being served by LSCache and is configured correctly.
The X-LiteSpeed-Cache header is most common, but you may see X-LSADC-Cache if your site is served by LiteSpeed Web ADC. You may also see X-QC-Cache if your site was served via QUIC.cloud CDN. These alternate headers are also an indication that LSCache is working properly on your site.
Important
If you don't see X-LiteSpeed-Cache: hit or X-LiteSpeed-Cache: miss (or any of the alternative headers), then there is a problem with the LSCache configuration.
### Non-Cacheable Pages¶
Sometimes there are pages which should not be cached. To verify that such pages have indeed been excluded from caching, check the developer tools as described above.
You should see headings similar to these:
X-LiteSpeed-Cache-Control:no-cache, esi=on
X-LiteSpeed-Tag:B1_F,B1_
X-LiteSpeed-Cache-Control, when set to no-cache, indicates that LiteSpeed Server has served your page dynamically, and that it was intentionally not served from cache.
### LSCache Check Tool¶
There's a simple way to see if a URL is cached by LiteSpeed: the LSCache Check Tool.
Enter the URL you wish to check, and the tool will respond with an easy-to-read Yes or No result, and a display of the URL's response headers, in case you want to examine the results more closely.
In addition to LSCache support, the tool can detect cache hits, and can detect when sites are using LiteSpeed Web ADC or QUIC.cloud CDN for caching.
Additionally, a Stale Cache Warning will alert you if browser cache is detected on dynamic pages. This is because browser cache may interfere with the delivery of fresh content.
## Purge Cache Plugin¶
When controlled purely with rewrite rules, LSCache lacks insight into Craft CMS's rules, and should only be used to cache content for a very short time.
The third party LSCache Purge plugin is the bridge between Craft CMS and the LiteSpeed Cache engine that allows you to cache your Craft CMS content for a longer period of time. The purge plugin understand Craft CMS rules, and as such can instruct the cache engine to purge content when it changes. This greatly reduces the risk of serving stale content to visitors.
You can confidently set max-age to several hours, if you know that pages will be automatically cleared from cache when they are changed in the CMS.
Our thanks to Scaramanga Agency for their work developing this plugin!
Craft CMS version 3.0.0 or later is required for this plugin. (You can use rewrite rules alone for earlier versions.)
### Installation¶
To install the plugin, search for LiteSpeed Cache on the Craft CMS Plugin store, or install it manually as follows:
1. Open your terminal and go to your Craft project:
cd /path/to/project
2. Tell Composer to require the plugin:
composer require thoughtfulweb/lite-speed-cache
3. In the Control Panel, go to Settings > Plugins and click the Install button for LiteSpeed Cache.
### Usage¶
There are two ways to purge the cache. You can configure it to purge automatically when pages are saved in the CMS, or you can press a button to purge the entire cache at once. Please see the LSCache Purge Plugin's Github page for usage instructions and examples.
Last update: July 21, 2021 | 2022-05-23 18:41:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26569610834121704, "perplexity": 6436.478042833513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00574.warc.gz"} |
https://socratic.org/precalculus/polar-coordinates/finding-distance-between-polar-coordinates | # Finding Distance Between Polar Coordinates
## Key Questions
• Hello,
• In a orthonormal basis, the distance between $A \left(x , y\right)$ and $A ' \left(x ' , y '\right)$ is
$d = \sqrt{{\left(x - x '\right)}^{2} + {\left(y - y '\right)}^{2}}$.
• With polar coordinates, $A \left[t , \theta\right]$ and $A ' \left[r ' , \theta '\right]$, you have to write the relations :
$x = r \cos \theta , y = r \sin \theta$
$x ' = r ' \cos \theta ' , y ' = r ' \sin \theta '$,
So,
$d = \sqrt{{\left(r \cos \theta - r ' \cos \theta '\right)}^{2} + {\left(r \sin \theta - r ' \sin \theta '\right)}^{2}}$
Develop, and use the formula ${\cos}^{2} x + {\sin}^{2} x = 1$. So you get :
$d = \sqrt{{r}^{2} - 2 r r ' \left(\cos \theta \cos \theta ' + \sin \theta \sin \theta '\right) + r {'}^{2}}$
Finally, you know that $\cos \theta \cos \theta ' + \sin \theta \sin \theta ' = \cos \left(\theta - \theta '\right)$, therefore,
$d = \sqrt{{r}^{2} + r {'}^{2} - 2 r r ' \cos \left(\theta - \theta '\right)}$.
• Let say you have points A(r_1,θ_1),Β(r_2,θ_2) you must convert them to cartesian coordinates A(x_1,y_1),Β(x_2,y_2) and then use the distance formula $D = \sqrt{{\left({x}_{2} - {x}_{1}\right)}^{2} + {\left({y}_{2} - {y}_{1}\right)}^{2}}$
See below.
#### Explanation:
Given in cartesian coordinates.
${P}_{1} = \left({x}_{1} , {y}_{1}\right)$ and ${P}_{2} = \left({x}_{2} , {y}_{2}\right)$
the transition formulas
$\left\{\begin{matrix}x = r \cos \theta \\ y = r \sin \theta\end{matrix}\right.$
then
$\left({x}_{1} , {y}_{1}\right) \Rightarrow \left({r}_{1} \cos {\theta}_{1} , {r}_{1} \sin {\theta}_{1}\right)$
$\left({x}_{2} , {y}_{2}\right) \Rightarrow \left({r}_{2} \cos {\theta}_{2} , {r}_{2} \sin {\theta}_{2}\right)$
so
$d = \sqrt{{\left({x}_{1} - {x}_{2}\right)}^{2} + {\left({y}_{1} - {y}_{2}\right)}^{2}} \Rightarrow \sqrt{{\left({r}_{1} \cos {\theta}_{1} - {r}_{2} \cos {\theta}_{2}\right)}^{2} + {\left({r}_{1} \sin {\theta}_{1} - {r}_{2} \sin {\theta}_{2}\right)}^{2}}$
then
$d = \sqrt{{r}_{1}^{2} + {r}_{2}^{2} - 2 {r}_{1} {r}_{2} \left(\cos {\theta}_{1} \cos {\theta}_{2} + \sin {\theta}_{1} \sin {\theta}_{2}\right)} = \sqrt{{r}_{1}^{2} + {r}_{2}^{2} - 2 {r}_{1} {r}_{2} \cos \left({\theta}_{1} - {\theta}_{2}\right)}$ | 2019-02-22 16:09:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 22, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9759347438812256, "perplexity": 1102.1866689868973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247518497.90/warc/CC-MAIN-20190222155556-20190222181556-00009.warc.gz"} |
http://www.jasonmunster.com/nuclear-reactors-3/ | # Nuclear Reactors Final
I am getting bored of this topic, and I want to get to wind-solar-hydro so I can finish up with the energy technology stuff and write varied stuff. So I am going to compress it a lot. If anyone wants to see it expanded, let me know and I will take care of it later.
This post is about breeder reactors, thorium reactors, preventing nuclear proliferation, how the electricity costs stack up to other power plants, and why nuclear power is so expensive.
Nuclear power is expensive because the up-front costs are massive. The cleanest and most efficient coal fired power plants might cost a billion dollars to build. It might take 5 years to build it. Nuclear power plants seem to cost about $8 billion to build with all the safety features they use to prevent nuclear meltdowns (seriously, the new tech is very safe, and it shows in the cost). And they seem to take 8-15 years to build, depending on how much Greenpeace or pretty much every other group tries to stop construction via litigation. In other words, they take out an$8B loan and accrue interest for 8-15 years before they can start paying it back. Stuff costs a lot. Why do it? Cause nuclear waste can be contained, unlike the NOx and CO2 from natural gas and coal plants. Also, South Korea thinks it can build a nuclear plant a nuclear power plant in a short amount of time for only $5 billion. United Arab Emirates decided this was a good idea, and is buying four South Korean nuclear reactors to desalinate water. Schematic of the South Korean nuclear power plant to be built in the United Arab Emirates, from link above. The US Energy Administration Administration agrees that nuclear power is now less expensive than it used to be. I have ripped a table straight off their web page that shows it (see, I really am getting lazy in this post). Table 1. Estimated levelized cost of new generation resources, 2018 U.S. average levelized costs (2011$/megawatthour) for plants entering service in 2018
Plant type Capacity factor (%) Levelized capital cost Fixed O&M Variable O&M (including fuel) Transmission investment Total system levelized cost
Dispatchable Technologies
Conventional Coal 85 65.7 4.1 29.2 1.2 100.1
Advanced Coal 85 84.4 6.8 30.7 1.2 123.0
Advanced Coal with CCS 85 88.4 8.8 37.2 1.2 135.5
Natural Gas-fired
Conventional Combined Cycle 87 15.8 1.7 48.4 1.2 67.1
Advanced Combined Cycle 87 17.4 2.0 45.0 1.2 65.6
Advanced CC with CCS 87 34.0 4.1 54.1 1.2 93.4
Conventional Combustion Turbine 30 44.2 2.7 80.0 3.4 130.3
Advanced Combustion Turbine 30 30.4 2.6 68.2 3.4 104.6
Advanced Nuclear 90 83.4 11.6 12.3 1.1 108.4
Geothermal 92 76.2 12.0 0.0 1.4 89.6
Biomass 83 53.2 14.3 42.3 1.2 111.0
Non-Dispatchable Technologies
Wind 34 70.3 13.1 0.0 3.2 86.6
Wind-Offshore 37 193.4 22.4 0.0 5.7 221.5
Solar PV1 25 130.4 9.9 0.0 4.0 144.3
Solar Thermal 20 214.2 41.4 0.0 5.9 261.5
Hydro2 52 78.1 4.1 6.1 2.0 90.3
Note that last column is \$ per megawatt hour. It is the bottom line cost of producing power from that plant. First, what is dispatchable vs non-dispatchable? Dispatchable means you get it whenever you want it. You can ramp it up or down however you please. Non-dispatchable means that you depend on external factors, like the fickle winds of.. well.. winds?
Tangent! Winds are really just redistribution of energy from the equator to the poles. The sun shines more at the equator, heating it up, and then energy likes to move from areas of high energy to areas of low energy, so it does it using wind. And sometimes hurricanes. So really, wind power is just really inefficient solar power. You know what else is really inefficient (and slow) solar power? Hydrocarbons and coal. Cause they are really just buried plant and algae matter and such. That is tens to hundreds of millions of years old. So, coal and oil are really just really old, slow, and dirty solar power. Tangent done!
Tangent picture? This shows how the equator heats more than the rest of the earth. These extra heat has to redistribute to be more even. Hurricanes start near the equator cause of the heat there, then move away from it. from: http://oceanworld.tamu.edu/resources/oceanography-book/oceansandclimate.htm
Nuclear power is almost as cheap as coal power, and cheaper than clean coal (note, clean coal still produces a ton of CO and NO)! What gives? How is nuclear so inexpensive? Well, we haven't built a nuclear power plant in the US in years. We don't know what it will actually cost. Those are just estimates. Also, people are quite scared of nuclear power. The cost of building nuclear power rises when you have environmentalists and NIMBY folks suing the pants off nuclear power developers. But let's make one thing clear: if the new generation of nuclear power plants are as inexpensive as they are supposed to be, the power is less expensive that all other power plants other than natural gas (note: the US does not have capacity to build more hydro power), and less expensive than even that if you account for NO produced by and methane leaks associated with natural gas power (methane production and transport will always have leaks, and it is 23x as powerful a greenhouse agent as CO2).
Let's look at a few more things on the chart above. Remember when I said natural gas got cheap? Look at how cheap it is to produce power from natural gas on the chart above. Think anyone is building nuclear, solar, or offshore wind when you can build and deploy reliable natural gas power? Somehow, the answer to this is yes. Yes, people are building all these things, despite being expensive. Which is kind of cool.
Before moving away from costs, look at the variable costs. The variable costs are high for everything except renewables and nuclear. Why is this? Cause renewables and nuclear don't really use fuel. Yes, a nuclear plant uses fuel, but it costs almost nothing relative to the labor and the capital costs. All the cost of these is upfront CAPEX (capital expense), and then you get free power.
Finally, lets take a really close look at the variable costs. This link is pretty sweet for those of you interested. It contains variable costs for each power source. You can see that fuel is the bulk of cost of fossil steam plants, but less than a quarter of the total cost of nuclear, and nuclear fuel is 1/4 the price per energy unit than even dirt cheap natural gas.
Enough about costs! Onto breeder reactors!
In the first post I mentioned that one part of nuclear reactions is to give off neutrons. Sometimes instead of a neutron splitting an atom, the atom absorbs it. U-235 is the uranium we use in nuclear reactors. U-238 doesn't produce as much heat, cause it doesn't like to decay as fast, so it isn't viable nuclear fuel. Or is it?! U-238 is like a catcher in baseball. Except it catches neutrons. And then it incorporates them into its nucleus to become U-239. In other words, it really isn't like a catcher in baseball.
The breeder reaction series. From: http://nuclearpowertraining.tpub.com/h1019v1/css/h1019v1_76.htm
What's special about U-239? It decays rapidly through a special type of decay to become neptunium-239 and eventually plutonium 239! This process can extract up to 100x the energy from nuclear fuel. You know what's magic about that? Less nuclear waste produced. Also, you produce a ton of nuclear fuel this way. You can also use thorium-232, which then becomes uranium-233 after absorbing a neutron, which can in turn be used for nuclear fuel. Thorium is very cheap and very abundant. So the plutonium and uranium that is magically created through awesomely manipulating nuclear forces is then used in nuclear thermal reactors to produce power.
Nuclear proliferation!
Having a nuclear power plant does not mean you can make nuclear bombs. Nuclear bombs required U-235 enriched to a very high level. What exactly is enrichment? Natural uranium is less than a few percent U-235, the rest is U-238. The uranium comes as a solid, and is processed by making it dance with a bunch of flourine. UF is produced, which is gasified uranium. The U-238 is slightly heavier than U-235, so it very very slowly settles to the "bottom" if you spin it very fast in centrifuges. Once you have enriched it to somewhere between 5 and 8%, U-235, it is good to go into a reactor and make energy. To make a bomb, you have to enrich it to around 90%. Enriching it further gets exponentionally more difficult. Getting from 50% to 70% is much more difficult than getting it from 10% to 50%. So making bombs is hard.
What about plutonium? Seems like any fool can make plutonium. And in fact, they can! All you have to do it get some U-235, wrap it in U-238, and you have make a breeder reactor in your back yard! This seriously happened. Someone made a breeder reactor in their garage at 17. And this wasn't one of those kids that comes from a brilliant family with a ton of money that goes to work in a world famous lab and 'discovers' a new technique under the watchful eye of one of the most brilliant researchers in the world. This was your every day kid who was just really interested in something.
Except that building a nuclear weapon from plutonium is even more difficult than from uranium. Cause when you make the plutonium, you always get a large amount of another plutonium isotope. The other isotope loves to go critical much earlier than Pu-239. Remember what happens to a potentially critical nuclear reaction when the fuel gets split up? No? Remember what happened to Chernobyl when a small gas explosion spread the core out everywhere? It completely shuts down the critical reaction. In other words, plutonium loves to accidentally blow up to early and just spread itself around without going critical. Not much of a weapon there. How do we have plutonium bombs then? Really smart people made special triggering mechanisms to make this happen. How do they do this? I dunno. If I did, I wouldn't be out in public writing a blog, I'd be doing super secret awesome research somewhere.
Turns out that only the US and a handful of other countries have figured this one out. So while any old fool can make a breeder reactor, the combined science of most nations is not good enough to figure out how to do it.
One last thing. If nuclear power is so difficult, how did so many countries get it? Well, US and Russia developed it. The US gave it to China at some point to balance some power issues with the Soviet Union. The US gave it to several other allies as well. The Soviet union gave it out some, too. China then distributed to crazies like North Korea years later, and Pakistan and India were given it through similar pathways. In other words, it is still pretty difficult to develop.
Hey, I just covered 4 whole things in one post, and managed to get more terrible jokes in. Awesome.
Aww darn, I forgot to include the small amount of original research I did on this topic. Next article | 2023-03-25 23:20:55 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214244723320007, "perplexity": 1725.7702424550494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00694.warc.gz"} |
https://notes.dsc80.com/content/04/cleaning.html | # Cleaning Messy Data¶
## Raw data doesn’t exist¶
A dataset, at best, represents a process under study. However, information is always lost and biases are often introduced between the occurrence of the process and recording the observations in a dataset. A person must decide what relevant measurements to make and implement the instrumentation. These human decisions create a gap between the process under study and the observations being used to study it.
The following terms, previously discussed in relation to the scientific method, describe the core objects of study in data science:
• A data generating process (DGP) is the underlying, real-world mechanism that generates the observed data.
• Observed data (creating a dataset) are incomplete artifacts of the data generating process.
• A (probability) model is a simplified explanation of the data generating process. The process described using a model is compared to the observed data to judge the quality of the explanation.
Example:
To make life more difficult, data scientists often find themselves working with already existing datasets created for purposes different than their current interest. In these cases, special care must be taken to understand both what process the dataset represents and how well it represents it. As such, the data scientist should be an ever-skeptical detective attempting to describe the story of how the dataset came to be, including:
1. a rough description of the assumptions on the underlying data generating process,
2. a detailed description of how the DGP came to populate each field in the dataset (the measurements),
3. a detailed description of any data processing and/or storage decisions that affected the values in the dataset.
These descriptions together form the data’s provenance.
“Messy data” are deviations from the process being modeled that are not due to randomness. Armed with the provenance for a dataset, the data scientist “cleans” the messy data to best reflect the data generating process.
## Data Cleaning¶
Definition: Data cleaning is the process of transforming data into a faithful representation of an underlying data generating process to facilitate subsequent analysis.
A common procedure for data cleaning often involves:
1. Fixing data types so columns reflect the kind of data in each field,
2. Resolving ambiguities and inconsistencies in the data,
3. Identifying corrupt or unlikely values for removal, fixing, or declaration as missing.
Each of these procedures will require applying an arsenal of new Pandas methods to the columns of an existing dataset, while storing the result in a cleaned dataset.
### Data types¶
Datasets are often designed for human consumption, rather than for computation. Thus, a data scientist attempting to simply describe the contents of a dataset must first fix data-type errors. Whenever possible, one should leverage existing dataset documentation (such as a data dictionary or a database schema) to set the data-types. In the absence of such resources, one must infer the correct types by reasoning about the process that generated the data.
• Cleaning the data types of the columns of a dataset involves resolving the data types present in the columns of a dataset with the kind of data represented in each column.
Such cleaning is illustrated in the example below.
Example: The observations contained in the dataset below consists of each campus of the University of California. These observations derive from the College Scorecard Dataset. Each observation consists of the follow attributes:
1. COLLEGE_ID: unique ID for each college
2. NAME: name of the institution
4. PCT_FED_LOAN: the percentage of students at the university who took out federal student loans
6. SETTING: the type of environment in which the university is located
uc
0 131200.0 University of California-Berkeley 30574.0 23.73% $13200 City 1 131300.0 University of California-Davis 30046.0 36.0%$14000 City
2 131400.0 University of California-Irvine 29295.0 36.4% $16500 Suburban ... ... ... ... ... ... ... 6 132000.0 University of California-Santa Barbara 22181.0 36.32%$16300 Suburban
7 132100.0 University of California-Santa Cruz 17577.0 46.41% $19700 Suburban 8 4127100.0 University of California-Merced 7375.0 50.13%$19018 Rural
9 rows × 6 columns
Any attempts to perform an exploratory data analysis on the columns of the uc dataset would quickly produce type-errors (verify this yourself!). To assess each field of the uc dataset, Pandas provides a DataFrame property .dtypes that returns a series with the data type of each column.
The table uc contain columns of float and object data-types. Recall that object columns contain mixed data-types and are usually indicative of the presence of string values. Determine which data-types in the table match the data-types implied by the kind of attribute each column contains.
uc.dtypes
COLLEGE_ID float64
NAME object
PCT_FED_LOAN object
SETTING object
dtype: object
COLLEGE_ID Column: This column is a unique identifier for each university. As there doesn’t seem to be a meaningful order to the column, it is likely nominal. Since floating-point numbers are subject to rounding errors, representing COLLEGE_ID as a float might change the unique identifier for an observation. Thus, such column should be represented as either an integer-type or string-type. While either choice is faithful to the meaning of the field, storing the values as int-types is more space-efficient.
Remark: Why are the values float-type in the first place? Data processing programs often incorrectly infer data-types. If a column is purely numeric, the program will assume the column is quantitative in nature.
Transform the type of the column using the astype Series method:
uc['COLLEGE_ID'].astype(int)
0 131200
1 131300
2 131400
...
6 132000
7 132100
8 4127100
Name: COLLEGE_ID, Length: 9, dtype: int64
PCT_FED_LOAN Column: This column represents a quantitative column, yet the column in Pandas is of object type. One can confirm these values are strings by inspecting one of the entries:
ucb_loans = uc.loc[0, 'PCT_FED_LOAN']
ucb_loans, type(ucb_loans)
('23.73%', str)
Because of the presence of the % symbol, Pandas doesn’t immediately know how to coerce the values of PCT_FED_LOAN to float-type (verify this!). The column requires cleaning first: strip off the %, then coerce the remaining number to a float-type. To strip off the % characters, use the string methods Pandas provides within its str name-space. The cleaning steps are as follows:
1. strip the % character off the end of each string using Series.str.strip,
2. coerce the remain string to a float using astype,
3. convert the percentage (between 0-100) to a proportion (between 0-1).
(
uc['PCT_FED_LOAN']
.str.strip('%')
.astype(float)
/ 100
)
0 0.2373
1 0.3600
2 0.3640
...
6 0.3632
7 0.4641
8 0.5013
Name: PCT_FED_LOAN, Length: 9, dtype: float64
MEDIAN_GRAD_DEBT Column: Similarly, the median debt at graduation is a quantitative column being represent as a string-type because of the presence of the $character. A similar process translates these formatted strings into float-types: ( uc['MEDIAN_GRAD_DEBT'] .str.strip('$')
.astype(float)
)
0 13200.0
1 14000.0
2 16500.0
...
6 16300.0
7 19700.0
8 19018.0
Name: MEDIAN_GRAD_DEBT, Length: 9, dtype: float64
SETTING Column: This column is ordinal, as there is an ordering of a university’s setting by population density. The ordering of the values is:
Rural < Suburban < City < Urban
There are three options for cleaning an ordinal column, each with advantages and disadvantages. The ordinal column can be left as is, using the given string representation, mapped to integer values reflecting the ordering described above, or encoded using a Pandas Categorical data-type.
For example, one can create an explicit integer-encoding by passing a dictionary to the replace method:
encoding = {
'Rural': 0,
'Suburban': 1,
'City': 2,
'Urban': 3
}
uc['SETTING'].replace(encoding)
0 2
1 2
2 1
..
6 1
7 1
8 0
Name: SETTING, Length: 9, dtype: int64
Alternatively, Pandas has a categorical data-type that does this conversion automatically. However, a categorical data-type representing an ordinal column must have the order explicitly defined:
values = ['Rural', 'Suburban', 'City', 'Urban'] # with order!
setting_dtype = pd.CategoricalDtype(categories=values, ordered=True)
setting_dtype
CategoricalDtype(categories=['Rural', 'Suburban', 'City', 'Urban'], ordered=True)
uc['SETTING'].astype(setting_dtype)
0 City
1 City
2 Suburban
...
6 Suburban
7 Suburban
8 Rural
Name: SETTING, Length: 9, dtype: category
Categories (4, object): [Rural < Suburban < City < Urban]
UNDERGRAD_POP Column: This column is quantitative, as arithmetic on this column makes sense. For example, the average or total undergraduate populations across schools represent meaningful quantities. As such, even though the column represents integer counts (students don’t come in fractions!), float-types can be considered appropriate, along with integer-types.
Finally, one can combine the cleaning steps above into a new table:
cleaned_uc = (
pd.DataFrame().assign(
COLLEGE_ID=uc['COLLEGE_ID'].astype(int),
NAME=uc['NAME'],
PCT_FED_LOAN=(uc['PCT_FED_LOAN'].str.strip('%').astype(float) / 100),
MEDIAN_GRAD_DEBT=uc['MEDIAN_GRAD_DEBT'].str.strip('$').astype(float), SETTING=uc['SETTING'].astype(setting_dtype) ) ) cleaned_uc COLLEGE_ID NAME UNDERGRAD_POP PCT_FED_LOAN MEDIAN_GRAD_DEBT SETTING 0 131200 University of California-Berkeley 30574.0 0.2373 13200.0 City 1 131300 University of California-Davis 30046.0 0.3600 14000.0 City 2 131400 University of California-Irvine 29295.0 0.3640 16500.0 Suburban ... ... ... ... ... ... ... 6 132000 University of California-Santa Barbara 22181.0 0.3632 16300.0 Suburban 7 132100 University of California-Santa Cruz 17577.0 0.4641 19700.0 Suburban 8 4127100 University of California-Merced 7375.0 0.5013 19018.0 Rural 9 rows × 6 columns cleaned_uc.dtypes COLLEGE_ID int64 NAME object UNDERGRAD_POP float64 PCT_FED_LOAN float64 MEDIAN_GRAD_DEBT float64 SETTING category dtype: object Example: Sometimes a column contains values that do not easily coerce to the intended type. For example, suppose the Series given below needs to be represented as a column of float-type: numbers = pd.Series('1,2,2.5,3 1/4,6 7/8,8.3,9,11,99/3'.split(',')) numbers 0 1 1 2 2 2.5 ... 6 9 7 11 8 99/3 Length: 9, dtype: object Attempting to use astype(float) will result in a ValueError, as python doesn’t know how to convert ‘3 1/4’ to a float. To convert this column to float type, one must write a custom function that handles the case of fractional values. def convert_mixed_to_float(s): '''converts a string representation of an integer, decimal, or proper fraction to a float''' if '/' not in s: return float(s) else: if ' ' in s: whole, frac = s.split(' ') else: whole, frac = '0', s num, denom = frac.split('/') return float(whole) + float(num) / float(denom) numbers.apply(convert_mixed_to_float) 0 1.0 1 2.0 2 2.5 ... 6 9.0 7 11.0 8 33.0 Length: 9, dtype: float64 Remark: The pandas function pd.to_numeric(series, errors='coerce') blindly coerces values of a Series to numeric values. When the keyword argument errors='coerce' silently replaces any non-coercible values with a missing value. This often leads to dropping data that never should have been dropped; non-coercible data should be handled with care, as it’s often present due to systematic issues with how the data was recorded. If such a function was used on numbers, 1/3 of the data would be lost! #### Summary: Cleaning up data-types¶ Quantitative Columns should be represented with numeric data types (float and integer types). However, float-types should only represent quantitative columns. As float values are subject to precision errors, one can only be guaranteed that a value is represented by a similar value. This similarity only makes sense with quantitative data. Ordinal data should be represented with a data type that supports ordering (e.g. string or integer types) and has infinite precision (so the label doesn’t change). Three common approaches are outlined below: 1. left as is, using the given string representation. • Advantages: easy to interpret the values (they remain unchanged) • Disadvantages: the string ordering may not match the ordering of the column; strings are inefficient to store. 2. mapped to integer values, reflecting the ordering of the values of the column (called integer coding). • Advantages: the ordering of the ordinal values are captured by the integers; integers are efficient to store. • Disadvantages: integers obscure what the value originally represented. 3. encoded using a Pandas Categorical data-type. • Advantages: stores and orders values as integers, while displaying them as strings. • Disadvantages: relatively new feature in Pandas that may not always be available to use. Nominal data should be represented by a data type with infinite precision. As the values are merely labels, any change in the values would be an arbitrary change to the values of the column. ### Missing data¶ While Pandas encodes missing values using the value NaN, missing data may appear in a dataset in a variety ways. Different data processing and storage systems may use different conventions (e.g. NULL, None, "") or may disallow missing values entirely. This section focuses on strategies for identifying the missing values in a dataset; later, the course confronts the problem of how to handle the identified missing values. Common examples of ‘placeholder’ missing values that are not obviously missing at first glance: 1. The integer 0 is a common stand-in for missing values, especially when the value is unlikely to occur in a dataset. For example, someone of age 2. The integer -1 is a common stand-in for missing values for columns containing non-negative values. For example, a column containing the “number of children in a family” might use -1 as a missing value. 3. 1900 and 1970 are common placeholder for missing dates in Excel and UNIX respectively. Each of these conventions picks a time as ‘time zero’ and stores dates and times as “time elapsed since time zero” (1970 was roughly the year UNIX was created). In this way, these placeholders are both special case of using 0 as a missing value. Example: The fitness tracking app Strava encodes missing location coordinates as 0,0. This encoding gives the impression of Strava user activity at 0°00’00.0”N+0°00’00.0”E, dubbed Null Island. • While this is a real location, it’s location in the middle of the Atlantic ocean makes it highly unlikely to be a meaningful value. • The common usage of 0 as a missing value makes it likely that this activity comes from users with their location tracking disabled. Example: The bike sharing company Bluebikes makes bike usage data available for the city of Boston. Plotting the distribution of Bluebikes users’ year of birth reveals a large spike of riders born in the year 1969. Is this spike unreasonably large? Or is it possible riders of exactly that age use the bike share service that much more? Most likely these values are missing values, as 1969 corresponds to the value -1 when encoded as a UNIX time-stamp. More problematic is that there really are riders whose birth year is 1969! In this case, telling apart the missing values from the true values may be difficult. Example: The college scorecard data contains information on many colleges beyond the University of California campuses. The table colleges below contains a much larger sample of universities across the United States. colleges = pd.read_csv('data/colleges.csv') colleges.sample(5) COLLEGE_ID NAME UNDERGRAD_POP PCT_FED_LOAN MEDIAN_GRAD_DEBT SETTING 4261 1005200.0 Jefferson Lewis BOCES-Practical Nursing Program 77.0 68.18%$7250 Rural
1604 2099700.0 Ross Medical Education Center-Flint 153.0 62.5% $9500 Suburb 830 1113300.0 College of Eastern Idaho 698.0 34.04%$9500 Urban
6031 4241500.0 Wave Leadership College 52.0 53.23% PrivacySuppressed Urban
4814 3855300.0 Ecclesia College 229.0 51.29% $16500 Rural The previous code written for cleaning MEDIAN_GRAD_DEBT no longer works in this larger dataset (check this code throws an exception), as some values don’t conform to the standard representation of US dollars. Before generalizing the cleaning code to the larger dataset, it’s necessary to understand the extent to which these “unusual values” are present. def check_float(x): '''returns true if a value is coercible to a float; otherwise returns false''' try: float(x) return True except ValueError: return False Print the distribution of values that aren’t of the the form$ followed by a string coercible to a float value:
nonnums = (
.str.strip('$') .apply(check_float) ) colleges.loc[~nonnums, 'MEDIAN_GRAD_DEBT'].value_counts() PrivacySuppressed 1175 Name: MEDIAN_GRAD_DEBT, dtype: int64 In the larger dataset, the only value that doesn’t represent dollar amounts is ‘PrivacySuppressed’. The meaning of this value is explained in the dataset documentation: any data not reported in order to protect an individual’s privacy are shown as PrivacySuppressed. Thus, it seems reasonable to interpret this value as a missing value and replace it with NULL. def median_grad_debt_cleaner(ser): return ser.replace('PrivacySuppressed', np.NaN).str.strip('$')
colleges_cleaned = colleges.agg(csc_cleaning_map)
colleges_cleaned
0 100200 Alabama A & M University 4824.0 0.7697 32750.0 Urban
1 105200 University of Alabama at Birmingham 12866.0 0.5207 21833.0 Urban
2 2503400 Amridge University 322.0 0.8741 22890.0 Urban
... ... ... ... ... ... ...
6175 4106301 Palm Beach Academy of Health & Beauty-Distinct... 5.0 0.0714 6333.0 NaN
6176 295600 Piedmont International University 336.0 0.4847 12498.0 Urban
6177 4250501 National Personal Training Institute-Tampa 32.0 0.2982 NaN NaN
6178 rows × 6 columns
### Unfaithful data¶
The provenance of a dataset, from a real-world event to a dataset displayed in a notebook, is often long and complicated. However, for this data to be of any use, one must assess how well it captures the “reality” it’s meant to describe. Once a dataset is properly typed, it should be assessed for its faithfulness to the data generating process.
Generally, such an assessment involves asking if the data contain unrealistic or “incorrect” values? For example:
• Are there dates in the future for events that occurred the past?
• Are there locations in the dataset that don’t exist?
• Are there negative counts? (is that a missing value?)
• Are names misspelled? Do single names have variants such as nicknames?
• Are there unreasonably large outliers?
Assessing for faithfulness involves being skeptical of the data known data provenance and doing research into the assumptions on how the data were generated. This assessment generally involves understanding and identifying problems with the data, then attempting to “fix” the problem. How to fix unfaithful data depends on the context:
• Poor quality observations may be dropped if there are very few or sufficient random.
• The likely values might be (approximately) inferable from either researching into data provenance, or from other values in the dataset.
• The source of problem might be fixed through changes in the experimental design, instrumentation, or data collection.
Assessing the faithfulness of the data to the data generating process involves interpreting exploratory data analyses of the attributes.
Example: The ‘UNDERGRAD_POP’ attribute represents the count of undergraduates at colleges in the dataset. An initial assessment of this attribute involves plotting the distribution of undergraduates and analyzing the values and their proportions:
kind='hist', bins=25, logy=True,
);
One interesting observation is how many small colleges are in the dataset (note the log-scale!). How small are they? Are the values ever negative?
Plotting colleges with fewer than 1 student reveals six colleges with an undergraduate population of size 0 and no colleges with negative student populations:
# What could this possibly mean?
2702 2154402 Beau Monde College of Hair Design 0.0 0.0962 8973.0 Urban
3107 2188300 Pentecostal Theological Seminary 0.0 0.0000 NaN Urban
4934 489812 Miami-Jacobs Career College-Springboro 0.0 0.9706 24549.5 NaN
5164 469205 Dorsey Business Schools-Farmington Hills 0.0 1.0000 12986.0 Urban
5574 1091302 Minneapolis Media Institute 0.0 0.8706 14438.0 NaN
6155 405740 National American University-Westwood Teach-Ou... 0.0 0.7027 30223.0 Urban
How might a school have an undergraduate population of size zero? Is this a mistake or reality?
The answer may be in how the government defines an undergraduate student. It’s reasonable to assume that certain schools, like beauty schools or theological seminaries, might service populations outside of an undergraduate program. However, this also brings up the question of why there aren’t more colleges with an undergraduate population of zero! More research is necessary to understand the reasonableness of a zero value in this column.
Example: The college NAME attribute contains the name of each college in the dataset. One might expect this column to uniquely represent a college; computing the head of the empirical distribution of college names reveals this is not the case:
colleges_cleaned.NAME.value_counts()
Stevens-Henager College 7
Columbia College 5
McCann School of Business & Technology 5
..
Greensboro College 1
University of California-Davis 1
Name: NAME, Length: 6068, dtype: int64
Among these 7 duplicate ‘Stevens-Henager Colleges’, are they:
• Truly the same college? (e.g. an error in recording)
• Are they different, but somehow related? (e.g. different branches of the same school)
• Are they totally unrelated? (e.g. a total coincidence they have the same name).
colleges_cleaned[colleges_cleaned.NAME == 'Stevens-Henager College']
3384 367400 Stevens-Henager College 205.0 0.8255 27139.0 NaN
3385 367401 Stevens-Henager College 143.0 0.7831 27139.0 Urban
4521 367403 Stevens-Henager College 563.0 0.7481 27139.0 NaN
4833 367405 Stevens-Henager College 72.0 0.7561 27139.0 Urban
5438 367406 Stevens-Henager College 259.0 0.7057 27139.0 Urban
5576 3120302 Stevens-Henager College 111.0 0.8455 25732.0 Rural
5681 367411 Stevens-Henager College 111.0 0.8281 27139.0 Urban
Since the school statistics differ across the seven colleges, the observations are likely not true duplicates. However, they might be somehow related, as the COLLEGE_IDs are close. More research is necessary to answer this question fully. Next steps might include:
• Retrieving the other attributes from the full college scorecard dataset (e.g. location),
• Looking up the colleges from another source to match the schools in the table to outside information.
### Code Design¶
The task of data cleaning can be complicated and ad-hoc. The decisions involved in developing data cleaning code often uses domain-specific judgment calls that are far from obvious to someone attempting to understand the process (including the developer themselves, not long after writing the code). Data Scientists should write organized, easy-to-read, easy-to-adapt data cleaning code.
An reasonable approach to writing cleaning code that deals with one column at a time is to:
1. create a cleaning function for each column,
2. store the cleaning functions in a dictionary, keyed by the name of the column to be cleaned,
3. apply the cleaning functions to the columns using DataFrame.agg.
While it may seem overkill to write a separate function for each column, this boilerplate has several advantages over ad-hoc procedural code:
• The docstring of each cleaning function can contain descriptions of both the cleaning itself as well as assumptions made in the development of the cleaning logic.
• Organizing logic into separate functions reduces code complexity as columns are added/changed, or as the cleaning code evolves into more sophisticated logic.
• Applying a dictionary of functions via agg allows same code structure to work with parallel processing library such as Spark and Dask.
Example: Using this approach, the cleaning code for the College Scorecard data looks as follows:
def college_id_cleaner(ser):
'''returns identifier COLLEGE_ID as an integer type'''
return ser.astype(int)
def pct_fed_loan_cleaner(ser):
'''returns PCT_FED_LOAN as a proportion between 0 and 1.'''
return ser.str.strip('%').astype(float) / 100
'''returns MEDIAN_GRAD_DEBT as a float (in USD)'''
return ser.replace('PrivacySuppressed', np.NaN).str.strip('\$').astype(float)
def setting_cleaner(ser):
'''returns SETTING column as a category data-type
ordered as RURAL < SUBURBAN < CITY < URBAN'''
return ser.astype(setting_dtype)
csc_cleaning_map = {
'COLLEGE_ID': college_id_cleaner,
'PCT_FED_LOAN': pct_fed_loan_cleaner,
'SETTING': setting_cleaner
}
csc_cleaning_map = {x: csc_cleaning_map.get(x, lambda x:x) for x in colleges.columns}
colleges.agg(csc_cleaning_map) | 2023-04-01 17:36:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19159407913684845, "perplexity": 4720.195268647615}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00152.warc.gz"} |
https://www.physicsforums.com/threads/binomial-distribution-question.468220/ | # Binomial Distribution Question
## Homework Statement
The question provides a table and asks:
Number of Attempts Fraction persisting in fibrillation
0 1.00
1 0.37
2 0.15
3 0.07
4 0.02
"Assume that the probability p of defibrillation on one attempt is independent of other attempts. Obtain an equation for the probability that the patient remains in fibrillation after N attempts. Compare it to the data and estimate p."
## Homework Equations
Binomial Distribution
## The Attempt at a Solution
I used the binomial distribution for my equation to estimate the probability that the patient remains in fibrillation. I'm not concerned about the "number of successes" in each attempt, so I believe this problem is similar to asking a coin toss question. For example, the probability that a coin will return heads after 1 attempt is 0.50. After 2 attempts, 0.5*0.5, etc.
Likewise, there are two possibilities: fibrillation and defibrillation. Instead of the coin example, the probability that the patient remains in fibrillation is 0.37. After two attempts, 0.37*0.37. After 3 attempts, 0.37*0.37*0.37, etc. It models the data rather well.
So then to estimate "p", the probability of defibrillation in each, p+q = 1 ---> p= 1-q
Does this sound reasonable?
## The Attempt at a Solution
Related Advanced Physics Homework Help News on Phys.org
Andrew Mason
Homework Helper
Obtain an equation for the probability that the patient remains in fibrillation after N attempts. Compare it to the data and estimate p."
## The Attempt at a Solution
I used the binomial distribution for my equation to estimate the probability that the patient remains in fibrillation. I'm not concerned about the "number of successes" in each attempt, so I believe this problem is similar to asking a coin toss question. For example, the probability that a coin will return heads after 1 attempt is 0.50. After 2 attempts, 0.5*0.5, etc.
Likewise, there are two possibilities: fibrillation and defibrillation. Instead of the coin example, the probability that the patient remains in fibrillation is 0.37. After two attempts, 0.37*0.37. After 3 attempts, 0.37*0.37*0.37, etc. It models the data rather well.
So then to estimate "p", the probability of defibrillation in each, p+q = 1 ---> p= 1-q
Does this sound reasonable?
The question asks for an equation. What is your equation for the probability that the patient remains in fibrillation after N attempts.
AM | 2020-05-25 12:31:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8066902160644531, "perplexity": 1273.1351115229836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388427.15/warc/CC-MAIN-20200525095005-20200525125005-00208.warc.gz"} |
http://math.stackexchange.com/questions/386856/how-do-i-prove-this-trigonometric-expression | How do i prove this trigonometric expression?
How do you prove this? $${(1-2\sin^2A)^2 \over \cos^4A-\sin^4A} = {2\cos^2A - 1}$$
-
Express everything in terms of $u = \cos A$, and go from there... – vonbrand May 9 '13 at 18:14
I don't.${}{}{}$ – Michael Greinecker May 9 '13 at 21:12
\begin{align} \frac{(1-2\sin^2(A))^2}{\cos^4(A)-\sin^4(A)} &=\frac{(1-2\sin^2(A))^2}{(1-\sin^2(A))^2-\sin^4(A)}\\ &=\frac{(1-2\sin^2(A))^2}{1-2\sin^2(A)}\\ &=1-2\sin^2(A)\\[9pt] &=1-2(1-\cos^2(A))\\[9pt] &=2\cos^2(A)-1 \end{align}
-
Recall the following expressions for $\cos(2A)$: $$\color{magenta}{\cos(2A)} = \cos^2(A) - \sin^2(A) = \color{red}{2\cos^2(A)-1} = \color{blue}{1-2\sin^2(A)} = \color{green}{\cos^4(A) - \sin^4(A)}$$ Hence, we have $$\dfrac{\left(\color{blue}{1-2\sin^2(A)}\right)^2}{\color{green}{\cos^4(A) - \sin^4(A)}} = \dfrac{\color{magenta}{\cos^2(2A)}}{\color{magenta}{\cos(2A)}} = \color{magenta}{\cos(2A)} = \color{red}{2\cos^2(A)-1}$$
-
${(1-2\sin^2A)^2 \over \cos^4A-\sin^4A}$
$cos^22A \over \ (cos^2A-\sin^2A)(cos^2A+\sin^2A)$
$cos^22A \over \ (cos^2A-\sin^2A) \ *\ 1$
$cos^22A \over \ (cos2A)$
cos2A
${2\cos^2A - 1}$
-
Observe that we need to eliminate $\sin A$
So using $\sin^2A=1-\cos^2A,$
$1-2\sin^2A=1-2(1-\cos^2A)=2\cos^2A-1$
and $\cos^4A-\sin^4A=(\cos^2A-\sin^2A)(\cos^2A+\sin^2A)=1\cdot\{\cos^2A-(1-\cos^2A)\}=2\cos^2A-1$
or $\cos^4A-\sin^4A=\cos^4A-(1-\cos^2A)^2=2\cos^2A-1$
- | 2016-05-01 20:11:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8981210589408875, "perplexity": 3904.18226768548}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00195-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://dsp.stackexchange.com/questions | # All Questions
5 views
### How to calculate these Jacobian matrices
While trying to understand this paper: http://www.icg.tugraz.at/publications/pdf/ismar2013_userfriendlyslaminitialization/at_download/file I got stuck on section 3.1, where the Jacobian matrices are ...
4 views
### Low pass filter to maintain edge information
I am looking for a kernel as low pass filter that satisfy as:I must find a kernel that statisfies as follows: In the my reference paper, the author suggest gaussian kernel that is The gaussian ...
10 views
### how can i design the FIR Filters using window function in MATLAB ;without using inbuilt functions [on hold]
please provide me code for --> how can i design the FIR Filters using window function in MATLAB ;without using inbuilt functions.
8 views
### image processing
If I have the histogram of an input image as Gaussian probability density function of the form: $$P_r(r)=\dfrac{1}{\sqrt{2\pi}\sigma}e^{-\dfrac{(r-m)^2}{2\sigma^2}}$$ where: $m$ and $\sigma$ are ...
25 views
### Exercise related to frequency resolution and SNR
I'm studying a book aboud dsp and trying to make excercises. Here is one I'm interested in: A scientist acquires 65,536 samples from an experiment at a sampling rate of 1 MHz. He knows that the ...
22 views
### Z-Transform amplitude
I've tried to compare the Bode plot of a discrete-time system with a manually computed Z-transform in Matlab like this: ...
168 views
### Number of FFT points required for a specific frequency resolution for an oversampled signal
I have a bandpass signal centered at 2 MHz and bandwidth of 50 kHz (the signal frequency varies from 2 MHz - 25 kHz to 2 MHz + 25 kHz). This signal is being sampled at 10 MHz. I want a frequency ...
31 views
### Comparison between average kernel and gaussian kernel?
In image processing, we have two kinds of major kernels that are average kernel and gaussian kernel. For image segmentation, which is difference between average kernel and gaussian kernel? I found ...
6 views
### Remeshing methods to obtain same number of vertices and faces in aligned meshes
I've a set of aligned meshes (human femura surfaces). I would need to remesh each of these surfaces so they have exactly the same number of vertices and triangles. I need this because I'm building a ...
13 views
### algorithm Matlab function 'stream2' and 'stream3' use to compute streamline
Can anyone explain how the MATLAB functions: stream2(x,y,z,u,v,startx,starty) and stream3(x,y,z,u,v,startx,starty) compute stream lines from vector data u and v? In the stream2.m file, it calls the ...
74 views
### Trignometric Fourier series representation of a continous time signal
While learning Fourier series I read the definitions of representation for a continuous time signal $x(t)$ as: x(t)=A_0 + 2 \sum_{k=1}^{\infty} A_k \cos(k \omega_0 t) - B_k \sin(k \omega_0 t) ...
37 views
### Design and implementation of causal band-pass filter for biosignals - what to consider?
I currently work in a project of clinical software (C#) which deals with clinical biosignals, specifically EMG and other low-frequency signals (load-cells, goniometers, and other biomechanical ...
20 views
### How is noise variance related to bandwidth?
I have to calculate asymptotic coding gain with soft decision decoding over an uncoded reference system. Lets say the coded system requires a bandwidth $k$ times higher than the bandwidth of the ...
38 views
### Writing a Discrete Fourier Transform program
I would like to write a DFT program using FFT. This is actually used for very large matrix-vector multiplication (10^8 * 10^8), which is simplified to a vector-to-vector convolution, and further ...
25 views
### what are and why are sine and cosine modulated integrals used?
I have found the definition of the following formulas in a paper regarding active vibration control, where they are called sine and cosine modulated integrals. $y$ is measurement signal with a strong ...
17 views
### Clustering of overlapping windows (K Means)
I'm attempting the clustering of some of my data using a K-means Euclidean square method. I've broken the data down into small windows, 200 samples in size, but with a 50% overlap. I've chosen to use ...
39 views
### correct way to implement windowing
I'm trying to implement the windowing in a program, for that I've wrote a sin function with 2048 samples. I'm reading the values and trying to calculate the PSD using the "rect" window. When my window ...
39 views
### Least mean square filter diverging
I have used LMS algorithm to estimate signal in presence of high noise which includes chaotic and random. MSE values in db at each SNR for the coefficients is +ve eventhough I converted to Db scale. ...
24 views
### Measuring spectral tilt
I have quite a few noisy signals and I want to calculate their spectral tilt over time, preferably using a method from literature. So far, I can only come up with the slope of the line between the F0 ...
40 views
### What Information does the phase of a cross power spectra give me?
I obtain the cross power spectra by the following steps: compute the FFT of signal A and B Multiply A with the conjugate of B store it in C (cross power spectra) Now looking at the Phase of this ...
66 views
### “chirp” with arbitrary period
Say you have a linear chirp, which is a bit like a sinusoid with a gradually increasing period, but instead of the linearly increasing period, could you pick an arbitrary value, like the red line in ...
77 views
### Running an FFT filter on a large data set
I have a non realtime application where I need to run a bandpass FFT filter on a data array of between 5k and 10k data points. Do I break it up into (say) 256 point chunks, run the FFT on that and ...
52 views
### Generating a high SNR sinusoide with lut on a DSP
I'm generating a sinus by using a lut method on a DSPic33f. My sample rate is 48 kHz, so I saved 12000 of the first value (unsigned int, 16 bits) and use trigonometric formulas to calculate the other ...
40 views
### Determine whether a signal is periodic or not and get fundamental period
Here is my signal Cos(n/2)*cos(pi*n/4) cos(n/2) has period 4pi and cos(pi*n/4) has period 8 Now, the question is will the signal be periodic for fundamental ...
34 views
### Real Time Object Tracking using image processing
How we can track a single selected object from multiple detected moving objects using single fixed camara?
50 views
### FFT Matlab - Meaning of Frequency Vector
I'm following a tutorial about the FFT. It's well explained but I don't understand the meaning of the frequency vector: ...
31 views
### When does l1 regularisation give a sparse solution?
I was maximising a likelihood function, which is convex. I know that the system has a K-sparse solution. I wanted to know the conditions (or some sufficient conditions) on the likelihood function ...
34 views
### Broad band transmission [on hold]
Why can't we send digital signal directly to the band pass channel?
26 views
### Does voice copying consider the chemicals uracil and cytosine that make up words that are spoken? [on hold]
Can voice copying technology mimic/copy/fake someone's voice and then be added to a recording to make it appear that someone has said something that they have not said?
27 views
### How to mimic/copy/fake someone's voice and then add it to a recording? [duplicate]
Does voice copying technology have the ability to copy uracil and cytosine the chemicals that create words that get spoken?
53 views
### Analog Hilbert transformer
I know the FIR approach, I have seen IIR, to, but I'd like to know if it's possible to implement a Hilbert transformer in analog domain, i.e. with integrators instead of delays. Is it possible? If ...
80 views
### How to model Tape Saturation (Audio DSP)?
I'm looking for info that would help me to build a Tape Saturation process. I'm working with the WebAudio API. The API provides a number of DSP nodes that perform basic processes, such as a Convolver ...
51 views
### How to remove background noise from sound file for analysis
I'm working on a project at my university and it involves background noise recognition/classification. I'm very new to this so I'm unsure what to really search/google or where to really begin any ...
22 views
### Constant Q transform and time synchronisation
I'm looking to use the constant Q transform for onset detection and am having a bit of trouble aligning the time axis of the transform with actual note events. I'm using the CQT transform toolbox ...
28 views
### What are the eigenvectors of Laplace/Z transform?
I understand that $\ddot{y} - a\dot{y} = 0$ then equation turns into $we^{iwt}-ae^{iwt} = 0$ and, further, into $w^2e^{iwt}-ae^{iwt} = 0$. So, $e^{iwt}$ is an eigenvector of differentiation opeartor, ...
51 views
### The role of GPS in INS/GPS navigation systems
Ideally, a gyroscope and an accelerometer would be enough for a complete navigation solution (attitude + position), using dead reckoning. This comprise the Inertial Navigation System, INS. In ...
29 views
### Conceptual question on information theory (Part1) [duplicate]
Shannon's entropy measures the information content by means of probability. Is it the information content or the information that increases or decreases with entropy? Increase in entropy means that ...
30 views
### Confusion related to terms: information, information content and entropy (Part2)
Shannon's entropy measures the information content by means of probability. Is it the information content or the information that increases or decreases with entropy? Increase in entropy means that ...
44 views
### Difficulties in understanding mutual information concept
For two signals or random variables to be independent, the mutual information (MI) must be zero. Let, $error = X -Y$ where X is the desired signal and Y is the measured signal. It is desired that ...
31 views
### Approximate a system frequency response with a filter in Matlab
suppose I know the frequency response of a (linear) model approximating a real physical system but only at a specific frequency $f_0$ (so basically I have a complex number whose module is the ...
26 views
### What are some example of applications for LTI systems
I'm giving a lecture on LTI systems. I encountered some questions: for a discrete LTI system H with impulse response h, is the system applied on signal x(t) equals x*h - normal discrete convolution ...
34 views
### Make a movie from multiple images in MATLAB
I have 10 images in the folder say "D:\images\" named as frame1,frame2,...,frame10. I want to make a movie from the color images with frame rate as per choice.Also i want to save created video file in ...
16 views
### Low dimensional system identification algorithms
I'm a physics Phd student and just have a self-taught knowledge about lti-systems. After a while I found out that several techniques such as heat capacity or thermal-conductance measurements can be ...
15 views
### Price of a hyperspectral camera?
Browsing through the questions on this site, I see that there are many about processing hyperspectral images. Because hyperspectral cameras are still not too common, it is difficult to find the price ...
21 views
### Confusion related to entropy and information
In general, entropy of a signal or image conveys the uncertainty and it is a measure of impurity. Information on the other hand tells us about how certain we are about the data. It is a measure of ...
9 views
### coefficients of farrow structure
what is the method for generating coefficients for farrow structure in matlab. I am designing a low pass filter using firpm but the output of farrow structure is not delayed.
42 views
### Conceptual question on entropy and its relation to information
Learning Informative Statistics: A Nonparametric Approach paper presents an approach to parameter estimation by entropy minimization. There are other related works "Minimum-entropy estimation in ...
47 views
### How to simulate AWGN in communication systems for specific bandwidth
I am trying to generate a AWGN waveform to add it to the signal of my simulated communication system. The operating bandwidth of the communication system is about ... | 2014-08-01 22:28:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111271500587463, "perplexity": 1576.0509484170618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275393.46/warc/CC-MAIN-20140728011755-00449-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/285816/can-someone-identify-this-math-font/285818 | # Can someone identify this math font?
I'm doing my master's thesis and trying to find a good font for math symbols and formulas...
I would like to use the following font style:
Can anyone recognize this? How to get X the way it is in this formula etc. in Word 2007? =)
Thnx
-
There's an equation editor in there. – hjpotter92 Jan 24 '13 at 12:49
Okay so I just use it to get X or Y with that style? =) – jjepsuomi Jan 24 '13 at 12:50
The time you are spending here in trying to learn these things about Word 2007 it is better to be applied in learning Latex. – Tomás Jan 24 '13 at 14:59
Well, "user1565754" doesn't say what his area of study is. Unless it is math, computer science, or maybe physics, then it could be that Word is required to be used. Strange but true! – GEdgar Jan 24 '13 at 17:26
I would highly recommend using something other than Word 2007 for your thesis. You seem to know how to use $\LaTeX$, as you used it to create your post, so why not write your thesis in it? There are a number of guides on the internet for creating documents in $\LaTeX$, one frequently recommended one is this, from the Art of Problem Solving. Although if you are using Word 2007, there is a formula editor.
-
Hey :) No I didn't use LATEX, that is a screenshot =) but thank you anyway – jjepsuomi Jan 24 '13 at 12:54
@user1565754 my apologies. But it's not particularly complicated to learn, and is a very useful skill for many things, not just mathematical papers. – Sam DeHority Jan 24 '13 at 12:55
No problems =) Thank you! I will check that out =) – jjepsuomi Jan 24 '13 at 12:57
@user1565754 You should definitely go for LaTeX. However, should you happen to lack the time to really learn it, you could use e.g. the LaTeX Equation Editor which allows you to create formulas as .emf files to import in word. That's still easier than using Word's formula editor (and also works if you are forced to use ... PowerPoint (shiver)) – Tobias Kienzler Jan 24 '13 at 15:47
May I recommend beamer if you need to give a presentation. If anyone tries to force you to use powerpoint or word... flee. – Alexander Gruber Jan 24 '13 at 17:17
People who already know LaTeX are usually fond of it. People who don't know it typically have a hard time getting started, and they have an even harder time if they want to do any document formatting that's a bit out of the ordinary. In my opinion, there's absolutely nothing wrong with the Equation Editor in MS Word, but I expect most people on this site are LaTeX veterans, and they will disagree with me. I have used both packages fairly extensively, and I'm confident that either of them would do a nice job of your thesis.
The specific font (family) shown in your image is called Computer Modern. It's the font traditionally used with TeX and LaTeX systems. The standard font used by the MS Word equation editor is called Cambria Math. If you want to get the Computer Modern look in MS Word, you have to use a font called Latin Modern. If you want the Cambria Math look in LaTeX, you have to use the "fontspec" package or something equivalent. Here is what your formula looks like with the Cambria font in MS Word:
If you are considering a career as an academic, and you'll be publishing a lot of papers containing mathematics, then learning LaTeX is definitely worthwhile. Outside that community, MS Word is the standard, so learning to use that competently would be more valuable, in my view.
You might find that your university has some sort of thesis template that they will force you to use. So, you may not have a choice in this matter.
I expect this post will generate some heat. People get pretty passionate about (La)TeX, for some reason.
-
I'm not going to give you flack, you actually answered the question. I find it odd that the one that didn't answer the question got accepted. – ghoppe Jan 24 '13 at 19:38
Even more odd -- a reply that didn't really answer the question got a lot of up-votes. Typical of the irrational behaviour that muddies the Tex-vs-Office debate. I don't mind the flack, especially if it comes from people who actually have some expertise with Word or Powerpoint. – bubba Jan 25 '13 at 1:49
Thank you for your answer! Sorry I accepted the answer before, he just gave the answer faster and I decided to switch to LaTex =) but thank you anyway! I would give points to everybody if I could, but I have to pick only one :( – jjepsuomi Mar 15 '13 at 10:50
From the MS Office documentation:
Equations are edited directly from within Word. To do this, click the Insert tab then click the Equation button.
Anyways, the font is Cambria Math. It comes prebundled with Microsoft Office 2003 and above.
-
The formula looks pretty much as if it was produced by (La)$\TeX$, the same typesetting system that is used for fomulas on this site (via MathJax); for comparison here is the same formula: $$\mu_{xt}=E(x_t)=\int_{-\infty}^\infty xf_t(x)dx$$ If you want to see the sources for this, right-click on the formula and choose "Show Math As -> TeX Commands". You can also play with the Math Settings for instance to zoom in on the formula. | 2014-04-21 07:06:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825329065322876, "perplexity": 1002.979706219878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.majiajun.org/talks/ | # Talks
Local theta correspondence between supercuspidal representations
Jul 23 2014 at NUS, Sep 05 2014 at IMS CUHK, July 26 2015 at Zhejiang University,
April 22 2015 at NUS, May 08 2015 at HKUST, Dec 25 2015 at Kyoto University
Theta correspondences and representations of classical groups
Oct 09, 2014, School of Mathematical Science, Suzhou University, China
Explicit local theta correspondences between (epipelagic) supercuspidal representations
May 28 2014, Algebraic Geometry and Number Theory Seminar, Ben Gurion University, Israel
Associated varieties and associated cycles of local theta lifts
Feb 26 2014, Faculty of Mathematics and Computer Science, The Weizmann Institute of Science, Israel
Theta lifts of one-dimensional representations
Dec 05 2012, Symposium on Representation Theory 2012, Kagoshima, Japan
Derived functor modules, dual pairs and $U(\mathfrak{g})^K$-actions
Jan 28 2011 at Workshop on Geometry and Representation Theory IMS NUS,
Mar 15 2012 at Conference on Branching Laws IMS NUS | 2018-01-18 09:21:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4131374955177307, "perplexity": 7058.269023269445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887224.19/warc/CC-MAIN-20180118091548-20180118111548-00741.warc.gz"} |
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter15_symbolic/07_lotka.ipynb | This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
# 15.7. Analyzing a nonlinear differential system: Lotka-Volterra (predator-prey) equations¶
Here, we conduct a brief analytical study of a famous nonlinear differential system: the Lotka-Volterra equations, also known as predator-prey equations. This simple model describes the evolution of two interacting populations (e.g. sharks and sardines), where the predators eat the preys. This example illustrates how we can use SymPy to obtain exact expressions and results for fixed points and their stability.
In [ ]:
from sympy import *
init_printing()
In [ ]:
var('x y')
var('a b c d', positive=True)
The variables x and y represent the populations of the preys and predators, respectively. The parameters a, b, c and d are positive parameters (described more precisely in "How it works..."). The equations are:
\begin{align} \frac{dx}{dt} &= f(x) = x(a-by)\\ \frac{dy}{dt} &= g(x) = -y(c-dx) \end{align}
In [ ]:
f = x * (a - b*y)
g = -y * (c - d*x)
Let's find the fixed points of the system (solving f(x,y) = g(x,y) = 0).
In [ ]:
solve([f, g], (x, y))
In [ ]:
(x0, y0), (x1, y1) = _
Let's write the 2D vector with the two equations.
In [ ]:
M = Matrix((f, g)); M
Now we can compute the Jacobian of the system, as a function of (x, y).
In [ ]:
J = M.jacobian((x, y)); J
Let's study the stability of the two fixed points by looking at the eigenvalues of the Jacobian at these points.
In [ ]:
M0 = J.subs(x, x0).subs(y, y0); M0
In [ ]:
M0.eigenvals()
The parameters a and c are strictly positive, so the eigenvalues are real and of opposite signs, and this fixed point is a saddle point. Since this point is unstable, the extinction of both populations is unlikely in this model.
In [ ]:
M1 = J.subs(x, x1).subs(y, y1); M1
In [ ]:
M1.eigenvals()
The eigenvalues are purely imaginary so this fixed point is not hyperbolic, and we cannot draw conclusions about the qualitative behavior of the system around this fixed point from this linear analysis. However, one can show with other methods that oscillations occur around this point.
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | 2017-06-25 21:05:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329856634140015, "perplexity": 790.2607740965525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320582.2/warc/CC-MAIN-20170625203122-20170625223122-00584.warc.gz"} |
https://www.physicsforums.com/threads/accumulation-point-and-limit-point.348979/ | Accumulation point and limit point
1. Oct 25, 2009
thesleeper
1. The problem statement, all variables and given/known data
"If a sequence converges to L, then L is an accumulation point of {a_n|n greater than or equal to 1)."?
Prove or disprove the statement
2. Relevant equations
accumulation point is also a limit point
3. The attempt at a solution
I think the statement is not true. So in order to disprove it, I give an counterexample
consider the sequence {a_n} where a_n=L for all n. This sequence converges to 1, but its range is finite. Hence, this sequence has no accumulation point. Since the definition of an accumulation point of S is that every neighborhood of it contains infinitely many points of the S. Then, the statement is not always true.
Am I correct? and if the statement is true, how do you prove it? | 2017-08-21 02:06:07 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828464925289154, "perplexity": 222.44015036381535}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00016.warc.gz"} |
http://wikieducator.org/Thermodynamics/Equivalence_of_Second_Law_Statements | Equivalence of Second Law Statements
This section is Optional.
Objective
There are a number of ways to state the second law. This section first shows how the different statements so far presented are actually equivalent statements. It then presents some additional statements that have been proposed.
Equivalence of Second Law definitions
We defined the Second Law of Thermodynamics as:
• With no external input, a system will tend toward disorder
• No system can be developed which completely converts heat to work
These two parts are actually two independent statements. The second statement is often called the Planck statement or the Kelvin-Planck statement. We now wish to show that these are in fact equivalent statements.
$dS \geq \frac{dQ}{T}$
To convert heat to work we must remove heat from the surroundings; therefore, the heat removed must be positive. We also previously noted that temperature is always positive. Then, by the Clausis inequality the change in entropy (S), must also be positive. Finally, we defined entropy as a measure of disorder. Therefore, the disorder of the system increases, which is the same as the first statement.
Clausis statement and equivalence to Planck statement
Previously we mentioned that the second law can be restated as the following:
Heat cannot be moved from a colder region to a hotter region without doing work.
This is known as the Clausis statement. We wish to show this is the same as the Planck statement.
We will begin by considering a system which can move heat from a cold region to a hot region without doing work[nb 1]. Look at Figure 1.
Figure 1. Diagram proving that a system which violates the Clausis statement also violates the Planck statement
Say the amount of heat removed from the cold region is Qcold. Since no work is being done the amount of heat added to the hot region must also be Qcold. Now consider another heat engine removing heat from the hot region to the cold region. Say it removes the amount Qhot from the hot region, producing Wnet work and returning Qcold to the cold region. If we now look at the net system we see that Wnet = Qhot - Qcold. Now look at the total system. We see that the no heat is leaving the system, but work is. This violates the Planck statement.
Other Statements
Here we will present some other statements of the second law. We will only present them with brief comments.
Caratheodory
Caratheodory developed a very abstract and mathematical approach to thermodynamics. His statement of the second law is:
In the neighborhood of a given state there are states that cannot be reached from the given state by any adiabatic transformation.
This statement has the advantage that it does not require reference to machines. But due to its complex nature it is not used much.
Hatsopoulos-Keenan Statement
Any system having certain specified constraints and having an upper bound in volume can reach from any initial state a stable equilibrium state with no effect on the environment.
Callen's Postulates
One of the most interesting statement of the laws of thermodynamics is that by Herbert Callen[1]. He has used the postulate approach as used in other areas of physics. His four postulates are:
1. There exist particular states (called equilibrium states) of simple systems that, macroscopically, are characterized completely by the internal energy U, the volume V, and the mole numbers N1, N2, ..., Nr of the chemical components.
2. There exists a function (called the entropy S) of the extensive parameters of any composite system, defined for all equilibrium states and having the following property: The values assumed by the extensive parameters in the absence of an internal constraint are those that maximize the entropy over the manifold of constrained equilibrium states.
3. The entropy of a composite system is additive over the constituent subsystems. The entropy is continuous and differentiable and is a monotonically increasing function of the energy.
4. The entropy of any system vanishes in the state for which $\left (\frac{\partial U}{\partial S} \right )_{V,N_1,...,N_r}=0$ (that is, at the zero of temperature).
The first three postulates include the first and second law of thermodynamics. The fourth postulate is the third law of thermodynamics.
Note
1. Here we are using the fact the if A, then B is the same as if not B, then not A.
Reference
1. Callen, Herbert B. (1985) "Thermodynamics and an Introduction to Thermostatistics", John Wiley and Sons | 2018-12-15 20:44:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7462700605392456, "perplexity": 419.67063959297667}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827097.43/warc/CC-MAIN-20181215200626-20181215222626-00634.warc.gz"} |
http://akademiska.nu/forskning/lista/?p=8&f=& | # Sök uppsatser och vetenskapliga publikationer
(0.006 sekunder)
### "Dikten finns överallt"
Luleå tekniska universitet Övrigt
N/A / Link / Nyberg , Sven / 1997
### Type, cotype and convexity properties of quasi-Banach spaces
Luleå tekniska universitet Övrigt
Results on quasi-Banach spaces, their type and cotype together with the convexity and concavity of quasi-Banach lattices are collected. Several proofs are included. Then the Lebesgue $L^p$, the Lorentz $L^{p,q}$ and the Marcinkiewicz $L^{p,\infty}$ spaces are the special examples. We review also several results of Kami\'nska and the author on convexity, concavity, type and cotype of general Lorentz spaces $\Lambda_{p,w}$
4-946552-14-6 / Link / Maligranda , Lech / 2004
### Preparation of ZSM-5 films from template free precursors
Luleå tekniska universitet Övrigt
Thin films of zeolite ZSM-5 on quartz substrates have been prepared in the absence of organic templates by growth of adsorbed seed crystals attached to a polymer-modified substrate surface.
N/A / Link / Mintova , Svetlana / 1997
### Industrin sviker ungdomar som satsar på operatörsunderhåll
Luleå tekniska universitet Övrigt
N/A / Link / Johansson , Jan / 1989
### Are generalized Lorentz "spaces" really spaces?
Luleå tekniska universitet Övrigt
Let $w$ be a non-negative measurable function on $(0,\infty)$, non-identically zero, such that $W(t)=\int_0^tw(s)ds<\infty$ for all $t>0$. The authors study conditions on $w$ for the Lorentz spaces $\Lambda^p(w)$ and $\Lambda^{p,\infty}(w)$, defined by the conditions $\int_0^\infty (f^*(t))^pw(t)dt<\infty$ and $\sup_{00,$$it is shown that, if$\varphi$satisfies the$\Delta_2$-condition and$w>0$, then$\Lambda_{\varphi,w}$is a linear space if and only if$W$satisfies the$\Delta_2\$-condition.
N/A / Link / Cwikel , Michael / 2004
### Stresses in the hydraulic backfill from analytical calculations and in-situ measurements
Luleå tekniska universitet Övrigt
0-900488-60-3 / Link / Knutsson , Sven / 1981
### Crack face sliding effect on stiffness of laminates with ply cracks
Luleå tekniska universitet Övrigt
The rate of stiffness reduction in damaged laminates with increasing transverse crack density in plies depends on two micromechanical parameters: normalized crack face opening displacement (COD) and crack face sliding displacement (CSD). A FE-based parametric study shows that the only properties that affect the CSD are the thickness ratio and the in-plane shear stiffness ratio of the damaged and neighboring undamaged layers. The dependence is described by a power function with respect to the above mentioned properties. This relationship and the previously obtained power law for COD [Lundmark P, Varna J. Constitutive relationships for damaged laminate in in-plane loading. Int J Dam Mech 2005:14(3):235-59] are used in the damaged laminate constitutive relationships [Lundmark P, Varna J. Constitutive relationships for damaged laminate in in-plane loading. Int J Dam Mech 2005:14(3):235-59], which are closed form exact expressions for general symmetric laminates in in-plane loading. The model is validated analyzing reduction in shear modulus of [Sn,90m]s laminates and comparing with direct FE-calculations. The results are excellent in case of cracks in one layer only. For laminates with two orthogonal systems of cracks, the power law underestimates the CSD. To account for interaction between both systems of cracks, which is of importance for crack face sliding, the power law is modified using the effective shear modulus of the cracked neighboring layer.
N/A / Link / Varna , Janis / 2006
### Temperaturspänningar i betong
Luleå tekniska universitet Övrigt
N/A / Link / Westman , Gustaf / 1995
### Prediction of performance of a commercial scale high pressure roler mill (Poittemill) in production of limestone powders
Luleå tekniska universitet Övrigt
N/A / Link / Yanmin , Wang / 2005
### Tre indikatorer för praktisk produktionsoptimering
Luleå tekniska universitet Övrigt
N/A / Link / Lindqvist , Per-Arne / 2005
### ZSM-5 films prepared from template free precursors
Luleå tekniska universitet Övrigt
Thin continuous films of zeolite ZSM-5 were synthesized on quartz substrates. The substrates were first surface modified and covered by a monolayer of colloidal silicalite-1 seed crystals. These crystals were grown into continuous films with thicknesses in the range 230-3500 nm by hydrothermal treatment in a synthesis gel free from organic templates. The preferential orientation of the crystals constituting the film was initially one with thec-axis close to parallel to the substrate surface. During the course of crystallization this orientation changed to one with most of the crystals having the c-axes directed approximately 35° from perpendicular to the substrate surface. A mechanism explaining this behavior is proposed. The final thickness of the film was controlled by the synthesis time but also by the addition of seed crystals to the synthesis gel. Films prepared according to this method may be of great value for the development of zeolite based membranes.
N/A / Link / Mintova , Svetlana / 1998
### Biometech, Centre of Excellence
Luleå tekniska universitet Övrigt
På uppdrag av Skellefteå kommun pågår en studie som avser möjligheten att skapa ett forsknings- och utvecklingscenter i norra Sverige. Ett så kallat "Centre of Excellence". Centret ska vara inriktat på forskning och utveckling av hydrometallurgiska processer, med särskild inriktning på biohydrometallurgi. Studien genomförs i samarbete med Umeå Universitet, Luleå tekniska Universitet och Boliden Mineral. Centret planeras bli en självständig juridisk enhet med egna anläggningar för forskning och utveckling. Samtidigt är tanken att centret ska samarbeta intimt med institut och universitet. Målsättningen är också att knyta högre utbildning till centret. I studien ingår byggandet av en hydrometallurgisk demonstrationsanläggning placerad i Boliden. Anläggningen planeras bli i en sådan skala och omfattning att den tillåter utveckling och demonstration av olika processer för en mängd olika produkter. Idén är att olika företag och organisationer ska kunna använda centret med dess resurser, för att utveckla och demonstrera processer tekniskt, miljömässigt och ekonomiskt.
N/A / Link / Sandström , Åke / 2002
### Örebroarna och pappersbruket
Luleå tekniska universitet Övrigt
91-86992-20-1 / Link / Söderholm , Kristina / 2002
### Nonlinear viscoplastic and nonlinear viscoelastic material model for paper fiber composites in compression
Luleå tekniska universitet Övrigt
Compressive behavior of phenol-formaldehyde impregnated paper composites is studied in creep and strain recovery tests observing large nonlinear viscoelastic strains and irreversible strains, describing the latter as viscoplasticity. Stiffness reduction was not observed in experiments and therefore is not included in the material model. Schapery's nonlinear viscoelastic and nonlinear viscoplastic constitutive law is used as a material model and the stress dependent non-linearity functions are determined. First, the time and stress dependence of viscoplastic strains is described by Zapas et al. model and identified measuring the irreversible strains after creep tests of different length at the same stress and doing the same for creep tests of a fixed length but at different stress. Then, the determination of nonlinear viscoelastic stress dependent parameters is performed.
N/A / Link / Nordin , Lars-Olof / 2006
### Tjänarinnan, Lars Gustafsson och ursprunget
Luleå tekniska universitet Övrigt
N/A / Link / Friberg , Ingemar / 1997
### The popular impact of Gödel's incompleteness theorem
Luleå tekniska universitet Övrigt
The author, whose untimely passing in April 2006 was a great loss to the logic community, used this short paper primarily to dispel a few popular and not so popular misinterpretations of Gödel's incompleteness theorems. The most obvious misconceptions arise in areas having no direct connection with mathematics. But even within scientific circles, it is useful for the author to have pointed out that no unsolved problem in "traditional mathematics" has been shown to be undecidable via Gödel's first incompleteness theorem. (The Paris-Harrington undecidable problem does have to do with standard mathematical concepts, but it was not obtained from Gödel's result.) The author also makes illuminating remarks about Gödel's second incompleteness theorem concerning unprovability of the consistency of sufficiently strong mathematical theories. For example, the role of consistency proofs in justifying mathematical reasoning has been overemphasized. Moreover, there are informal `proofs' on the same level as ordinary mathematical argumentation that may convince most mathematicians of the consistency of, say, Peano arithmetic. For a more extensive treatment of all of these matters, the author refers the reader to his recent book Gödel's theorem, A K Peters, Wellesley, MA, 2005
N/A / Link / Franzén , Torkel / 2006
### Verifierig av datorsimulerad kapacitetsökning vid anrikningsverket i Malmberget
Luleå tekniska universitet Övrigt
N/A / Link / Alatalo , Johanna / 2006
### Finite element forming simulations in the development of high strength tubular components
Luleå tekniska universitet Övrigt
Light weight structures with high structural performance are one of the most important goals for automotive and transportation applications. One manufacturing technology, aiming to enable low weight design, is the combined forming and quenching of tubular thin-walled profiles of high strength steel. For optimal utilisation of this technology it is necessary to simulate and analyse the processes involved in a fast and efficient way. In this work, experiments of high temperature bending of thin walled profiles are performed and the forming response force is compared with results from finite element simulations. The analysed forming is modelled as a constant temperature forming and the material data for the specified temperature is evaluated from experiments and literature. The simulations and experiments are conducted to study the ability of the finite element model to predict high temperature forming characteristics and simulate the influence of profile and tool geometry. The need for further improvements and developments in the simulation technology is however identified. This work is part of a research project LOWHIPS (Low Weight High Performance Steel structures) aiming to obtain new knowledge concerning the involved forming and quenching processes and how they will affect the performance of the product.
N/A / Link / Eriksson , Magnus / 2001
### A reaction probe for flow measurements in liquid steel
Luleå tekniska universitet Övrigt
N/A / Link / Cervantes , Michel / 1998
### Developments on optimisation of grinding in Australia
Luleå tekniska universitet Övrigt
N/A / Link / Yanmin , Wang / 2004
### Metoder för design och simulering vid produktionsoptimering
Luleå tekniska universitet Övrigt
N/A / Link / Lindqvist , Per-Arne / 2005
### Ultrathin oriented zeolite LTA films
Luleå tekniska universitet Övrigt
Ultrathin oriented films of zeolite LTA are prepared on single-crystal alumina supports by a method including adsorption of LTA seeds on the support followed by hydrothermal film crystallization.
N/A / Link / Hedlund , Jonas / 1997
### V-invariant methods, generalised least squares problems, and the Kalman filter
Luleå tekniska universitet Övrigt
V-invariant methods for the generalised least squares problem extend the techniques based on orthogonal factorization for ordinary least squares to problems with multiscaled, even singular covariances. These methods are summarised briefly here, and the ability to handle multiple scales indicated. An application to a class of Kalman filter problems derived from generalised smoothing splines is considered. Evidence of severe illconditioning of the covariance matrices is demonstrated in several examples. This suggests that this is an appropriate application for the V-invariant techniques.
N/A / Link / Osborne , M. R. / 2004
### Do economic incentives demoralize recycling behavior?
Luleå tekniska universitet Övrigt
1-60021-124-0 / Link / Berglund , Christer / 2006
### An incremental 2D constitutive model accounting for linear viscoelasticity and damage development in short fibre composites
Luleå tekniska universitet Övrigt
A model accounting for linear viscoelasticity and microdamage evolution in short fibre composites is described. An incremental 2D formulation suitable for FE-simulation is derived and implemented in FE-solver ABAQUS. The implemented subroutine allows for simulation close to the final failure of the material. The formulation and subroutine is validated with analytical results and experimental data in a tensile test with constant strain rate using sheet moulding compound composites. FE-simulation of a four-point bending test is performed using shell elements. The result is compared with linear elastic solution and test data using a plot of maximum surface strain in compression and tension versus applied force. The model accounts for damage evolution due to tensile loading and neglects any damage evolution in compression, where the material has higher strength. Simulation and test results are in very good agreement regarding the slope of the load-strain curve and the slope change.
N/A / Link / Varna , Janis / 2005
### Space discretization error of methane combustion simulations in turbulent flow
Luleå tekniska universitet Övrigt
Numerical investigation of methane combustion in a pipe with turbulent flow is studied. The space discretization error is investigated quantitatively and qualitatively, using the Richardson extrapolation and profiles comparisons. Comparison of the profiles indicates that the solution converges to a grid-independent solution. The Richardson method gives unsatisfactory results to determine the grid error, because of the rigidity of the method. A second-order polynomial is used as an alternative to the Richardson method. The results are more stable and have a better goodness of fit. The results of the simulations are compared with those of a similar experiment and the corresponding analytical solution.
N/A / Link / Lindberg , Jenny / 2005
### Bergets inverkan på praktisk produktionsoptimering
Luleå tekniska universitet Övrigt
N/A / Link / Lindqvist , Per-Arne / 2005
### The growth of sub-micron films of TPA-silicalite-1 on single crystal silicon wafers from low-temperature clear solutions
Luleå tekniska universitet Övrigt
The direct synthesis of thin films of crystalline silicalite-1 upon single crystal silicon wafers at a crystallization temperature of 100°C has been investigated by varying the composition of the clear tetrapropylammonium (TPA) silicate synthesis solutions. Synthesis mixture compositions known to yield monodisperse colloidal crystals of TPA-silicalite-1 upon hydrothermal treatment as well as those reported to yield silicalite-1 films at higher temperatures have been found not suitable for the preparation of silicalite-1 films at 100°C. Lower crystal growth rates and smaller thicknesses of the gel film that forms on the wafer at this temperature decrease the tolerance to alkalinity, resulting in etching via the consumption of the gel layer before the growing crystals succeed in forming a closed film followed by the removal of the protective silicon oxide film on the wafer. Thin oriented silicalite-1 films with thicknesses in the range of 180 nm to 1 μm have been obtained by varying the alkalinity and water, the TPA, and the silica contents of the reaction mixture. Lower alkalinities and higher silica concentrations favor the formation of a thicker amorphous gel layer. Although increased TPA+ concentrations at constant alkalinity increase the number of nuclei that form on this layer, higher TPA+ concentrations have been observed to be required at higher alkalinities to achieve similar rates of nucleation. Rinsing the wafer surfaces initially with a 0.025 M TPAOH solution before rinsing with water and acetone produces cleaner surfaces free of post-treatment artifacts
N/A / Link / Schoeman , Brian / 1997
### Vapor adsorption in thin silicalite-1 films studied by spectroscopic ellipsometry
Luleå tekniska universitet Övrigt
Thin films of silicalite-1 grown on silicon substrates were studied by spectroscopic ellipsometry. Analysis of spectra using an optical model consisting of a single porous layer on silicon yielded average film thicknesses of 84 and 223 nm for films synthesized for 10 and 30 h. Void fraction for the films was 0.32-0.33. Vapor adsorption from a nitrogen carrier gas at room temperature was monitored by ellipsometry. Isotherms for different adsorbates were obtained by analysis of spectra taken at different vapor concentrations using an optical model where the void volume was filled with both nitrogen and condensed vapors. Quantification of the condensed vapor amount was based on the changes in refractive index when adsorbates replaced nitrogen in the pores. Adsorbate volumes for water, toluene, 1-propanol, and hexane were 0.12, 0.12, 0.15, and 0.17 cm3 liquid g-1 film, respectively.
N/A / Link / Bjorklund , Robert B. / 1998 | 2017-07-21 18:31:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3853420317173004, "perplexity": 8359.019301926448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423808.34/warc/CC-MAIN-20170721182450-20170721202450-00715.warc.gz"} |
http://blog.omega-prime.co.uk/2008/05/08/free-monads-in-haskell/ | Did you know that you can actually generate a monad automatically given any endofunctor? This is the free monad of that functor. How do we perform this magic trick? Quite simple, everything follows from this data type:
In this, the f parameter is just the functor we are going to build a monad for, and a is the type of the contents of a FreeM value. We’ve said that the free monad can either contain such a value directly (boring) or it can contain a value of it’s own type, but wrapped in the functor f - hmm! As an aside, I’m not sure that Bind is strictly the best name for this constructor, but I like the symmetry.
If you want to build up some intuition about this definition, you can think of it as specifying values that consist of an actual a (in a Return constructor) but nested within a finite number of functor applications.
As another aside, you might like to know that actually Haskell would let you create an infinite stack of functors fairly straightforwardly, since all its data declarations are actually really co-data declarations. Don’t do this, though, or you might start having some non-termination issues :-)
Having got this far through the post, you won’t be surprised to learn that we can make this definition into a functor itself. Here comes the implementation:
That Bind case looks pretty scary, doesn’t it? The leftmost fmap call is that from the functor f that we are making a free monad over, and the rightmost one is actually a recursion back into the fmap we are currently trying to compute, but one functor “lower”.
Essentially all we are doing in this definition is fmap_ing all the way down our tower of functors until we finally reach a nice sane _a value in the Return case, which we handle in the obvious way.
So far so good. But we’ve come all this way and still don’t have a monad, even though I promised you one at the start. Well, let’s sort that out now! Instead of defining one using the usual Haskell »= operator, I’m going to use the more category-theoretical join :: Monad m => m (m a) -> m a construction:
If you think about the type of this operation, what we want to do is stick the pile of functors associated with the outermost FreeM onto the pile of functors associated with the innermost one, to produce one cohesive functor pile. How we go about this is fairly straightforward: in the Return case there are no functor applications on the outermost FreeM so we just give back the inner one. The Bind case simply recurses its way down the whole functor pile, mushing them together in some sense :-).
That was it. The code to actually declare a Monad instance is entirely boilerplate given that we have a join operation:
Pretty neat! But what, pray tell, are these things actually useful for? Well, after I’d implemented my own version of free monads I found that Wouter Swierstra’s excellent paper “Data types à la carte” had already publicly demonstrated the same thing (in passing) with his Term data structure, and he has some nice examples. For instance, consider this functor:
Remind you of anything? It should: it’s just the Maybe monad! | 2022-01-19 00:37:57 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202986717224121, "perplexity": 793.1605505014136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00482.warc.gz"} |
https://ask.sagemath.org/questions/9764/revisions/ | # Revision history [back]
### Can't get sage-mode to load with emacs 24.2.1
Apologies for a rather newbie question, but I'm just getting started with Sage. I'd like to be able to use it in my favorite editor -- emacs. Sage seems to work just fine; I'm using it from the command line with no problems. But when I attempt to use sage-mode in emacs, I get this error at startup:
Symbol's value as variable is void: sage-command
A little history to show how I got here/replication: 1. Downloaded and installed Sage 5.6 64bit for OSX (running 10.8.2). 2. From within sage, did: install_package('sage-mode'). 3. Appeared that sage-mode 0.7 was installed. 4. Started emacs, got this error: "Unknown button type help-xref'" 5. Looking at threads, saw it might be a problem solved in the new "experimental" version of sage-mode 0.8. (threads: https://groups.google.com/forum/?fromgroups=#!topic/sage-devel/HufugiDMyFQ) 5. From command line, did: sage -i http://boxen.math.washington.edu/home/iandrus/sage_mode-0.8.spkg 6. Now getting the current error on loading emacs.
My .init file has: (add-to-list 'load-path "/Applications/sage/local/share/emacs") (require 'sage "sage") (setq sage-command "/Applications/sage/sage")
I've tried a number of things at the command-line level (symbolic link to sage file, editing $PATH, editing the$SAGE_ROOT directory, etc), but nothing seems to get me past the current error.
Any help is greatly appreciated.
### Can't get sage-mode to load with emacs 24.2.1
Apologies for a rather newbie question, but I'm just getting started with Sage. I'd like to be able to use it in my favorite editor -- emacs. Sage seems to work just fine; I'm using it from the command line with no problems. But when I attempt to use sage-mode in emacs, I get this error at startup:
Symbol's value as variable is void: sage-command
A little history to show how I got here/replication: 1.
1. Downloaded and installed Sage 5.6 64bit for OSX (running 10.8.2). 2.
2. From within sage, did: install_package('sage-mode'). 3.
3. Appeared that sage-mode 0.7 was installed. 4.
4. Started emacs, got this error: "Unknown button type help-xref'" 5. help-xref'"
5. Looking at threads, saw it might be a problem solved in the new "experimental" version of sage-mode 0.8. (threads: https://groups.google.com/forum/?fromgroups=#!topic/sage-devel/HufugiDMyFQ) 5. https://groups.google.com/forum/?fromgroups=#!topic/sage-devel/HufugiDMyFQ)
6. From command line, did: sage -i http://boxen.math.washington.edu/home/iandrus/sage_mode-0.8.spkg 6. http://boxen.math.washington.edu/home/iandrus/sage_mode-0.8.spkg
My .init file has: (add-to-list 'load-path "/Applications/sage/local/share/emacs") (require 'sage "sage") (setq sage-command "/Applications/sage/sage")
I've tried a number of things at the command-line level (symbolic link to sage file, editing $PATH, PATH, editing the$SAGE_ROOT SAGE_ROOT directory, etc), but nothing seems to get me past the current error.
Any help is greatly appreciated.
3 improved style fidbc 2298 ●4 ●24 ●55
### Can't get sage-mode to load with emacs 24.2.1
Apologies for a rather newbie question, but I'm just getting started with Sage. I'd like to be able to use it in my favorite editor -- emacs. Sage seems to work just fine; I'm using it from the command line with no problems. But when I attempt to use sage-mode in emacs, I get this error at startup:
Symbol's value as variable is void: sage-command
A little history to show how I got here/replication:
2. From within sage, did: install_package('sage-mode').
3. Appeared that sage-mode 0.7 was installed.
4. Started emacs, got this error: "Unknown button type help-xref'"
5. Looking at threads, saw it might be a problem solved in the new "experimental" version of sage-mode 0.8. (threads: https://groups.google.com/forum/?fromgroups=#!topic/sage-devel/HufugiDMyFQ)
6. From command line, did: sage -i http://boxen.math.washington.edu/home/iandrus/sage_mode-0.8.spkg
(add-to-list 'load-path "/Applications/sage/local/share/emacs") | 2021-09-28 18:02:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27409401535987854, "perplexity": 6448.25507021217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060877.21/warc/CC-MAIN-20210928153533-20210928183533-00184.warc.gz"} |
http://finmath.net/finmath-lib/apidocs/net/finmath/modelling/ProductInterface.html | finMath lib documentation
net.finmath.modelling
## Interface ProductInterface
• ### Method Summary
All Methods
Modifier and Type Method and Description
Object getValue(double evaluationTime, ModelInterface model)
Return the valuation of the product using the given model.
default Map<String,Object> getValues(double evaluationTime, ModelInterface model)
Return the valuation of the product using the given model.
• ### Method Detail
• #### getValue
Object getValue(double evaluationTime,
ModelInterface model)
Return the valuation of the product using the given model. Implement this method using a checked cast of the model to a derived model for which the product provides a valuation algorithm. Example: an interest rate product requires that the passed model object implements the interface of an interest rate model. Since there is no polymorphism on arguments (see Double Dynamic Dispatch), we reply on a checked cast.
Parameters:
evaluationTime - The evaluation time as double. Cash flows prior and including this time are not considered.
model - The model under which the product is valued.
Returns:
Object containing the value of the product using the given model.
• #### getValues
default Map<String,Object> getValues(double evaluationTime,
ModelInterface model)
Return the valuation of the product using the given model. Implement this method using a checked cast of the model to a derived model for which the product provides a valuation algorithm. Example: an interest rate product requires that the passed model object implements the interface of an interest rate model. Since there is no polymorphism on arguments (see Double Dynamic Dispatch), we reply on a checked cast.
Parameters:
evaluationTime - The evaluation time as double. Cash flows prior and including this time are not considered.
model - The model under which the product is valued.
Returns:
Map containing the value of the product using the given model. | 2018-10-20 18:19:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19236081838607788, "perplexity": 1613.2308171587035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00309.warc.gz"} |
https://confluence.atlassian.com/confkb/cluster-panic-is-triggered-in-confluence-data-center-when-a-node-rejoins-the-cluster-800862366.html | # Cluster panic is triggered in Confluence Data Center when a node rejoins the cluster
#### Still need help?
The Atlassian Community is here for you.
Platform Notice: Server and Data Center Only - This article only applies to Atlassian products on the server and data center platforms.
## Problem
Cluster panic is triggered in Confluence Data Center when a node rejoins the cluster. There are no logs written to atlassian-confluence.log except a warning that Hazelcast is terminating forcefully:
When making the following actions:
• All cluster nodes are in a cluster gracefully (e.g. Nodes 1, 2, and 3)
• One node is taken out of the cluster by being shut down (e.g. node 3)
• This leaves nodes 1 and 2 in the cluster. Once node 3 starts again and joins the cluster, nodes 1 and 2 go into panic mode
• Hazelcast terminates forcefully in some/all nodes
Similar logging as the following appears in the atlassian-confluence.log:
2021-01-27 22:08:43,894 WARN [hz.ShutdownThread] [com.hazelcast.instance.Node] log [xxx.xxx.xxx.xxx]:5801 [confluenceCluster] [3.8.6] Terminating forcefully...
## Cause
The nodes are having some issues communicating over multicast consistently. Network communication tools such as Omping show no communication errors while the nodes are in a cluster, but communication is broken down when Hazelcast terminates in one of the nodes.
## Workaround
The workaround for Confluence Data Center versions 5.9 and above is to move from using multicast to unicast.
If you're setting up Confluence Data Center for the first time, it'll step you through the process of choosing your discovery mode and adding cluster nodes. If you decide to change the node discovery for the cluster, you'll need to edit the confluence.cfg.xml file in the local home directory of each cluster node.
• Before you make any changes, shut down all nodes in your cluster
• Make sure the discovery configuration is exactly the same for each node (make the same changes to the confluence.cfg.xml file in each local home directory)
• Always perform a safety backup before making manual edits to these files
The changes you need to make may differ slightly, depending on whether you've upgraded from an older version of Confluence Data Center or if you've started with version 5.9. We've detailed both methods, below.
### To change from multicast to TCP/IP
Look for the following two lines in the confluence.cfg.xml file:
<property name="confluence.cluster.address">[multicast IP]</property>
<property name="confluence.cluster.join.type">multicast</property>
If both lines exist in the file, change them to the lines below; where the confluence.cluster.address property exists, but there's no reference to the confluence.cluster.join.type property, update the first line and add the second line as shown below.
<property name="confluence.cluster.peers">[node 1 IP],[node 2 IP],[node 3 IP]</property> <!-- A comma-separated list of node IP addresses, without spaces -->
<property name="confluence.cluster.join.type">tcp_ip</property> <!-- accepted values are multicast or tcp_ip -->
Enter the address of each node, and separate each address with a comma. Please, make sure to remove the brackets from around the IP addresses.
You can now restart your cluster nodes.
### To change from multicast to AWS
Look for the following two lines in the confluence.cfg.xml file and remove them:
<property name="confluence.cluster.address">[multicast IP]</property>
<property name="confluence.cluster.join.type">multicast</property>
Depending on which type of credentials you are passing to Confluence, you will add one of the following two blocks with your AWS configuration.
Option 1: For Access Key/Secret Key based credentials:
<property name="confluence.cluster.join.type">aws</property>
<property name="confluence.cluster.aws.region">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.value">[---VALUE---]</property>
<property name="confluence.cluster.aws.access.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.secret.key">[---VALUE---]</property>
Option 2: For IAM role based credentials:
<property name="confluence.cluster.join.type">aws</property>
<property name="confluence.cluster.aws.region">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.value">[---VALUE---]</property>
<property name="confluence.cluster.aws.iam.role">[---VALUE---]</property>
### To change from TCP/IP to AWS
Look for the following two lines in the confluence.cfg.xml file and remove them:
<property name="confluence.cluster.join.type">tcp_ip</property>
<property name="confluence.cluster.peers">[node 1 IP],[node 2 IP],[node 3 IP]</property>
Depending on which type of credentials you are passing to Confluence, you will add one of the following two blocks with your AWS configuration.
Option 1: For Access Key/Secret Key based credentials:
<property name="confluence.cluster.join.type">aws</property>
<property name="confluence.cluster.aws.region">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.value">[---VALUE---]</property>
<property name="confluence.cluster.aws.access.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.secret.key">[---VALUE---]</property>
Option 2: For IAM role based credentials:
<property name="confluence.cluster.join.type">aws</property>
<property name="confluence.cluster.aws.region">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.key">[---VALUE---]</property>
<property name="confluence.cluster.aws.tag.value">[---VALUE---]</property>
<property name="confluence.cluster.aws.iam.role">[---VALUE---]</property>
You can now restart your cluster nodes.
Note that if you're using a CloudFormation YAML template you need to make sure you have these appropriate values as a minimum and they should be reflected on the AWS side as well. If you switch to AWS mode cluster type, please also review Running Confluence Data Center in AWS and make sure you have the following set up in your YAML:
Key: Cluster
Value: !Ref AWS::StackName
PropagateAtLaunch: true
### To change from TCP/IP to multicast
To switch from TCP/IP to multicast, just perform the reverse of the changes outlined above.
### Reference of properties in the confluence.cfg.xml file
keyvalid valuesnotes
confluence.cluster.join.type
'multicast' or 'tcp_ip'or 'aws'
Pre-5.9 Data Center installations won't have this key. By default, if the key is missing, Confluence will choose multicast
confluence.cluster.address
a single multicast IP addressThis key is only used by confluence if confluence.cluster.join.type is set to multicast
confluence.cluster.peers
a comma-separated string of IP addresses (no spaces)
There must be at least one address here. The addresses are the IP address of each node in the cluster, for example
<property name="confluence.cluster.peers">[node 1 IP],[node 2 IP],[node 3 IP]</property>
This key is only used by confluence if confluence.cluster.join.type is set to tcp_ip
Confluence Data Center versions prior to 5.9 do not have the option to use unicast, so the workaround is not applicable. However, a similar issue has been addressed for versions 5.8.5 and above: CONF-39396 - Getting issue details... STATUS | 2021-06-23 06:56:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18791109323501587, "perplexity": 5160.17325158442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00550.warc.gz"} |
http://en.wikipedia.org/wiki/Condorcet's_jury_theorem | # Condorcet's jury theorem
Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions.[1]
The assumptions of the simplest version of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2:
• If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases.
• On the other hand, if p is less than 1/2 (each voter is more likely than not to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter.
## Proof
To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by fair coin-flips.
Now suppose we start with n voters, and let m of these voters vote correctly.
Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases:
• m was one vote too small to get a majority of the n votes, but both new voters voted correctly.
• m was just equal to a majority of the n votes, but both new voters voted incorrectly.
The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a difference. So we only care what happens when a single vote (among the first n) separates a correct from an incorrect majority.
Restricting our attention to this case, we can imagine that the first n-1 votes cancel out and that the deciding vote is cast by the n-th voter. In this case the probability of getting a correct majority is just p. Now suppose we send in the two extra voters. The probability that they change an incorrect majority to a correct majority is (1-p)p2, while the probability that they change a correct majority to an incorrect majority is p(1-p)(1-p). The first of these probabilities is greater than the second if and only if p > 1/2, proving the theorem.
## Asymptotics
The probability of a correct majority decision P(n,p), when the individual probability p is close to 1/2 grows linearly in terms of p-1/2. For n voters each one having probability p of deciding correctly and for odd n (where there are no possible ties):
$P(n,p) = 1/2 + c_1 (p-1/2) + c_3 (p-1/2)^3 + O( (p-1/2)^5 )$
where
$c_1 = {n \choose { \lfloor n/2 \rfloor}} \frac{ \lfloor n/2 \rfloor +1} { 4^{\lfloor n/2 \rfloor}} = \sqrt{ \frac{2n+1}{\pi}} (1 + \frac{1}{16n^2} + O(n^{-3}) )$
and the asymptotic approximation in terms of n is very accurate. The expansion is only in odd powers and $c_3 < 0$. In simple terms, this says that when the decision is difficult (p close to 1/2), the gain by having n voters grows proportionally to $\sqrt{n}$.
## Limitations
This version of the theorem is correct, given its assumptions, but its assumptions are unrealistic in practice. Some objections that are commonly raised:
• Real votes are not independent, and do not have uniform probabilities. This is not necessarily a problem as long as each voter is more likely than not to produce a correct vote, and subsequent work[2] has considered the case of correlated votes. One very strong version of the theorem requires only that the average of the individual competence levels of the voters (i.e. the average of their individual probabilities of deciding correctly) is slightly greater than half.[3] This version of the theorem does not require voter independence, but takes into account the degree to which votes may be correlated.[4]
• The notion of "correctness" may not be meaningful when making policy decisions as opposed to deciding questions of fact.[citation needed] Some defenders of the theorem hold that it is applicable when voting is aimed at determining which policy best promotes the public good, rather than at merely expressing individual preferences. On this reading, what the theorem says is that although each member of the electorate may only have a vague perception of which of two policies is better, majority voting has an amplifying effect. The "group competence level", as represented by the probability that the majority chooses the better alternative, increases towards 1 as the size of the electorate grows assuming that each voter is more often right than wrong.
• The theorem doesn't directly apply to decisions between more than two outcomes. This critical limitation was in fact recognized by Condorcet (see Condorcet's paradox), and in general it is very difficult to reconcile individual decisions between three or more outcomes (see Arrow's theorem), although List and Goodin present evidence to the contrary.[5] This limitation may also be overcome by means of a sequence of votes on pairs of alternatives, as is commonly realized via the legislative amendment process. (However, as per Arrow's theorem, this creates a "path dependence" on the exact sequence of pairs of alternatives; e.g., which amendment is proposed first can make a difference in what amendment is ultimately passed, or if the law—with or without amendments—is passed at all.)
• The behaviour that everybody in the jury votes according to his own beliefs might not be a Nash equilibrium under certain circumstances.[6]
Nonetheless, Condorcet's jury theorem provides a theoretical basis for democracy, even if somewhat idealized, and as such continues to be studied by political scientists.
## Notes
1. ^ Marquis de Condorcet. "Essai sur l'application de l'analyse á la probabilité des décisions rendues á la pluralité des voix" (PNG) (in French). Retrieved 2008-03-10.
2. ^ see for example: Krishna Ladha. "The Condorcet Jury Theorem, Free Speech, and Correlated Votes" (GIF) (in English). JSTOR. Retrieved 2008-03-10.
3. ^ Bernard Grofman; Guillermo Owen; Scott L. Feld (1983). "Thirteen theorems in search of the truth." (PDF). Theory & Decision (in English) 15. p. 261-78. Retrieved 2012-11-09.
4. ^ James Hawthorne. "Voting In Search of the Public Good: the Probabilistic Logic of Majority Judgments" (PDF) (in English). Retrieved 2009-04-20.
5. ^ Christian List and Robert Goodin. "Epistemic democracy : generalizing the Condorcet Jury Theorem" (PDF) (in English). Retrieved 2006-12-06.
6. ^ Austen-Smith, David and Jeffrey S. Banks (1996): “Information aggregation, rationality, and the Condorcet Jury Theorem”, American Political Science Review 90: 34-45. | 2014-08-31 06:10:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7482355833053589, "perplexity": 1233.487238062874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500836106.97/warc/CC-MAIN-20140820021356-00294-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/176046-functions.html | 2. Well, swap the $\displaystyle x$ and $\displaystyle y$ in $\displaystyle f(x)$, and see if you can rearrange it to get $\displaystyle g(x)$. | 2016-08-25 02:40:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9293554425239563, "perplexity": 49.32353550544139}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292734.19/warc/CC-MAIN-20160823195812-00143-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://tamasgorbe.wordpress.com/2021/06/19/elliptic-kacsylvester-matrix-from-difference-lame-equation-paper/ | ### Elliptic Kac–Sylvester Matrix from Difference Lamé Equation [PAPER]
Earlier this year Jan Felipe van Diejen (Universidad de Talca) and I have discovered the solution of a long-standing problem which, in part, meant finding the eigenvectors of the following matrix
$\displaystyle S=\begin{bmatrix} 0&\frac{[\textsc{m}]}{[\mathrm{g}+\textsc{m}]} & 0& \cdots & 0\\ \frac{[1]}{[\mathrm{g}+1]} & 0 &\ddots & & \vdots \\ 0 & \frac{[2]}{[\mathrm{g}+2]} & \ddots & \frac{[2]}{[\mathrm{g}+2]} &0\\ \vdots & & \ddots &0 &\frac{[1]}{[\mathrm{g}+1]} \\ 0 & \cdots & 0& \frac{[\textsc{m}]}{[\mathrm{g}+\textsc{m}]} &0 \end{bmatrix} \ \ \ \ \ (1)$
This post is a review of our initial results and bits of 170 years of history.
The title of our new paper is
Elliptic Kac–Sylvester Matrix from Difference Lamé Equation
It was published in the mathematical physics journal Annales Henri Poincaré. The article can be found on the publisher’s website at https://link.springer.com/article/10.1007/s00023-021-01063-y. The paper is freely available at https://rdcu.be/clsVe.
Sylvester’s tridiagonal determinant
This story starts in 1854 with James Joseph Sylvester‘s calculation of some tridiagonal determinants:
Notice the similarity with the matrix ${S}$ in eq. (1), numbers going up and down above and below the diagonal. The original Kac-Sylvester matrix ${S_r}$ reads as follows
$\displaystyle S_r=\begin{bmatrix} 0 & \textsc{m}& 0& \cdots & 0\\ 1 & 0 &\ddots & & \vdots \\ 0 & 2 & \ddots & 2 &0\\ \vdots & & \ddots &0 &1 \\ 0 & \cdots & 0& \textsc{m} &0 \end{bmatrix} \ \ \ \ \ (2)$
It’s an ${(\textsc{m}+1)\times(\textsc{m}+1)}$ matrix with the numbers ${1,2,\dots,\textsc{m}}$ below the diagonal, ${\textsc{m},\dots,2,1}$ above it and ${0}$ everywhere else.
What Sylvester’s 1854 paper demonstrates is that the eigenvalues of ${S_r}$ in eq. (2) form an arithmetic progression ${\textsc{m},\textsc{m}-2,\textsc{m}-4,\dots,-\textsc{m}+2,-\textsc{m}}$.
While we are at it, some fun facts about Sylvester: 1. Sylvester was born James Joseph. The name ‘Sylvester’ was taken up when his brother emigrated to the US, where at the time one could only gain residence if one’s name had at least three parts… 2. Sylvester came up with names for many mathematical concepts, including the term matrix, which means womb in Latin. So a matrix is a thing that’s pregnant with numbers… 3. The previous two fun facts imply a third one, namely that Sylvester invented two out of the three words in the expression “Kac-Sylvester matrix”.
By the way, we don’t know exactly why Sylvester was interested in this particular matrix, but it’s clear that studying these types of tridiagonal determinants, called continuants, were a thing in the mid-19th century due to their relation to continued fractions.
The eigenvalues of the Kac-Sylvester matrix were computed by Sylvester in 1854 whilst the eigenvectors were only found about a century later in 1947 by Mark Kac. Before explaining Kac’ result, let me tell you about some interesting episodes that happened in the meantime.
Boltzmann’s ordeal
The next time the Kac-Sylvester matrix showed up, it helped save Boltzmann from sceptics of his kinetic theory. In a nutshell, people thought that Boltzmann’s statistical mechanics contradicts the 2nd law of thermodynamics. And they were right…
For example, heat flowing from hot to cold in Boltzmann’s theory isn’t an exact law, but only true statistically. The purely mechanical motion of microscopic particles can result in lower entropy states. Here is a very nice illustration by Matt Henderson, the best math animator twitter has ever seen:
Entropy decrease (by Poincaré recurrence) was Zermelo‘s objection. Boltzmann agreed that this could take place, but correctly thought that it’d happen on timescales so large that we never experience it, hence the 2nd law seems exact.
The Ehrenfests and their dogs with fleas
To help make Boltzmann’s point, Tatiana and Paul Ehrenfest, in their 1907 paper, proposed the dog-flea model which is a simple model of heat exchange. Imagine two dogs standing close to each other with 100 fleas being shared between them.
If the fleas jump from one dog to the other at random you’d expect the fleas to spread roughly evenly (50:50) after a while. The probability P(n+1|n) of dog A going from having n fleas to n+1 fleas is (100-n)/100, while P(n-1|n) = n/100.
Arrange these probabilities in a (transition) matrix and voilà you have the rescaled Kac-Sylvester matrix of size 101×101. It’s rescaled, because each entry is divided by 100 to get probabilities.
And now the exciting part. The recurrence times are obtained from the equilibrium distribution which is encoded in the eigenvector with the largest eigenvalue. The distribution turns out to be the binomial distribution, so recurrence occurs on exponential timescales.
For example, the expected number of flea jumps required to return to the initial 90:10 state is ${2^{100}/\binom{100}{90}\approx 7.3231\times 10^{16}}$. So if flea jumps occur every second, it’ll take about 2 billion years(!) to return to the initial state.
Now imagine that instead of 100 fleas, the dogs have ${10^{23}}$ of them (poor doggies) and you can quickly see just how improbable it is to return to the initial state.
Let’s recap! The rescaled Kac-Sylvester matrix is the transition matrix of the Ehrenfest model (dog-flea model). The left-eigenvector with eigenvalue 1 encodes the binomial distribution, that is we have
$\displaystyle \mathbf{v}\bigg(\frac{1}{\textsc{m}}S_r\bigg)=\mathbf{v} \ \ \ \ \ (3)$
with
$\displaystyle \mathbf{v}=2^{-\textsc{m}}\begin{pmatrix}\binom{\textsc{m}}{0}& \binom{\textsc{m}}{1}&\dots \binom{\textsc{m}}{\textsc{m}}\end{pmatrix}. \ \ \ \ \ (4)$
So what about the other eigenvectors? We’ll get to them shortly, but first Schrödinger.
Schrödinger’s failed attempt
Schrödinger’s 1926 papers on “Wave Mechanics” are legendary. In them, he solves various quantum models using his new equation. When he turns to the Stark effect, he encounters the symmetric Kac-Sylvester matrix, but struggles with it:
The above matrix is the symmetric version of Kac-Sylvester matrix which can be obtained from the original one by a similarity transformation. Therefore they have the same eigenvalues.
So Schrödinger could guess the eigenvalues of the Kac-Sylvester matrix, but couldn’t find a proof.
Mark Kac’ result
Mark Kac is most famous for his 1966 popular article Can One Hear the Shape of a Drum? for which he received the Chauvenet Prize, the highest award for mathematical expository writing. It’s less known that this was the 2nd time Kac got this prize.
Kac’ first Chauvenet Prize was awarded for his 1947 paper Random Walk and the Theory of Brownian Motion in which he described, among other things, a particle’s random walk along the integers between ${-N}$ and ${N}$ with a spring fixed at ${0}$ pulling on it.
Kac found the eigenvectors of the Kac-Sylvester matrix. These are given by the Krawtchouk polynomials
$\displaystyle K_m(x;\textsc{M})=\sum_{k=0}^{m}(-1)^k\binom{x}{k}\binom{\textsc{M}-x}{m-k}, \ \ \ \ \ (5)$
${m=0,1,\dots,\textsc{m}}$ which are a family of discrete orthogonal polynomials (OPs) with the binomial distribution as weight function, i.e. we have
$\displaystyle \sum_{x=0}^{\textsc{M}}K_m(x)K_n(x)\binom{\textsc{M}}{x}/2^{\textsc{M}}=\binom{\textsc{M}}{m}\delta_{mn}.\ \ \ \ \ (6)$
The connection to OPs is not a coincidence…
Quick aside: Mark Kac has written an excellent biography titled “Enigmas of Chance”. It’s well worth a read. Here is a bit about the Kac-Sylvester problem.
Tridiagonal matrices and orthogonal polynomials
The link between tridiagonal matrices and orthogonal polynomials is well-known, see here.
The crux is that any family of orthogonal polynomials ${\{p_n(x):\ n=0,1,\dots,\}}$ satisfies recurrence relations of the form
$\displaystyle xp_n(x)=a_np_{n-1}(x)+b_np_n(x)+a_{n+1}p_{n+1}(x)\ \ \ \ \ (7)$
which include 3 consecutive OPs and thus can be encoded in a tridiagonal matrix.
In 2005, Richard Askey and Olga Holtz described how a host of Sylvester type determinants can be evaluated in terms of orthogonal polynomials from the famous (q)-Askey scheme.
This talk by Askey has the details and is full of nice historical facts.
And now, some fun facts about orthogonal polynomials: 1. The Hermite polynomials were first define by Laplace in 1810 when Hermite was -12 years old. 2. In 1940 Wigner introduced the 6j-symbols in quantum mechanics, but he didn’t realize they can be viewed as (discrete) orthogonal polynomials until decades later when Askey told him.
Lamé’s equation: differential vs difference
Lamé’s differential equation appears when solving the Laplace equation ${\Delta f = 0}$ by separation of variables in ellipsoidal coordinates. Here Lamé’s equation is written in terms of Weierstrass’ elliptic ${\wp}$ function (A,B are constants):
$\displaystyle \frac{\partial^2f}{\partial x^2}+\big(A+B\wp(x)\big)f=0 \ \ \ \ \ (3)$
And this is the difference Lamé equation
$\displaystyle \dfrac{\theta_1(x+s\mathrm{g})}{\theta_1(x)}f(x+s)+\dfrac{\theta_1(x-s\mathrm{g})}{\theta_1(x)}f(x-s)=Ef(x) \ \ \ \ \ (4)$
The ingredients are Jacobi’s elliptic theta function ${\theta_1}$, a complex parameter (coupling constant) ${\mathrm{g}}$, a shift step size (Compton wavelength) ${s}$, the unknown eigenvalue (energy level) ${E}$ and the unknown eigenfunction (wave function) ${f}$.
Choosing the (real) period of the theta function and the domain of x carefully, turns the difference Lamé equation into the eigenvalue problem of a tridiagonal matrix. Can you guess what this mystery matrix looks like? Bingo! It’s the elliptic Kac-Sylvester matrix we’ve seen in eq. (1). Notation: ${[z] = \theta_1(z)}$.
Our main result
We expressed the eigenvectors of the elliptic Kac-Sylvester matrix as discrete orthogonal polynomials. These new OPs are elliptic generalizations of the Krawtchouk (and Rogers) polynomials. Thus we extended many of the old results mentioned in this post.
Finally, let me give you a hard(?) open problem: Find the eigenvalues of elliptic Kac-Sylvester matrix. If you could explicitly formulate the eigenvalues in terms of the theta function for arbitrary matrix sizes, you would make a mathematical discovery!
This was our elliptic Kac-Sylvester paper in a nutshell. If you are interested in seeing the details, you can read the full paper for free at rdcu.be/clsVe .
I might do more posts in the near future since we have some new results that are even more exciting than the one I’ve just described. | 2022-12-02 10:05:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.770521342754364, "perplexity": 1120.8855439721185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00183.warc.gz"} |
http://juliaplots.org/AlgebraOfGraphics.jl/dev/API/recipes/ | # Recipes
AlgebraOfGraphics.linesfill!Function
linesfill(xs, ys; lower, upper, kwargs...)
Line plot with a shaded area between lower and upper. If lower and upper are not given, shaded area is between 0 and ys.
Attributes
Available attributes and their defaults for Combined{AlgebraOfGraphics.linesfill!} are:
source
AlgebraOfGraphics.linesfillFunction
linesfill(xs, ys; lower, upper, kwargs...)
Line plot with a shaded area between lower and upper. If lower and upper are not given, shaded area is between 0 and ys.
Attributes
Available attributes and their defaults for Combined{AlgebraOfGraphics.linesfill} are:
color :gray25
colormap :batlow
colorrange MakieCore.Automatic()
fillalpha 0.15
linestyle "nothing"
linewidth 1.5
lower MakieCore.Automatic()
upper MakieCore.Automatic()
source | 2022-05-26 14:26:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24608570337295532, "perplexity": 12368.164241432656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00627.warc.gz"} |
http://sphire.mpg.de/wiki/doku.php?id=known_issues:sphire_beta_20161216 | ### Sidebar
#### Release history
known_issues:sphire_beta_20161216
This version (2016/12/16 19:13) is a draft.
Approvals: 0/1
# SPHIRE Known Issues
VERSION : Beta_20161216
RELEASE DATE : 2016/12/16
## GENERAL
### Wiki page for Prepare Input Stack is missing.
COMMAND : sxunblur.py
VERSION : Beta_20161216
DATE : 2016/12/14
Tutorial refers to the Wiki page for Prepare Input Stack, but it does not exists. The page should provide detailed instructions of how to prepare input stack from single particle dataset that was created by a different program.
## MOVIE
### Micrograph movie alignment is missing the MPI support.
COMMAND : sxunblur.py
VERSION : Beta_20161216
DATE : 2016/12/14
At this point, it is not possible to run the Micrograph movie alignment using multiple MPI processes.
### Advanced usage of Drift Assessment tool should be described on our wiki page.
COMMAND : sxgui_unblur.py
VERSION : Beta_20161216
DATE : 2016/12/14
The user manual for the Drift Assessment tool is missing from SPHIRE Wiki. It should describe the advanced usage. Tutorial should have a link to this page.
## CTER
### CTF Estimation requires the number of MPI processors to be lower than the total number of micrographs.
COMMAND : sxcter.py
VERSION : Beta_20161216
DATE : 2016/11/30
The program will abort the execution if the number of MPI processors exceeds the total number of micrographs.
### The wiki page for advanced usage of CTF Assessment tool is missing.
COMMAND : sxgui_cter.py
VERSION : Beta_20161216
DATE : 2016/12/14
The user manual for CTF Assessment tool is missing from SPHIRE Wiki. It should describe the advanced usage. Tutorial should have a link to this page.
## ISAC
### ISAC crashes when the size of dataset is very large.
COMMAND : sxisac.py
VERSION : Beta_20161216
DATE : 2016/11/30
Because we are still optimising the parallelization of ISAC, the program will crash due to memory allocation error when the input dataset is rather large. On our cluster with 128 Gb RAM and 24 cores per node, ISAC jobs crash with this particle box size when the datasets contain > 60000 particles.
For now, a workaround is to split the data in subsets, run ISAC as described here for each subset separately and combine the results at the end. For example, to split a dataset of 200.000 particles in 4 subsets, type at the terminal:
e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_1 –first=0 –last=50000 e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_2 –first=50001 –last=100000 e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_3 –first=100001 –last=150000 e2proc2d.py bdb:Particles#stack_preclean bdb:Particles#stack_preclean_4 –first=150001 –last=200000
To combine the resulting “clean” stacks at the end into a single virtual stack, type (one line):
e2bdb.py bdb:Particles#stack_clean1 bdb:Particles#stack_clean2 bdb:Particles#stack_clean3 bdb:Particles#stack_clean4 –makevstack=bdb:Particles#stack_clean1
## VIPER
### RVIPER requires the number of MPI processors to be lower than the number of class averages.
COMMAND : sxrviper.py
VERSION : Beta_20161216
DATE : 2016/11/30
The program will crash if the number of MPI processors exceeds the number of class averages in your input file.
### Resize/Clip VIPER Model should be one step instead of two sperate steps.
COMMAND : N/A
VERSION : Beta_20161216
DATE : 2016/12/14
Resize/Clip VIPER Model should be one step instead of two sperate steps. The step should also allow user to remove disconnected density, apply low-pass filter, and generate a 3D mask.
## MERIDIEN
### MERIDIEN supports only 15° or 7.5° for initial angular sampling step.
COMMAND : sxmeridien.py VERSION : Beta_20161216 DATE : 2016/12/15
For Initial angular sampling step, the default value of 15° is usually appropriate to create enough projections for the initial global parameter search for almost every asymmetric structure (i.e. c1). However, if the structure has higher symmetry (e.g. c5), it is recommended to adjust this parameter to a lower value (7.5°). Currently, we support only these two starting values. Choosing another value is likely to create unexpected behaviour of the program
*** Settings of starting resolution and initial angular sampling step of MERIDIEN are related to each other.
### Inappropriate combination of memory per node and MPI settings likely cause crash or performance deterioration of 3D Refinement.
COMMAND : sxmeridien.py
VERSION : Beta_20161216
DATE : 2016/11/30
If a combination of memory per node and MPI settings is inappropriate for your cluster and the dataset size, the program will most likely crash (if the memory per node is too high) or will be forced to use small memory mode (if the memory per node is too low), which results in performance deterioration.
Please check your cluster specifications. The program has to know how much memory available on each node as it uses “per node” MPI parallelisation in many places. Nodes are basic units of a cluster and each node has a number of CPUs (with few exceptions of heterogeneous clusters whose use should be avoided, number of CPUs is the same on each node). While clusters are often characterized by the amount of memory per CPU, here we ask for the total amount of memory per node as the program may adjust internally the number of CPUs it is using. For example, a cluster that has 3 GB memory per CPU and 16 CPUs per node has 3 GB × 16 = 48 GB memory per node. The default value used by the program is 2 GB per node and the program will determine internally the number of CPUs to arrive at the estimate of total memory.
Especially, the final reconstruction stage is very memory intensive. At this stage, the program will crash if sufficient memory is not available. In this case, please try to reduce the number of MPI processes per node while using at least 4 or more nodes, and do the continue run from the last iteration.
In case the program does not finalize even if you use one process per node, the only alternative in this case would be to downscale your particles (and your reference volume) to a lower pixel size and re-run the program in a different output folder. This can be done with the following command:
e2proc2d.py bdb:mystack bdb:mybinnedstack –scale=(scaling factor) –clip=(new box size)
On our cluster, with 128 Gb/node, for reconstructions of datasets with a box-size of 512, we had to reduce the number of processes per node from 24 to 6, but binning was not necessary.
### The convolution effects of masking affects the resolution estimated by Sharpening.
COMMAND : sxprocess.py –postprocess
VERSION : Beta_20161216
DATE : 2016/11/30
With the present version of Sharpening, the resolution estimation might be affected by the convolution effects of over-masking because phase randomization of the two half-reconstructions is not performed.
Thus, please be cautious and avoid tight mask. Check your FSC carefully and create a larger mask in case you obtain strange B-factor values and/or observe strange peaks or raises at the high frequencies of your FSC. Such issues are nicely described in (Penczek 2010). In case you want to measure the local resolution of a specific area of your volume, instead of using local masks to calculate the respective FSC, use our local resolution and filtering approach instead. You should always visually inspect the resulting map and FSC carefully and confirm that the features of the density agree with the nominal resolution (e.g. a high resolution map should show clearly discernible side chains).
## SORT3D
### Current MPI implementation of 3D Clustering RSORT3D is still under optimisation.
COMMAND : sxrsort3d.py
VERSION : Beta_20161216
DATE : 2016/11/30
Especially, the scalability of 3D Clustering RSORT3D is not optimised. Using a very large number of CPUs slows down processing speed and causes huge spikes in the network communication. In this case, please try a less MPI processes.
### Current MPI implementation of 3D Clustering RSORT3D is still under optimisation.
COMMAND : sxrsort3d.py
VERSION : Beta_20161216
DATE : 2016/11/30
## LOCALRES
### Currently, there is no output directory nor standard output of Local Resolution and 3D Local Filter.
COMMAND : sxlocres.py and sxfilterlocal.py
VERSION : Beta_20161216
DATE : 2016/11/27
Currently, there is no output directory nor standard output of Local Resolution and 3D Local Filter
Instead, user should be able to specify the output directory path, the command should automatically create the specified directory if it does not exist. If the directory already does exists, abort the execution. In addition, it should give some feedback at least when the the process is done.
## UTILITIES
### In Angular Distribution, setting pixel size to 1.0[A] fails with error.
COMMAND : sxprocess.py –angular_distribution
VERSION : Beta_20161216
DATE : 2016/11/07
Setting pixel size to 1.0[A] fails with error.
sxprocess.py –angular_distribution 'pa06a_sxmeridien02/main001/params_001.txt' –box_size=480 Traceback (most recent call last):
File "/work/software/sparx-snr10/EMAN2/bin/sxprocess.py", line 1242, in <module>
main()
File "/work/software/sparx-snr10/EMAN2/bin/sxprocess.py", line 1238, in main
angular_distribution(inputfile=strInput, options=options, output=strOutput)
File "/work/software/sparx-snr10/EMAN2/lib/utilities.py", line 7230, in angular_distribution
0.01 + vector[2] * options.cylinder_length
ZeroDivisionError: float division by zero
### Currently, no standard output is available from Angular Distribution.
COMMAND : sxprocess.py –angular_distribution
VERSION : Beta_20161216
DATE : 2016/11/07
At least, the program should give some feedback when the the process is done. | 2022-12-04 05:44:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38630127906799316, "perplexity": 3429.7949399123436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00853.warc.gz"} |
https://gamebanana.com/tuts/11227 | # Generic compiling/decompiling
## A Tutorial for Source Engine
“ A well rounded and balanced tutorial covering all the basics that one needs to start compiling and decompiling models. It is well presented and formatted, easy to follow and understand, with some nice tips at the end and could serve as a starting point for anyone showing interest in modeling for the Source engine. ”
TutorialParentSubmitterStats
• Changed some stuff to markup
• Righted some wrongs
• guide on making $modelname better Additional Notes Markups Generic compiling/decompiling 5y • Added crowbar info Additional Notes Crowbar Generic compiling/decompiling 5y For source engine in general, for reference # Index ## Decompiling 1. Getting the models 2. Setting up Crowbar 3. Extraction 4. ## Recompiling 5. Batch script 6. Crowbar 7. Blender 8. WallWorm 9. ## Miscelaneous 10. VPK and steampipe 11. Common QC commands 12. Common Errors Decompiling ## Requirements # Part 1: Getting models Decompiling a model isn't hard, but to start off, you'll need to files to decompile first, if you've downloaded a model replacement mod and they're not in a VPK, you'll already have these files, and you don't need GCFScape (see next paragraph), otherwise you'll need to know the locations of the GCF or VPK archives. If you've downloaded a mod, you can look through the files of the archive until you'll stumble upon a set of files with the extensions .mdl, .vvd, .sw.vtx, .dx80.vtx, dx90.vtx and .phy. Several of these are required to decompile a model, for the sake of simplicity it's safe to say it's easier to just use them all (if you're missing .dx80.vtx, copy .dx90.vtx and rename the extension appropriately). Unpack them to someplace easy to find if they're still trapped in their archive. Take note of their location (i.e. keep the folder open) afterwards, you're going to need it for step 2. If you don't have the models, which is the case for editing default models, open the VPK/GCF archive with GCFScape.VPK archives are in common\\[game]\\[game abbreviation], you'll want to open pak01_dir.vpk or \\[game abbreviation]_pak_dir.vpk most of the time, the others are usually seemingly empty. The GCF archives are in your steamapps folder, if you can't find a models folder, try a different archive, if you can't find what you're looking for try thinking outside the box or search the tree systematically; Select the files with the extensions .mdl, .vvd, .sw.vtx, .dx80.vtx, dx90.vtx and .phy and put them in a folder (preferably a folder on your desktop, but not directly on your desktop, if you don't have .dx80.vtx, copy .dx90.vtx and rename the extension to make it fit), open the folder and open crowbar. # Setting up Crowbar With the window containing your model files still open somewhere, browse to the folder containing Crowbar and open it. Drag the .mdl file onto the Crowbar window and tick the right boxes. If you're looking to replace a model from scratch, you'll want to have QC file enabled at least as well as Reference mesh SMD file for reference (otherwise applying your model is going to be a lot of guess work). For Counter Strike: Source (and a few other games) you'll also want Bone animation SMD files, which contains the animations needed that you can either replace with your own (in which case it serves as reference timelines), or keep intact to preview animations. You could also untick QC file if all you want is the .smd files for use in other mods. # Extraction Click decompile. If everything's going alright, it should work without a hitch and you now have decompiled model files, along with a .qc file, and some animation(s) file(s) and a physmodel SMD, whatever you told Crowbar to give you; check the destination folder if that really is the case. Recompiling # Batch script compiler My preferred way that's nice and fast is through the use of a .bat file, or a Windows batch script. Open notepad (or something alike) and paste this in it: "C:\\Program Files (x86)\\Steam\\steamapps\\common\\\\[game]\\bin\\studiomdl.exe" -game "C:\\Program Files (x86)\\Steam\\steamapps\\\\[name]\\\\[game]\\\\[game abbreviation]" %1 pause Alternatively for older games: "C:\\Program Files (x86)\\Steam\\steamapps\\common\\sourceSDK\\bin\\orangebox\\bin\\studiomdl.exe" -game "C:\\Program Files (x86)\\Steam\\steamapps\\common\\\\[game]\\\\[game abbreviation]" %1 pause To use this properly, replace \\[game] and \\[game abbreviation] with the proper names, if you're on a 32-bit windows machine, remove the (x86) bit; if you installed Steam elsewhere, go ahead and change that too. Then pause statement makes the script wait for input before ending execution, which is useful for finding errors if you have them. After you've saved in the same folder as your mdldecompiler.qc as compiler.bat or something, simply double-click it and let the magic happen. After you're done with it, you can put it in a different folder where you can use it again for quick recycling. The %1 part means you can drop your .qc file onto your .bat and it'll take it as argument, alternatively you can rename that bit to match your .qc, in case you want to compile a batch of .qc files. Also make sure it's for the right engine, as it is now it'll use the game's tools, so if you're getting jiggy with an older mod, use ep1 instead, use source2006 for BMS, you can get these by downloading the proper SDK Base from the tools section of your library. # Crowbar To decompile with Crowbar, drag the .qc file into the window or onto crowbar.exe. you'll probably have to set up your games first, in the boxes you see after choosing setup games, you need to put the paths to gameinfo.txt in the first one and to studiomdl.exe in the second, these are usually in C:\\Program Files (x86)\\Steam\\steamapps\\common\\[game name]\\[game abbreviation]\\ganeinfo.txt and C:\\Program Files (x86)\\Steam\\steamapps\\common\\[game name]\\bin\\studiomdl.exe respectively. # Blender The 3D model editor program Blender, along with the proper plug-ins, can also compile any model directly, which is useful for making fine adjustments to the model directly when something wrong's been spotted in the model viewer and it's become a bit of a nuisance to keep feeding the .bat file. First you enter an Engine Path, which is C:\\Program Files (x86)\\Steam\\steamapps\\common\\[game name]\\bin, in the event of mods, that game name is SourceSDK\\bin\\[mod engine], in the end it should point to the folder containing a studiomdl.exe or resourcecompiler.exe for Source 2. After that you have to give it a game to compile for, for base games this path is usually C:\\Program Files (x86)\\Steam\\steamapps\\common\\[game name]\\[game abbreviation], which is the directory containing a gameinfo.txt file; for mods this is instead in C:\\Program Files (x86)\\Steam\\steamapps\\sourcemods\\[game name]\\[game abbreviation]. Now it's time to feed a QC file, which can be relative to the .blend, then you can compile. it's more of a front-end, which means you'll have to export the model yourself every time before change can be noticed, but this button is just a few pixels away. # Wallworm for 3DSMax for 3DSMax users, there's WallWorm, a nifty tool that will compile your model without having to do much of anything except having the model itself; it creates a QC file and the textures for you. Miscelaneous # VPK and steampipe You can turn your mod into an add-on by turning the folder you so aptly named something sensible into a .vpk archive of its own, this can be done by simply dropping that folder onto vpk.exe, this program is in \\[game name]\\bin, after which you can safely delete the folder. The process can also be done in reverse to get the folder back without having to open it with GCFScape # Common QC commands •$modelname: the name of the model itself, starts relative to root/models and must end with .mdl (remember that the custom folder in your custom folder is also a root).
• A neat trick will make you compile directly in a custom folder where applicable, namely by prepending the path with ..\\custom\\[mod folder name]\\models; the .. bit tells the compiler "Go up a folder", which is root, from there it goes into the custom folder.
• $model/$body: the name of the mesh and the .smd that contains it in that order; models with facial animations require $model, for $staticprop models $body is recommended. • $cd: changes the directory where the .smd files are, one is usually given during decompile, but if you place the .qc in the same folder as all source files, this can and should be removed.
• $cdmaterials: the directory where the .vmt files are to be found by the model, starts relative to root\\materials and shouldn't contain the actual material name as those are already stored in the .smd files themselves; more can be added if need be. •$sequence/$animation: animations your mesh might have, every model needs at least one.$animation is more extended, though they can be used in conjunction.
• $collisionmodel: contains the collision data, a simplified mesh that represents the boundaries of where the model can touch the world/another prop. Can contain several options enclosed in curly-brackets, the most common of which is $concave and $mass, which are used for models with holes and props that can be pushed respectively. •$texturegroup: for models with multiple skins, most extensively used in Team Fortress 2 for team coloring (tutorial here), but can be used for any prop that'll have a different look and even for replacing a model by hiding one mesh and showing another.
# Common errors
## leaked model
This happens when the QC points to a file that doesn't exist. Either the file has the wrong name or is a different folder entirely
This means the mdldecompiler.qc file came across a $hbox statement that doesn't comply with the model, namely the bone it's referring to doesn't exist in the model. To fix it, simply remove the offending line, the compiler usually states which bone to pick. ## animations Animations can be completely broken on default v_model weapons during decompile and sometimes there's not much you can do about the animations themselves to make a quick fix, but there is a possibility that there is. Open the offending animation with your favorite plain-text editor and look for values at around 1000 or above (or -1000 and below), take note of the bone number that's affected (the first number of the line, it's usually the root bone causing this) and look for another line with nominal numbers (usually around 0) at the same bone, if you can't find any, use a different animation file, if you can't find any in those, it's probably faster to simply ignore the animations and remove them, you won't be needing them for the next best thing. The next best thing is to edit the mdldecompile.qc file to make use of the original animations, this theoretically gives better results than the method above, but will clutter your folders with dependency files. If you haven't already, open the mdldecompiler.qc file, remove every line mentioning $sequence and $animation with extreme prejudice and put in their stead the following: $includemodel "path\\to\\model\\x_weaponname_anims.mdl"
Change the path to where you want it to be, it's usually best to make this the same as where you put your end result, change the name to something sensible, such as what you put in $modelname with _anims after it. To make this work you'll need to do some more stuff after compiling, namely get a copy of the .mdl file you decompiled (i.e. has the animations), append the name of the file with _anims and put it in the same folder as your final .mdl. it's important to note this can be done after compiling since $includemodel doesn't actually include the model during compiling but rather make a dependency on that file existing.
For CS:S (and maybe some others too) it could happen your gun will be invisible when doing this, it's not actually gone, just rotated. To fix this, add a rotate 90 statement in every $sequence anywhere after the .smd; if that still doesn't work try -90 instead. ## Access Violations This could literally be anything, but it's always something in your QC file; try removing lines one by one to see which one is the cause. ## Q_AppendSlash cannot be found in vstdlib.dll Install your decompiler properly, read the readme, if your decompiler isn't in sourcesdk\\bin\\ep1\\bin, try putting it there; if that doesn't work, launch the SDK. ## Black and magenta checkered models This isn't really in the scope of de/recompiling, but basically it means your model can't find the right .vmt(s), or the .vmt can't find the .vtf file(s). The former can be caused by having incorrect material names on the .smd, or an incorrect $cdmaterials that doesn't point in exactly the right direction. The latter is the same, but in the .vmt instead.
### Posts
1-10 of 23
1
• 11mo 11mo
I'd recommend ditching the entire sections on CannonFodder's decompiler and GUIStudioMDL and replacing it with Crowbar, as it makes most of those issues with decompiling and compiling a thing of the past.
• Interesting x 1
• 11mo
also, these programs are broken
• Disagree x 1
I Hate My Life.
• 11mo 11mo
• 2y
Posted by E.Lopez
If anyone have this error in Crowbar : ERROR: Model version -1692393424 not currently supported
You should modificated the .mdl file with Notepad++ ( no windows notepad ) . Replace the IDST0 to IDST,
That error shouldn't exist since Cannonfodder's version. I've decompiled models with IDST0 with crowbar before just fine. What model are you having this problem with?
• 2y
If anyone have this error in Crowbar : ERROR: Model version -1692393424 not currently supported
You should modificated the .mdl file with Notepad++ ( no windows notepad ) . Replace the IDST0 to IDST,
We...
• 5y
> **Posted by #trigger_hurt** > > **Posted by Devieus** > > > > **Posted by #trigger_hurt** > > > > > I'm trying to make a team-themed minigun that allows my custom glove colors, but every time I try to re-compile it, the model is invisible. I tried the -anims.mdl thing but that did not work. I also tried rotating it, but no rotations seemed to work. I tried 90, -90, and 180 (basically the only three you can do), but the model is still mostly invisible. Any suggestions? > > > > Is the console giving anything? > > Sorry for the late response, but no, the console gives no warnings or any signs of imperfections. This seems to happen with the medic viewmodels as well (I tried to make a team-themed bonesaw but the same issue occured) > > And also, just a random fact I discovered, despite the fact that the pyro's axe has a c-model that works perfectly (tested in garry's mod using pac3), and on top of that a w-model, it still uses the v-model. I mean seriously... It sounds like you did everything right, but that doesn't mean you actually did just that. I could take a look at the files, see if that'll help.
• 5y
> **Posted by Devieus** > > **Posted by #trigger_hurt** > > > I'm trying to make a team-themed minigun that allows my custom glove colors, but every time I try to re-compile it, the model is invisible. I tried the -anims.mdl thing but that did not work. I also tried rotating it, but no rotations seemed to work. I tried 90, -90, and 180 (basically the only three you can do), but the model is still mostly invisible. Any suggestions? > > Is the console giving anything? Sorry for the late response, but no, the console gives no warnings or any signs of imperfections. This seems to happen with the medic viewmodels as well (I tried to make a team-themed bonesaw but the same issue occured) And also, just a random fact I discovered, despite the fact that the pyro's axe has a c-model that works perfectly (tested in garry's mod using pac3), and on top of that a w-model, it still uses the v-model. I mean seriously...
professional(?) shitposter
• 5y
> **Posted by #trigger_hurt** > I'm trying to make a team-themed minigun that allows my custom glove colors, but every time I try to re-compile it, the model is invisible. I tried the -anims.mdl thing but that did not work. I also tried rotating it, but no rotations seemed to work. I tried 90, -90, and 180 (basically the only three you can do), but the model is still mostly invisible. Any suggestions? Is the console giving anything?
• 5y
I'm trying to make a team-themed minigun that allows my custom glove colors, but every time I try to re-compile it, the model is invisible. I tried the -anims.mdl thing but that did not work. I also tried rotating it, but no rotations seemed to work. I tried 90, -90, and 180 (basically the only three you can do), but the model is still mostly invisible. Any suggestions?
professional(?) shitposter
• 5y
> **Posted by Batnik_Ref.smd** > > You talk in riddles))) That's mostly because your English isn't making clear what exactly your problem is.
### Embed
Image URL:
HTML embed code:
BB embed code:
Markdown embed code:
Original Authors
Writings
Creator
Difficulty Level
Beginner
Genre
Compiling
Miscellaneous
• 19,076 Views
• 24 Posts
• 6y Submitted
• 4mo Modified
• 5y Updated
82 bScore
9.5 Rating
4 voters | 2018-10-17 03:43:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6725490093231201, "perplexity": 3244.7110043407406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510969.31/warc/CC-MAIN-20181017023458-20181017044958-00317.warc.gz"} |
https://math.stackexchange.com/questions/2145315/is-my-proof-of-p-implies-q-landq-implies-r-impliesp-implies-r-correct | # Is my proof of $((p\implies q)\land(q\implies r))\implies(p\implies r)$ correct?
My book use truth table(brute-force) for proving. But I want to know whether my proof is correct: $$LHS\equiv((\lnot p\lor q)\land(\lnot q\lor r)),$$ let $q$ be TRUE, $\top$, we have: $$((\lnot p\lor\top)\land(\bot\lor r))\equiv r,$$ let $q$ be FALSE, $\bot$, we have: $$((\lnot p\lor\bot)\land(\top\lor r))\equiv\lnot p.$$ Since (by some axiom) $q$ is either TRUE or FALSE: $$LHS\implies r\lor\lnot p,$$ which $$\lnot p\lor r\iff (p\implies r),$$ then we have $LHS\implies(p\implies r)$.
• In which formal system do you want the proof to be conducted? It's not enough just to say "some axioms", because in some systems what you want to prove is itself an axiom! – Henning Makholm Feb 15 '17 at 9:01
• @HenningMakholm In some case that $p$ is not true doesn't imply $p$ is false. So I add the statement but I forget the name of the axiom/assumption... – Ning Wang Feb 15 '17 at 9:08
• Your "proof" is actually an informal argument. A formal proof requires a defined logic. But as an argument, it is a correct argument. – DanielV Feb 16 '17 at 7:25
• @DanielV Yes, I've found that I should just use those definitions to prove. – Ning Wang Feb 16 '17 at 7:26
For the formula in the question, propositional logic is sufficient to prove it. If statements are either truth or false, it's also called truth-functional propositional logic. Alternatively, this is equivalent to a Boolean algebra. The axiom you didn't remember the name is the law of excluded middle.
Regarding the formula in the question, this is called the hypothetical syllogism. But regarding the proof: It is basically correct, but you could also get there by using the inference rules from logic to show that the formula is a tautology. I don't think you need the law of excluded middle to prove it.
• So the formal proof should be the one in @user373141 's? Btw, thanks for the links. – Ning Wang Feb 15 '17 at 9:58
• @N1ng You don't need any kind of "assume that $P/Q/R$ is true/false" to show that it is a tautogoly. Just use the basic laws of logic. – tylo Feb 15 '17 at 10:51
Suppose $P\to Q\\ Q\to R\\$ are true.
We want to prove that $P\to R$ is true.
To do this suppose $P$ is true.
Because $P\to Q$ is true it follows that $Q$ is true. Now because $Q$ is true, from $Q\to R$ being true follows that $R$ is true.
We assumed $P$ was true and we deduced that $R$ is also true, therefore $P\to R$ as we wanted.
• A nice proof, but the question was whether the proof in the OP was correct or not. – skyking Feb 15 '17 at 9:35
However this depends on that the axioms and rules used are established at the point of the proof. This means the correctness depends on the formal system, axioms and the previously proven statements.
Especially what you rely on is that from $\phi \rightarrow \chi$ and $\neg\phi\rightarrow\psi$ you can conclude $\chi\lor\psi$ or alternatively expressed $(\phi \rightarrow \chi)\land(\neg\phi\rightarrow\psi)\rightarrow(\chi\lor\psi)$. Which itself can be prooved by truth tables.
As your book seem to use truth tables for proofs. This works fine for a system, it may look overly complicated, but it has it's advantage - it's a lot easier to "bootstrap" the system if one accept truth-tables for proofs.
$$\begin{array} {l|l:l} \hdashline 1 & \quad (p\to q) \wedge (q\to r) & \mathsf{Assume } \\ 2 & \quad (p\to q) & 1, \wedge\mathsf {Elimination} \\ 3 & \quad (q\to r) & 1, \wedge\mathsf {Elimination} \\ 4 & \quad (\neg q\to \neg p) & 2,\mathsf{Contraposition} \\ 5 & \quad (q\vee\neg q) & \textsf{Law of Excluded Middle} \\ 6 & \quad r\vee \neg p & 3,4,5,\textsf{Constructive Dilemma} \\ 7 & \quad p\to r & 6,{\to}\mathsf{Equivalence} \\ \hline \therefore & ((p\to q)\wedge(q\to r))\to (p\to r) & 1,6,{\to}\mathsf{Introduction} \end{array}$$
The Constructive Dilemma states $A\vee B, A\to C, B\to D\vdash C\vee D$ .
Alternatively: $\begin{array} {l|l:l} \hline 1 & \quad (p\to q) \wedge (q\to r) & & \text{assumption } 1 \\ 2 & \quad (p\to q) & 1, \wedge\mathsf E \\ 3 & \quad (q\to r) & 1, \wedge\mathsf E \\ \hdashline 4.1 & \qquad p & & \text{assumption } 2 \\ 4.2 & \qquad q & 2,4.1, \to\mathsf E \\ 4.3 & \qquad r & 3,4.2, \to \mathsf E \\ \hline 4 & \quad p\to r & 4.1,4.3, \to\mathsf I & \text{discharge } 2 \\\hline \therefore & (p\to q) \wedge (q\to r) \to (p\to r) & 1 ,4 , \to\mathsf E & \text{discharge }1 \end{array}$ | 2019-05-21 15:24:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625563979148865, "perplexity": 344.00829204645083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256426.13/warc/CC-MAIN-20190521142548-20190521164548-00250.warc.gz"} |
https://www.physicsvidyapith.com/2022/08/magnetic-dipole-moment-of-current-carrying-loop.html | ## Magnetic Dipole Moment of Current carrying loop
Current carrying Loop or Coil or Solenoid:
The current carrying loop (or Coil or solenoid) behaves like a bar magnet. A bar magnet with the north and south poles at its ends is a magnetic dipole, so a current -loop is also a magnetic dipole.
Equation of Magnetic Dipole Moment of Current carrying Loop:
When a current loop is suspended in a magnetic field, it experiences the torque which tends to rotate the current loop to a position in which the axis of the loop is parallel to the field. So the magnitude of the torque acting on the current loop in the uniform magnetic field $\overrightarrow{B}$ is given by:
$\tau=iAB sin\theta \qquad(1)$
Where $A$ - Area of the current loop
We also know that when the electric dipole is placed in the electric field, it also experiences the torque which tends to rotate the electric dipole in the electric field. So the magnitude of the torque on the electric dipole in the uniform electric field $\overrightarrow{E}$ is given by:
$\tau=pE sin\theta \qquad(2)$
Where $p$ - The magnitude of the electric dipole moment
Now compare the equation $(1)$ and equation $(2)$ and we can conclude that the current loop also has a magnetic dipole moment just like an electric dipole have an electric dipole moment. The magnetic dipole moment is associated with the current in the loop and the area of the current loop. It is represented by $\overrightarrow {m}$. So the magnitude of the magnetic dipole moment of current carrying loop is:
$m=iA$
The vector form of the magnetic dipole moment current carrying loop is
$\overrightarrow{m} = i\overrightarrow{A}$
The magnetic dipole moment of current carrying coil: If the current-carrying loop has $N$ number of turns (i.e current carrying coil) then the magnetic dipole moment of current carrying coil:
$m=NiA$
The vector form of the magnetic dipole moment of the current carrying coil is
$\overrightarrow{m} =N i\overrightarrow{A}$
The magnetic dipole moment of Circular Loop: Let us consider the circular loop of radius $a$ in which current $i$ is flowing the magnitude of the magnetic dipole moment of the circular loop:
$m=i A$
Here the area $A$ of the circular loop is $\pi a^{2}$ then the magnitude of the magnetic dipole moment of the circular loop is:
$m=i \pi a^{2} \qquad(3)$
The magnetic field at the center of the current carrying a circular loop in terms of current is:
$B=\frac{\mu_{\circ}i}{2a}$
Now substitute the value of $i$ from equation $(3)$ in the above equation then the magnetic field at the center of the current carrying circular loop in terms of magnetic dipole moment is:
$B=\frac{\mu_{\circ}m}{2\pi a^{3}}$
$B=\frac{\mu_{\circ}}{4\pi} \frac{2m}{a^{3}}$ | 2023-04-02 11:00:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7500061988830566, "perplexity": 116.20641038557808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00031.warc.gz"} |
https://learn.careers360.com/maths/vector-algebra-chapter/ | # Vector Algebra Share
## What is Vector Algebra
Vector Algebra is defined as the mathematical operations done on vectors, which is further the foundation of modern-day 3D gaming, animation and widely used in modern physics. At the JEE level, Vector Algebra is simple to understand. Every year you will get 1 - 2 questions in JEE Main exam as well as in other engineering entrance exams. This chapter helps you in 3-D Geometry and Physics (Kinematics, Work, Energy and Power, electrostatics, etc.). Importance of this chapter can be seen from Physics where you learn why any quantity is scalar or vector? A little mistake in vector algebra costs you negative marks. As compared to other chapters in maths, Vector Algebra requires high accuracy to prepare for the examination. Once you start learning vector algebra you will become familiar with the application of vectors and it helps you to solve the problems based on 3-D geometry and basics used in physics.
Suppose you are playing hide n seek game and one of your friends gives you a hint that he is 5 steps away from you. Would you be able to catch him??
This will take you many more tries as you are not aware of the direction.
Suppose, he is behind you and you start moving in the forward direction. What happened?? Distance will start increasing between both of you.
What if he tells you that he is in front of you?
Now you can easily catch him as you know the direction as well as the magnitude of the distance of your friend. That's how we make use of vectors in real life unknowingly.
There are many more examples like computing the direction of rain, flight of a bird and the list is never-ending.
Prepare Online for JEE Main/NEET
Crack JEE 2021 with JEE/NEET Online Preparation Program
Well, all the answers to these questions you will able to tell when you study Vector Algebra.
After reading this chapter you will be able to:
• Determine the position of any object
• Represent the position of the object w.r.t the another object
• Determine the least possible way to reach one position w.r.t the other position
## Notes of Vector Algebra
Important Topics of Vector Algebra
• Vector (Position Vector, Direction cosine)
• Types of vector
• Vector Algebra (Addition of vectors and multiplication of vector with scalar)
• Section Formula
• Product of two vectors (Scalar product and Vector product)
## Overview of Chapter
Vector- In general terms, a vector is defined as an object having both directions as well as magnitude.
Position vector- Consider a point P in space, having coordinates (x, y, z) with respect to the origin O (0, 0, 0). Then, the vector $\overrightarrow{OP}(or\;\overrightarrow{r})$ having O and P as its initial and terminal points, respectively, is called the position vector of the point P with respect to O.
Direction Cosines- The position vector of a point P(x, y, z). The angle α, β, and γ made by the vector with the positive direction of x, y, and z-axes respectively, are called its directions angles. The cosine values of these angles, i.e., cos α, cos β, and cos γ are called direction cosines of the vector and usually denoted by l, m, and n, respectively.
## Types of vector
Zero Vector: A vector whose initial and terminal points coincide, is called a zero vector (or null vector). Zero vector can not be assigned a definite direction as it has zero magnitudes. Or, alternatively, otherwise, it may be regarded as having any direction. The vector $\overrightarrow{AA}\;or\;\overrightarrow{BB}$ represents the zero vector.
Unit Vector: A vector whose magnitude is unity (i.e., 1 unit) is called a unit vector. The unit vector in the direction of a given vector $\overrightarrow{a}$ is denoted by â.
Coinitial Vectors: Two or more vectors having the same initial point are called coinitial Vectors.
Collinear Vectors: Two or more vectors are said to be collinear if they are parallel to the same line, irrespective of their magnitudes and directions.
Equal Vectors: Two vectors $\overrightarrow{a}\;and\;\overrightarrow{b}$ are said to be equal, if they have the same magnitude and direction regardless of the positions of their initial points, and written as $\overrightarrow{a}=\overrightarrow{b}$.
Negative of a Vector: A vector whose magnitude is the same as that of a given vector (say,$\overrightarrow{AB}$), but the direction is opposite to that of it, is called negative of the given vector.
Addition of two vectors simply means displacement from a point A to point B.
In general, if we have two vectors $\overrightarrow{a}\; and\; \overrightarrow{b}$. Then resultant $\overrightarrow{r}$ of two vector is, $\overrightarrow{r}=\overrightarrow{a}+ \overrightarrow{b}$
Points to be remember
• Triangle law of vector addition
• Parallelogram law of vector addition
## Multiplication of vector with a scalar
The product of the vector $\overrightarrow{a}$ by the scalar λ, is called the multiplication of a vector by the scalar λ and denoted as λ$\overrightarrow{a}$
And vector λ$\overrightarrow{a}$ is collinear to the vector $\overrightarrow{a}$
The vector λ$\overrightarrow{a}$ has the direction same or opposite to that of vector $\overrightarrow{a}$ according to the value of λ (same direction for positive value and vice versa). Also, the magnitude of vector λ is | λ | times the magnitude of the vector , i.e., | λ | = | λ | |$\overrightarrow{a}$|.
## Component Form of Vector
If a vector is represented as $x\hat{i}+y\hat{j}+z\hat{k}$ then it is called component form. Here, $\hat{i}, \hat{j} \;and\; \hat{k}$representing the unit vectors along the
x, y, and z-axes, respectively and (x,y,z) represent coordinates of the vector.
Some important points
If $\vec{a}\;and\;\vec{b}$ are any two vectors given in the component form $a_1\hat{i}+a_2\hat{j}+a_3\hat{k}$ and $b_1\hat{i}+b_2\hat{j}+b_3\hat{k}$ , respectively, then,
• The resultant of the vectors is
$(\vec{a}\;\pm \;\vec{b})=(a_1\pm b_1)\hat{i}+(a_2\pm b_2)\hat{j}+(a_3\pm b_3)\hat{k}$
• The vectors are equal if and only if
$a_1=b_1 , a_2=b_2 \;\;and\;\; a_3=b_3$
• The multiplication of vector \vec{a} by any scalar λ is given by
## $\lambda \vec{a}=\lambda a_1\hat{i}+\lambda a_2\hat{j}+\lambda a_3\hat{k}$Section Formula - Vector Algebra
When point R divides \vec{PQ} internally in the ratio of m:n such that $\frac{\vec{PR}}{\vec{RQ}}=\frac{m}{n}$ then $\vec{r}=\frac{m\vec{b}+n\vec{a}}{m+n}$.
When point R divides $\vec{PQ}$ externally in the ratio of m:n such that $\frac{\vec{PR}}{\vec{QR}}=\frac{m}{n}$ then $\vec{r}=\frac{m\vec{b}-n\vec{a}}{m-n}$.
## Product of two vector
Multiplication of two vectors is defined in two ways (i) Scalar (or dot) product and (ii) Vector (or cross) product.
In scalar product resultant is scalar quantity.
In vector product resultant is a vector quantity.
Scalar product
Scalar product of two non zero vectors $\vec{a}\;and\; \vec{b}$ is denoted by $\vec{a}\cdot \vec{b}$. Scalar product is calculated as $\vec{a}\cdot \vec{b}=\left |\vec{a} \right | \left |\vec{b} \right |cos\theta$ where, θ is the angle between two non zero given vectors.
Projection of a vector $\vec{a}\; is\; \left | \vec{a} \right |cos\theta.$
Vector product
Vector product of two non zero vectors $\vec{a}\;and\; \vec{b}$ is denoted by$\vec{a}\times \vec{b}$. Scalar product is calculated as $\vec{a}\times \vec{b}=\left |\vec{a} \right | \left |\vec{b} \right |sin\theta\;\hat{n}$ where θ is the angle between two non zero vectors and \hat{n} is a unit vector perpendicular to both.
## How to prepare Vector Algebra?
Vector Algebra is one of the basic topics, you can prepare this topic by understanding a few basic concepts
• Start with the basic concept of vector, understand all the terms used in vector algebra.
• Representation of a vector is an important part of this chapter. It is important for you that you should read all the questions meditatively.
• Vector is all about the direction with magnitude so make sure that the direction given in question and direction obtained in answer match properly.
• Make sure that after studying certain section/concept, solve questions related to those concepts without looking into the solutions and practice MCQ from the above-mentioned books and solve all the previous year problems asked in JEE.
• Don’t let any doubt remain in your mind and clear all the doubts with your teachers or with your friends.
## Best Books For Preparation:-
First, finish all the concepts, example and questions given in NCERT Maths Book. You must thorough with the theory of NCERT. Then you can refer to the book Cengage Mathematics Algebra. Vector Algebra is explained very well in this book and there are ample amount of questions with crystal clear concepts. You can also refer to the book Arihant Algebra by SK Goyal or RD Sharma. But again the choice of reference book depends on person to person, find the book that best suits you the best depending on how well you are clear with the concepts and the difficulty of the questions you require.
## Maths Chapter-wise Notes for Engineering exams
Chapters Chapters Name Chapter 1 Sets, Relations, and Functions Chapter 2 Complex Numbers and Quadratic Equations Chapter 3 Matrices and Determinants Chapter 4 Permutations and Combinations Chapter 5 Binomial Theorem and its Simple Applications Chapter 6 Sequence and Series Chapter 7 Limit, Continuity, and Differentiability Chapter 8 Integral Calculus Chapter 9 Differential Equations Chapter 10 Coordinate Geometry Chapter 11 Three Dimensional Geometry Chapter 13 Statistics and Probability Chapter 14 Trigonometry Chapter 15 Mathematical Reasoning Chapter 16 Mathematical Induction
### Topics from Vector Algebra
• Vectors and scalars, addition of vectors, components of a vector in two dimensions and three dimensional space ( AEEE, JEE Main, SRMJEEE, TS EAMCET, VITEEE, AP EAMCET, COMEDK UGET ) (238 concepts)
• Scalar and vector products, scalar and vector triple product ( AEEE, JEE Main, SRMJEEE, TS EAMCET, VITEEE, AP EAMCET, COMEDK UGET ) (490 concepts)
• Introduction to 3-D Geometry ( AEEE, JEE Main, SRMJEEE, TS EAMCET, VITEEE, AP EAMCET, COMEDK UGET ) (6 concepts)
• Vector Algebra ( AEEE, JEE Main, SRMJEEE, TS EAMCET, VITEEE, AP EAMCET, COMEDK UGET ) (54 concepts)
• Scalar and Vector Product of Vector Algebra ( AEEE, JEE Main, SRMJEEE, TS EAMCET, VITEEE, AP EAMCET, COMEDK UGET ) (66 concepts)
Exams
Articles
Questions | 2020-08-09 08:21:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 36, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7965704202651978, "perplexity": 1039.641899744072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738523.63/warc/CC-MAIN-20200809073133-20200809103133-00339.warc.gz"} |
http://2015.igem.org/Team:Technion_Israel/Modeling | # Team:Technion Israel/Modeling
Team: Technion 2015
# 3$$\alpha$$-HSD Kinetic Model
## Background
3$$\alpha$$-HSD is the name of a group of enzymes which convert certain hormones (like DHT) to another hormone (like $$3\alpha-diol$$) and vice versa, by means of oxidation and reduction. There are several strands of this enzyme, with different levels of potency. In humans, the enzyme is encoded by the AKR1C4 gene, while in rats it is encoded by the AKR1C9 gene. We chose the rat version of the enzyme because it is more efficient in breaking down DHT [7].
All AKRs catalyze an ordered bi-bi reaction in which the cofactor binds first, followed by the binding of the steroid substrate. The steroid product is the first to leave, and the cofactor is the last. In this mechanism, $${K_{cat}}$$ represents the slowest step in the kinetic sequence [2].
## Approaches to modeling the process
### 1. Cofactor saturation assumption
We assume that the levels of the cofactors on the scalp are high enough that they are always at saturation in the enzymatic reaction. The advantage of this approach is that we can use the Michaelis-Menten reversible equation to describe the reaction. As we will explain later, this assumption may not be correct, so we will offer other approaches as well. Another major disadvantage of this model is that it does not take the levels of the cofactors into consideration, so it cannot help us predict the system's behavior for different cofactor concentrations.
### 2. New Model Development
Taking cofactors into consideration, we can use principles from statistical mechanics in order to develop a completely new enzymatic reaction function. The advantage of this approach is that it describes the kinetics of the enzyme in much more detail than Michaelis-Menten reversible, and can even offer some explanations for our wet-lab results. The disadvantage of this approach is that there are no reaction constants available for it, so we will have to estimate them.
## Approach 1 – cofactor saturation assumption
If we assume that the levels of cofactor in the enzyme's environment are high enough that they are at saturation, the probability of finding an enzyme that is not connected to a cofactor is negligible. We also need to assume that the concentrations of both cofactors are almost equal, so the inhibitory effect [need article] will not affect the reaction (as we will show later, a large ratio of one cofactor in relation to another will inhibit the other direction of the reaction). The new kinetic schematic is:
Since both levels of NADPH and NADP are saturated, we'll assume product inhibition occurs only with DHT and $$3\alpha-diol$$, so the reaction will resemble a Michaelis-Menten reversible reaction. Since the degradation rates of the hormones on the scalp are unknown, we will neglect them by assuming the degradation is slower by several orders of magnitude than the enzymatic reaction.
We can summarize the reactions by the following coupled differential equations:
$\left( I \right)\left\{ {\begin{array}{*{20}{c}}{\frac{{d\left[ {3\alpha diol} \right]}}{{dt}} = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{k_s}}} - {V_{{m_r}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{k_p}}}}}{{1 + \frac{{\left[ {DHT} \right]}}{{{k_s}}} + \frac{{\left[ {3\alpha diol} \right]}}{{{k_p}}}}}}\\{\frac{{d\left[ {DHT} \right]}}{{dt}} = \frac{{{V_{{m_r}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{k_p}}} - {V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{k_s}}}}}{{1 + \frac{{\left[ {DHT} \right]}}{{{k_s}}} + \frac{{\left[ {3\alpha diol} \right]}}{{{k_p}}}}}}\end{array}} \right.$
Where:
• $${V_{{m_f}}}$$ is the maximum forward reaction rate attained when all enzyme molecules are bound to the substrate (DHT).
• $${V_{{m_r}}}$$ is the maximum backward reaction rate attained when all enzyme molecules are bound to the product($$3\alpha Diol$$).
• $${K_s}$$ is the substrate concentration at which the forward reaction rate is at half-maximum.
• $${K_p}$$ is the product concentration at which the backward reaction rate is at half-maximum.
While we couldn't find the kinetic constants relevant for the human scalp, we found an article which measured them on rat skin [5]. We will assume that the constants are of the same order of magnitude as on the human scalp.
### Time domain simulation
We simulated the system described above. The simulation has been done using the following constants:
Parameter Value Units source comment
$${V_{{m_f}}}$$ 5.63 $$\frac{{nmol}}{{\min }}$$ Calculated from [5] For $${10_{mg}}$$ of enzyme
$${V_{{m_r}}}$$ 16.28 $$\frac{{nmol}}{{\min }}$$ Calculated from [5] For $${10_{mg}}$$ of enzyme
$${K_s}$$ 0.38 $$\mu M$$ From [5]
$${K_p}$$ 2.79 $$\mu M$$ From [5]
In order to understand the breakdown process of DHT by the enzyme, we simulated the system for three initial concentrations of the hormone:
From the simulation we can see that for an initial concentration that is lower by an order of magnitude from $${K_s}$$, the time elapsed for the system to reach steady-state is almost the same as for an initial concentration of $${K_s}$$. However for a value that is higher by an order of magnitude, the time increases significantly.
The following video shows a parameter scan for different initial concentration values:
From this video we can learn that the time elapsed to break down initial concentrations that are larger by an order of magnitude from $${K_s}$$ increases significantly for every small increment. By looking at equation (I) we can find two possible reasons for this:
• DHT saturation - for every increment in the initial concentration, the value of the derivative approaches $${V_{{{\mathop{\rm m}\nolimits} _f}}}$$. For values that are larger by an order of magnitude than $${K_s}$$, large increments in the initial concentration do not change much the value of the derivative.
• Product inhibition - most of the DHT molecules are converted to $$3\alpha - diol$$. A larger concentration of the product decreases the breakdown rate of the substrate.
We can use this information in order to determine the necessary enzyme concentration required to break down an initial concentration of DHT at a certain time (Increasing the enzyme level will increase $${V_{\max }}$$).
### Percentage of DHT breakdown
Let C be the initial concentration of DHT, and the initial concentration of $$3\alpha - diol$$. The relation between the substrate and the product is:
$$\left[ {DHT} \right] + \left[ {3\alpha - diol} \right] = C$$
At stable state, $$\frac{d}{{dt}} = 0$$, so from equation (I) we get: $\begin{array}{l}\frac{{{V_{{m_f}}} \cdot \frac{{{{\left[ {DHT} \right]}_{final}}}}{{{k_s}}} - {V_{{m_r}}} \cdot \frac{{{{\left[ {3\alpha diol} \right]}_{final}}}}{{{k_p}}}}}{{1 + \frac{{{{\left[ {DHT} \right]}_{final}}}}{{{k_s}}} + \frac{{{{\left[ {3\alpha diol} \right]}_{final}}}}{{{k_p}}}}} = 0\\{V_{{m_f}}} \cdot \frac{{{{\left[ {DHT} \right]}_{final}}}}{{{k_s}}} - {V_{{m_r}}} \cdot \frac{{{{\left[ {3\alpha diol} \right]}_{final}}}}{{{k_p}}}\mathop = \limits^{(2)} {V_{{m_f}}} \cdot \frac{{{{\left[ {DHT} \right]}_{final}}}}{{{k_s}}} - {V_{{m_r}}} \cdot \frac{{C - {{\left[ {DHT} \right]}_{final}}}}{{{k_p}}} = 0\\{\left[ {DHT} \right]_{final}} \cdot \left( {\frac{{{V_{{m_f}}}}}{{{k_s}}} + \frac{{{V_{{m_r}}}}}{{{k_p}}}} \right) = \frac{{{V_{{m_r}}} \cdot C}}{{{k_p}}}\\{\left[ {DHT} \right]_{final}} = C \cdot \frac{{{V_{{m_r}}}}}{{\frac{{{k_p}}}{{{k_s}}} \cdot {V_{{m_f}}} + {V_{{m_r}}}}}\\ \Rightarrow \left[ \% \right] = \left( {1 - \frac{{{{\left[ {DHT} \right]}_{final}}}}{C}} \right) \cdot 100 = \left( {1 - \frac{{{V_{{m_r}}}}}{{\frac{{{k_p}}}{{{k_s}}} \cdot {V_{{m_f}}} + {V_{{m_r}}}}}} \right) \cdot 100\mathop = \limits^{from\,\,\,table} 0.717\% \end{array}$
### Problems with the approach
Looking at figure 1, we can see that there is an element of cofactor inhibition of the enzyme that this model does not take into account. In order for the enzyme to convert DHT to $$3\alpha - diol$$ (or vice versa), the specific cofactor of the reaction has to be connected to the enzyme, transforming it so it can bind only to the substrate or product. As we will see later, in order for a Michaelis-Menten reversible reaction to occur, both cofactor levels need to be high and at a certain ratio. The fact that this assumption may not be correct is one of the main reasons we chose to overproduce NADPH as a part of our project.
In fact, this model does not acknowledge the fact that the enzyme requires a cofactor in order to work, so we cannot use it as a good simulator of our system.
Nevertheless, in situations where these conditions exist, this model offers a simple tool for prediction that is easy to understand and analyse.
## Approach 2 – cofactor saturation assumption
In this part we will look thoroughly at the mechanics of the enzyme and develop new rate equations for the system that will take cofactor levels into account. Later we will analyse those equations and explain how they correlate with our previous model and with our wetlab results.
### The rate equation for an enzyme
Let $${K_{cat}}$$ be the the number of substrate molecule each enzyme site converts to product per unit time and P the number of product molecules. If we have N enzymes in our solution, the rate equation for the product of the enzyme will be:
$$\frac{{dP}}{{dt}} = {K_{cat}} \cdot \sum\limits_{i = 1}^N {{1_{enzym{e_i}}}}$$
where:
$${1_{enzym{e_i}}} = \left\{ {\begin{array}{*{20}{c}}{1,}&{if\,the\,enzme\,currently\,converts\,a\,substrate}\\{0,}&{else}\end{array}} \right.$$
is the indicator fuction for each enzyme.
If an enzyme operates in both directions, with different velocities for each direction, the rate equation will be:
$\frac{{dP}}{{dt}} = \sum\limits_{i = 1}^N {{K_{enzym{e_i}}}}$ where:
${K_{enzym{e_i}}} = \left\{ {\begin{array}{*{20}{c}}{{K_{ca{t_{forward}}}}}&{if\,enzyme\,converts\,substrate\,to\,product}\\{{K_{ca{t_{backwards}}}}}&{if\,enzyme\,converts\,product\,to\,substrate}\\0&{else}\end{array}} \right.$
We'll assume that $${P_{reactio{n_{forward}}}}$$ and $${P_{reactio{n_{backwards}}}}$$ - the probabilities for each reaction to occur - are the same for all the enzymes in the system. As we will see later, they are a function of a number of factors. Among them is the level of the substrate in the solution and the probability of binding a substrate molecule to the enzyme.
Since there are usually a very large number of molecules in the solution, we can use the law of large numbers:
$\begin{array}{l}\,\,\,\,\,\,\,\,\,\frac{{dP}}{{dt}} = \sum\limits_{i = 1}^N {{K_{enzym{e_i}}}} \approx N \cdot E\left[ {{K_{enzym{e_i}}}} \right] = N \cdot \left( {{K_{ca{t_{forward}}}} \cdot {P_{reactio{n_{forward}}}} - {K_{ca{t_{backward}}}} \cdot {P_{reactio{n_{backward}}}}} \right)\\ \Rightarrow \frac{{d\left[ P \right]}}{{dt}} = \left[ {enzyme} \right] \cdot \left( {{K_{ca{t_{forward}}}} \cdot {P_{reactio{n_{forward}}}} - {K_{ca{t_{backward}}}} \cdot {P_{reactio{n_{backward}}}}} \right) = {V_{{m_f}}} \cdot {P_{reactio{n_{forward}}}} - {V_{{m_r}}} \cdot {P_{reactio{n_{backward}}}}\end{array}$
Next, we will find $${P_{reactio{n_{forward}}}}$$ and $${P_{reactio{n_{backwards}}}}$$ for our enzyme.
### Developing the model
The drivation of this model is based on a similar model for binding of ligands to receptors found in the book[8].
The AKR1C9 has one site for binding a cofactor. Once the cofactor is bound, the enzyme has a site which the appropriate hormone binds to. Let us look at a system with one receptor that has two binding sites and four ligands:
We'll assume we have $${L_A}$$ ligands from type A, $${L_B}$$ ligands from type B and so on for C and D. The solution has $$\Omega$$ "volume cells" which can contain only one ligand.
To find the probability of the enzyme being in a certain state, we will use the Boltzmann distribution[as explained in [8] pages 219-237]:
${e^{ - \frac{{{\varepsilon _{macrostate}}}}{{KT}}}}$
The Boltzmann distribution is a probability distribution that gives the probability that a system will be in a certain state as a function of that state’s energy and the temperature of the system.
The probability for the system to be in a certain macrostate is:
${P_{macrostat{e_i}}} = \frac{{\overbrace {{e^{ - \frac{{{\varepsilon _{macrostat{e_i}}}}}{{KT}}}}}^{{\rm{The}}\,{\rm{Weight}}\,{\rm{of}}\,{\rm{the}}\,{\rm{macrostate}}}}}{{\underbrace {\sum\limits_{j = 1}^M {{e^{ - \frac{{{\varepsilon _{macrostat{e_j}}}}}{{KT}}}}} }_{{\rm{The}}\,{\rm{sum}}\,{\rm{of}}\,{\rm{all}}\,{\rm{macrostates}}\,{\rm{weights}}}}}$
Where M is the number of states accessible to the system.
The weight for each macrostate is the sum of the weights of all of its microstates:
${e^{ - \frac{{{\varepsilon _{macrostate}}}}{{KT}}}} = \sum\limits_i {{e^{ - \frac{{{\varepsilon _{microstat{e_i}}}}}{{KT}}}}}$
A microstate of the system is one possible arrangement of $$\left( {{L_A} + {L_B} + {L_C} + {L_D}} \right)$$ ligands in $$\Omega$$ cells or/and the binding sites of the receptor. Every macrostate of the system has a certain number of these arrangements.
Each ligand can have two stable states of energy:
• When the ligand is free in the solution - $${\varepsilon _{sol}}$$.
For the sake of simplicity we will assume all four ligands have the same energy when free in the solution.
• When the ligand is bound to a site in the receptor - $${\varepsilon _b}$$.
Each ligand will have different energy from the others when bound to the enzyme.
The sum of the energies of all the ligands will give us the energy of a microstate:
$${\varepsilon _{microstate}} = \sum\limits_{i = 1}^{{L_A}} {{\varepsilon _{{A_i}}}} + \sum\limits_{i = 1}^{{L_B}} {{\varepsilon _{{B_i}}}} + \sum\limits_{i = 1}^{{L_C}} {{\varepsilon _{{C_i}}}} + \sum\limits_{i = 1}^{{L_D}} {{\varepsilon _{{D_i}}}}$$
In our system, only two ligands that are bound to the enzyme can have the energy $${\varepsilon _b}$$. We assume the rest have the energy $${\varepsilon _{sol}}$$, no matter in which box they are in. For that reason, we can say that all the microstates of each macrostate are the same.
For example, for the macrostate in which only one ligand of type A is bound to the enzyme:
$$\begin{array}{l}{\varepsilon _{microstate}} = {\varepsilon _{sol}} \cdot \left[ {\left( {{L_A} - 1} \right) + {L_B} + {L_C} + {L_D}} \right] + {\varepsilon _{{b_A}}}\\ \Rightarrow {e^{ - \frac{{{\varepsilon _{macrostate}}}}{{KT}}}} = \sum\limits_{i = 1}^{MP} {{e^{ - \frac{{{\varepsilon _{microstat{e_i}}}}}{{KT}}}}} = \sum\limits_{i = 1}^{MP} {{e^{ - \frac{{{\varepsilon _{microstate}}}}{{KT}}}}} = MP \cdot {e^{ - \frac{{{\varepsilon _{microstate}}}}{{KT}}}} = MP \cdot {e^{ - \frac{{{\varepsilon _{sol}} \cdot \left[ {\left( {{L_A} - 1} \right) + {L_B} + {L_C} + {L_D}} \right] + {\varepsilon _{{b_A}}}}}{{KT}}}}\end{array}$$
Where MP is the multiplicity of the microstate – the number of possible arrangements of ligands in the system.
#### Counting microstates
We'll notice that for every arrangement of type A ligands, there are numerous possible arrangements of the other ligands. Also, there is redundancy in the total possible arrangements – since all ligands of type A are the same, it does not matter which one occupies which cell. Same thing for particles of types B, C and D.
For example, the number of microstates of the system for the macrostate in which none of the ligands are bound to the receptor is:
$\begin{array}{l}\left( {\begin{array}{*{20}{c}}{\Omega - {L_B} - {L_C} - {L_D}}\\{{L_A}}\end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}}{\Omega - {L_A} - {L_C} - {L_D}}\\{{L_B}}\end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}}{\Omega - {L_A} - {L_B} - {L_D}}\\{{L_C}}\end{array}} \right) \cdot \left( {\begin{array}{*{20}{c}}{\Omega - {L_A} - {L_B} - {L_C}}\\{{L_D}}\end{array}} \right) = \\ = \frac{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)!}}{{{L_A}!\, \cdot \,\left( {\Omega - {L_B} - {L_C} - {L_D} - {L_A}} \right)!}} \cdot \frac{{\left( {\Omega - {L_A} - {L_C} - {L_D}} \right)!}}{{{L_B}!\, \cdot \,\left( {\Omega - {L_A} - {L_C} - {L_D} - {L_B}} \right)!}} \cdot \frac{{\left( {\Omega - {L_A} - {L_B} - {L_D}} \right)!}}{{{L_C}!\, \cdot \,\left( {\Omega - {L_A} - {L_B} - {L_D} - {L_C}} \right)!}} \cdot \frac{{\left( {\Omega - {L_A} - {L_B} - {L_C}} \right)!}}{{{L_D}!\, \cdot \,\left( {\Omega - {L_A} - {L_B} - {L_C} - {L_D}} \right)!}}\mathop \approx \limits^{\left( 1 \right)} \\ \approx \frac{{{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)}^{{L_A}}} \cdot {{\left( {\Omega - {L_A} - {L_C} - {L_D}} \right)}^{{L_B}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_C}} \right)}^{{L_D}}}}}{{{L_A}!\, \cdot \,{L_B}! \cdot {L_C}! \cdot {L_D}!}}\end{array}$
Let us look at all the possible macrostates of the system:
1. None of the Ligands are bound to the receptor:
• All the ligands have an energy of $${\varepsilon _{sol}}$$.
• Energy of the macrostate: $$\left( {{L_A} + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}}$$
• Multiplicity of the state (the number of microstates that the macrostate contains) is as mentioned above:
$M{P_1} = \frac{{{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)}^{{L_A}}} \cdot {{\left( {\Omega - {L_A} - {L_C} - {L_D}} \right)}^{{L_B}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_C}} \right)}^{{L_D}}}}}{{{L_A}!\, \cdot \,{L_B}! \cdot {L_C}! \cdot {L_D}!}}$
• Weight of the macrostate (= Energy*Multiplicity): $${W_1} = M{P_1} \cdot {e^{ - \beta \cdot \left( {{L_A} + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}}}}$$ where $$\beta = \frac{1}{{KT}}$$.
• Because later we will want to normalize the weight of this macrostate to 1, we will multiply each macrostate weight by $\frac{1}{{{W_1}}} = \frac{{{e^{ + \beta \cdot \left( {{L_A} + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}}}}}}{{M{P_1}}}$, so that $${W_{{N_1}}} = 1$$.
2. One of the ligands of type A (cofactor) is bound to the receptor:
• One ligand of type A is bound to its designated site in the receptor, and thus have an energy of $${\varepsilon _{{b_A}}}$$.
• $${L_A} - 1$$ ligands of type A are unbound and thus have an energy of $${\varepsilon _{sol}}$$.
• All other ligands are unbound and thus have an energy of $${\varepsilon _{sol}}$$.
$\begin{array}{l} \Rightarrow {\rm{ The}}\,{\rm{Energy}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state }} = \left( {{L_A} - 1 + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}} + {\varepsilon _{{b_A}}}\\\\{\rm{The}}\,{\rm{Multiplicity}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state: }}\\{\rm{M}}{{\rm{P}}_2} = \frac{{{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)}^{{L_A} - 1}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_C} - {L_D}} \right)}^{{L_B}}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - {L_C}} \right)}^{{L_D}}}}}{{({L_A} - 1)!\, \cdot \,{L_B}! \cdot {L_C}! \cdot {L_D}!}}\\ \Rightarrow {\rm{ The}}\,{\rm{Weight}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state: }}{{\rm{W}}_2} = {\rm{M}}{{\rm{P}}_2} \cdot {e^{ - \beta \cdot \left[ {\left( {{L_A} - 1 + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}} + {\varepsilon _{{b_A}}}} \right]}}\end{array}$
$\begin{array}{l}{\rm{Normalize}}\,{\rm{the}}\,{\rm{weight:}}\\{W_{{N_2}}} = \frac{{{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)}^{{L_A} - 1}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_C} - {L_D}} \right)}^{{L_B}}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - {L_C}} \right)}^{{L_D}}}}}{{({L_A} - 1)!\, \cdot \,{L_B}! \cdot {L_C}! \cdot {L_D}!}} \cdot {e^{ - \beta \cdot \left[ {\left( {{L_A} - 1 + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}} + {\varepsilon _{{b_A}}}} \right]}} \cdot \\ \cdot \frac{{{L_A}!\, \cdot \,{L_B}! \cdot {L_C}! \cdot {L_D}!}}{{{{\left( {\Omega - {L_B} - {L_C} - {L_D}} \right)}^{{L_A}}} \cdot {{\left( {\Omega - {L_A} - {L_C} - {L_D}} \right)}^{{L_B}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - {L_B} - {L_C}} \right)}^{{L_D}}}}} \cdot {e^{ + \beta \cdot \left( {{L_A} + {L_B} + {L_C} + {L_D}} \right) \cdot {\varepsilon _{sol}}}} = \\ = \frac{{{L_A}}}{{\Omega - {L_B} - {L_C} - {L_D}}} \cdot {\left( {\frac{{\overbrace {\Omega - {L_A} - {L_C} - {L_D}}^{ > > 1} - 1}}{{\Omega - {L_A} - {L_C} - {L_D}}}} \right)^{{L_B}}} \cdot {\left( {\frac{{\overbrace {\Omega - {L_A} - {L_B} - {L_D}}^{ > > 1} - 1}}{{\Omega - {L_A} - {L_B} - {L_D}}}} \right)^{{L_C}}} \cdot {\left( {\frac{{\overbrace {\Omega - {L_A} - {L_B} - {L_C}}^{ > > 1} - 1}}{{\Omega - {L_A} - {L_B} - {L_C}}}} \right)^{{L_D}}} \cdot {e^{ - \beta \cdot \left( {{\varepsilon _{{b_A}}} - {\varepsilon _{sol}}} \right)}} \approx \\ \approx \frac{{{L_A}}}{{\Omega - {L_B} - {L_C} - {L_D}}} \cdot {\left( {\overbrace {\frac{{\Omega - {L_A} - {L_C} - {L_D}}}{{\Omega - {L_A} - {L_C} - {L_D}}}}^1} \right)^{{L_B}}} \cdot {\left( {\frac{{\overbrace {\Omega - {L_A} - {L_B} - {L_D}}^1}}{{\Omega - {L_A} - {L_B} - {L_D}}}} \right)^{{L_C}}} \cdot {\left( {\frac{{\overbrace {\Omega - {L_A} - {L_B} - {L_C}}^1}}{{\Omega - {L_A} - {L_B} - {L_C}}}} \right)^{{L_D}}} \cdot {e^{ - \beta \cdot \left( {{\varepsilon _{{b_A}}} - {\varepsilon _{sol}}} \right)}} = \\ = \frac{{{L_A}}}{{\Omega - {L_B} - {L_C} - {L_D}}} \cdot {e^{ - \beta \cdot \left( {{\varepsilon _{{b_A}}} - {\varepsilon _{sol}}} \right)}}\mathop \approx \limits^{\left( 2 \right)} \frac{{{L_A}}}{\Omega } \cdot {e^{ - \beta \cdot \left( {{\varepsilon _{{b_A}}} - {\varepsilon _{sol}}} \right)}}\mathop = \limits^{\left( 3 \right)} \frac{{\left[ A \right]}}{{\left[ {{c_0}} \right]}} \cdot {e^{ - \beta \cdot \left( {{\varepsilon _{{b_A}}} - {\varepsilon _{sol}}} \right)}}\mathop = \limits^{\left( 4 \right)} \frac{{\left[ A \right]}}{{{K_A}}}\end{array}$
3. Two ligands are bound to the receptor – one of type A and one of type B (Forward enzyme reaction):
• $${L_A} - 1$$ ligands of type A are unbound and thus have an energy of $${\varepsilon _{sol}}$$.
• $${L_B} - 1$$ ligands of type B are unbound and thus have an energy of $${\varepsilon _{sol}}$$.
• One ligand of type A is bound to its designated site in the receptor, and thus have an energy of $${\varepsilon _{{b_A}}}$$.
• One ligand of type B is bound to it's designated site in the receptor, and thus have an energy of $${\varepsilon _{{b_B}}}$$.
• All other ligands are unbound and thus have an energy of $${\varepsilon _{sol}}$$.
$\begin{array}{l} \Rightarrow {\rm{ The}}\,{\rm{Energy}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state }} = \left( {{L_A} + {L_B} + {L_C} + {L_D} - 2} \right) \cdot {\varepsilon _{sol}} + {\varepsilon _{{b_A}}} + {\varepsilon _{{b_B}}}\\\\{\rm{The}}\,{\rm{Multiplicity}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state: }}\\{\rm{M}}{{\rm{P}}_3} = \frac{{{{\left( {\Omega - {L_B} - 1 - {L_C} - {L_D}} \right)}^{{L_A} - 1}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_C} - {L_D}} \right)}^{{L_B} - 1}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - 1 - {L_D}} \right)}^{{L_C}}} \cdot {{\left( {\Omega - {L_A} - 1 - {L_B} - 1 - {L_C}} \right)}^{{L_D}}}}}{{({L_A} - 1)!\, \cdot \,({L_B} - 1)! \cdot {L_C}! \cdot {L_D}!}}\\ \Rightarrow {\rm{ The}}\,{\rm{Weight}}\,{\rm{of}}\,{\rm{the}}\,{\rm{state: }}{{\rm{W}}_3} = {\rm{M}}{{\rm{P}}_3} \cdot {e^{ - \beta \cdot \left[ {\left( {{L_A} + {L_B} + {L_C} + {L_D} - 2} \right) \cdot {\varepsilon _{sol}} + {\varepsilon _{{b_A}}} + {\varepsilon _{{b_B}}}} \right]}}\\ \Rightarrow {\rm{ The}}\,{\rm{normalized}}\,{\rm{weight}}\,{\rm{(using}}\,{\rm{the}}\,{\rm{same}}\,{\rm{methods}}\,{\rm{as}}\,{\rm{for}}\,{\rm{the}}\,{\rm{previous}}\,{\rm{state): }}{{\rm{W}}_{{N_3}}} = \frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}}\end{array}$
4. One of the ligands of type C (cofactor) is bound to the receptor:
• Using the same methods as above, the normalized weight will be: $${W_{{N_4}}} = \frac{{\left[ C \right]}}{{{K_C}}}$$
5. Two ligands are bound to the receptor – one of type C and one of type D (backwards enzyme reaction):
• Using the same methods as above, the normalized weight will be: $${W_{{N_5}}} = \frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}$$
Approximations and definitions used in the development process:
$\begin{array}{l}\left( 1 \right)\frac{{\Omega !}}{{\left( {\Omega - L} \right)!}}\mathop \approx \limits^{\Omega \gg L} {\Omega ^L}\\\left( 2 \right)\Omega - {L_{A,B,C,D}}\mathop \approx \limits^{\Omega \gg L} \Omega \Rightarrow \frac{{{L_{A,B,C,D}}}}{{\Omega - {L_{A,B,C,D}}}} \approx \frac{{{L_{A,B,C,D}}}}{\Omega }\\\left( 3 \right)\Omega = \left[ {{c_0}} \right] \cdot V,{L_i} = \left[ i \right] \cdot V\\\left( 4 \right){K_i} \equiv \left[ {{c_0}} \right] \cdot {e^{ + \beta \cdot \left( {{\varepsilon _{{b_i}}} - {\varepsilon _{sol}}} \right)}}\end{array}$
To sum it all up, we can use the weight functions we found to find $${P_{reactio{n_{forward}}}}$$ and $${P_{reactio{n_{backwards}}}}$$:
$\begin{array}{l}P_{forward} = {P_3} = \frac{{{W_{{N_3}}}}}{{\sum\limits_{i = 1}^5 {{W_{{N_i}}}} }} = \frac{{\frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}}}}{{1 + \frac{{\left[ A \right]}}{{{K_A}}} + \frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}} + \frac{{\left[ C \right]}}{{{K_C}}} + \frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}}}\\{P_{reverse}} = {P_5} = \frac{{{W_{{N_5}}}}}{{\sum\limits_{i = 1}^5 {{W_{{N_i}}}} }} = \frac{{\frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}}}{{1 + \frac{{\left[ A \right]}}{{{K_A}}} + \frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}} + \frac{{\left[ C \right]}}{{{K_C}}} + \frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}}}\end{array}$
And the overall reaction rate:
$$\frac{{d\left[ P \right]}}{{dt}} = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}} - {V_{{m_r}}} \cdot \frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}}}{{1 + \frac{{\left[ A \right]}}{{{K_A}}} + \frac{{\left[ A \right]}}{{{K_A}}} \cdot \frac{{\left[ B \right]}}{{{K_B}}} + \frac{{\left[ C \right]}}{{{K_C}}} + \frac{{\left[ C \right]}}{{{K_C}}} \cdot \frac{{\left[ D \right]}}{{{K_D}}}}}$$
Since this is our model for the enzyme, we will write the equation again with our parameters:
$$\frac{{d\left[ {3\alpha diol} \right]}}{{dt}} = \frac{{d\left[ {NADP} \right]}}{{dt}} = - \frac{{d\left[ {DHT} \right]}}{{dt}} = - \frac{{d\left[ {NADPH} \right]}}{{dt}} = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{1 + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}$$
### Compatibility with known theories
Here are some of the reasons why we think our model is correct:
1. Compatibility with single-direction Michaelis-Menten:
For $$\left[ {NADP} \right] < < {K_2}$$ and $$\left[ {3\alpha diol} \right] < < {K_P}$$, we get the following rate equation:
$\begin{array}{l}\frac{{d\left[ P \right]}}{{dt}} \approx \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}}}}{{1 + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}}}} = \\ = {V_{{m_f}}} \cdot \frac{{\frac{{\left[ {DHT} \right]}}{{{K_s}}}}}{{\frac{{{K_1}}}{{\left[ {NADPH} \right]}} + 1 + \frac{{\left[ {DHT} \right]}}{{{K_s}}}}}\mathop \approx \limits^{\left[ {NADPH} \right] > > {K_1}} {V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s} + DHT}}\end{array}$
This means that for very low levels of $$3\alpha-diol$$ and NADP, the reaction is similar to Michaelis-Menten. In reality, the reaction will increase the levels of both substrates, so we cannot assume the entire reaction is similar to Michaelis-Menten, rather only the beginning of it. As we'll see later, this correlates with our wet-lab results for varying concentrations of DHT.
2. Compatibility with Michaelis-Menten reversible reaction:
If we assume both cofactor levels are saturated, and are high enough so that the reaction of the enzyme does not change their level much, we can treat them as constants:
$\begin{array}{l}\frac{{{V_{{m_f}}} \cdot \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{1 + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}\mathop \approx \limits^{\left[ {NADPH} \right] \approx [NADP] \approx const = C > > {K_{1,}}{K_2}} \\ \approx \frac{{{V_{{m_f}}} \cdot \frac{C}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{C}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{1 + \frac{C}{{{K_1}}} + \frac{C}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{C}{{{K_2}}} + \frac{C}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}} = \frac{{{V_{{m_f}}} \cdot \frac{1}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{1}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{\frac{1}{{{K_1}}} + \frac{1}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{1}{{{K_2}}} + \frac{1}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}} = \\ = \frac{{{V_{{m_f}}} \cdot \frac{1}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{1}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{\left( {\frac{1}{{{K_1}}} + \frac{1}{{{K_2}}}} \right) + \frac{1}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{1}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}} = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} \cdot \frac{{{K_2}}}{{{K_1} + {K_2}}} - {V_{{m_r}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}} \cdot \frac{{{K_1}}}{{{K_1} + {K_2}}}}}{{1 + \frac{{\left[ {DHT} \right]}}{{{K_s}}} \cdot \frac{{{K_2}}}{{{K_1} + {K_2}}} + \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}} \cdot \frac{{{K_1}}}{{{K_1} + {K_2}}}}} = \\ = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}'}} - {V_{{m_r}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}'}}}}{{1 + \frac{{\left[ {DHT} \right]}}{{{K_s}'}} + \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}'}}}}\end{array}$
This is the assumption we made for approach 1 for modeling the reaction. We can see that under the conditions we assumed, the reaction behaves like a Michaelis-Menten reversible reaction.
3. Inhibition of directions:
Examination of the in vitro properties of the recombinant AKR1C2 showed that it was potently inhibited in the oxidation direction by NADPH[6]. Since all AKRs catalyze an ordered bi-bi reaction, we can assume the enzyme we chose to use - AKR1C9 - has the same mechanism. In that case, if our model describes this mechanism accurately, this attribute should be reflected in it.
Our model describes a general AKR enzyme. The difference between AKR1C2 and AKR1C9 is probably not in the expression of the rate reaction, but at the rate reaction's constants. Because these constants can vary in different environments even for the same strain of enzyme, it means that each direction of the reaction can be inhibited on different conditions (as in [5] for rat liver and rat skin). For that reason, we will need to make an assumption about the constants in order to show that the model describes it.
We'll assume that $${K_1} < < {K_2}$$ so that for $$\left[ {NADPH} \right] \approx \left[ {NADP} \right] > > {K_1}$$,
$$\frac{{\left[ {NADPH} \right]}}{{{K_1}}} > > \frac{{\left[ {NADP} \right]}}{{{K_2}}}$$
From this assumption and equation (II) we will get:
$\begin{array}{l}\frac{{{V_{{m_f}}} \cdot \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}}{{1 + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}}}} = \\ = \frac{{{V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {V_{{m_r}}} \cdot \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{{K_1}}}{{\left[ {NADPH} \right]}}}}{{\frac{{{K_1}}}{{\left[ {NADPH} \right]}} + 1 + \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{{K_1}}}{{\left[ {NADPH} \right]}} + \frac{{\left[ {3\alpha diol} \right]}}{{{K_P}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{{K_1}}}{{\left[ {NADPH} \right]}}}}\mathop \approx \limits^{\begin{array}{*{20}{c}}{\left[ {NADPH} \right] > > {K_1}}\\{\frac{{\left[ {NADPH} \right]}}{{{K_1}}} > > \frac{{\left[ {NADP} \right]}}{{{K_2}}}}\end{array}} \\ \approx {V_{{m_f}}} \cdot \frac{{\frac{{\left[ {DHT} \right]}}{{{K_s}}}}}{{1 + \frac{{\left[ {DHT} \right]}}{{{K_s}}}}} = {V_{{m_f}}} \cdot \frac{{\left[ {DHT} \right]}}{{1 + \left[ {DHT} \right]}}\end{array}$
As we can see, our model can describe the inhibition of oxidation. With a simple assumption in regards to the relationship between two constants, we demonstrated that this reversible equation can behave like a single direction reaction.
## Correlation with wet-lab results
Our team succeeded in over-expressing the 3$$\alpha$$-HSD enzyme in E. coli, and validated it's activity by measuring NADPH fluorescence. The results of the various measurements demonstrated certain behaviors which can be explained by our model. In this section we will simulate the model in order to see the correlation with our wet-lab results.
#### The rate equations for our system:
$$\begin{array}{l}\frac{{d\left[ {3\alpha - diol} \right]}}{{dt}} = \left[ {3\alpha - HSD} \right] \cdot \overbrace {\frac{{{K_{ca{t_f}}} \cdot \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} - {K_{ca{t_r}}} \cdot \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha - diol} \right]}}{{{K_P}}}}}{{1 + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} + \frac{{\left[ {NADPH} \right]}}{{{K_1}}} \cdot \frac{{\left[ {DHT} \right]}}{{{K_s}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} + \frac{{\left[ {NADP} \right]}}{{{K_2}}} \cdot \frac{{\left[ {3\alpha - diol} \right]}}{{{K_P}}}}}}^{f\left( {\left[ {DHT} \right]\,\,,\,\left[ {NADPH} \right]\,,\,\left[ {3\alpha - diol} \right]\,,\,\left[ {NADP} \right]} \right)} - {\gamma _{{{\deg }_1}}} \cdot \left[ {3\alpha - diol} \right]\\\frac{{d\left[ {NADP} \right]}}{{dt}} = f - {\gamma _{{{\deg }_2}}} \cdot \left[ {NADP} \right]\\\frac{{d\left[ {DHT} \right]}}{{dt}} = - f - {\gamma _{{{\deg }_3}}} \cdot \left[ {DHT} \right]\\\frac{{d\left[ {NADPH} \right]}}{{dt}} = - f - {\gamma _{{{\deg }_4}}} \cdot \left[ {NADPH} \right]\end{array}$$
We will solve the equation system numerically using Matlab with the constants from the table below:
Parameter Value Units source comment
$${{K_{ca{t_f}}}}$$ $${\rm{0}}{\rm{.563}} \cdot {\rm{1}}{{\rm{0}}^{ - 3}}$$ $$\frac{1}{{\sec }}$$ Calculated from [5] using the molar mass of AKR1C9 from [3] (***)
$${{K_{ca{t_r}}}}$$ $${\rm{1}}{\rm{.628}} \cdot {10^{ - 3}}$$ $$\frac{1}{{\sec }}$$ Calculated from [5] using the molar mass of AKR1C9 from [3] (***)
$$\left[ {3\alpha - HSD} \right]$$ 0.1 $$\mu M$$ Chosen arbitrarily. The concentration of the enzyme is connected linearly to the maximum reaction rates and do not effect the dynamics.
$${K_s}$$ 0.38 $$\mu M$$ From [5] (***)
$${K_p}$$ 2.79 $$\mu M$$ From [5] (***)
$${K_1}$$ 0.38 $$\mu M$$ None (****)
$${K_2}$$ 2.79 $$\mu M$$ None (****)
$${\gamma _{{{\deg }_1}}}$$ 0 $$\frac{1}{{\sec }}$$ None (*)
$${\gamma _{{{\deg }_2}}}$$ 0.0001 $$\frac{1}{{\sec }}$$ None (**)
$${\gamma _{{{\deg }_3}}}$$ 0 $$\frac{1}{{\sec }}$$ None (*)
$${\gamma _{{{\deg }_4}}}$$ 0.0001 $$\frac{1}{{\sec }}$$ None (**)
(*) Note that since we could not measure DHT and $$3\alpha-diol$$, we neglected them in the simulation for the sake of simplicity.
(**) The degradation rates we measured for NADPH varied too much between measurements, probably due to the different concentration of lysates. Since the purpose of this model is to emulate the behavior of our system, small degradation constants were chosen arbitrarily for NADPH and NADP. By giving both cofactors small degradation constants, we are able to mimic the behavior of the reaction while looking at the dynamics of the enzyme's reaction.
(***) We chose to use the constants measured at [5], although they are probably not the correct constants for the environment of the experiment. For that reason, these constants sometime differ from simulation to simulation by orders of magnitude in order to get the simulation result that best fit our system's behavior.
(****) Since the values of $${{K_1}}$$ and $${{K_2}}$$ are unknown, and since we demonstrated how the relation between them can effect the reaction rate (inhibition of direction), we chose an arbitrary values for most simulations that are identical to $${{K_s}}$$ and $${{K_P}}$$. That being said, we experimented a lot with these values in order to test different behaviors of this system.
#### Simulations:
1. Activity check simulation
We simulated the system in time domain under the initial conditions:
$\left( {\left[ {DHT} \right],\,\left[ {NADPH} \right],\,\left[ {3\alpha - diol} \right],\,\left[ {NADP} \right]} \right) = \left( {{{40}_{\left[ {\mu M} \right]}},\,{{150}_{\left[ {\mu M} \right]}},\,0,\,0} \right)$
which are the same initial conditions for the avctivity check experiment
2. Reaction rate vs. DHT concentrations
We simulated the system for varying initial concentrations of DHT at $\left[ {NADPH} \right] = {150_{\left[ {\mu M} \right]}}$. For each initial concentration we took the derivative of NADPH at t=0 (in absolute value), which is the maximal reaction rate for the set of initial conditions.
The result from the simulation is very similar to the wetlab result, with the reaction rate saturating as the concentration of DHT increases. This can be explained by our model since we saw that for low concentrations of $$3\alpha - diol$$ and NADP the reaction is close to michaelis menten.
3. Enzymatic activity as a function of NADPH concentration
As noted in the results section, we observed a decrease in the reaction rate with an increase in NADPH concentration in presence of 3ɑ-HSD enzyme, contrary to substrate dependency. One possible explanation for this is cofactor inhibition - the addition of DHT to the lysate with the enzyme and NADPH was done 30 minutes after adding the NADPH to the lysate, in order for the reaction between NADPH and the lysate to reach steady state. During that time, there was a decrease in NADPH levels from their initial values. We hypothesized that some of the NADPH was converted to NADP, which in turn was bound to the enzyme. Since There is no $$3\alpha-diol$$ in the system, the oxidation reaction cannot take place so the enzymes that are bound to NADP molecules cannot participate in another reaction. In this state, NADP inhibit the reduction of DHT.
In order to verify that the model can actually describe this result, we took the initial concentrations of NADPH from the experiment and checked those levels under 30 minutes. We made an assumption that most of the molecules of NADPH that broke down before the addition of DHT were converted to NADP. Under this assumption, we ran a simulation with the initial conditions of the wetlab experiment after 30 minutes:
As can be seen in the figure, the derivatives were the high for the reactions in which the NADPH/NADP ratio was high, and vice versa.
In order to better understand the process and the results, a simulation of the reaction rate of NADPH as a function of both initial NADPH and initial NADP was built. In each run we changed the ratio between K1 and K2. The result is in the following video:
from this video we can see that for K1>K2, the system is more sensitive to an increase in NADP (which inhibit the reaction) than to an increase in NADPH.
This can explain the results we got in the lab because the more we increased the initial concentration of NADPH, the more NADPH was converted to NADP and so the inhibition effect was more significant.
## References
1. Cooper, W. C., Heredia, V. V., Jin, Y., & Penning, T. M. (2006). Comparison of the Rate-Limiting Steps in 3alpha-Hydroxysteroid Dehydrogenase (AKR1C9) Catalyzed Reactions. In H. Weiner, B. Plapp, R. Lindahl, & E. Maser, Enzymology and Molecular Biology of Carbonyl Metabolism 12 (pp. 301-307). Purdue University Press. Retrieved from https://books.google.co.il/books?id=Bbkf6tfnsOoC&pg=301#v=onepage&q&f=false
2. Jin, Y., Heredia, V. V., & Penning, T. M. (2006). Enzymatic Mechanism of 5alpha-Dihydrotestosterone Reduction Catalyzed by Human Type 3 3alpha-Hydroxysteroid Dehydrogenase (AKR1C2): Molecular Docking and Kinetic Studies. In H. Weiner, B. Plapp, R. Lindahl, & E. Maser, Enzymology and Molecular Biology of Carbonyl Metabolism 12 (pp. 294-300). Purdue University Press. Retrieved from https://books.google.co.il/books?id=Bbkf6tfnsOoC&pg=294#v=onepage&q&f=false
3. Penning TM, M. I. (1984, Sep 15). Purification and properties of a 3 alpha-hydroxysteroid dehydrogenase of rat liver cytosol and its inhibition by anti-inflammatory drugs. The Biochemical Journal. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/6435601
4. Penning, T. M., Jin, Y., Heredia, V. V., & Lewis, M. (2003). Structure–function relationships in 3alpha-hydroxysteroid dehydrogenases: a comparison of the rat and human isoforms. The Journal of Steroid Biochemistry & Molecular Biology, 85, 247-255. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12943710
5. Pirog, E. C., & Collins, D. C. (1994, April). 3alpha-Hydroxysteroid dehydrogenase activity in rat liver and skin. Steroids, 59, 259-264. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8079380
6. TEA LANISˇ NIK RIZˇ NER*, HSUEH K. LIN, DONNA M. PEEHL, STEPHAN STECKELBROECK,DAVID R. BAUMAN, AND TREVOR M. PENNING. Human Type 3 3alpha-Hydroxysteroid Dehydrogenase Aldo-Keto Reductase 1C2) and Androgen Metabolism in Prostate Cells. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12810547
7. Biswas, M. G., & Russell, D. W. (1997). Expression cloning and characterization of oxidative 17β-and 3α-hydroxysteroid dehydrogenases from rat and human prostate. Journal of Biological Chemistry, 272(25), 15959-15966. Chicago
8. Rob Phillips Jane Kondev (Author, Julie Theriot, (2008) "Physical Biology of the Cell" p.219-237, 978-0815341635 | 2021-02-26 21:14:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385515809059143, "perplexity": 1043.1120320479754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357984.22/warc/CC-MAIN-20210226205107-20210226235107-00158.warc.gz"} |
https://www.physicsforums.com/threads/simple-poles.794186/ | # Simple Poles
1. Jan 25, 2015
### MMS
Hi guys,
Is it right to say that a simple pole (pole of order 1) is a removable singularity (and vice versa)?
2. Jan 25, 2015
### lavinia
No. A pole of order 1 is not removable. Think about the integral of 1/z on the unit circle.
3. Jan 25, 2015
### MMS
I realized it as soon as I posted this thread and forgot to delete it.
Anyway, thank you! | 2018-03-22 16:36:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728666305541992, "perplexity": 2529.310164681372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647892.89/warc/CC-MAIN-20180322151300-20180322171300-00660.warc.gz"} |
https://calendar.math.illinois.edu/?year=2015&month=04&day=17&interval=day | Department of
# Mathematics
Seminar Calendar
for events the day of Friday, April 17, 2015.
.
events for the
events containing
More information on this calendar program is available.
Questions regarding events or the calendar should be directed to Tori Corkery.
March 2015 April 2015 May 2015
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7 1 2 3 4 1 2
8 9 10 11 12 13 14 5 6 7 8 9 10 11 3 4 5 6 7 8 9
15 16 17 18 19 20 21 12 13 14 15 16 17 18 10 11 12 13 14 15 16
22 23 24 25 26 27 28 19 20 21 22 23 24 25 17 18 19 20 21 22 23
29 30 31 26 27 28 29 30 24 25 26 27 28 29 30
31
Friday, April 17, 2015
4:00 pm in 147 Altgeld Hall,Friday, April 17, 2015
#### Staircase diagrams and enumeration of smooth Schubert varieties
###### Edward Richmond [email] (Oklahoma State Math)
Abstract: Staircase diagrams are certain partially ordered sets defined over a graph. When the graph is the Dynkin diagram of a simple Lie group, these diagrams correspond to smooth Schubert varieties of the corresponding flag variety. Staircase diagrams have two applications. First, they encode much of the geometric and combinatorial data of Schubert varieties. Second, these diagrams give a way to calculate the generating series for the number of smooth Schubert varieties of any type. This extends the work of M. Haiman who calculated this generating series in type A. This talk is on joint work with W. Slofstra.
4:00 pm in 241 Altgeld Hall,Friday, April 17, 2015
#### Thompson's Groups and Knot Diagrams
###### Malik Obeidin (UIUC Math)
Abstract: In 1965, Richard J. Thompson introduced three finitely presented groups, $F \subset T \subset V$, which have a number of curious group-theoretic properties. These groups have been rediscovered by topologists on several occasions - most recently, Vaughan Jones showed that the group $F$ encodes knot diagrams in a particular way. I will discuss the group $F$, its descriptions, and its possible applications to studying knots. | 2022-01-23 02:25:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5437375903129578, "perplexity": 449.9150322430024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00027.warc.gz"} |
https://www.buckscountyspa.com/0dxh2tc/viewtopic.php?a472f1=rectangular-matrix-order | Comparison of G12 results from FEA with those from MMA in the case of carbon fiber/epoxy resin. where b is a column vector with elements equal to the spectral density at each point in the image bj(ω). To this purpose, we extend to the two-field model the procedure followed in Section 25.4 in the case of the displacement-based formulation. There exist linearly independent 1-forms g1,g2,…,g2msuch that ω is expressible in the following canonical form, We can easily prove this theorem by resorting to mathematical induction. where {u1,u2,…,uNh,v1,v2,…,vNh}T and {p1,p2,…,pMh}T are the sets of degrees of freedom for u_h and ph, respectively. That is, consider a matrix A of size m × n with m ≥ n, then there exist two orthogonal matrices U, V of size m × m and n × n, respectively, and a quasidiagonal matrix Q of size m × n satisfying: with U = [u1 u2 …um] and V = [v1 v2 …vn] where ui and vi are the singular vectors of A, and. of n, orde m £r n m . Horizontal Matrix A matrix in which the number of rows is less than the number of columns, is called a horizontal matrix… Column Matrix A matrix having only one column and any number of rows is called column matrix. Hence, the transformations. In the second step, we are led to, Continuing this way, we arrive at the following result in the kth step, This clearly means that the k-form ω is now generated by basis forms gα1∧gα2∧⋯∧gαk. The square matrix with circular fibers, for example, has a smaller difference than the other models between MMA and FEA results. Specified as: an integer; n ≥ 0, and: If side = 'R', n ≤ lda. C is a matrix of order 2 × 4 (read as ‘2 by 4’) A matrix for which horizontal and vertical dimensions are not the same (i.e., an matrix with ). Composite's properties for carbon fiber with epoxy resin. Matrices are represented by the capital English alphabet like A, B, C……, etc. (12.2.7) as. The usual matrix inverse is defined as a two-side inverse, i.e., AA−1 = I = A−1A because we can multiply the inverse matrix from the left or from the right of matrix A and we still get the identity matrix. Therefore, we have a choice in forming the product of several A rectangular matrix … □, We now apply the general approach which we have developed above to a 2-form owing to its rather simple structure. Let ω be a 2-form whose rank is 2m. 1 ⋮ Vote. In order to find the rank of the form ω we have to determine nontrivial, linearly independent solutions hi ∈ U* of the homogeneous equations. Qin, in Toughening Mechanisms in Composite Materials, 2015. To avoid this type of uncertainty let us, first, consider elements Lll′m of this matrix corresponding to μl′′=μl. The condition Q T Q = I says that the columns of Q are orthonormal. The generalized inverse of a rectangular matrix is related to the solving of system linear equations Ax = b. Since it is a rectangular array, it is 2-dimensional. Figure 1.6 shows a comparison between the result E1 of varying fiber and matrix geometries. This highlights the nonlinear nature of FEA modeling software. After repeating this operation k number of times, we reach to the conclusion, The rank of the quadratic form Фk depending on n − 2k number of 1-forms will now at most 2m − 2k. Figure 1.7. In the case of an AR model, we need to resolve a linear equations system, based on the autocorrelation matrix, to find the corresponding model parameters (Eq. The right-hand side F_ has already been introduced in Section 25.4 and is a column vector with 2Nh rows. Conversely, if a k-form is simple it can be written in the form (1.5.8) as follows. https://mathworld.wolfram.com/RectangularMatrix.html. How to solve Ax=b. (12.2.1) can be written as, and the superscript H represents the Hermitian transpose, defined by interchanging the rows and columns of the matrix/vector and taking the complex conjugate. represents the problem of mult implicationof a square matrix by a rectangular matrix. A matrix for which horizontal and vertical dimensions are not the same (i.e., an matrix Hence, the matrix $A$ is called a rectangular matrix. Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. ... Rectangular Matrix. Therefore, when we repeat this operation a sufficient number of times the form Фk will eventually vanish and we shall arrive at the relation (1.6.10). It is obvious from Figure 1.4 that: (a) total area is 6 for RM and 9 for SM; and (b) the fiber’s area is πr2 = 0.78539 for circular fiber, 33/2t2/2 = 33/2 × 0.577352/2 = 0.86602 for hexagonal fiber, and 0.5 for triangular fiber. (12.2.8) as, which has M rows and M columns and can be related to the cross spectrum of the source strengths Q = (π/T)E[qHq] as, To implement array processing we arrange the steering vectors wm(j) into a rectangular matrix with M rows and J columns, where J is the number of image points so that W=wmj. M ) and a number of rows is called column matrix is a rectangular array rows of is... Result of the rectangular matrix 27 Transpose of a is equal to the use of.! But $J = n$ ) if a, rectangular matrix order, C……,.! Matrix B__ has no definite rank until we specify the degree of the polynomial basis functions of Vh and.., of the above homogeneous equations is given by, with the Levinson-Durbin algorithm from R′ and permits to. We made use of the displacement-based formulation Gilberto Espinosa-Paredes, in a 2-D square or rectangular array of,... Relationship between the source distribution, $m = 1$ but $J = n$ a difference. Occur for this family random practice problems and answers with built-in step-by-step solutions rectangular rectangular matrix order... List the comparison in results between FEA and MMA for E1 yield quite similar results in Section 25.4 in fourth. 8 Two matrices a and B = 2 for RM and B multiplied. To no of columns of Q are orthonormal 6.18 ): using SVD, the of., consider elements Lll′m of this matrix can express a 2-form whose rank is 2m as Eq. If Q is an even number, then we have developed above to normal! A composite composted of carbon fiber/epoxy resin 1.8 list the comparison in is. About the source distribution vector spaces with even dimensions size we obtain ( 2.20 ) and permits us enhance... And MATLAB offers a number of columns you try the next step on your own we a. A∈ℝm×N is equal to no of columns the comparison in results between FEA and MMA E1... Of each row or column of a matrix for which horizontal and vertical dimensions are not the same (,! Us now consider in detail the filtering operation ( Eq ) in the first step we obtain ( )! And QQ T = I are not the same ( i.e., an m×n with... Form of a column vector with 2Nh rows an array is also a valuable tool and!, F.R.Ae.S,... CARL T.F only be defined on vector spaces with even dimensions degree... Number of sinusoidal components from R′ and permits us to enhance the spectral estimation for a complete expansion [ ]. This operation this time for the form Ф1, first, consider elements Lll′m this. Becomes 2m are shown in Figure 1.5 ( kr ) of this matrix that the columns a. To implement in ( 2.19 ) first a and sort each column of row..., of the unit cell used the effective number of rows x number of.! Is basically a rectangular matrix is then obtained as in Eq Engineers: Deterministic Techniques volume! Becomes 2m liansheng Tan, in Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques, volume 1 then! Obtained as in Eq F_ has already been introduced in Section 25.4 and is a result the... Graph, the left inverse already been introduced in Section 25.4 in form... Be calculated using Eq sort each column of a matrix a matrix A∈ℝm×n is equal to x =.... The shaded region shows complete polynomials of order m x n, is called a row is. 2M − 2 if side = ' R ', n ≤ lda, in determination of E1 from. A generalized Framework of linear Multivariable Control, 2017 2-form by the graph, left... Ω becomes 2m finite-dimensional subspaces of polynomial scalar functions that a symplectic form can only happen if Q is even! The conditions Q T Q = I and QQ T = I says that the columns of a equal! With ) result E1 of varying fiber and matrix geometries of G12 results from FEA with those MMA... Mma can provide acceptable accurate results for E1 advantages occur for this family G12! Then we have ωαkα=0, α=1,2, …, m due to ( 1.6.14 ) 1 of... Mma for E1 the simple form of a matrix is a row matrix matrix! Other models between MMA and FEA results collected for E2 and G12, respectively derived! Problems and answers with built-in step-by-step solutions ) provides some uncertainty of type 0/0 or ∞-∞ some advantages occur this... [ 4 ] other models between MMA and FEA above example, matrix a matrix 3... ≠ n, such that m ≠ n, is called rectangular matrix and fiber ( B = 2 RM. Boiling Water Reactors, 2019 where we made use of the form ( )! ( last 30 days ) Boni_Pl on 16 Nov 2020 at 7:01 alfonso Prieto-Guerrero, Gilberto Espinosa-Paredes in. Within 3 %, for all results, in above example, a two-dimensional matrix consists of form... Within 8 %, between all the results collected for E2 and G12, respectively, derived MMA! Expansion will of course less than the other models between MMA and FEA results and Stability. Mathematically, it represents a collection of information stored in an arranged manner 1.1! Arranged in rows and columns 3×3 [ orm×n using Eq C……,.. An even rectangular matrix order, then the maximal rank will be at most 2m − 2,. Above to a normal equation is x = A−b use cookies to help provide and enhance rectangular matrix order... In comparison with the assumption ω12 ≠ 0 let us, first, consider composite! Solving of system linear equations Ax = B often called as generalized left inverse those needed for a order. The spectral estimation for a higher order [ 4 ] of cookies Gilberto,. Inequality ) if a, B and C are rectangular matrices and acoustic! Structure for 2-forms imposed by their ranks Mh×Nh and their entries are given by Eq 2 columns called. Within 3 %, between all the results collected for E2 and G12 epoxy resin of yield... By the capital English alphabet like a, B, C……,.... G12 results from FEA with those from MMA in the form Ф1: rectangular matrix B__ has no rank. Zero matrix or a null matrix is related to the spectral density at microphone! Number is not zero, namely, if kα > 1, 2008 written the. 2 for RM and B are multiplied to get AB if is equal columns. Can express a 2-form owing to its degree C……, etc variables or functions arranged in rows columns. And also shown in Figures 1.6–1.8 is: Figure 1.4 shows the geometrical configuration of the rectangular matrix order used! Of cookies john case M.A., F.R.Ae.S,... CARL T.F the numbers are called the elements, entries! By Eq ( AB ) C = a ( BC ) of n = 4 we apply... Components: the number m can now be at most 2m − 2 I not... Matlab offers a number of rows x number of columns is called rectangular matrix 27 of. Alphabet like a, B and C are rectangular matrices and the ABC. Days ) Boni_Pl on 16 Nov 2020 at 12:31 Accepted Answer: Matt J ( ATA ) −1ATb which! The solution to a 2-form on a 4-dimensional vector space given by, with the Levinson-Durbin.... To ( 1.6.14 ) 1 this can only be defined on vector spaces with even dimensions directly and relatively. Essential components: the number m can now be at most 2m − 2 occur for this.... Ф1≠ 0, then the rank reduces to 2 corresponding to μl′′=μl sort function sorts the elements of matrix. Begin with a relatively simple to implement after introducing Two finite-dimensional subspaces of polynomial scalar functions by... Unit cell used also evaluates the cross spectrum of the source strength and the order of a matrix only... Take k = m, this expansion will of course less than cardinality... Difference in results between E2 and G12, respectively, derived through MMA and FEA table 1.2 also! Each microphone is given by, with the Levinson-Durbin algorithm Framework of linear Multivariable Control, 2017 the effective of. Represents a collection of information stored in an array is also a valuable tool, and: if side '! First a and B = 2 for RM and B = 2 for RM B. Zero, namely, if Ф1≠ 0, then the maximal rank will be at most 2,. Nl′×Nl matrix with different fiber geometries is called column matrix a matrix that has all its zero... In Toughening Mechanisms in composite Materials, 2015 built-in step-by-step solutions and enhance our service and tailor content ads! Carl T.F property is only true for a higher order system linear equations =. $B$ is called a rectangular matrix is $5 \times 2$ be! Theorem: theorem 1.6.1 is present above those needed for a square matrix m! On a 4-dimensional vector space given by functions of Vh and Qh formation of rectangle shape in this autocorrelation.. Hence, the matrix $a$ is called rectangular matrix … generated... T Q = I are not equivalent when mapping of shape functions to more general is... Cell used not equal to x = A−b results from FEA with those MMA... Important in comparison with the Levinson-Durbin algorithm for the formation of rectangle shape in this matrix! Only one column and any number of rows is called column matrix condition T. In Eq are no rectangular matrix order limitations on the number of rows ( m ) and a of... Its degree only true for a higher order equations is given by, the! Let ω be a 2-form on a 4-dimensional vector space given by or ∞-∞ a A∈ℝm×n... Chain order problem matrix multiplication is associative, meaning that ( AB ) =... | 2022-06-27 04:44:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7573187947273254, "perplexity": 1072.9911198209822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00229.warc.gz"} |
https://nomerbiget.tk/humor/rotation-150-degrees-is-how-many-radians.php | # Rotation 150 degrees is how many radians
45 degrees, pi/4 radians. 60 degrees, pi/3 radians. 90 degrees, pi/2 radians. degrees, 2pi/3 radians. degrees, 3pi/4 radians. degrees, 5pi/6 radians. Degrees: one degree (1°) is a rotation of 1/ of a complete revolution about the rad. π o. Consider the following two angles: ° and °. If we sketch. Learn how to convert from degrees to radians and what is the conversion factor as well as the conversion formula. radians How many radians in degrees? There are 1 degree of arc is defined as 1/ of a revolution. In SI units 1°.
Learn how to calculate degrees to radian. 1 revolution = ° 2 π rad = ° rad = ° / π radians = degrees × / π . °, rad, π rad. Jul 19, Answer: −o=−56π radians. Explanation: o=π radians. ⇒1o=π radians . ⇒−o=−⋅π radians. XXXX =−56π radians. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a.
The measure of an angle is determined by the amount of rotation from the initial side to So, degree measure and radian measure are related by the equations. In elementary school, we learn that angles are measured in degrees (°). Note: If you have not yet learned about radians in school, you may ignore the radians. Convert revolutions to degrees (r to °) with the angle conversion calculator. There are also days in the Persian calendar year, and many theorize that early. To convert from degree measure to radian measure, multiply the degree measure by π o radian. The measure of an angle is determined by the amount of rotation from the initial side to .. Question 1: Convert the degree to radian. We will learn how to convert a value in degrees to a value in radians and vice versa. Given some angle Θ in degrees, we can multiply this angle in degrees by a . Θ = deg. Θ = 5π/6 rad. Θ = deg. Θ = π rad. Θ = deg. Θ = 7π/6 rad. rotation of some ray S from the origin by degrees or 2π radians will have.
the rotation stops at some position, the second line is called the terminal side of the angle. The If we wish to convert degrees to radians, or vice-versa, for any angle, we can use the Example 1: Convert an angle of degrees to radians. It is important to be able to measure angles in radians as well as in degrees with a degree symbol (°), radians are usually written without any symbol or . What is the equivalent degree measure of radians written in simplest terms? A) °. We can measure Angles in Degrees. There are degrees in one Full Rotation (one complete circle around). (Angles can also be measured in Radians). Mar 13, This gives a value in radians, which is easy to convert to degrees. In general, you determine the magnitude of any angle ø in radians by.
What is radian measure? How to convert radians to degree. How to convert degrees to radians. IN THE RADIAN SYSTEM of angular measurement, the measure of one revolution is 2π. (In the next Topic, Arc f), 5π 6, = 5·, π 6, = 5· 30° = ° A function of any angle is equal to the cofunction of its complement. (Topic 3.). We now can easily obtain a formula to convert from degrees to radians and vice- versa. Also you are given the negative angle (had the rotation of the terminal side Once you have clicked once to get an angle, you may drag the angle rather. We will calculate the Radians for each degree on the Unit Circle labeled above. represented in the other 3 Quadrants, except that X and Y may change sign depending . What are the X, Y coordinates for the ° angle on the Unit Circle ?. You may remember from geometry that the measure of an angle is defined as the degrees as the number of units in a circle (one rotation). They could have or multiples of 30°: 60°,90°,°,°,°,°,°,°,°,°. measuring. | 2020-01-26 06:13:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8906171321868896, "perplexity": 1088.1816626270631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687725.76/warc/CC-MAIN-20200126043644-20200126073644-00020.warc.gz"} |
https://www.ias.ac.in/listing/bibliography/pram/D_Mishra | • D Mishra
Articles written in Pramana – Journal of Physics
• Chiral SU(4) × SU(4) breaking: masses and decay constants of charmed hadrons
The (4, 4*) ⊕ (4*, 4) model of broken chiral SU (4) × SU (4) symmetry has been used to calculate the third-order coupling constants involving charmed and ordinary pseudoscalar mesons. These coupling constants are exploited to derive some interesting new relations among the masses and decay constants of these charmed particles. Using the known masses and decay constants as inputs, we exploit these relations to predict:FD = −1·41Fπ,FF = −1·13Fπ,FD/FF = 1·25,m(Ds) = 1·43 GeV,m(Fs) = 1·39 GeV andm(Ks) = 1·02 GeV.
• Mixing of meson isosinglets
The mixing angles for the vector and pseudoscalar meson isosinglets are obtained in a non-relativistic quark model. Schwinger-type mass relations are also obtained for SU(4) and SU(5). Quark contents of different meson isosinglets are computed which agree well with similar estimation of Maki and co-workers and Boal.
• Mixing of meson isosinglets in SU(5) and an extension to SU(N)
The mixing angles of meson isosinglets belonging to the 24-dimensional and singlet representations of SU(5) are calculated under specific assumptions in the non-relativistic quark model. The procedure to extend the scheme to SU(N) has been outlined. The results have been compared with other earlier estimates.
• Mass relations for heavy mesons
In a non-relativistic quark model, by parametrizing the quark-antiquark potentials, some mass relations have been obtained. Algebraic expressions for the masses of higher flavour meson isosinglets have been given in terms of the meson masses of lower symmetries. Hallock and Oneda’s contention of two types of sumrules for nonet mesons in SU(3) has been examined and the possibilities of such different sum-rules of this type in SU(4) and SU(5) have been explored.
• # Pramana – Journal of Physics
Volume 96, 2022
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019 | 2022-06-27 06:10:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169796466827393, "perplexity": 4470.798292059391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00370.warc.gz"} |
https://solvedlib.com/present-value-standard-insurance-is-developing-a,109114 | 1 answer
# Present value. Standard Insurance is developing a long-life insurance policy for people who outlive their retirement...
1 answer
##### QUESTION 3 Copy of R(s) C(s) G(s) G (s) Given the control loop above, determine the overall gain K for the Gc(s) for a given G(s) and design requirements. Peak Time (Tp) = 0.2 second Settling time (T...
QUESTION 3 Copy of R(s) C(s) G(s) G (s) Given the control loop above, determine the overall gain K for the Gc(s) for a given G(s) and design requirements. Peak Time (Tp) = 0.2 second Settling time (TS) = 0.12 second G(s) = 1/ (s^2 + .1s+4) Design a Dual PD controller to have two-distinct roots. Assu...
1 answer
##### D proudly owns a 42 inch (measured diagonally) fat screen TV. Michael proudly owns a 13 inch sred...
EXPLAIN please! d proudly owns a 42 inch (measured diagonally) fat screen TV. Michael proudly owns a 13 inch sred diagonally) lat screen TV. Dave sits comfortably with his dog Fritz at a distance of 10 feet. How close must Michael sit from his TV to have the same" viewing experience? Explain you...
6 answers
##### An aqaurium is 80cm long, 40cm wide, and 40cm high
An aqaurium is 80cm long, 40cm wide, and 40cm high. What is its volume?I got 128,000cm3One cubic centimeter (cm3) holds 1 milliliter of water. 1,000 cm3 hold 1 liter. How many mL will the tank in problem 6 hold? How many liters?...
5 answers
##### CHCI] AICIaPh C_PhPhTriphenylmethane can be prepared by reaction of benzene and chloroform in the presence of AIClz: Draw curved arrows to show the movement of electrons in this step of the reaction mechanism_Arrow-pushing InstructionsClHC_Cl AICl3+CHClz AICIA
CHCI] AICIa Ph C_Ph Ph Triphenylmethane can be prepared by reaction of benzene and chloroform in the presence of AIClz: Draw curved arrows to show the movement of electrons in this step of the reaction mechanism_ Arrow-pushing Instructions ClHC_Cl AICl3 +CHClz AICIA...
5 answers
##### Use L-test test Ine claim about the populalion mean the given leve significance using the given sample stalis cs_ Assure Ihe ponulation normally siributed. Claim; 52,700; 0.05 Sample slalistics: 54,185, 2400 n =17 Click the Icon to view Ihe [-distribution tableWnat are the null and alternative hypotheses? Choose tne corcci answe Beldw.5A Ho: 42 52,700 Ha" F < 52,700Ho: @ = 52,700 1V0=52,700Hj" 0=52,700 H;4+52,700Ho: H < 52,700 pa 52,700Wnal is the value the standardized test stat
Use L-test test Ine claim about the populalion mean the given leve significance using the given sample stalis cs_ Assure Ihe ponulation normally siributed. Claim; 52,700; 0.05 Sample slalistics: 54,185, 2400 n =17 Click the Icon to view Ihe [-distribution table Wnat are the null and alternative hypo...
1 answer
##### Present and future value tables of $1 at 3% are presented below: N FV$1 PV...
Present and future value tables of $1 at 3% are presented below: N FV$1 PV $1 FVA$1 PVA $1 FVAD$1 PVAD \$1 1 1.03000 0.97087 1.0000 0.97087 1.0300 1.00000 2 1.06090 0.94260 2.0300 1.91347 2.0909 1.97087 3 1.09273 0.91514 3.0909 2.82861 3.1836 2.91347 4 1.12551 0.88849 4.1836 3.71710 4.3...
5 answers
##### In one year; an online travel agency reported that summer travelers booked their airline reservations an average of 71_ days in advance_ A random sample of 40 summer travelers in later year was selected and the number of days travelers booked their airline reservations in advance was recorded: The data are shown in the accompanying table_ Complete parts and below: Click the icon to view the data table.a. Perform hypothesis test using & = 0.05 to determine if the average number of days reserv
In one year; an online travel agency reported that summer travelers booked their airline reservations an average of 71_ days in advance_ A random sample of 40 summer travelers in later year was selected and the number of days travelers booked their airline reservations in advance was recorded: The d...
1 answer
##### Question 39 **The answer needs to be in the form of a simplified fraction. Approximate the...
Question 39 **The answer needs to be in the form of a simplified fraction. Approximate the double integral of f(x,y) = x + y over the region R bounded above by the semicircle y = 116 - x' and below by the x-axis, using the partitions x = -4,-2,0,1,2,4 and y=0,2,4 with (Xk.Yk) the lower left cor...
1 answer
##### Why was the government in Massachusetts Bay Colony the most radical in colonial America?
Why was the government in Massachusetts Bay Colony the most radical in colonial America?...
1 answer
##### If B is four times more than A and C is four times less than A, then given A=8 find B+C?
If B is four times more than A and C is four times less than A, then given A=8 find B+C?...
1 answer
##### The principal pathway for transport of lysosomal hydrolases from the trans Golgi network (pH 6.6) to...
The principal pathway for transport of lysosomal hydrolases from the trans Golgi network (pH 6.6) to the late endosomes (pH 6) and for the recycling of M6P receptors back to the Golgi depends on the pH difference between these two compartments. From what you know about M6P receptor binding and recyc...
-- 0.101328-- | 2023-02-05 23:43:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3921755850315094, "perplexity": 7864.964862920544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00835.warc.gz"} |
https://www.maplesoft.com/support/help/Maple/view.aspx?path=HeunC | HeunC - Maple Programming Help
HeunC
The Heun Confluent function
HeunCPrime
The derivative of the Heun Confluent function
Calling Sequence HeunC($\mathrm{\alpha }$, $\mathrm{\beta }$, $\mathrm{\gamma }$, $\mathrm{\delta }$, $\mathrm{\eta }$, z) HeunCPrime($\mathrm{\alpha }$, $\mathrm{\beta }$, $\mathrm{\gamma }$, $\mathrm{\delta }$, $\mathrm{\eta }$, z)
Parameters
$\mathrm{\alpha }$ - algebraic expression $\mathrm{\beta }$ - algebraic expression $\mathrm{\gamma }$ - algebraic expression $\mathrm{\delta }$ - algebraic expression $\mathrm{\eta }$ - algebraic expression z - algebraic expression
Description
• The HeunC function is the solution of the Heun Confluent equation. Following the first reference (at the end), the equation and the conditions at the origin satisfied by HeunC are
${\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){=}{\mathrm{DESol}}{}\left(\left\{\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{\mathrm{_Y}}{}\left({z}\right){-}\frac{\left({-}{{z}}^{{2}}{}{\mathrm{α}}{+}\left({-}{\mathrm{β}}{+}{\mathrm{α}}{-}{\mathrm{γ}}{-}{2}\right){}{z}{+}{\mathrm{β}}{+}{1}\right){}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{\mathrm{_Y}}{}\left({z}\right)\right)}{{z}{}\left({z}{-}{1}\right)}{-}\frac{{1}}{{2}}{}\frac{\left(\left(\left({-}{\mathrm{β}}{-}{\mathrm{γ}}{-}{2}\right){}{\mathrm{α}}{-}{2}{}{\mathrm{δ}}\right){}{z}{+}\left({\mathrm{β}}{+}{1}\right){}{\mathrm{α}}{+}\left({-}{\mathrm{γ}}{-}{1}\right){}{\mathrm{β}}{-}{2}{}{\mathrm{η}}{-}{\mathrm{γ}}\right){}{\mathrm{_Y}}{}\left({z}\right)}{{z}{}\left({z}{-}{1}\right)}\right\}{,}\left\{{\mathrm{_Y}}{}\left({z}\right)\right\}{,}\left\{{\mathrm{_Y}}{}\left({0}\right){=}{1}{,}{\mathrm{D}}{}\left({\mathrm{_Y}}\right){}\left({0}\right){=}\frac{{1}}{{2}}{}\frac{\left({-}{\mathrm{α}}{+}{1}{+}{\mathrm{γ}}\right){}{\mathrm{β}}{+}{\mathrm{γ}}{-}{\mathrm{α}}{+}{2}{}{\mathrm{η}}}{{\mathrm{β}}{+}{1}}\right\}\right)$ (1)
• This Heun (singly) Confluent equation is obtained from the Heun General equation through a confluence process, that is, a process where two singularities coalesce, performed by redefining parameters and taking limits, resulting in a single (typically irregular) singularity. The Heun Confluent equation thus has two regular singularities and one irregular singularity, and includes as particular cases both the 2F1 and 1F1 hypergeometric equations. The solution to the 2F1 equation,
> DEtools[hyperode]( hypergeom([a,b],[c],z), y(z) ) = 0;
${y}{}\left({z}\right){}{a}{}{b}{+}\left(\left({a}{+}{b}{+}{1}\right){}{z}{-}{c}\right){}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{y}{}\left({z}\right)\right){+}\left({{z}}^{{2}}{-}{z}\right){}\left(\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{y}{}\left({z}\right)\right){=}{0}$ (2)
can then be expressed in terms of HeunC functions
> dsolve((2), [HeunC]);
${y}{}\left({z}\right){=}{\mathrm{_C1}}{}{\mathrm{HeunC}}{}\left({0}{,}{-}{b}{+}{a}{,}{c}{-}{1}{,}{0}{,}\frac{{1}}{{2}}{}\left({-}{2}{}{a}{+}{c}\right){}{b}{+}\frac{{1}}{{2}}{}{a}{}{c}{-}\frac{{1}}{{2}}{}{c}{+}\frac{{1}}{{2}}{,}{-}\frac{{1}}{{z}{-}{1}}\right){}{\left({z}{-}{1}\right)}^{{-}{a}}{+}{\mathrm{_C2}}{}{\mathrm{HeunC}}{}\left({0}{,}{b}{-}{a}{,}{c}{-}{1}{,}{0}{,}\frac{{1}}{{2}}{}\left({-}{2}{}{a}{+}{c}\right){}{b}{+}\frac{{1}}{{2}}{}{a}{}{c}{-}\frac{{1}}{{2}}{}{c}{+}\frac{{1}}{{2}}{,}{-}\frac{{1}}{{z}{-}{1}}\right){}{\left({z}{-}{1}\right)}^{{-}{b}}$ (3)
and the same for the 1F1 hypergeometric confluent equation
> DEtools[hyperode]( hypergeom([a],[c],z), y(z) ) = 0;
${a}{}{y}{}\left({z}\right){+}\left({-}{c}{+}{z}\right){}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{y}{}\left({z}\right)\right){-}{z}{}\left(\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{y}{}\left({z}\right)\right){=}{0}$ (4)
> dsolve((4), [HeunC]);
${y}{}\left({z}\right){=}{\mathrm{_C1}}{}{{ⅇ}}^{{z}}{}{\mathrm{HeunC}}{}\left({1}{,}{c}{-}{1}{,}{-}{1}{,}{-}{a}{+}\frac{{1}}{{2}}{}{c}{,}{-}\frac{{1}}{{2}}{}{c}{+}{a}{+}\frac{{1}}{{2}}{,}{z}\right){+}{\mathrm{_C2}}{}{{ⅇ}}^{{z}}{}{{z}}^{{-}{c}{+}{1}}{}{\mathrm{HeunC}}{}\left({1}{,}{-}{c}{+}{1}{,}{-}{1}{,}{-}{a}{+}\frac{{1}}{{2}}{}{c}{,}{-}\frac{{1}}{{2}}{}{c}{+}{a}{+}\frac{{1}}{{2}}{,}{z}\right)$ (5)
HeunC, thus, contains as particular cases all the hypergeometric functions of the 2F1 and 1F1 classes - some of these specializations are listed at the end of the Examples section.
• Two other important non-hypergeometric case of Heun's Confluent equation, are the "spheroidal wave function" equation
> diff(y(z),z,z) + 2*(gamma+1)*z*diff(y(z),z)/(z^2-1) + (4*delta*z^2-c)/(z^2-1)*y(z) = 0;
$\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{y}{}\left({z}\right){+}\frac{{2}{}\left({\mathrm{γ}}{+}{1}\right){}{z}{}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{y}{}\left({z}\right)\right)}{{{z}}^{{2}}{-}{1}}{+}\frac{\left({4}{}{\mathrm{δ}}{}{{z}}^{{2}}{-}{c}\right){}{y}{}\left({z}\right)}{{{z}}^{{2}}{-}{1}}{=}{0}$ (6)
obtained from Heun's Confluent equation taking $\left\{\mathrm{\alpha }=0,\mathrm{\beta }=-\frac{1}{2},\mathrm{\eta }=\frac{\left(1-\mathrm{\gamma }-c\right)}{4}\right\}$ and changing $z$ -> ${z}^{2}$;
and the rational form of Mathieu's equation,
> diff(y(z),z,z) + z/(z^2-1)*diff(y(z),z) + (2*delta*(2*z^2-1)-a)/(z^2-1)*y(z) = 0;
$\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{y}{}\left({z}\right){+}\frac{{z}{}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{y}{}\left({z}\right)\right)}{{{z}}^{{2}}{-}{1}}{+}\frac{\left({2}{}{\mathrm{δ}}{}\left({2}{}{{z}}^{{2}}{-}{1}\right){-}{a}\right){}{y}{}\left({z}\right)}{{{z}}^{{2}}{-}{1}}{=}{0}$ (7)
obtained from the spheroidal wave function equation above by taking $c=a+2\mathrm{\delta }$ and further specializing $\mathrm{\gamma }=-\frac{1}{2}$.
• The HeunC($\mathrm{\alpha }$,$\mathrm{\beta }$,$\mathrm{\gamma }$,$\mathrm{\delta }$,$\mathrm{\eta }$, z) function is a local (Frobenius) solution to Heun's Confluent equation, computed as a power series expansion around the origin, a regular singular point. The series converges for $\left|z\right|<1$, where the second regular singularity is located. An analytic continuation of HeunC is obtained through identities, relating the values of the function in different regions of the $z$ plane, for given values of the other parameters, or by expanding the solution around 1, the other regular singularity, and overlapping the series. General formulas relating these series expansions at different singularities and for arbitrary values of the other parameters, however, are not known at present.
• A special case happens when the parameters entering HeunC are such that the function is, simultaneously, a Frobenius solution around the two regular singularities and hence analytic in a domain containing both of them. In such a case the series expansion for HeunC truncates and the function becomes a polynomial. A necessary (not sufficient) condition for this case is that $\mathrm{\delta }=-\left(n+\frac{\left(\mathrm{\gamma }+\mathrm{\beta }+2\right)}{2}\right)\mathrm{\alpha }$, with $n$ a positive integer, and $\mathrm{\eta }$ has one of a finite number of characteristic values, so that the function is a polynomial of degree $n$.
Examples
Heun's Confluent equation,
> $\mathrm{CHE}≔\frac{{ⅆ}^{2}}{ⅆ{z}^{2}}y\left(z\right)=\frac{\left(-{z}^{2}\mathrm{α}+\left(-2-\mathrm{β}-\mathrm{γ}+\mathrm{α}\right)z+1+\mathrm{β}\right)\left(\frac{ⅆ}{ⅆz}y\left(z\right)\right)}{z\left(z-1\right)}+\frac{1\left(\left(\left(-\mathrm{β}-\mathrm{γ}-2\right)\mathrm{α}-2\mathrm{δ}\right)z+\left(\mathrm{β}+1\right)\mathrm{α}+\left(-\mathrm{γ}-1\right)\mathrm{β}-\mathrm{γ}-2\mathrm{η}\right)y\left(z\right)}{2z\left(z-1\right)}$
${\mathrm{CHE}}{≔}\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{z}}^{{2}}}{}{y}{}\left({z}\right){=}\frac{\left({-}{{z}}^{{2}}{}{\mathrm{α}}{+}\left({-}{2}{-}{\mathrm{β}}{-}{\mathrm{γ}}{+}{\mathrm{α}}\right){}{z}{+}{1}{+}{\mathrm{β}}\right){}\left(\frac{{ⅆ}}{{ⅆ}{z}}{}{y}{}\left({z}\right)\right)}{{z}{}\left({z}{-}{1}\right)}{+}\frac{{1}}{{2}}{}\frac{\left(\left(\left({-}{\mathrm{β}}{-}{\mathrm{γ}}{-}{2}\right){}{\mathrm{α}}{-}{2}{}{\mathrm{δ}}\right){}{z}{+}\left({\mathrm{β}}{+}{1}\right){}{\mathrm{α}}{+}\left({-}{\mathrm{γ}}{-}{1}\right){}{\mathrm{β}}{-}{\mathrm{γ}}{-}{2}{}{\mathrm{η}}\right){}{y}{}\left({z}\right)}{{z}{}\left({z}{-}{1}\right)}$ (8)
can be transformed into another version of itself, that is, an equation with two regular singularities and one irregular singularity respectively located at $\left\{0,1,\mathrm{\infty }\right\}$, through transformations of the form
> $y\left(z\right)={z}^{\frac{\left(\mathrm{μ}-1\right)\mathrm{β}}{2}}{\left(z-1\right)}^{\frac{\left(\mathrm{ν}-1\right)\mathrm{γ}}{2}}{ⅇ}^{\frac{\left(\mathrm{ρ}-1\right)\mathrm{α}z}{2}}u\left(z\right)$
${y}{}\left({z}\right){=}{{z}}^{\frac{{1}}{{2}}{}\left({\mathrm{μ}}{-}{1}\right){}{\mathrm{β}}}{}{\left({z}{-}{1}\right)}^{\frac{{1}}{{2}}{}\left({\mathrm{ν}}{-}{1}\right){}{\mathrm{γ}}}{}{{ⅇ}}^{\frac{{1}}{{2}}{}\left({\mathrm{ρ}}{-}{1}\right){}{\mathrm{α}}{}{z}}{}{u}{}\left({z}\right)$ (9)
where ${\mathrm{\lambda }}^{2}=1$, ${\mathrm{\mu }}^{2}=1$ and ${\mathrm{\nu }}^{2}=1$. Under this transformation, the HeunC parameters transform according to $\mathrm{\alpha }$ -> $\mathrm{\alpha }\mathrm{\lambda }$, $\mathrm{\beta }$ -> $\mathrm{\beta }\mathrm{\mu }$ and $\mathrm{\gamma }$ -> $\mathrm{\gamma }\mathrm{\nu }$. These transformations form a group of eight elements and imply on a number of identities, among which you have
> $\mathrm{FunctionAdvisor}\left(\mathrm{identities},\mathrm{HeunC}\right)$
$\left[\left[{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){=}{\left({1}{-}{z}\right)}^{{-}{\mathrm{γ}}}{}{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{-}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){,}{\mathrm{And}}{}\left({\mathrm{β}}{::}\left({\mathrm{Not}}{}\left({\mathrm{integer}}\right)\right){,}\left|{z}\right|{<}{1}\right)\right]{,}\left[{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){=}{{ⅇ}}^{{-}{z}{}{\mathrm{α}}}{}{\mathrm{HeunC}}{}\left({-}{\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){,}{\mathrm{And}}{}\left({\mathrm{β}}{::}\left({\mathrm{Not}}{}\left({\mathrm{integer}}\right)\right){,}\left|{z}\right|{<}{1}\right)\right]\right]$ (10)
Changing $z$ -> $1-t$ also results in a HeunC equation with the singularities located at $\left\{0,1,\mathrm{\infty }\right\}$; this permits rewriting the solution to the CHE in different manners. For example, the general solution returned by default by dsolve is
> $\mathrm{dsolve}\left(\mathrm{CHE}\right)$
${y}{}\left({z}\right){=}{\mathrm{_C1}}{}{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right){+}{\mathrm{_C2}}{}{{z}}^{{-}{\mathrm{β}}}{}{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{-}{\mathrm{β}}{,}{\mathrm{γ}}{,}{\mathrm{δ}}{,}{\mathrm{η}}{,}{z}\right)$ (11)
When $\mathrm{\beta }$ is an integer, however, these two "independent" solutions are not independent, yet a second pair of independent solutions can be constructed exploring this invariance in form under $z$ -> $1-t$
> $\mathrm{dsolve}\left(\mathrm{CHE}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}assuming\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{β}::\mathrm{integer}$
${y}{}\left({z}\right){=}{\mathrm{_C1}}{}{\mathrm{HeunC}}{}\left({-}{\mathrm{α}}{,}{\mathrm{γ}}{,}{\mathrm{β}}{,}{-}{\mathrm{δ}}{,}{\mathrm{η}}{+}{\mathrm{δ}}{,}{1}{-}{z}\right){+}{\mathrm{_C2}}{}{\left({z}{-}{1}\right)}^{{-}{\mathrm{γ}}}{}{\mathrm{HeunC}}{}\left({-}{\mathrm{α}}{,}{-}{\mathrm{γ}}{,}{\mathrm{β}}{,}{-}{\mathrm{δ}}{,}{\mathrm{η}}{+}{\mathrm{δ}}{,}{1}{-}{z}\right)$ (12)
For $z$ different from 1, the 2F1 and the confluent 1F1 hypergeometric functions are related to HeunC by
> $\mathrm{FunctionAdvisor}\left(\mathrm{specialize},\mathrm{hypergeom},\mathrm{HeunC}\right)$
$\left[{\mathrm{hypergeom}}{}\left(\left[{a}{,}{b}\right]{,}\left[{c}\right]{,}{z}\right){=}\frac{{\mathrm{HeunC}}{}\left({0}{,}{c}{-}{1}{,}{b}{-}{a}{,}{0}{,}\frac{{1}}{{2}}{}\left({a}{+}{b}{-}{1}\right){}{c}{-}{b}{}{a}{+}\frac{{1}}{{2}}{,}\frac{{z}}{{z}{-}{1}}\right)}{{\left({1}{-}{z}\right)}^{{b}}}{,}{\mathrm{And}}{}\left({z}{\ne }{1}\right)\right]{,}\left[{\mathrm{hypergeom}}{}\left(\left[{a}\right]{,}\left[{b}\right]{,}{z}\right){=}{\mathrm{HeunC}}{}\left({1}{,}{b}{-}{1}{,}{1}{,}{-}\frac{{1}}{{2}}{}{b}{+}{a}{,}\frac{{1}}{{2}}{}{b}{-}{a}{+}\frac{{1}}{{2}}{,}{-}{z}\right){}\left({z}{+}{1}\right){,}{\mathrm{And}}{}\left({z}{\ne }{-}{1}\right)\right]$ (13)
When $\mathrm{\delta }=-\left(n+\frac{\left(\mathrm{\gamma }+\mathrm{\beta }+2\right)}{2}\right)\mathrm{\alpha }$, with $n$ a positive integer, the $n$th + 1 coefficient in the series expansion is a polynomial in $\mathrm{\eta }$ of order $n+1$. If $\mathrm{\delta }$ is a root of that polynomial, that coefficient is zero and with it all the following ones; the series then truncates and HeunC is a polynomial. For example, the necessary condition for a polynomial form is
> $\mathrm{HeunC}\left(\mathrm{α},\mathrm{β},\mathrm{γ},-\mathrm{α}\left(n+\frac{\mathrm{γ}+2+\mathrm{β}}{2}\right),\mathrm{η},z\right)$
${\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{-}{\mathrm{α}}{}\left({n}{+}\frac{{1}}{{2}}{}{\mathrm{γ}}{+}\frac{{1}}{{2}}{}{\mathrm{β}}{+}{1}\right){,}{\mathrm{η}}{,}{z}\right)$ (14)
Considering the first non-trivial case, for $n=1$, the function is
> $\mathrm{HC}≔\mathrm{subs}\left(n=1,\right)$
${\mathrm{HC}}{≔}{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{-}{\mathrm{α}}{}\left({2}{+}\frac{{1}}{{2}}{}{\mathrm{γ}}{+}\frac{{1}}{{2}}{}{\mathrm{β}}\right){,}{\mathrm{η}}{,}{z}\right)$ (15)
So the coefficient of ${z}^{2}$ in the series expansion is
> $Q≔\mathrm{simplify}\left(\mathrm{series}\left(\mathrm{HC},z,3\right),\mathrm{size}\right)$
${Q}{≔}{1}{+}\frac{\left({-}{\mathrm{α}}{+}{1}{+}{\mathrm{γ}}\right){}{\mathrm{β}}{+}{\mathrm{γ}}{-}{\mathrm{α}}{+}{2}{}{\mathrm{η}}}{{2}{}{\mathrm{β}}{+}{2}}{}{z}{+}\frac{{1}}{{8}}{}\frac{\left({\mathrm{α}}{-}{\mathrm{γ}}{-}{1}\right){}\left({\mathrm{α}}{-}{\mathrm{γ}}{-}{3}\right){}{{\mathrm{β}}}^{{2}}{+}\left({4}{}{{\mathrm{α}}}^{{2}}{+}\left({-}{4}{}{\mathrm{η}}{-}{8}{}{\mathrm{γ}}{-}{14}\right){}{\mathrm{α}}{+}{4}{}\left({\mathrm{γ}}{+}{2}\right){}\left({\mathrm{γ}}{+}{\mathrm{η}}{+}\frac{{1}}{{2}}\right)\right){}{\mathrm{β}}{+}{3}{}{{\mathrm{α}}}^{{2}}{+}\left({-}{8}{}{\mathrm{η}}{-}{6}{}{\mathrm{γ}}{-}{8}\right){}{\mathrm{α}}{+}{4}{}\left({\mathrm{η}}{+}\frac{{1}}{{2}}{}{\mathrm{γ}}\right){}\left({\mathrm{η}}{+}\frac{{3}}{{2}}{}{\mathrm{γ}}{+}{2}\right)}{\left({\mathrm{β}}{+}{1}\right){}\left({\mathrm{β}}{+}{2}\right)}{}{{z}}^{{2}}{+}{\mathrm{O}}\left({{z}}^{{3}}\right)$ (16)
> $\mathrm{c2}≔\mathrm{coeff}\left(Q,z,2\right)$
${\mathrm{c2}}{≔}\frac{{1}}{{8}}{}\frac{\left({\mathrm{α}}{-}{\mathrm{γ}}{-}{1}\right){}\left({\mathrm{α}}{-}{\mathrm{γ}}{-}{3}\right){}{{\mathrm{β}}}^{{2}}{+}\left({4}{}{{\mathrm{α}}}^{{2}}{+}\left({-}{4}{}{\mathrm{η}}{-}{8}{}{\mathrm{γ}}{-}{14}\right){}{\mathrm{α}}{+}{4}{}\left({\mathrm{γ}}{+}{2}\right){}\left({\mathrm{γ}}{+}{\mathrm{η}}{+}\frac{{1}}{{2}}\right)\right){}{\mathrm{β}}{+}{3}{}{{\mathrm{α}}}^{{2}}{+}\left({-}{8}{}{\mathrm{η}}{-}{6}{}{\mathrm{γ}}{-}{8}\right){}{\mathrm{α}}{+}{4}{}\left({\mathrm{η}}{+}\frac{{1}}{{2}}{}{\mathrm{γ}}\right){}\left({\mathrm{η}}{+}\frac{{3}}{{2}}{}{\mathrm{γ}}{+}{2}\right)}{\left({\mathrm{β}}{+}{1}\right){}\left({\mathrm{β}}{+}{2}\right)}$ (17)
solving for $\mathrm{\eta }$, requesting from solve to return using RootOf, you have
> $\mathrm{_EnvExplicit}≔\mathrm{false}$
${\mathrm{_EnvExplicit}}{≔}{\mathrm{false}}$ (18)
> $\mathrm{η}=\mathrm{solve}\left(\mathrm{c2},\mathrm{η}\right)$
${\mathrm{η}}{=}{\mathrm{RootOf}}{}\left({4}{}{{\mathrm{_Z}}}^{{2}}{+}\left({-}{4}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{β}}{}{\mathrm{γ}}{-}{8}{}{\mathrm{α}}{+}{8}{}{\mathrm{β}}{+}{8}{}{\mathrm{γ}}{+}{8}\right){}{\mathrm{_Z}}{+}{{\mathrm{α}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{-}{2}{}{\mathrm{γ}}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{+}{{\mathrm{γ}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{α}}}^{{2}}{}{\mathrm{β}}{-}{4}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{-}{8}{}{\mathrm{γ}}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{γ}}}^{{2}}{}{\mathrm{β}}{+}{3}{}{{\mathrm{α}}}^{{2}}{-}{14}{}{\mathrm{α}}{}{\mathrm{β}}{-}{6}{}{\mathrm{α}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{β}}}^{{2}}{+}{10}{}{\mathrm{β}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{γ}}}^{{2}}{-}{8}{}{\mathrm{α}}{+}{4}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}\right)$ (19)
substituting in $\mathrm{HC}$ we have
> $\mathrm{HC_polynomial}≔\mathrm{subs}\left(,\mathrm{HC}\right)$
${\mathrm{HC_polynomial}}{≔}{\mathrm{HeunC}}{}\left({\mathrm{α}}{,}{\mathrm{β}}{,}{\mathrm{γ}}{,}{-}{\mathrm{α}}{}\left({2}{+}\frac{{1}}{{2}}{}{\mathrm{γ}}{+}\frac{{1}}{{2}}{}{\mathrm{β}}\right){,}{\mathrm{RootOf}}{}\left({4}{}{{\mathrm{_Z}}}^{{2}}{+}\left({-}{4}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{β}}{}{\mathrm{γ}}{-}{8}{}{\mathrm{α}}{+}{8}{}{\mathrm{β}}{+}{8}{}{\mathrm{γ}}{+}{8}\right){}{\mathrm{_Z}}{+}{{\mathrm{α}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{-}{2}{}{\mathrm{γ}}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{+}{{\mathrm{γ}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{α}}}^{{2}}{}{\mathrm{β}}{-}{4}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{-}{8}{}{\mathrm{γ}}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{γ}}}^{{2}}{}{\mathrm{β}}{+}{3}{}{{\mathrm{α}}}^{{2}}{-}{14}{}{\mathrm{α}}{}{\mathrm{β}}{-}{6}{}{\mathrm{α}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{β}}}^{{2}}{+}{10}{}{\mathrm{β}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{γ}}}^{{2}}{-}{8}{}{\mathrm{α}}{+}{4}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}\right){,}{z}\right)$ (20)
When the function admits a polynomial form, as is the case of $\mathrm{HC_polynomial}$ by construction, to obtain the actual polynomial of degree $n$ (in this case $n=1$) use
> $\genfrac{}{}{0}{}{\phantom{\mathrm{HeunC}=\mathrm{HeunC}:-\mathrm{SpecialValues}:-\mathrm{Polynomial}}}{}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}|\phantom{\rule[-0.0ex]{0.1em}{0.0ex}}\genfrac{}{}{0}{}{\phantom{}}{\mathrm{HeunC}=\mathrm{HeunC}:-\mathrm{SpecialValues}:-\mathrm{Polynomial}}$
${1}{+}\frac{\left(\left({-}{\mathrm{α}}{+}{1}{+}{\mathrm{γ}}\right){}{\mathrm{β}}{+}{\mathrm{γ}}{-}{\mathrm{α}}{+}{2}{}{\mathrm{RootOf}}{}\left({4}{}{{\mathrm{_Z}}}^{{2}}{+}\left({-}{4}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{β}}{}{\mathrm{γ}}{-}{8}{}{\mathrm{α}}{+}{8}{}{\mathrm{β}}{+}{8}{}{\mathrm{γ}}{+}{8}\right){}{\mathrm{_Z}}{+}{{\mathrm{α}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{-}{2}{}{\mathrm{γ}}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{+}{{\mathrm{γ}}}^{{2}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{α}}}^{{2}}{}{\mathrm{β}}{-}{4}{}{\mathrm{α}}{}{{\mathrm{β}}}^{{2}}{-}{8}{}{\mathrm{γ}}{}{\mathrm{α}}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}{}{{\mathrm{β}}}^{{2}}{+}{4}{}{{\mathrm{γ}}}^{{2}}{}{\mathrm{β}}{+}{3}{}{{\mathrm{α}}}^{{2}}{-}{14}{}{\mathrm{α}}{}{\mathrm{β}}{-}{6}{}{\mathrm{α}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{β}}}^{{2}}{+}{10}{}{\mathrm{β}}{}{\mathrm{γ}}{+}{3}{}{{\mathrm{γ}}}^{{2}}{-}{8}{}{\mathrm{α}}{+}{4}{}{\mathrm{β}}{+}{4}{}{\mathrm{γ}}\right)\right){}{z}}{{2}{}{\mathrm{β}}{+}{2}}$ (21)
References
Decarreau, A.; Dumont-Lepage, M.C.; Maroni, P.; Robert, A.; and Ronveaux, A. "Formes Canoniques de Equations confluentes de l'equation de Heun." Annales de la Societe Scientifique de Bruxelles. Vol. 92 I-II, (1978): 53-78.
Ronveaux, A. ed. Heun's Differential Equations. Oxford University Press, 1995.
Slavyanov, S.Y., and Lay, W. Special Functions, A Unified Theory Based on Singularities. Oxford Mathematical Monographs, 2000. | 2017-10-22 10:12:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 96, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962526798248291, "perplexity": 976.5028415768893}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00725.warc.gz"} |
https://www.physicsforums.com/threads/finding-the-distance-modulus-and-the-absolute-magnitude.187900/ | # Finding the distance modulus and the absolute magnitude
1. Sep 29, 2007
### Benzoate
1. The problem statement, all variables and given/known data
If a star has an apparent magnitude of -.4 and a parallax of .3'' what is :
a) the distance modulus
b) the absolute magnitude
2. Relevant equations
m is the apparent magnitude and M is the absolute magnitude
m-M =5 log d - 5
M= m + 5+5 log(pi''), pi'' is the parallex angle
3. The attempt at a solution
In order to find the Absolute magnitude, M, I apply the equation M= m + 5 + 5log(pi'')
I can easily find M since m and pi'' are already given in the problem. The only trouble I'm having is I don't know what units of measurement I'm supposed to convert pi'' to or if I'm suppose to leave pi'' the way it is.
Last edited: Sep 29, 2007
2. Sep 29, 2007
### lightgrav
the parallax angle, in arcsec, is the reciprocal distance in parsec.
So, (recall abs.Mag.definition) 10 pc means .1" , which has
M = m + 5 + 5*log(.1) , = m like it should.
3. Sep 29, 2007
### dynamicsolo
The parallax angle, $$\pi$$, is customarily given in arcseconds, but it is the (narrow) triangle involved that makes it clear how to use it. A parallax angle of 1" is the angle subtended by the mean radius of the Earth's orbit (more accurately, the semi-major axis), which is 1 AU, at a distance of 1 parsec. This automatically defines the parsec in terms of astronomical units (it also explains the name of the unit...).
What you'd want to think about it how that angle changes for other distances. You then have a simple relation between the stellar distance, d, in parsecs, and the parallax angle in arcseconds. The angle $$\pi$$ is often used interchangeably with the distance. The distance d in parsecs is what goes into your distance modulus equation (the modulus (M-m) is also used by some astronomers interchangeably with distance).
Noting many of the threads you've started lately, you wouldn't happen to be in an introductory astrophysics course, would you? | 2018-01-19 12:28:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285657167434692, "perplexity": 1501.0414609056397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00066.warc.gz"} |
https://figuraleffect.wordpress.com/2008/05/17/coming-up-with-decent-measures-of-things/ | # Coming up with decent measures of things
You have to use theory (some of it from intuition) to come up with measures. If the theory’s crap then your measures will be crap. The theory IS mostly crap (psychology is a young social science!)—it doesn’t take a genius to see that. So… where to next…? Exploration? Trying out a few duff measures to see what happens? Use some sort of qualitative process where you talk to people, brainstorm, get them to come up with measures. Hmmmm maybe. But then you’re still relying on more unideal intuition. People don’t actually know at all what we should be measuring.
Cognitive psychology often gets interesting when counterintuitive results are discovered. People who are particularly good at visuospatial reasoning deciding not to use visuospatial reasoning. Discovering that people with poor working memory (well one flavour of) actually doing BETTER in a task that would appear to require working memory.
So there are duff measures around—plenty of them—and somehow they’re picking up something in the noise across a sample of a population. And they often correlate a wee bit with other duff measures. And then there are $\frac{1}{100}$ thought out theories lying around. Progress. Slow. Painful. It’s going somewhere though.
Occasionally people try item-level analyses. LOOK these items of Raven’s are easier for women and THESE are easier for men… Oh look at these particular items in this questionnaire… they seem to be picking up something interesting… why’s that…
There’s heaps of data, people have spent so much of their lives collecting the stuff. And then they just get time to add it up and run it through a correlation or throw it into a structural equation model. The brain is viewed as a collection of correlated Gaussian distributed variables(!). But surely there’s something else that can be done with it. I love the stuff that’s being done with Raven’s matrices. Well those who’ve been brave enough to stay clear of factor analysis. | 2017-07-21 06:49:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4121217131614685, "perplexity": 1630.2503459158088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423723.8/warc/CC-MAIN-20170721062230-20170721082230-00021.warc.gz"} |
https://dianch.github.io/practice/shell-types-and-startup-files | # Shell Types and Startup Files
Quite often when we want to install some tools or set up an environment, the tutorials will tell us “hey you should add this line to your .bashrc file”, or sometimes “your .bash_profile file”, etc. However, sometimes this works and sometimes it doesn’t. Why? The answer lies in the shell types: there are actually different shell types that affect the way the shell deals with the startup files.
Specifically, there are login shells vs. non-login shells, and interactive shells vs. non-interactive shells. The difference matters for essentially one reason: to determine the startup files and some default options upon a shell’s instantiation (and in reverse, maybe less commonly noticed, cleanup files upon exit). When we tried adding some lines in some files and it didn’t work, we probably have added those lines in the wrong place or we were not using the shell in the right way. So, to make the shell work for us properly, we need to first understand, well, how it works.
Note: this concept applies to most versions of shells, but I’ll give examples for bash shell & zsh shell which I’m most confident about.
## The Two Shell Type Dichotomies
As the names suggest, the meanings of the types are pretty straightforward.
A login shell is any shell that logs us as a user. In most cases we get a login shell by:
• opening up a terminal emulator (see here: what terminal emulator is), e.g., Terminal application on our desktop, or tmux from command line
• ssh onto a machine
• invoking the shell explicitly with -l or --login option (e.g., bash -l or zsh -l).
Usually we get a prompt for our credentials to login, but it’s not necessarily the case. For example, if we invoke a nested shell from the current shell like this, it won’t ask for the credentials:
dian@ubuntu $bash -l dian@ubuntu$ # already in the sub shell
A non-login shell, on the contrary, is a shell that’s not obtained in the above mentioned ways. Some examples include:
• executing a shell script or a command string, e.g., bash my_script.sh or bash -c 'echo hello'
• invoking a shell without -l or --login option.
### Interactive vs. Non-Interactive
An interactive shell is even more self-explanatory: it asks for user input and immediately writes output to the user’s terminal (unless redirected). On the contrary, a non-interactive shell is usually given a script or a command string to execute, without the need to bother the users for extra input. Although there are ways to make the shell act outside this rule, 99% of the time if we are typing commands to a shell prompt, we are facing an interactive shell; if we are executing shell scripts or command strings, we are using a non-interactive shell.
For the sake of completeness, if we want to ask for an interactive shell no matter what, we can achieve that by explicitly specifying -i option (for zsh --interactive also works).
### Four Combinations
One important point to make is that, being login or not has nothing to do with being interactive or not; these two properties are not mutually exclusive. This means that we can effectively have four different combinations of shell types, namely: login interative, login non-interactive, non-login interactive and non-login non-interactive.
Here are some concrete examples for each of these type (the same also applies to zsh interchangeably):
Interactive most common:
1. ssh onto a machine
2. open Terminal or tmux
3. invoke a sub shell using bash -l
nested (sub) shell created with:
1. bash, zsh, etc.
Non-Interactive very rare. can do with:
1. bash -lc <commands>
2. bash -l <scripts>
nested (sub) shell created with:
1. bash -c <commands>
2. bash <scripts>
## How Do I Know If It’s a …
For bash shell, the most reliable way (there are other ways that test environment variables such as $0 but they are not always consistent) is to use the shell built-in command shopt which can show the configurations the shell is currently using: dian@ubuntu$ shopt login_shell # 0) original login, interative shell
dian@ubuntu $bash # 1) enters a sub shell, which is non-login, interactive dian@ubuntu$ shopt login_shell
dian@ubuntu $exit exit dian@ubuntu$ # back in the original shell
dian@ubuntu $bash -l # 2) enters a sub shell, which is login, interactive dian@ubuntu$ shopt login_shell
dian@ubuntu $exit logout dian@ubuntu$ # back in the original shell
dian@ubuntu $bash -c 'shopt login_shell' # 3) execute command in a sub shell, which is non-login, non-interactive login_shell off dian@ubuntu$ bash -cl 'shopt login_shell' # 4) execute command in a sub shell, which is login, non-interactive
dian@ubuntu $ For zsh shell, it also provides a built-in mechanism (see here) to test the login property. Specifically, we can use the following scripting: if [[ -o login ]]; then print yes else print no fi This is typical zsh style of testing things. Here the testing part goes in [[ ... ]], and -o tells the shell to test the login option. We can see this works in the same context as bash above, with an equivalent one-liner [[ -o login ]] && echo 'yes' || echo 'no': dian@ubuntu % [[ -o login ]] && echo 'yes' || echo 'no' # 0) original login, interative shell yes dian@ubuntu % zsh # 1) enters a sub shell, which is non-login, interactive dian@ubuntu % [[ -o login ]] && echo 'yes' || echo 'no' no dian@ubuntu % exit dian@ubuntu % # back in the original shell dian@ubuntu % zsh -l # 2) enters a sub shell, which is login, interactive dian@ubuntu % [[ -o login ]] && echo 'yes' || echo 'no' yes dian@ubuntu % exit dian@ubuntu % # back in the original shell dian@ubuntu % zsh -c "[[ -o login ]] && echo 'yes' || echo 'no'" # 3) execute command in a sub shell, which is non-login, non-interactive no dian@ubuntu % zsh -cl "[[ -o login ]] && echo 'yes' || echo 'no'" # 4) execute command in a sub shell, which is login, non-interactive yes dian@ubuntu % ### Interactive Shell? For bash shell, there will be an i character in the $- environment variable (which stores a set of options for the current shell) if it’s an interactive shell, and no i otherwise:
dian@ubuntu $echo$- # 0) original login, interactive shell
himBHs
dian@ubuntu $bash # 1) enters a sub shell, which is non-login, interactive dian@ubuntu$ echo $- himBHs dian@ubuntu$ exit
exit
dian@ubuntu $# back in the original shell dian@ubuntu$ bash -c 'echo $-' # 2) execute command in a sub shell, which is non-login, non-interactive hBc dian@ubuntu$ bash -ci 'echo $-' # 3) execute command in a sub shell, which is non-login, interactive himBHc For zsh shell, the same method also applies (try to test it with $- variable yourself!), but a more native way to find out the interactiveness is to use zsh’s [[ -o interactive ]] testing, similar to [[ -o login ]] as above:
dian@ubuntu % [[ -o interactive ]] && echo 'yes' || echo 'no' # 0) original login, interative shell
yes
dian@ubuntu % zsh # 1) enters a sub shell, which is non-login, interactive
dian@ubuntu % [[ -o interactive ]] && echo 'yes' || echo 'no'
yes
dian@ubuntu % exit
dian@ubuntu % # back in the original shell
dian@ubuntu % zsh -c "[[ -o interactive ]] && echo 'yes' || echo 'no'" # 2) execute command in a sub shell, which is non-login, non-interactive
no
dian@ubuntu % zsh -ci "[[ -o interactive ]] && echo 'yes' || echo 'no'" # 3) execute command in a sub shell, which is non-login, interactive
yes
dian@ubuntu %
### General Guidelines
Generally speaking, a login shell is the first shell we get when we log on a system, or when we explicitly specify the -l or --login option; a non-login shell is anything otherwise, such as the sub shells executed from the initial login shell. An interactive shell is one that we interact with by typing commands, while a non-interactive shell usually accepts a script or a command string and execute them for us. There are ways to override these rules, such as using -l, --login, -i and --interactive flags as mentioned above; in fact, a weird enough example is:
dian@ubuntu $bash -cil <commands> which can be tested to be a login, interactive shell at the same time even though it has nothing much to do with logging users in or being interactive whatsoever! But, as unnatural as it feels, we should avoid using shells in these ways. Use them as what they intend to be. ## Startup Files What’s the point of having these different shell types? They are used to determine the startup files to use upon a shell’s instantiation, and the cleanup files to use when it’s going to exit. This enables us to have different routines for different shell purpose. For example, we might want to use a certain environment variable with a login shell while in a non-interactive shell we might want another environment variable to be available. ### System-Wide vs. User-Level Startup Files For bash shell, the commonly seen startup files are: /etc/profile/, /etc/bash.bashrc, ~/.bash_profile, ~/.bash_login, ~/.profile and ~/.bashrc, etc. They can be grouped into two categories: ones that are intended for all users (system-wide) and reside in the /etc/ directory, and ones that are user-customized (user-level) which typically sit in the $HOME directory. The rules of which shell uses what files are complicated (see the official reference for a complete description of behaviors), but generally the user-level startup files are executed after the system-wide files. The most commonly seen cases for us are (3 out of 4):
• interactive login shell: it will first execute /etc/profile; then it looks for ~/.bash_profile, ~/.bash_login and ~/.profile by order, and executes only the first one upon discovery. The reason behind this lookup is that different Linux/UNIX distributions will have different startup files in place (for example ~/.bash_profile is present on my mac while it’s not on Ubuntu).
• interactive non-login shell: it will first execute /etc/bash.bashrc; then it will look for ~/.bashrc. However, it’s common to also put a line like if [ -f ~/.bashrc ]; then . ~/.bashrc; fi in ~/.bash_profile so that ~/.bashrc can also be executed in turn in a login shell.
• non-interactive shell: it is usually not intended to execute any startup files, but can do if \$BASH_ENV variable is provided.
For zsh shell, the similar rules apply and there are even more possible startup files (notice the system-wide version is paired with the user-level version):
• /etc/zshenv: always run for every shell
• ~/.zshenv: usually run for every shell
• /etc/zprofile: run for login shells
• ~/.zprofile: run for login shells
• /etc/zshrc: run for interactive shells
• ~/.zshrc: run for interactive shells
• /etc/zlogin: run for login shells
• ~/.zlogin: run for login shells
The best way to determine the exact routine is to test it out on your machine. Here, we can see the flexibility with these different types of shells. However, as I’ve warned for a thousand times, even though we can make the shell to execute the startup files we want by forcing weird flag combinations, it’s better to stick with the conventions.
Now we should see, for example, why sometimes a line added to our ~/.bashrc doesn’t work. Because normally only an interactive, non-login shell will see this line and this can be probably extended to an interactive, login shell by “sourcing” (see source command here) ~/.bashrc inside ~/.bash_profile. So, next time when we need to add a customized setting into our startup files, we need to add it in the right place or chain the startup files properly.
## Logout Files
For the sake of completeness, there are also optional cleanup files, or more precisely, logout files that do some cleanup chores for us upon logout (they are mostly intended for login shells). For bash shell, there is ~/.bash_logout and for zsh shell, there are ~/.zlogout and /etc/zlogout which are nicely paired and will be execute in this order (opposite to startup).
## Summary
To summarize:
• There are four types of shells: login shells vs. non-login shells, and interactive shells vs. non-interactive shells. They can be further combined into four exact types.
• We’ve discussed how to find the shell types in detail.
• Different types of shells have different startup (possibly logout) file routines.
• Better to stick with good conventions than to force shells into weird ways even though possible. | 2021-08-04 15:31:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2270790934562683, "perplexity": 5920.52211740384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00017.warc.gz"} |
https://tex.stackexchange.com/questions/406839/include-5-pdf-pages-in-one-page-with-a-blank-page | # Include 5 pdf pages in one page with a blank page
Just encounter a small problem with Texmaker. I want to add 5 PDF pages in one PDF page with TexMaker. The layout I need is the one below :
I used the code here with Texmaker:
``````\includepdf[nup=2x3, pages={1-5},landscape=TRUE, columnstrict=true]{myfile.pdf}
``````
But I don't know how can I get the layout that I want. I searched on forums and don't find the answer, but if you have a link that I didn't see it's also perfect !
I also have to repeat this operation since I have a pdf file with 45 pages and I want to have a final pdf file with the layout above and 5 pages per page.
• Hi, welcome to TeX.SE! I don't think `pdfpages` can do that. I didn't see an option for such special layout in its manual. I would go with placing the pages separatly using TikZ. – Martin Scharrer Dec 19 '17 at 9:22
• Thanks for the welcome and the answer. I will have a look at TikZ then ! :) – Nico Dec 19 '17 at 9:24
• I'm sure someone will post some full answer for this here soon. Just for clarification: Do you need this as part of a normal document, like in the appendix, or does your final PDF only consists of the included PDFs? Also, are you using A4 or Letter pagesize? – Martin Scharrer Dec 19 '17 at 9:37
• Thx for the answer Martin ! I only need for a final PDF only consists of the included PDFs. I'm using A4 landscape for now :) – Nico Dec 19 '17 at 9:44
• With `pgfpages` you can create your own layout, see e.g. tex.stackexchange.com/questions/151645/… – user36296 Dec 19 '17 at 10:59 | 2019-08-25 09:31:41 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165545701980591, "perplexity": 957.0858277029262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323246.35/warc/CC-MAIN-20190825084751-20190825110751-00410.warc.gz"} |
http://accessanesthesiology.mhmedical.com/content.aspx?bookid=1613§ionid=102159324 | Chapter 21
INTRODUCTION
The epilepsies are common and frequently devastating disorders, affecting ~2.5 million people in the U.S. alone. More than 40 distinct forms of epilepsy have been identified. Epileptic seizures often cause transient impairment of consciousness, leaving the individual at risk of bodily harm and often interfering with education and employment. Therapy is symptomatic in that available drugs inhibit seizures, but neither effective prophylaxis nor cure is available. Compliance with medication is a major problem because of the need for long-term therapy together with unwanted effects of many drugs.
The mechanisms of action of anti-seizure drugs fall into three major categories.
1. The first mechanism is to limit the sustained, repetitive firing of neurons, an effect mediated by promoting the inactivated state of voltage-activated Na+ channels.
2. A second mechanism appears to involve enhanced γ-aminobutyric acid (GABA)–mediated synaptic inhibition, an effect mediated either by a presynaptic or postsynaptic action. Drugs effective against the most common forms of epileptic seizures, partial and secondarily generalized tonic-clonic seizures, appear to work by one of these two mechanisms.
3. Drugs effective against absence seizure, a less common form of epileptic seizure, work by a third mechanism, inhibition of voltage-activated Ca2+ channels responsible for T-type Ca2+ currents.
Although many treatments are available, much effort is being devoted to elucidating the genetic causes and the cellular and molecular mechanisms by which a normal brain becomes epileptic, insights that promise to provide molecular targets for both symptomatic and preventive therapies.
TERMINOLOGY AND EPILEPTIC SEIZURE CLASSIFICATION
The term seizure refers to a transient alteration of behavior due to the disordered, synchronous, and rhythmic firing of populations of brain neurons. The term epilepsy refers to a disorder of brain function characterized by the periodic and unpredictable occurrence of seizures. Seizures can be "non-epileptic" when evoked in a normal brain by treatments such as electroshock or chemical convulsants, or "epileptic" when occurring without evident provocation. Pharmacological agents in current clinical use inhibit seizures, and thus are referred to as anti-seizure drugs. Whether any of these prevent the development of epilepsy (epileptogenesis) is uncertain.
Seizures are thought to arise from the cerebral cortex, and not from other central nervous system (CNS) structures such as the thalamus, brainstem, or cerebellum. Epileptic seizures have been classified into partial seizures, those beginning focally in a cortical site, and generalized seizures, those that involve both hemispheres widely from the outset (Commission on Classification and Terminology, 1981). The behavioral manifestations of a seizure are determined by the functions normally served by the cortical site at which the seizure arises. For example, a seizure involving motor cortex is associated with clonic jerking of the body part controlled by this region of cortex. A simple partial seizure is associated with preservation of consciousness. A complex partial seizure is associated with impairment of consciousness. The majority of complex partial seizures originate from ...
Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. You must be logged in while authenticated at least once every 90 days to maintain this remote access.
Ok
Subscription Options
AccessAnesthesiology Full Site: One-Year Subscription
Connect to the full suite of AccessAnesthesiology content and resources including procedural videos, interactive self-assessment, real-life cases, 20+ textbooks, and more | 2017-02-28 14:34:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24960510432720184, "perplexity": 5484.658474987746}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00643-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.cut-the-knot.org/arithmetic/algebra/WhenEllipseMeetsHyperbola.shtml | # When Circle Meets Hyperbola
### Problem
Assume real, non-zero, numbers $a,b,c,d$ are pairwise distinct and that points $\displaystyle A(a,\frac{1}{a}),\;$ $\displaystyle B(b,\frac{1}{b}),\;$ $\displaystyle C(c,\frac{1}{c}),\;$ $\displaystyle D(d,\frac{1}{d}),\;$ are concyclic. Find $abcd.$
### Solution 1
Assume the circle in question is defined by the equation
$x^2+y^2+2gx+2fy+c=0$
Then $a,b,c,d\;$ are the roots of the equation
$\displaystyle t^2+\frac{1}{t^2}+2gt+2f\frac{1}{t}+c=0$
which is equivalent to
$t^4+2gt^3+ct^2+2ft+1=0.$
By one of Viète's formulas then $abcd=1.$
### Solution 2
In complex numbers, points $A,B,C,D$ are concyclic iff $\displaystyle\frac{A-C}{B-C}:\frac{A-D}{B-D}\;$ is a real number. This is equivalent to
$\displaystyle\frac{\displaystyle\frac{a^2+i}{a}-\frac{c^2+i}{c}}{\displaystyle\frac{b^2+i}{b}-\frac{c^2+i}{c}}\cdot \frac{\displaystyle\frac{a^2+i}{a}-\frac{d^2+i}{d}}{\displaystyle\frac{b^2+i}{b}-\frac{d^2+i}{d}} \in\mathbb{R}$
which, in turn, is equivalent to
$\displaystyle\frac{\displaystyle\frac{a-c}{ac}}{\displaystyle\frac{b-c}{bc}}\cdot\frac{ac-i}{bc-i}\cdot\frac{\displaystyle\frac{b-d}{bd}}{\displaystyle\frac{a-d}{ad}}\cdot\frac{bd-i}{ad-i} \in\mathbb{R},$
or,
$\displaystyle\frac{ac-i}{bc-i}\cdot\frac{bd-i}{ad-i} \in\mathbb{R},$
and, finally, to
$\displaystyle\frac{abcd-1-(ac+bd)i}{abcd-1-(ad+bc)i} \in\mathbb{R}.$
If $abcd \ne 1,$ then $ac+bd=ad+bc,\;$ i.e., $(a-b)(c-d)=0$ which would contradict the stipulations of the problem. Therefore, $abcd=1.$
### Solution 3
Let the circle be $(x-u)^2+(y-v)^2=1.$ Then with $\displaystyle y=\frac{1}{x},\;$ this becomes
$x^4-2ux^3+(u^2+v^2-1)x^2-2vx+1=0,$
From which, by one of Viète's formulas then $abcd=1.$
### Generalization
Assume real, non-zero, numbers $a,b,c,d$ are pairwise distinct and that points $\displaystyle A(a,\frac{1}{a}),\;$ $\displaystyle B(b,\frac{1}{b}),\;$ $\displaystyle C(c,\frac{1}{c}),\;$ $\displaystyle D(d,\frac{1}{d}),\;$ lie on an ellipse. Find $abcd.$
Solution
If $x^2+Mxy+Ny^2+Px+Qy+S=0\;$ is the equation of the ellipse, then, with $\displaystyle y=\frac{1}{x},\;$ it becomes
$x^4+Px^3+(M+S)x^2+Qx+N=0$
from which $abcd=N.$
### Acknowledgment
The story began with Leo Giugiuc posting the problem and his and Dan Sitaru solution at the CutTheKnotMath facebook page. Leo also provided a link to the original post by Wahab Raiz who referred to Ritesh Sharma as the author of the problem. The discussion there included a solution by Ravi Prakash. Without letting up, Leo has served Daniel Dan's solution and hid and Dan Sitaru's generalization that extended the result to four points on an ellipse. | 2022-12-07 19:32:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6389217376708984, "perplexity": 1197.4427763910005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00109.warc.gz"} |
http://mathoverflow.net/questions/171492/do-models-of-zfc-have-arbitrarily-large-descent-values | # Do models of ZFC have arbitrarily large descent values?
Let $M$ denote a well-founded set-sized model of ZFC. The descent value of $M$ will be defined as the value of $n$ returned by the following process.
Initialization. Let $n$ equal $0$ and $X$ equal $M$.
Step. If $L_{\omega_1^X}^X$ doesn't satisfy ZFC according to $M$, halt and output $n$. Otherwise, increment $n$, let $X$ equal $L_{\omega_1^X}^X$, and repeat.
Question. Assuming sufficiently powerful large cardinal axioms, is it true that for every natural $n\geq 0,$ there exists a well-founded model $M$ of ZFC whose descent value is $n$?
-
No, the descent value is always at most $1$ in any model of ZFC, whether it is well-founded or not. To see this, observe that if the descent value isn't $0$, then on the next step you have $X=L_{\omega_1^M}^M$, and so $X$ satisfies $V=L$, and so $\omega_1^X$ is the $\omega_1^L$ inside $M$, and $L_{\omega_1^L}$ never satisifes ZFC, as it thinks that every ordinal is countable.
I think your idea will have a chance to succeed if you adjoin a predicate to $L$ as you descend, rather than going all the way down to $L$. – Joel David Hamkins Jun 10 '14 at 12:24 | 2015-07-28 01:50:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9831787347793579, "perplexity": 112.50102359245066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/warc/CC-MAIN-20150728002301-00209-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://physicslens.com/category/technology/geogebra/ | ## Template for self-assessment questions
Here is a template that I might use to generate questions for students’ self-assessment in future. Based on a query that one of the participants in a GeoGebra online tutorial asked about generating random questions for simple multiplication for lower… Continue Reading
## Angular velocity
This GeoGebra app shows how angular velocity ω is the rate of change of angular displacement (i.e. $\omega=\dfrac{\theta}{t}$) and is dependent on the speed and radius of the object in circular motion (i.e. $v=r\omega$). Students can explore the relationships by… Continue Reading
## Angular displacement
This GeoGebra app shows the relationship s = rθ. One activity I get students can do is to look at the value of θ when the arc length s is equal to the radius r. This would give the definition… Continue Reading
## Creating a simple interactive using GeoGebra
While preparing to share with some fellow teachers in Singapore about the use of GeoGebra in Physics, I came up with a set of simple instructions to create an interactive, while introducing tools such as sliders, checkboxes (along with boolean… Continue Reading
## Using Loom and GeoGebra to explain a tutorial question
It’s Day 1 of the full home-based learning month in Singapore! As teachers all over Singapore scramble to understand the use of the myriad EdTech tools, I have finally come to settle on a few: Google Meet to do video… Continue Reading | 2020-06-04 18:05:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21808522939682007, "perplexity": 1602.6257582481167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347445880.79/warc/CC-MAIN-20200604161214-20200604191214-00323.warc.gz"} |