https://www.clutchprep.com/physics/practice-problems/142075/find-the-torque-about-p-due-to-f-your-answer-should-correctly-express-both-the-m
# Problem: Find the torque about p due to F. Your answer should correctly express both the magnitude and sign of torque. Express your answer in terms of rm and F or in terms of r, θ, and F.
###### Expert Solution
Torque:
$$\tau = r_m F = r F \sin\theta$$
where the moment arm is $r_m = r\sin\theta$.
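The sign depends on the figure, which is not reproduced here. Under the usual convention (counterclockwise torques positive), a hedged worked form is
$$\tau = \pm\, r_m F = \pm\, r F \sin\theta,$$
taking $+$ if $F$ tends to rotate the arm counterclockwise about $p$, and $-$ if clockwise.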
https://www.khanacademy.org/math/algebra/algebra-functions/recognizing-functions-ddp/e/recog-func-2
# Recognize functions from graphs
Determine whether a given graph represents a function.
### Problem
Consider the graph below, which shows a relation between the variables x and y.
Does this graph represent a function?
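The graph itself did not survive extraction, but the underlying test is easy to state: a relation is a function exactly when no input x is paired with two different outputs (the vertical line test). A minimal Python sketch on a finite relation (the function name and sample data are illustrative, not from the exercise):

```python
def is_function(pairs):
    """Vertical line test on a finite relation given as (x, y) pairs."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False       # some vertical line hits the graph twice
        seen[x] = y
    return True

print(is_function([(0, 1), (1, 2), (2, 2)]))  # True: each x has one y
print(is_function([(0, 1), (0, 2)]))          # False: x = 0 has two y values
```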
https://community.wolfram.com/groups/-/m/t/1293206
# [✓] Write a matrix multiplication with indefinite limits?
Posted 1 year ago | 1129 Views | 4 Replies | 1 Total Like
Hello, I need to find an answer to this problem. Let G(t) be an n×n matrix. I need to calculate G(t-1)×G(t-2)×...×G(2)×G(1), where × is the usual matrix multiplication. I can't use the function Product[G(i), {i, t-1, 1}] because it uses the usual multiplication of real numbers. Any idea of how I can solve this?
Posted 1 year ago
The matrix multiplication is Dot in Mathematica. If you need your product for display only, you can inactivate it: Inactive[Dot][g[t - 1], g[t - 2], \[Ellipsis], g[2], g[1]]. If you need it for actual calculation, you can use the infix form of Dot: g[5].g[4].g[3].g[2].g[1], or generate the terms with Table: Dot @@ Table[g[k], {k, 5, 1, -1}]
Posted 1 year ago
I'm sorry, I guess I've not been clear. I need to calculate (not only display) G(t-1)×G(t-2)×...×G(2)×G(1) without setting any value for t. For instance, the product x(x-1)...(x-t) is equal to Gamma(x+1)/Gamma(x-t). I want to find a closed expression that would depend on t.
Hmmm. What do you mean by "closed expression"? Smile, make one of your own. Say your product is called U[t]; then, following Gianluca's proposal, you can write U[t_?NumericQ] := Dot @@ Table[g[j], {j, 1, t - 1}]. You can use it everywhere: In[7]:= U[t] gives Out[7]= U[t]. Whenever you need it more explicit, give it an integer t: In[6]:= U[4] gives Out[6]= g[1].g[2].g[3]
Let me try to explain with an example: In[249]:= Product[x - i, {i, 0, t - 1}] gives Out[249]= (1 - t + x) Pochhammer[2 - t + x, -1 + t]. What I need is the expression in Out[249]; this is what I mean by "closed expression". Note that the bound I've used would be inappropriate for a Table or a loop, but it is fine for Product. I'm looking for a function that does the same thing as Product but, instead of using the usual multiplication of real numbers, uses the usual matrix product.
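For a concrete integer t, the same construction is easy to mirror outside Mathematica; a symbolic closed form for indefinite t is exactly what the thread leaves open. A minimal Python/NumPy sketch of the numeric case (the stand-in matrix G is illustrative only):

```python
from functools import reduce
import numpy as np

def G(i):
    # stand-in for the poster's G(t): any n x n matrix depending on i
    return np.array([[1.0, float(i)], [0.0, 1.0]])

def U(t):
    """G(t-1) . G(t-2) . ... . G(2) . G(1), the analogue of Dot @@ Table[...]."""
    return reduce(np.matmul, (G(i) for i in range(t - 1, 0, -1)))

print(U(5))  # == G(4) @ G(3) @ G(2) @ G(1)
```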
http://www.impan.pl/cgi-bin/dict?reflect
## reflect
Strictly speaking, we should write something like $a(l,m,n)$ to reflect the dependence; we shall rely upon context instead.
If $s_0$ lies below $R_{-2}$, then we can reflect about the real axis and appeal to the case just considered.
https://mathoverflow.net/questions/225172/how-large-can-the-smallest-generating-set-of-a-group-g-of-order-n-be/225180
# How large can the smallest generating set of a group $G$ of order $n$ be?
Let $n$ be a natural number. For every group $G$ of order $n$, denote by
$d(G)$: the number of elements of a smallest generating set of $G$.
How large is the maximum possible value of $d(G)$, depending on $n$?
If $n$ is a cyclic number, we have $d(G)=1$ for every group of order $n$. For $n=2p$, $p$ an odd prime, there are two groups: the cyclic group and the dihedral group with $2$ generators, so in this case the maximum value is $2$.
But I wonder if the maximal value of $d(G)$ can be determined in general, assuming the factorization of $n$ is known. Is the value known for $n=2048$, for example?
• For $n=2048$ the maximum value of $d(G)$ is 11, obtained by $(\mathbb{Z}/2\mathbb{Z})^{11}$. – Richard Stanley Dec 3 '15 at 13:30
• For p-groups, the Burnside Basis Theorem tells you exactly how many generators you need (and the elementary abelian case is indeed the worst case). – Noah Snyder Dec 3 '15 at 13:39
• For a group of order $p^n$, simply choose elements $a_1,a_2,\dots$ such that $a_k$ does not lie in the subgroup $G_{k-1}$ generated by $a_1,\dots,a_{k-1}$. Then $|G_0|=1$ and $|G_k|\geq p|G_{k-1}|$, hence this process stops in at most $n$ steps. – Fedor Petrov Dec 3 '15 at 13:44
• I am curious why there are 3 votes to close this? – Benjamin Steinberg Dec 4 '15 at 1:24
• @BenjaminSteinberg: It's maybe also because, while determining the maximum value of $d(G)$ from the factorization of the order $n$ is a delicate and interesting question, the choice of the particularly bad example $n=2^{11}$ shows a certain lack of understanding. (I did not vote to close.) – Frieder Ladisch Dec 4 '15 at 11:44
By a theorem of Guralnick and Lucchini (which does require CFSG), if each Sylow subgroup of $G$ (ranging over all primes) can be generated by $r$ or fewer elements, then $G$ can be generated by $r+1$ or fewer elements. As noted in comments, if $G$ has a Sylow $p$-subgroup $P$ of order $p^{a}$, then $P$ can be generated by $a$ or fewer elements (and $a$ generators are needed if and only if $P$ is elementary Abelian). Hence if $|G|$ has prime factorization $p_{1}^{a_{1}}p_{2}^{a_{2}} \ldots p_{r}^{a_{r}}$ with the $p_{i}$ distinct primes, and the $a_{i}$ positive integers, then $G$ can be generated by $1 + {\rm max}(a_{i})$ or fewer elements.
(The result attributed to Guralnick and Lucchini did not appear in a joint paper, but was proved independently at around the same time. References:
R. Guralnick, "A bound for the number of generators of a finite group", Arch. Math. 53 (1989), 521-523.
A. Lucchini, "A bound on the number of generators of a finite group", Arch. Math. 53 (1989), 313-317.)
• Interesting. I have a related vague question: Assume $G$ embeds in $S_n$. Your answer gives a bound of $n$ on the number of generators. For abelian $G$ this is tight (up to a multiplicative constant, maybe). Is there some family of $G$'s for which we have a better bound? E.g., simple subgroups of $S_n$? – Ofir Gorodetsky Dec 3 '15 at 14:19
• Simple groups are all $2$-generated. – Derek Holt Dec 3 '15 at 14:21
• @FriederLadisch : The result was proved independently by Guralnick and Lucchini at around the same time, it was not a joint paper - I had forgotten that myself! – Geoff Robinson Dec 4 '15 at 0:20
• @GeoffRobinson: Thank you very much! – Frieder Ladisch Dec 4 '15 at 11:33
The general answer (as a function just of $n$, rather than of its factorization into primes) is $\log_2 n$. It is elementary to prove that this number suffices. Just choose $1 \ne g_1,g_2,g_3,\ldots \in G$ with $g_{i+1} \not\in G_{i} := \langle g_1,\ldots,g_i \rangle$, until $G_k=G$. Since each $G_i <G_{i+1}$ for $i<k$, we have $|G_{i+1}/G_i| \ge 2$, so $|G| = |G_k| \ge 2^k$.
But since an elementary abelian $2$-group requires that number of generators, this bound is best possible.
• You could add: "... and unlike Geoff's answer, this doesn't require CFSG!" – Stefan Kohl Dec 3 '15 at 14:48
• @Peter Classification of finite simple groups. – Igor Rivin Dec 3 '15 at 14:52
• $\log_2\,n$ is an upper bound, but sometimes one can do better. For instance, if $n$ is a product of distinct primes $p_i$, and no $p_i|(p_j-1)$, then every group of order $n$ is cyclic. – Richard Stanley Dec 3 '15 at 16:12
• With all due respect, I do not agree that $\log_2 n$ is "the general answer", since in the question it is assumed that the factorization of $n$ is known, and for example if $n$ is a big prime, then $\log_2 n$ is pretty far off the right answer. $\log_2 n$ is only the answer for $n$ a power of $2$, and also $2$-powers yield the biggest values. – Frieder Ladisch Dec 3 '15 at 19:12
• I would formulate it in the following way: for groups with at most $n$ elements, the maximum is $[\log_2 n]$, while for groups with exactly $n$ elements it is a delicate question. What may be said for sure is that the answer is either $m$ or $m+1$, where $m$ is the maximal exponent of the primes in the factorization of $n$. – Fedor Petrov Dec 4 '15 at 10:39
For natural $n$, let $d(n)$ be the maximum of $d(G)$ as $G$ runs through all groups of order $n$, and let $\nu(n)$ denote the maximal exponent occurring in the canonical prime factorization of $n$. It follows from answers and comments elsewhere in this thread that $$d(n)= \begin{cases} 1+\nu(n) & \text{for } n\in S \\ \nu(n) & \text{otherwise} \end{cases}$$ where $S\subseteq\mathbb{N}$ is a set that we would like to characterize as precisely as possible.
I have now added an entry A332766 in the OEIS listing the members of $S = \{ 6, 10, 14, 18, 21, 22, \ldots\}$. Does anyone have any nontrivial contribution to the characterization of $S$?
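A quick sketch of the two candidate values in Python (SymPy assumed available); it only computes $\nu(n)$, since deciding which of the two values $d(n)$ takes, i.e. membership in $S$, is exactly the open part:

```python
from sympy import factorint  # assumption: SymPy is installed

def nu(n):
    """Largest exponent in the prime factorization of n."""
    return max(factorint(n).values())

# by the bounds discussed above, d(n) is nu(n) or nu(n) + 1; which one
# holds is precisely the question of whether n lies in S (OEIS A332766)
for n in [6, 10, 14, 18, 21, 22, 2048]:
    m = nu(n)
    print(f"n = {n}: d(n) = {m} or {m + 1}")
```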
http://fromdata.org/2013/10/27/the-continuous-breeders-equation/
# The Continuous Breeder's Equation
The Breeder's Equation has always fascinated me. I worked a bit with this equation in graduate school, and now that I'm free to work with whatever interests me at the moment, I thought I would bring it back up. The equation is as follows:
$R=h^{2}S$
This equation tells us the phenotypic response (R) of a population in one generation under some selective force (S), which depends also on the heritability (h) of the trait. A naive example: consider a trait that is fully heritable (h = 1); then the response in a generation is exactly equal to the selection differential. Unfortunately, most traits have heritabilities that fall somewhere between 0 and 1, and the heritability is the hard part to figure out. But all we have to do is impose intense selection, wait a generation, and measure the response relative to the selection differential (h² = R/S), and we have our heritability. This plays out well with controlled populations that breed discretely (e.g., agricultural species like corn, soybeans, and cows).

Like it or not, we have been performing genetic modification of species ever since mankind changed from a hunter/gatherer society to an agriculture-based society thousands of years ago. A farmer would always keep the best crops and use them for reseeding the next year. Doing this, she imposes a selective force on the population toward a desired characteristic.

What about other populations that don't breed/reproduce so discretely? You might suggest that even wild, uncontrolled populations tend to breed discretely, as many wild animal populations have a yearly breeding season or two. But let's widen our scope. Most of the life on this planet consists of single-celled organisms: bacteria, archaea, single-celled eukaryotes, and so on. These populations definitely do NOT breed discretely. If you consider a whole population of bacteria, at any moment in time there are many cells in various stages of reproduction (assuming a growing population). The discrete equation can NOT be applied to such a population.
In mathematics there are a few ways to convert discrete equations to continuous. But we have to unravel the breeder's equation first. Let's consider it as follows.
$(\mu_{o}-\mu)=h^{2}(\mu_{p}-\mu)$
Here, the response, R, equals a difference in population means: $\mu_{o}$ is the mean phenotypic trait of the offspring generation, $\mu_{p}$ is the mean phenotypic trait of the parent generation (the one we are selecting for), and $\mu$ is the mean of the population as a whole. Again, a quick sanity check (h = 1) gives us $\mu_{o}=\mu_{p}$, a purely heritable trait. Now I want to write the equation to include time. We assume that $g$ is the 'generation time' of the population.
$(\mu(t+g)-\mu(t))=h^{2}(\mu_{p}-\mu(t))$
or
$\frac{(\mu(t+g)-\mu(t))}{g}=\frac{h^{2}}{g}(\mu_{p}-\mu(t))$
I think you can see how this is going to work. Now we take the limit as $g\rightarrow 0$, treating the rate $h^{2}/g$ as a fixed constant, and we arrive at a continuous derivative.
$\frac{d\mu}{dt}=\mu^{'}(t)=\frac{h^{2}}{g}(\mu_{p}-\mu(t))$
This non-homogeneous first-order differential equation is solvable with the initial condition $\mu(0)=\mu_{0}$. I'll spare you the fun details, which you can find in any differential equations book, and skip to the answer.
$\mu(t)=\mu_{p}\left[ 1+ \left( \frac{\mu_{0}}{\mu_{p}} -1 \right) e^{-\frac{h^{2}}{g}t} \right]$
This is a very interesting equation. Let's look at a graph of the situation where the heritability (h) is 0.5, the generation time (g) is 1 unit, the starting phenotypic mean ($\mu_{0}$) is 1, and we are selecting for a phenotype of 2 ($\mu_{p}$).
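Since the original plot did not survive extraction, here is a short Python sketch that reproduces the described curve from the closed-form solution (matplotlib assumed available; the 15-unit time window is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

h2, g = 0.5**2, 1.0        # heritability h = 0.5, generation time g = 1
mu0, mup = 1.0, 2.0        # starting mean and selected phenotype

t = np.linspace(0, 15, 300)
mu = mup * (1 + (mu0 / mup - 1) * np.exp(-(h2 / g) * t))

plt.plot(t, mu)
plt.xlabel("time")
plt.ylabel("mean phenotype")
plt.title("Continuous breeder's equation")
plt.show()
```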
The graph looks exactly as we would expect the population to behave: over time the population approaches the phenotypic value we select for. I hope to find some actual data out there soon that approximates this curve. But I'll save that for later, as I don't feel like combing through evolutionary microbiology articles right now.
http://mathhelpforum.com/algebra/196126-graphs-functions.html
Math Help - Graphs & Functions
1. Graphs & Functions
Hi Guys
I missed a week of school due to going on a holiday, and I have a maths test this Wednesday, which the teacher is not giving me extra time to study for. He gave two practice questions on what we have been doing, but not a worked example. Could you please answer these questions for me, as I don't understand them? Also, I'm in Year 11 Maths Methods. The questions are:
1. Find the standard form of the equation of the circle for which the endpoints of a diameter are (-2,3) and (4,-10)
2. Write the slope-intercept forms of the equations of the lines through the point (-8, 3) and
(a) parallel
(b) perpendicular
to the line 2x + 3y = 5
Worked solutions to these questions would be truly appreciated. Please answer as quickly as possible, because I need to move on to questions 3 & 4, which are more difficult than these and which I don't yet understand. Please help!
3. Re: Graphs & Functions
highvoltage, you are a legend. Thanks for taking your own time to help with this; I'm truly grateful.
4. Re: Graphs & Functions
Originally Posted by iFuuZe
1. Find the standard form of the equation of the circle for which the endpoints of a diameter are (-2, 3) and (4, -10)
2. Write the slope-intercept forms of the equations of the lines through the point (-8, 3) and (a) parallel (b) perpendicular to the line 2x + 3y = 5
1#
$(x-x_C)^2 + (y-y_C)^2 = r^2$
where:
$x_C=\frac{x_A+x_B}{2} ~\text{and}~y_C=\frac{y_A+y_B}{2}$
$r=\frac{1}{2} \cdot \sqrt{(x_B-x_A)^2+(y_B-y_A)^2}$
2#
Let:
$p : y=m_1 x+n_1$
$q : y=m_2 x+n_2$
then:
$\begin{cases}m_1=m_2, & \text{if } p \parallel q \\m_1 \cdot m_2=-1, & \text{if } p \perp q\end{cases}$
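Plugging the posted numbers into these formulas, as a quick check (a small Python sketch; none of this is from the original thread):

```python
from math import sqrt

# 1# circle with diameter endpoints A(-2, 3) and B(4, -10)
xA, yA, xB, yB = -2, 3, 4, -10
xC, yC = (xA + xB) / 2, (yA + yB) / 2       # center = midpoint = (1, -3.5)
r = sqrt((xB - xA)**2 + (yB - yA)**2) / 2   # r = sqrt(205)/2
print(f"center ({xC}, {yC}), r^2 = {r**2}") # (x - 1)^2 + (y + 3.5)^2 = 51.25

# 2# lines through (-8, 3) relative to 2x + 3y = 5, i.e. y = -2/3 x + 5/3
x0, y0, m = -8, 3, -2/3
b_par = y0 - m * x0                  # parallel:      y = -2/3 x - 7/3
b_perp = y0 - (-1 / m) * x0          # perpendicular: y = 3/2 x + 15
print(f"parallel:      y = {m:.4f} x + ({b_par:.4f})")
print(f"perpendicular: y = {-1/m:.4f} x + ({b_perp:.4f})")
```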
5. Re: Graphs & Functions
I'm a private maths tutor. If you live in Melbourne and want some tutoring, send me a PM...
http://nbviewer.jupyter.org/github/gpeyre/numerical-tours/blob/master/matlab/graphics_7_shape_shading.ipynb
This tour explores the resolution of the shape from shading inverse problem using Fast Marching.
In [2]:
addpath('toolbox_signal')
## Forward Image Formation
We consider here a simplified imaging system where the camera performs an orthogonal projection and the lighting is orthogonal to the camera focal plane and is at infinity. We also assume that the surface is perfectly diffuse (Lambertian).
Shape From Shading, Emmanuel Prados and Olivier Faugeras, Handbook of Mathematical Models in Computer Vision, Springer, pages 375-388, 2006.
The shape used in this tour is taken from:
Shape from Shading: A Survey, Ruo Zhang, Ping-Sing Tsai, James Edwin Cryer, and Mubarak Shah, IEEE Trans. on PAMI, Vol. 21, No. 8, 1999.
Load a surface as a 2D height field, i.e. it is defined using an image $f(x,y)$.
In [3]:
name = 'mozart';
% restored (assumed): the image-loading call was dropped in extraction;
% load_image is the numerical-tours toolbox helper
f = load_image(name);
n = size(f,1);
Height $h>0$ of the surface.
In [4]:
h = n*.4;
Finally, rescale the surface $(x,y,f(x,y)) \in \RR^3$.
In [5]:
f = rescale(f,0,h);
Display the image.
In [6]:
clf;
imageplot(f);
Display as a 3-D surface.
In [7]:
clf;
surf(f);
colormap(gray(256));
axis equal;
view(110,45);
axis('off');
camlight;
Compute the normal to the surface $(x,y,f(x,y))$. The tangent vectors are $(1,0,\pd{f}{x}(x,y))$ and $(0,1,\pd{f}{y}(x,y))$ and the normal is thus $$N(x,y) = \frac{N_0(x,y)}{\norm{N_0(x,y)}} \in \RR^3 \qwhereq N_0(x,y) = \pa{\pd{f}{x}(x,y), \pd{f}{y}(x,y), 1}.$$
Compute $N_0$, the un-normalized normal, using centered finite differences.
In [8]:
options.order = 2;
% restored (assumed): the line defining N0 was dropped in extraction; it
% follows the formula above, with grad() the toolbox's centered
% finite-difference gradient
N0 = cat(3, grad(f,options), ones(n));
Compute the normalized normal $N$.
In [9]:
s = sqrt( sum(N0.^2,3) );
N = N0 ./ repmat( s, [1 1 3] );
Display the normal map as a color image.
In [10]:
clf;
imageplot(N);
We compute the shaded image obtained by an infinite light coming from a given direction $d \in \RR^3$.
In [11]:
d = [0 0 1];
We use a Lambertian reflectance model to determine the luminance $$L(x,y) = (\dotp{N(x,y)}{ d })_+ \qwhereq (\al)_+ = \max(\al,0).$$
In [12]:
L = max(0, sum( N .* repmat(reshape(d,[1 1 3]), [n n 1]),3 ) );
Display the lit surface.
In [13]:
vmin = .3;
clf;
imageplot(max(L,vmin));
Exercise 1
Display the surface with several light directions $d \in \RR^3$.
In [14]:
exo1()
In [15]:
%% Insert your code here.
For a vertical lighting direction $d=(0,0,1)$, the forward imaging operator reads $$L(x,y) = \frac{1}{ \sqrt{ \norm{ \nabla f(x,y) }^2+1 } }.$$
Compute the luminance map for $d=(0,0,1)$.
In [16]:
d = [0 0 1];
L = sum( N .* repmat(reshape(d,[1 1 3]), [n n 1]),3 );
The height field $f$ can thus, in theory, be recovered by solving the following Eikonal equation: $$\norm{ \nabla f(x,y) } = W(x,y) \qwhereq W(x,y) = \sqrt{ 1/L(x,y)^2 - 1 }$$ with the additional boundary condition $$\forall i \in I, \quad f( p_i) = f_i$$ for a set of well chosen grid points $p_i \in \RR^2$.
The issue is that this equation is ill-posed (it has several solutions) due to the singular points $(q_j)_{j \in J}$ where $L(q_j)$ is close to 1, so that $W(q_j)$ is close to 0. A way to regularize the inversion is thus to choose the boundary condition points $p_i$ to be the singular points $q_j$.
Set a tolerance $\epsilon>0$.
In [17]:
epsilon = 1e-9;
We show in white the locations of the singular points $q_j$, which satisfy $L(q_j)>1-\epsilon$.
In [18]:
clf;
imageplot(L>1-epsilon);
Compute the Eikonal speed $W$ (right hand side of the equation).
In [19]:
W = sqrt(1./L.^2-1);
To avoid too much ill-posedness, we threshold it.
In [20]:
W = max(W,epsilon);
Display the speed.
In [21]:
clf;
imageplot(min(W,3));
We use here a single point $p$. Select the tip of the nose as base point $p$.
In [22]:
p = [140;120];
Solve the Eikonal equation, assuming $f(p)=0$.
In [23]:
[f1,S] = perform_fast_marching(1./W, p);
Rescale the height.
In [24]:
f1 = -f1*n;
Display as a 3D surface.
In [25]:
clf;
hold on;
surf(f1);
h = plot3(p(2), p(1), f1(p(1),p(2)), 'r.');
set(h, 'MarkerSize', 30);
colormap(gray(256));
axis('equal');
view(110,45);
axis('off');
camlight;
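For readers without the MATLAB toolboxes, here is a rough, self-contained Python analogue of the whole pipeline on a synthetic surface. It assumes scikit-fmm is installed; its travel_time solves the same Eikonal equation that perform_fast_marching solves above:

```python
import numpy as np
import skfmm  # assumption: scikit-fmm (pip install scikit-fmm)

# synthetic height field standing in for the tour's 'mozart' image
n = 128
h = 1.0 / (n - 1)
y, x = np.mgrid[0:n, 0:n] * h
f = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.05)

# forward model: vertical lighting, Lambertian, L = 1/sqrt(|grad f|^2 + 1)
fy, fx = np.gradient(f, h)
L = 1.0 / np.sqrt(fx**2 + fy**2 + 1.0)

# Eikonal right-hand side W, thresholded as in the tour
epsilon = 1e-9
W = np.maximum(np.sqrt(1.0 / L**2 - 1.0), epsilon)

# fast marching from the singular (brightest) point: |grad T| = W, so speed = 1/W
phi = np.ones((n, n))
phi[n // 2, n // 2] = -1.0       # seed at the bump tip, where f is maximal
f_rec = skfmm.travel_time(phi, 1.0 / W, dx=h)
```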
Exercise 2
Try to reconstruct the image starting from other base points. What do you observe?
In [26]:
exo2()
In [27]:
%% Insert your code here.
Exercise 3
Try to improve the quality of the reconstruction by selecting several points, and imposing their height.
In [28]:
exo3()
In [29]:
%% Insert your code here.
https://www.pediatricnursing.org/article/S0882-5963(14)00306-6/fulltext
Research Article | Volume 30, Issue 1, Pages 25-35, January 2015
# Children With Chronic Conditions: Perspectives on Condition Management
Open Access. Published: November 14, 2014
## Highlights
• Children may better understand condition management within the context of routines.
• Children develop perspectives about their condition and its management.
• Children's reports are consistent with the family management framework.
• Children may not have insight about long-term implications of their condition.
This qualitative study described children's (8–13 years old) perspectives of their chronic health conditions (e.g., asthma, diabetes, cystic fibrosis): how they perceived their condition, its management, and its implications for their future. The study used the family management style framework (FMSF) to examine child perspectives on the joint venture of condition management between the child and family. Children within this age group viewed condition management in ways similar to their parents and have developed their own routines around condition management. Future studies of this phenomenon comparing child and parent perspectives would further our understanding of the influence of family management.
CHILDREN WITH CHRONIC health conditions (CHCs) learn how to manage their conditions through everyday life experiences with their families, peers, health providers, and others in their communities. While most studies using children's perspectives describe disease-specific management issues and tend to be more skill related, non-categorical or non-disease-specific issues are largely overlooked (Wollenhaupt, Rodgers, & Sawin, "Family management of a chronic health condition: Perspectives of adolescents"). Non-categorical studies, because they may be applied to a multitude of conditions, may be of special benefit to future clinical practice, health care policy, and research (Rolland, "Families, illness, & disability: An integrative treatment model"). The purpose of this qualitative descriptive study was to systematically describe the understandings of condition management from the perspectives of school-aged children (8–13 years) with a variety of CHCs.
School-aged children are transitioning from concrete ways of thinking to cognitive thought processes that are more complex and intellectual (Vygotsky, "Imagination and creativity in childhood"). Children's understanding also varies according to everyday experiences. The lives of children with CHCs are filled with daily reminders and potential learning experiences related to their condition (Crisp, Ungerer, & Goodnow, "The impact of experience on children's understanding of illness"; McMenamy & Perrin, "The impact of experience on children's understanding of ADHD"). Although the family remains the main source of information and guidance for the school-aged child, sustained encounters outside the home and family environment provide opportunities for expanded experiences (Coll & Szalacha, "The multiple contexts of middle childhood"). Thus, school-aged children with CHCs begin to learn how to navigate life and their conditions outside the home. Their families are then challenged to expand condition management from the home to include the school and the community as their children engage in these settings and rely more on adults outside the family structure (Emiliani, Bertocchi, Poti, & Palareti, "Process of normalization in families with children affected by hemophilia").
The family management style framework (FMSF) (Figure 1) was developed using symbolic interactionism (Blumer, "Symbolic interactionism: Perspective and method") to describe the process of family management, identifying how families define the condition, manage it, and perceive the consequences of the condition (Knafl, Deatrick, & Havill, "Continued development of the family management style framework"). The FMSF has been used to explore family management of a variety of conditions in a non-categorical or non-disease-specific manner and to identify the domains or categories that are common across disease entities, with findings applicable to a wide range of health conditions (Knafl, Deatrick, & Havill). The major components within the framework, including definition of the situation, management behaviors, and perceived consequences, provide us with the parents' perspectives on non-disease-specific condition management, that is, how they see the child and the condition, the amount of effort it takes to manage the condition and the disruption the condition causes the family, and the way the parents are thinking about the child's future (Knafl, Deatrick, & Havill). The FMSF was developed predominantly from information gathered from the parents of children with CHCs but, as can be seen from the framework, it differentiates family members and the person with the condition. This study adds the perspectives of school-aged children with CHCs within the context of family management and describes how these children understand their condition and incorporate it into their daily lives.
## Design and Methods
This qualitative, descriptive study identified the perspectives of school-aged children with CHCs using directed content analysis. Directed content analyses are based on an a priori framework that guides the creation of the interview guide and analytic codes (Hsieh & Shannon, "Three approaches to qualitative content analysis"). The FMSF dimensions (defining, managing, and perceived consequences of the condition) directed the development of the interview guide as well as the analysis of the interviews. Data were collected through interviews with children who had been diagnosed with a CHC for at least six months and were between 8 and 13 years of age. A six-month lag from diagnosis ensured that the child and family had time to understand the reality of the diagnosis and develop an approach to condition management.
## Setting and Sample
Thirty-two children with a variety of CHCs were recruited from three ambulatory clinics (endocrine, hematology, and pulmonary) in a large pediatric hospital located in the northeastern U.S. Both the hospital and the university with which it is affiliated granted IRB approval for the study prior to any recruitment activities. A purposeful, maximum variation sampling strategy was used to recruit a sample with a wide variety of condition experiences (Patton, "Qualitative research and evaluation methods"). A three-pronged approach to recruitment was used: 1) clinic recruitment via posters in the waiting room and referral from the health care provider; 2) mailings to families meeting the inclusion criteria; and 3) word of mouth. Interested parents contacted the study via phone or return-mail inquiry, were contacted by phone, provided verbal consent, and then provided screening information regarding inclusion criteria and condition characteristics. If the screening criteria were met, an appointment for the home interview was made. At the beginning of the interview, the first author, who was the principal investigator, reviewed the study information with the parent and the child, answered any questions, and obtained informed consent/assent. It was made clear throughout the process that participation was voluntary.
## Data Collection
The first author conducted the interviews between June 2012 and January 2013. Most of the interviews (n = 30) were held at participants' homes, although two families preferred to meet at an alternative setting, one at the local YMCA and the other at the university. While the qualitative interview data were collected from the child with a CHC, the parent completed demographic information and surveys about the child and the family (Table 1). For reporting purposes a primary/recruitment CHC was identified for each child; however, over half of the children in the sample had more than one CHC.
Table 1. Characteristics of study population.

| Characteristic | N (%) or mean (range) |
|---|---|
| **Parent** | 32 |
| Mother informant | 30 (94%) |
| Age in years | 41 (32 to 51) |
| Household income (US dollars) | 11 (30%) less than $30,000/year; 3 (9%) $30,000–$59,000/year; 4 (13%) $60,000–$99,000/year; 13 (41%) over $100,000/year; 1 (3%) not reported |
| Educational level | 24 (75%) graduated from college |
| Race/Ethnicity | 10 (31%) Black; 1 (3%) Hispanic; 21 (66%) White |
| **Child** | 32 |
| Age in years | 10.4 (8 to 13) |
| Male | 18 (56%) |
| Primary diagnosis | Asthma 13 (41%); Diabetes 8 (25%); Cystic fibrosis 4 (13%); Hemophilia 2 (6%); Hereditary spherocytosis 1 (3%); Phenylketonuria 1 (3%); Sickle cell disease 1 (3%); Eosinophilic gastrointestinal disease 1 (3%); Chronic sinusitis 1 (3%) |
| Interview location | Home 30 (94%); Local YMCA 1 (3%); School of nursing 1 (3%) |
The in-depth, semi-structured interviews were conducted using open-ended questions focused on children's descriptions of their families, what it was like to be diagnosed with a CHC, what typical school and weekend days were like, and how they perceived their futures. The interview guide was developed with consideration for the developmental age and abilities of the children participating in the study. Initially, the interview guide was developed from the aims of the study, directly inquiring how the children perceived their condition, managed it, and understood its consequences. In consultation with researchers experienced in conducting interviews with children, the interview guide was revised to a more conversational format. The questions dealt with the child's everyday life, an area where they realized they in fact were the experts (Table 2). The interview guide was piloted with two children, and a few very small changes were made based on the experience of the interviewer and child feedback.
Table 2. Interview guide.
• Tell me a little about yourself.
• Draw a picture of your family; tell me about it.
• Tell me when you found out you had (name of condition).
• Take me through a typical school day.
• Describe a typical weekend day and its difference from weekdays.
• What will it be like when you are older? What will change?
• Tell me your advice for: a child who just found out they have (name of condition); your family; your friends.
All interviews were digitally recorded and transcribed verbatim by a transcription service. Child interviews lasted between 23 and 81 minutes; although some children seemed a little shy at the beginning, none refused to be interviewed and, once they began telling their stories and realized that there were no wrong answers, they seemed much more comfortable.
Data collection stopped when saturation on major themes was reached and no new information emerged from the child interviews (Patton, "Qualitative research and evaluation methods").
). The interviewer wrote field notes shortly after leaving the family homes/interviews to document impressions and reflections so as to improve the accuracy and thoroughness of the descriptions.
## Data Analysis
Analyses of the children's responses were conducted using directed content analysis methods (Elo & Kyngas, "The qualitative content analysis process"; Hsieh & Shannon, "Three approaches to qualitative content analysis"), which allowed for the identification of categories related to the children's perspectives on family management of their CHCs. We attempted to remain close to the children's own words and meanings while using our current knowledge regarding the FMSF to guide or sensitize the inquiry (Hsieh & Shannon). The initial code list was developed using the definitions of domains and categories within the FMSF, revised to reflect the likely perspective of children. Coding of the interviews began with receipt of the first verified transcript. Each subsequent transcript was read and coded for the child's perceptions of his or her condition, management behaviors, and consequences. Codes were then modified and grouped into categories. Constant comparison was used for subsequent interviews, allowing for analysis both of the individual data and across cases (O'Connor, Netting, & Thomas, "Grounded theory: Managing the challenge for those facing Institutional Review Board oversight"). Data collection was complete when saturation of major themes was identified (Patton, "Qualitative research and evaluation methods"). Atlas.ti (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany), a qualitative data management software program, was used to maintain and sort the interviews and related data.
Several strategies to ensure trustworthiness and credibility were used. The lead author conducted a methodical review of each interview and documented the decision process throughout the study, using audit trails. An experienced qualitative researcher (JD) listened to interviews and conducted an audit of the analyses of the data, using the audit trails as a guide. In addition, the researcher participated in a weekly qualitative collective—a group engaged in the study of qualitative methodologies—that provided feedback and confirmation of analysis process throughout the study.
## Results
The 32 children in the study were between 8 and 13 years old (M = 10.4 years). There was a range of family incomes and diversity of race in the sample (Table 1). Although the child was the primary informant and focus of the study, the parent (the mother for 30 of the 32) provided all of the demographic data as well as the condition characteristic information (Table 3). Condition characteristics were described as: 1) onset of the condition (acute or gradual); 2) progression of the condition (relapsing/remitting, progressive, or stable); and 3) stigma the child experienced due to the condition. These characteristics provided a way to describe the diversity of the sample across specific diagnoses (Table 3). The data show that parents described the onset, progression, and stigma independently of the diagnosis. For example, of the eight children with diabetes, five parents described the onset as acute whereas three thought it was gradual; two described the course as progressive, two as constant, and four as relapsing; and five identified the condition as stigmatizing whereas three did not. These data show the diversity of this sample regarding key characteristics both within and across diagnoses, and support the potential of these children to provide data regarding the cross-cutting issues in family management of their chronic health conditions.
Table 3. Diversity across conditions. Adapted from the categorization of chronic illnesses by psychosocial type (Rolland, 1994).

| Condition (n) | Onset: acute | Onset: gradual | Course: progressive | Course: constant | Course: relapsing | Stigma: yes | Stigma: no |
|---|---|---|---|---|---|---|---|
| Asthma (13) | 5 | 8 | 1 | 4 | 8 | 3 | 10 |
| Diabetes (8) | 5 | 3 | 2 | 2 | 4 | 5 | 3 |
| Cystic fibrosis (5) | 2 | 3 | 1 | 3 | 1 | 4 | 1 |
| Hemophilia (2) | 2 | 0 | 0 | 2 | 0 | 1 | 1 |
| Other (4) | 1 | 1 | 0 | 3 | 1 | 2 | 2 |
| Totals (32) | 15 | 15 | 4 | 14 | 14 | 15 | 17 |
| Genetic (10) | 5 | 3 | 1 | 8 | 1 | 6 | 4 |
| Developed (22) | 10 | 12 | 3 | 6 | 13 | 9 | 13 |

The onset columns total 30 rather than 32 because 2 children were diagnosed at birth (no symptoms/no onset).
The results of the directed content analysis presented here are organized according to the FMSF's three dimensions: 1) definition of the situation; 2) management behaviors; and 3) perceived consequences. Within each dimension, categories are identified that explain the perspectives of the children with CHCs who participated in the study. These categories have been named using a phrase from the child interview that best represented the child perspective.
## Definition of the Situation
As the children described what their CHC meant for them, important elements related to having a chronic health condition were identified and defined. Children spoke about how they felt compared to their peers and siblings, what made their day easier or harder, what they worried about in regard to the condition and what gave them feelings of confidence or control over the condition. Quotes from the children were used to identify the themes and are as follows: They want us to be like regular kids, Sometimes I get scared, And then we’re good, It’s pretty easy for us to handle/it’s hard for us cause it’s not normal, and Mom and Dad agree/disagree. The school-aged children in this study readily described what it meant to them to have a CHC.
### They (Parents and Providers) Want us to be Like Regular Kids
Children talked about the things they were able to do that were typical and made themselves feel typical, comparing themselves to their siblings or friends. They also spoke about the ways that the condition limited them and their ability to participate in activities and made them feel different. Children often described that management of their health condition could be hard for them, either as hard for them to learn or hard for them to follow the recommended treatments, or both. They did, however, recognize that their abilities and understanding had changed over time as they had matured and developed.
Children discussed the way they and their friends dealt with the condition and the support the friends provided and how that made them feel typical, especially friends who had the same condition. “…you have to be able to push it aside…. you can’t go ’oh, I can’t go with my friends cause my diabetes is messed up.’ You kind of don’t have to think about it all the time” (11-year-old, type 1 diabetes). For some children the condition was not a big problem, but rather something they recognized made them unique and they were proud of. Other children had more difficulty incorporating the condition into everyday life and stated that it was hard, made them feel very different than those around them, and felt that people did not really understand what it was like.
### Sometimes I Get Scared
Reflecting their awareness of the seriousness of their condition, the children discussed whether their parents worried about them or if they worried about themselves. They were particularly aware their parents worried about them within the context of remembering the reaction at diagnosis. Statements such as “…she wasn’t worried about it at that time cause I wasn’t like…older yet” (9-year-old, type 1 diabetes) showed an awareness of the potentially serious nature of the diagnosis and the likelihood that it would change in the future.
In terms of the children's worries, as one child with asthma stated, "Basically, when I'm swimming, sometimes I get scared and I'm like 'Oh, no, what's going to happen?' I get scared that I won't be able to breathe" (13-year-old, asthma). There were also children at the other end of the spectrum who did not think the condition was very serious. One boy with asthma stated, "Mine's just really weak…. The asthma's weak. I don't even think I need the medication" (11-year-old, asthma).
A few of the children spoke of knowledge they had regarding the condition that made them worry. Some children with diabetes were aware of the potential for amputations and renal failure when they were older, and a child with asthma told of knowing a friend who died because his asthma was not controlled. This information confirmed the seriousness of their condition, but as one young boy stated, “That kind of scares me. That pretty much convinces me to get my blood sugar down all the time” (11-year-old, type 1 diabetes).
### And Then We are Good
Children's understanding of their condition was evident in their discussion of symptoms and symptom management. The children's understanding regarding their diagnosis, symptoms, and treatments varied widely. All knew the name of their condition. Though some children had an intimate understanding of the condition and why treatments were given, others did not know how they got the condition, what medications they were taking, or what the medications did.
Children talked about doing things to decrease symptoms and manage the condition, as well as plans that were in place should something happen. Children with asthma spoke of stopping to rest and catch their breath and of drinking water, and children with diabetes would check their blood sugar if they were not feeling well and prior to strenuous activities. Many children had cell phones that enabled them to keep in contact with their parents regarding condition updates while they were away from home. Having access to parents seemed to increase the children's confidence. One girl stated that access to her parents allowed her to handle her glucose levels when away from home and allowed participation in activities with her friends without direct adult supervision. "I have my own emergency cell phone…. I always have it … call [mother's] cell, I'm like, 'oh, I'm here,' or 'hi, I'm low' or whatever. Then I'll go to sleep… I keep my phone right beside me… Test, tell [mother] my blood sugar, and then we're good" (11-year-old, type 1 diabetes).
### It is Pretty Easy for us to Handle/It is Hard for us Because it is not Normal
The participants in the study spoke of their impression of incorporating management into daily care; what made having the condition easy or hard both within the home and during outside activities. Children identified having the family showing support and understanding, telling how the family let them be in control relative to their treatment regimen when possible, including planning for outings and activities, as things that made having the condition easier. At school, children who described relative ease of management discussed having understanding teachers and nurses, an ability to integrate care into the everyday routine (e.g., keeping an inhaler in their desk, permission to have extra snacks), and a flexible schedule that allowed the student to do what needed to be done and still participate in the important classroom activities. As one child with allergies said, “I don’t have bad allergies…no, just like I’m anaphylactic…so if I touch it, I get a hive. If I eat it, then that’s when I’ll need an Epipen” (8-year-old, cystic fibrosis, allergies).
Alternatively, some children expressed difficulty managing the condition and the way it made them feel different. As another child with cystic fibrosis confided, “I didn’t really want it. It’s not good. I don’t like it and I want to get rid of it” (9-year-old with cystic fibrosis). Children reporting more difficulty carrying out management within the home said it was difficult for them to perform the treatment correctly and there was no one to remind them or help them problem solve. These children had trouble remembering treatments and medications. These kinds of incidents threw off the child's day and made it difficult to get back on track. At school, teachers or staff who didn’t understand their condition and prevented them from getting the treatment they needed made management more difficult. Although this group was a minority, they spoke of the frustration of not being listened to when they believed they needed to do something.
### Mom and Dad Agree/Disagree
The majority of children believed that their parents always agreed on the approach used in condition management. Only one child in the study identified an area where his parents did not agree on a management activity. This disagreement revolved around the child's ability to give his own insulin injections: "…mom just doesn't want me to do it, but I don't know why. She thought that I did it the wrong way … My dad, he does think I can do it, but I think no" (9-year-old, type 1 diabetes).
The remainder of the children described agreement between the parents with regard to management and identified either one parent as their primary point person or shared responsibilities between both parents. Most often the mother was identified as the primary person, although the father was readily identified as being the backup. Other children talked about each parent having discrete activities he or she took responsibility for or the child and parent sharing management responsibilities.
## Management Behaviors
Management of the condition refers to efforts directed toward caring for the condition and incorporating it into everyday life, both for children and their families. This section does not identify tasks associated with management that would be condition-specific, but rather describes the children's perspectives on overall condition management activities and how they make sense of them.
### They Do It for My Health and Stuff
Children were able to discuss why condition management was important and connected to specific strategies for such management. As might be expected given the developmental stage of the participants, the children had only a very basic understanding of why condition management was important, but generally understood it was to keep them healthy. They talked of activities such as checking in to make sure treatments were done, reminders about schedules, and actively getting treatments and medications ready.
The children also reflected on the ways the family accepted and problem solved the diagnosis, and how it helped them frame the condition for themselves. One child talked about how her parents were proactive in learning about the diagnosis and incorporating it as a normal part of their child: "…he [Dad] [looked up Olympians with diabetes] said he looked them up, 'Just to let you know your dreams will never be crushed because of this.' That helped" (12-year-old, type 1 diabetes). For this child, having diabetes did not mean she had to give up other aspirations and goals.
Children often talked about the goals they had playing games or sports, but were less likely to talk about goals in terms of condition management. One child, however, clearly identified his personal goal of remembering to take his pill every morning without being reminded. Children also had treatment preferences based upon their priorities. One child with diabetes talked about changing the type of insulin she was taking so she could have more control and worry less about whether she could eat something. Two children with cystic fibrosis talked about the time it took for treatments and giving priority to a method that did the job but took less time. These children understood the need to keep healthy balanced with their goals and desires in other areas of their lives.
### I Do It, They Do It, and We Do It
Children discussed the way routines and related strategies for management of the condition were incorporated into everyday life. Three ways of doing things were apparent from the children's perspectives. They used “I” statements to explain management activities they did on their own (e.g., I check my blood sugar, I take my treatment, and I do it myself). Children used “they” statements to identify management activities outside of their control. Finally, children used “we” statements, talking about management as a joint venture between them and their parents.
### It is Just Kind of My Schedule
The children also described how they developed their own routines and related strategies for condition management and incorporated them into everyday life. One child with asthma explained her strategy for participating in sports while keeping her asthma under control, assessing how her body felt, and doing things that helped her with endurance and relaxation: “When I run, I only run like two laps. I run out of breath, I walk, then I run again, I run out of breath, then I walk for another couple of laps, then I jog while I breathe really heavy, and after we stretch a little bit and there’s this one stretch called the goalie stretch where you just lay down and you stretch your whole body. That kind of relaxes me” (10-year-old, asthma).
Children in the study often talked about their view of condition management in relation to the school day: what they did before, during, and after school. Furthermore, treatments were tied to school and activity-related events, not clock time (e.g., medications or treatments were done before lunch, during the second recess, or before taking the bus home). This also carried over to after-school activities, where treatments were tied to going to practice, dance or instrument lessons, and bedtime. Children looked at their daily routines as a series of events. Children also spoke of the effort they needed to take care of their condition and how management was incorporated into the school day or disrupted school. For example, they spoke of having to leave class early to go to the nurse's office for treatments, blood glucose testing, or other medications at prescribed times during the day. Over the weekend or on non-school days, the major differences were stated in terms of the school day.
Children also identified routines or schedules for occasional activities such as vacations or trips, and what had to happen or what planning needed to occur to ensure that the proper medications and technology were taken along. They also spoke of what needed to happen in order to sleep over at a friend's house or at the grandparents' home. Some children were also aware of the routines associated with appointments, describing the need to go every 3 months for an HbA1c check or once a year for pulmonary function tests. One student athlete with both diabetes and asthma reported that he often had to stop before, during, and after practices or games to check his blood glucose level or take inhalation treatments, and he also had to have rescue inhalers on the sideline. Parents were frequently at the games, which provided assurance that medical needs could be handled.
### To Tell or Not to Tell
Children spoke of the role that telling others played in the management strategies. Some were very clear that others needed to know in order for them to maintain their health status. This was evident across all conditions when children spoke of participating in activities outside of the home and recognizing the risk of others not knowing in case they needed help. Two children stated that it was a group effort, and one said his friends would actually ask him if he was okay sometimes: “If I’m acting like upset or angry all the time, they’ll just be like ‘Okay, are you alright? Do you need to do your thing or whatever?’ I’m like ‘Yeah, I’ll go test’ and they’re usually right” (11-year-old, type 1 diabetes).
Other children were more private, saying no one really needed to know. One girl said people knowing might hurt your chances of getting a job you really need, and another explained a friend had teased her, so now she does not tell friends.
Children described the process that occurred within the family in order to manage their condition, and they reported varying degrees of personal control. Some of the participants had very little control beyond following the instructions they were given for condition management by the health care provider or the caregiver, or passively watching the caregiver, but all were aware of condition management and the approaches and attitudes of the people around them.
### If They Were Not Hounding Me, I Would Not be This Free
Children described the way their family incorporated condition management into family life and what it meant to them. The children spoke of their view of family life and also of their parents' and their own satisfaction with the management.
Many children spoke of the family's focus outside the realm of condition management, citing activities the family did together. Whether playing golf, watching the Three Stooges, or traveling, children recognized when their family was focusing on family life and when they were too focused on their condition. One child suggested that families should have “check-ins” to recap the week in order to identify what worked well and what may need to change. This may be the child's recognition that occasionally the family focus needs to come back to the condition for a brief period of time in order to evaluate the process.
Two children recognized that the focus was on the condition when parents were doing or assisting with treatments. Complexity was added for the caregivers when they were helping with treatments, and siblings were vying for the parents' attention. Children believed they should be the priority during that time and thought parents should control siblings.
Some children spoke of the attention or focus that was on the condition as a necessary, accepted part of family life. One child recognized the family's adjustment to her condition as putting more responsibility on everyone, but acknowledged that they were able to take on these additional challenges. Another child explained her perspective when she contrasted her family's focus to that of another child she knew: her family hounded her about the things she needed to do, but over time this freed her to do it all herself, whereas her friend had no stability because his parents never helped him and he was left to figure things out by himself. Other children told about the family diet that had changed for everyone, not just the child with the chronic condition, in conjunction with the diagnosis and diet restrictions.
## Consequences
The consequences theme related to how children viewed the future in light of their CHC. This was a difficult area for most children to address in relation to managing their condition. The children did recognize that the things they were currently doing were different from what they had done when they were younger, but had limited insight into what might be expected of them in the future. Some identified more typical life changes that they expected to occur, such as going to college, getting married, or having a family. A few children were very technology oriented and described changes that might occur if scientific advances in condition treatment were made.
### I Might Have a Totally Different Life When I am Older
The future was not something many of this group of children spoke of readily within the context of their CHC. Many children spoke of having more responsibilities or being more responsible in the future, although what those responsibilities would be and what being more responsible meant was largely left unsaid. Coupled with the expectation that responsibilities would increase was the implied understanding that parental responsibilities would decrease. As one child stated when reflecting on the future, “…it’s a little bit harder ‘cause you have all the responsibilities, like your parents don’t help you out with everything like when you’re my age” (11-year-old, type 1 diabetes).
Children spoke vaguely about the implications of their condition for their future and the family's future. Expectations centered on changes in treatment and changes in expectations for self and the family. Some of the school-aged children in the study took “techie” views of the future, imagining time machines that would let them look ahead. One child imagined “tech pads” that would test blood sugar, and another child seemed to be well versed in potential technological developments on the horizon, talking about the “artificial pancreas” and the FDA approval process.
Children expected change in the future, although they did not explain what form the change would take. Children expressed uncertainty concerning the future in terms of medication and treatment requirements, and they imagined a future in which their parents would not be readily assisting with their care and where they would possibly be living on their own. Several children talked about future changes related to the need to be employed, and two children talked of having their own family one day. “I might have a totally different life when I’m older. Maybe I would get a house, maybe I would get a job, and maybe I would get a life” (8-year-old, hemophilia) was the poignant comment of one child.
## Discussion
The results of this study support the applicability of the FMSF as a framework to explore the perspectives of school-aged children with a variety of CHCs. In telling their stories, the children discussed the meaning the condition had for their life, the management efforts required, and the expected consequences of the condition and management needs. Although not all areas were discussed with the same depth and description, the children provided rich descriptions of the meaning and management components, and showed the beginnings of an understanding of the condition's consequences and future considerations.
Definition of the Situation, the first dimension of the FMSF, examines the subjective meaning family members attribute to important elements of their situation (Knafl & Deatrick, 2003). From the adult perspective, child identity and view of condition are foundational to family management, as parents' beliefs about the child's capabilities are tied closely to their understanding of the condition and the associated demands and limitations (Knafl, Deatrick, & Havill, 2012).
Wollenhaupt, Rodgers, and Sawin (2012), in an analysis of adolescents with spina bifida, also noted the importance of identity for adolescents and the potential for this to influence the relationship between the adolescent and the family. This was similar to our findings: the children in our study compared themselves to siblings and classmates and talked of ways they were similar or different. Studies examining families of children with CHCs have found that the connections between families and children can have great impact, with positive family relationships leading to better health outcomes (Cohen, Lumley, Naar-King, Partridge, & Cakan, 2004; DeLambo, Ievers-Landis, Drotar, & Quittner, 2004; Fiese, Wamboldt, & Anbar, 2005) and negative family relationships leading to declines in children's health (Fiese & Everhart, 2006; Lewin et al., 2006). Children are making connections between perceptions of how their family makes meaning from the condition and how they understand their condition. Additional research is needed to understand the relationship between family management, child identity, and these important health outcomes.
Management Behaviors represents the efforts directed toward caring for the condition and adapting family life to condition-related demands, and incorporates family beliefs about the condition in addition to the goals, priorities, and values that guide the approach to management (Knafl, Deatrick, & Havill, 2012). Of special interest, the development of routines for managing the condition is an important aspect of condition management that helps families (Case-Smith, 2004).
Bedell, Cohn, and Dumas (2005) and Cashin, Small, and Solberg (2008) also highlight the importance of having the whole family involved in these routines, and of having routines both at home and for condition-related responsibilities carried out outside of the home. Family rituals, whether infrequent (e.g., birthday celebrations, holidays) or daily (mealtime, games, or reading), promote a positive family environment and health-related quality of life (Santos, Crespo, Silva, & Canavarro, 2012). Families and children with CHCs who recognize the importance of rituals and routines as a way to integrate illness care into family life and to decrease emphasis on the illness itself can positively influence both family and child outcomes (Knafl et al., 2013). For instance, children with CHC in this study viewed their condition management around their daily activities, including before and after school. Therefore, communicating about condition management within the context of family routines may enable parents and children to problem solve their responsibilities and fulfill their roles.
Perceived Consequences examines the balance between condition management and other aspects of family life, and the implications for the child's and family's future (Knafl, Deatrick, & Havill, 2012). When children talked about the condition itself, it was not usually oriented toward the future; rather, it was about changes over time since they had first been diagnosed or from when they were younger. Some spoke about the condition being easier to handle because they were older and understood more about it, whereas others spoke about how the condition may have gotten better or worse over time. Consistent with other studies in which older children demonstrated stronger language skills and higher levels of cognitive functioning (Coyle, Russel, Shields, & Tanaka, 2007), the younger group tended to be more concrete and had relatively less insight into several of the dimensions. Although these findings are typical within developmental expectations, of importance is the degree of insight and perspective of the older group. These findings show the developmental progression and changes in cognitive understanding.
The findings also support the expectation that condition management is a two-way street, with both the child and the family having perspectives and influencing the process. Knowing that family management is a phenomenon that resonates with families and also with children with CHC provides the foundation for exploration of this unique perspective and its relationship to both family and child outcomes. There are certainly other variables, including the contextual influences of the FMSF that are both environmental (i.e., family situations, social determinants of health) and child-specific (development, health condition), that contextualize and influence these dimensions. Findings from this study about intra-family processes, however, are important for practitioners, researchers, and families to consider as we work to prepare children with chronic health conditions to become adolescents assuming more of their health care management on a daily basis.
## Implications for Practice
Michaud, Suris, and Viner (2007) acknowledged, “In clinical interactions with younger children, management decisions are made ‘adult to adult’ by health professionals in consultation with parents, and day-to-day disease management is generally undertaken directly by parents” (p. 8). The findings here support the need for health care professionals to include children at a much younger age, realizing that children with CHC are listening and forming perspectives regardless of their intentional inclusion or exclusion. The American Academy of Pediatrics supports that approach, recommending that children be included in visits as early as age four in order to become comfortable speaking with the health care provider. The academy also recognizes that some children as young as 9 or 10 may have concerns or questions about their health that they want to discuss with the provider alone, although others this age may not be ready for this (Hagan, Shaw, & Duncan, 2008). Including the school-aged child in discussions can help the child better understand and plan condition management when away from the parents and help the family create ways to support the child in this developmental endeavor (Kirk, Beatty, Callery, Milnes, & Pryjmachuk, 2012). One can imagine that a plan to develop the necessary toolbox with the child and family will help support and ease the transition from family-focused management to self-care in adolescence and young adulthood.
Concern is high regarding the transition of pediatric patients to adult care (Schwartz et al., 2013); the goal is to have the transfer done in a timely and safe manner. This is especially true of specialty pediatric practices that see children and families with particular chronic health conditions. The evidence in this study focuses on the issues that concerned school-aged children, especially their self-identity, view of the condition, and management approach. Children seldom mentioned issues about future responsibilities and expectations, ways to resolve conflict concerning condition management, or decision-making within the health care context. Preparation for transition therefore requires building a developmentally appropriate awareness of future management goals. For instance, children were able to accomplish the tasks of care, but were not aware of anticipated changes that may occur with puberty, as they enter middle school, or over the general course of the condition. Helping children anticipate such changes and equipping them with skills can help them manage these changes.
Though school-aged children with CHC are not ready to be the primary decision makers, they are aware of many limitations, implications, and useful strategies for management relative to their condition. If they are not included in goal setting, creating strategies to meet the goals, and evaluating the outcomes reflectively, they may not develop the appropriate skills for decision making as they enter adolescence and young adulthood. Health care providers usually have years, starting at diagnosis, to help children with CHCs and their families focus on issues key to condition management and prepare for the transition to adult care. It would be beneficial to develop the mindset that this preparation should include the child from diagnosis forward.
## Limitations
There are some limitations to this study that must be acknowledged. Although there was considerable diversity in this sample of school-aged children, the participants were all treated at the same large children's hospital, which may have led to some homogeneity in the treatment experience, especially within clinics. Additionally, these children were typical for their age with respect to cognitive development, and children with cognitive impairments might have presented differently. Although the characteristics of the various conditions were diverse, the conditions were predominantly physical. Considering that some of the most prevalent CHCs among children are asthma, obesity, and mental health conditions including ADHD (Perrin, Gnanasekaran, & Delahaye, 2012), only asthma was a primary diagnosis in this sample, and ADHD was a co-morbidity in four children. The diversity across race was also limited, and future studies should include participants with broader cultural experiences. The small sample size did not allow for comparison within and among subgroups marked by age, race, socio-economic status, or other important variables (Table 1).
## Future Research
The current study shows that the FMSF can be used to investigate the perspectives of children with CHCs. Recently, four different patterns of family management have been identified: family-focused, somewhat family-focused, somewhat condition-focused, and condition-focused (Knafl et al., 2013). A mixed methods analysis of the qualitative child data and quantitative data from one of their parents is currently in progress. This analysis is exploring the similarities and differences between the parent and child perspectives of family management based upon the management pattern.
Future research to address the limitations of the current study is needed. The current study sample was limited with respect to the conditions represented and lacked cultural diversity. As qualitative studies are often limited to a small number of participants, the next step needs to be quantitative. Development of a family management measure for children and adolescents would require a larger sample and an opportunity to obtain a more diverse sample. With the development of a child measure to complement the Family Management Measure (FaMM), we would have the ability to identify the strengths that families and children can build on and the weaknesses for which interventions might help to improve outcomes.
This study fills gaps in our science about school-aged children and their understanding of their CHC, how they and their families incorporate the CHC into their daily lives, and family management. The perspectives of children not only add important contextual understanding for the FMSF, but also help us better comprehend the family unit. In addition, the design of this study systematically considered selection of a sample based on the characteristics of the children's conditions and not on their medical diagnoses. These data, therefore, have the potential to be used to formulate a measure that fulfills a mandate set by the United States National Quality Measure Clearinghouse; that is, that we design measures and metrics that are sensitive to health phenomena across populations and that can be used to stratify subgroups in order to examine whether disparities in health exist among a diverse population of patients (National Quality Measure Clearinghouse, 2014). As populations of children with CHC survive with more and more intense and complex CHC, pediatric nurses are called upon to use robust frameworks to identify those issues that not only have significance within specific settings but have the potential to be tested globally within and across potentially vulnerable and at-risk children and families.
## Conflict of Interest
The authors have no conflicts of interest to declare.
## Funding
The first author received financial support from the National Institute of Nursing Research, National Institutes of Health (T32NR007066; T32NR007100; F31NR011524; R01NR011589) and Sigma Theta Tau, Xi Chapter.
## Acknowledgments
The authors acknowledge and thank Phyllis A. Dexter for her editorial assistance with this paper.
## References
• Bedell, G. M., Cohn, E. S., & Dumas, H. M. (2005). Exploring parents' use of strategies to promote social participation of school-age children with acquired brain injuries. The American Journal of Occupational Therapy, 59, 273–284.
• Blumer, H. (1969). Symbolic interactionism: Perspective and method. Englewood Cliffs, NJ: Prentice-Hall.
• Case-Smith, J. (2004). Parenting a child with a chronic medical condition. The American Journal of Occupational Therapy, 58, 551–560.
• Cashin, G. H., Small, S. P., & Solberg, S. M. (2008). The lived experience of fathers who have children with asthma: A phenomenological study. Journal of Pediatric Nursing, 23, 372–385. https://doi.org/10.1016/j.pedn.2007.08.001
• Cohen, D. M., Lumley, M. A., Naar-King, S., Partridge, T., & Cakan, N. (2004). Child behavior problems and family functioning as predictors of adherence and glycemic control in economically disadvantaged children with type 1 diabetes: A prospective study. Journal of Pediatric Psychology, 29, 171–184.
• Coll, C. G., & Szalacha, L. A. (2004). The multiple contexts of middle childhood. Future of Children, 14, 81–97.
• Coyle, K. K., Russel, L. A., Shields, J. P., & Tanaka, B. A. (2007). Summary report: Collecting data from children ages 9–13. Lucile Packard Foundation for Children's Health.
• Crisp, J., Ungerer, J. A., & Goodnow, J. J. (1996). The impact of experience on children's understanding of illness. Journal of Pediatric Psychology, 21, 57–72.
• DeLambo, K. E., Ievers-Landis, C. E., Drotar, D., & Quittner, A. L. (2004). Association of observed family relationship quality and problem-solving skills with treatment adherence in older children and adolescents with cystic fibrosis. Journal of Pediatric Psychology, 29, 343–353.
• Elo, S., & Kyngas, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62, 107–115. https://doi.org/10.1111/j.1365-2648.2007.04569.x
• Emiliani, F., Bertocchi, S., Poti, S., & Palareti, L. (2011). Process of normalization in families with children affected by hemophilia. Qualitative Health Research, 21, 1667–1678. https://doi.org/10.1177/1049732311417456
• Fiese, B. H., & Everhart, R. S. (2006). Medical adherence and childhood chronic illness: Family daily management skills and emotional climate as emerging contributors. Current Opinion in Pediatrics, 18, 551–557. https://doi.org/10.1097/01.mop.0000245357.68207.9b
• Fiese, B. H., Wamboldt, F. S., & Anbar, R. D. (2005). Family asthma management routines: Connections to medical adherence and quality of life. The Journal of Pediatrics, 146, 171–176. https://doi.org/10.1016/j.jpeds.2004.08.083
• Hagan, J. F., Shaw, J. S., & Duncan, P. M. (2008). Bright futures: Guidelines for health supervision of infants, children, and adolescents (3rd ed.). Elk Grove Village, IL: American Academy of Pediatrics.
• Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277–1288. https://doi.org/10.1177/1049732305276687
• Kirk, S., Beatty, S., Callery, P., Milnes, L., & Pryjmachuk, S. (2012). Perceptions of effective self-care support for children and young people with long-term conditions. Journal of Clinical Nursing, 21, 1974–1987. https://doi.org/10.1111/j.1365-2702.2011.04027.x
• Knafl, K. A., & Deatrick, J. A. (2003). Further refinement of the family management style framework. Journal of Family Nursing, 9, 232–256.
• Knafl, K. A., Deatrick, J. A., & Havill, N. L. (2012). Continued development of the family management style framework. Journal of Family Nursing, 18, 11–34. https://doi.org/10.1177/1074840711427294
• Knafl, K. A., Deatrick, J. A., Knafl, G. J., Gallo, A. M., Grey, M., & Dixon, J. (2013). Patterns of family management of childhood chronic conditions and their relationship to child and family functioning. Journal of Pediatric Nursing, 28, 523–535. https://doi.org/10.1016/j.pedn.2013.03.006
• Lewin, A. B., Heidgerken, A. D., Geffken, G. R., Williams, L. B., Storch, E. A., Gelfand, K. M., et al. (2006). The relation between family factors and metabolic control: The role of diabetes adherence. Journal of Pediatric Psychology, 31, 174–183. https://doi.org/10.1093/jpepsy/jsj004
• McMenamy, J. M., & Perrin, E. C. (2008). The impact of experience on children's understanding of ADHD. Journal of Developmental and Behavioral Pediatrics, 29, 483–492.
• Michaud, P.-A., Suris, J. C., & Viner, R. (2007). The adolescent with a chronic condition: Epidemiology, developmental issues and health care provision. WHO Discussion Papers on Adolescence. World Health Organization.
• National Quality Measure Clearinghouse. (2014). Desirable attributes of a quality measure. Retrieved October 1, 2014.
• O'Connor, M. K., Netting, F. E., & Thomas, M. L. (2008). Grounded theory: Managing the challenge for those facing Institutional Review Board oversight. Qualitative Inquiry, 14, 28–45. https://doi.org/10.1177/1077800407308907
• Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
• Perrin, J. M., Gnanasekaran, S., & Delahaye, J. (2012). Psychological aspects of chronic health conditions. Pediatrics in Review, 33, 99–109. https://doi.org/10.1542/pir.33-3-99
• Rolland, J. S. (1994). Families, illness, & disability: An integrative treatment model. New York: Basic Books.
• Santos, S., Crespo, C., Silva, N., & Canavarro, M. C. (2012). Quality of life and adjustment in youths with asthma: The contributions of family rituals and the family environment. Family Process, 51, 557–569. https://doi.org/10.1111/j.1545-5300.2012.01416.x
• Schwartz, L. A., Brumley, L. D., Tuchman, L. K., Barakat, L. P., Hobbie, W. L., Ginsberg, J. P., et al. (2013). Stakeholder validation of a model of readiness for transition to adult care. JAMA Pediatrics, 167, 939–946. https://doi.org/10.1001/jamapediatrics.2013.2223
• Vygotsky, L. S. (2004). Imagination and creativity in childhood. Journal of Russian & East European Psychology, 42, 7–97. (Original work published 1967)
• Wollenhaupt, J., Rodgers, B., & Sawin, K. J. (2012). Family management of a chronic health condition: Perspectives of adolescents. Journal of Family Nursing, 18, 65–90. https://doi.org/10.1177/1074840711427545
https://www.csdn.net/tags/MtjaMg4sNjM0MzItYmxvZwO0O0OO0O0O.html
During my time as a UX/UI Designer at Perlego (the world's online textbook library), I had already created an icon library that was user-driven, brand-focused and utilised a set of guidelines that allow consistent design for each new icon and iteration to come (you can view the article by clicking here). The platform has hundreds of thousands of books, and to keep these books organised they need to be split into multiple relevant categories that are easily accessible by our users. These categories are called "Topics". As in a physical library, Mathematics books are grouped together, as are History or Geography, for example.

My Role

At the time prior to the brief, the company had 16 different Topics, which meant 16 different icons, each meticulously colour coded to allow easier grouping, representation and identifiability. With icon design guidelines in place, the next logical step was to expand the icon library to include new Topic icon designs, and so I was given my brief… but with a minor plot twist!

Due to a change in the structure of the library, we were reassigning existing books to new topics. The task therefore was to redesign and replace the Topic icons for web and mobile. Additionally, this was the prime opportunity to rethink our colours for categorisation, so part of the task included taking this into consideration.

The Process

Initial research

Prior to my brief, we utilised a library from thenounproject.com for the Topic icons. Each icon represents a Topic: a briefcase for Business, a computer chip for Computer Science, an abacus for Mathematics, and so on.

I wanted to understand what users think of when they think of a particular word. I needed to ensure the icon designs would be easily recognisable by any user at an immediate glance, without additional supporting text: a user should be able to identify the Topic by looking at the icon alone. So my starting point was to send out a quick survey. The survey simply listed all 24 Topics and asked, for each Topic name, what image is the first thing to come to mind?

[Image: example of the survey sent out to identify initial immediate responses.]

I received 29 responses, and they started providing clarity on the direction I needed to move in initially. A couple of examples of the results: for Architecture, users related closely to buildings and structures; Business had a strong correlation with briefcases and money.

To ensure I had the strongest starting point, I took all the responses and put them in a spreadsheet. I grouped the responses by colour coding them according to how strongly they correlated with the other responses, and then applied a formula to the results to show what percentage of users fell into either a strong or a weak correlation. Dark blue: same response word for word. Light blue: similar word or close relation. Green: related term. Pink: loosely related term. Purple: weak and individualistic response. Yellow: completely unrelated to any other response.

Sketching ideas

At this point, I had a good idea and a strong starting point to ideate some icons. I decided that for each Topic I would sketch out some variations, as I came to realise that while users will have a visual idea when presented with a word, they may not make the same connection when shown an image.

One thing I realised throughout the sketching process is that some of my ideas were too complicated, and I had to remember I only had a 32 x 32px grid to work with. This made me rethink my approach and simplify the icon designs even further.

[Image: examples of sketches for each topic that could potentially be used as an icon.]

I further tested some of these icon sketches by approaching individuals on a one-to-one basis and creating an open dialogue about their thoughts on the icons, which ones they gravitated to per Topic, and why. I used this information to guide a few other sketches and refine the ones that seemed to work well visually.

Moving from sketches to digital

My next step was to take the ideas that were received positively and start designing them in Sketch, following the icon guidelines. During this process, I still came across the challenge that some of my designs encapsulated too much detail or were too complex for a 32px grid, and I had to either find a way to strip back the detail or come up with a whole new idea altogether. An example of this was the Topic "Architecture". As it is a Topic quite open to interpretation and visually subjective, I had to go through a couple of iterations until I found one that was not only easily recognised but also worked well with or without the word "Architecture" being present.

[Image: Architecture — six iterations, from the first rough version to the final icon.]

Once I had gone through this process a number of times for different icons, I had 24 pixel-perfect icons, tested and tried. Now I needed to find a colour system that would help categorise the books and give these black-and-white icons a pop of colour.

[Image: the completed Topic icon set.]

Applying colour

We predominantly utilised a white background in the mobile app and on the web, and aimed to retain a minimal look and feel. The previous icon colours worked well, but this was the opportunity to really make these icons stand out.

Because I had 24 icons, I had to find 24 individual colours. I decided to share this part of the project with a colleague of mine, as colours can be quite subjective and it was beneficial to involve someone with a strong UI background who could help guide the visual direction. We had a whole spectrum of colours to choose from, so with some advice from the design team we decided to create a level above the Topics named "Subjects". We split the subjects into 5 areas, each holding its own shade of colour from the spectrum.

The final product

Picking colours for each icon was an iterative process in itself, in which my colleague and I constantly gathered visual feedback not only from my design team but also from other individuals both internally and externally. This process made the end result better than I had initially planned.

By the end, I had created 24 icons which were completely feedback-driven throughout the whole process. They not only followed the icon guidelines but visually represented the Topics they were meant to represent, all whilst having an overhaul of the colours that define them. They also continue to add to the platform's growing personality, and they stand out fantastically on both the mobile app and the web platform.

[Image: Topic icons in situ on the web and in the mobile app.]

I thoroughly enjoyed this project, as it brought about some interesting and complex challenges that truly got me thinking about how something as simple as icons and colours can have such a large impact on a platform's usability and visual appeal. Beyond that, it allowed me to think outside the box and approach iconography with a more structured format.

I hope you enjoyed reading this as much as I enjoyed working on this project! Thank you for your time. Please feel free to get in touch to discuss the project or anything else via parin@ashra.co.uk or on LinkedIn. If you would like to learn more about Perlego and their product, you can visit www.perlego.com.

Bay Area Black Designers: a professional development community for Black people who are digital designers and researchers in the San Francisco Bay Area. By joining together in community, members share inspiration, connection, peer mentorship, professional development, resources, feedback, support, and resilience. Silence against systemic racism is not an option. Build the design community you believe in.

Translated from: https://uxdesign.cc/the-user-driven-approach-to-topic-icon-design-286e8f799840
1. Icon preview

First, a look at what the unwanted icons look like.

2. The software won't turn them off

Baidu and Tencent netdisks both create these icons, but at least they can be disabled inside the software. WPS used to allow this too, but the new version misbehaves.

3. Delete them via the registry

You put it there, so I delete it. Key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

PS: press Win+R and enter regedit

4. Restart Explorer

Restart the Windows Explorer process.

5. Final result

Solved. (Plenty of things cannot be removed by hand or by a cleanup tool; the primitive method gets it done.)

6. The big move

Since it insists on misbehaving, here is the heavy artillery: https://pan.baidu.com/s/1pV3VshVrzZl1ldjOglEhmg extraction code: wtag

PS: the link is only valid for 7 days

Reposted from: https://www.cnblogs.com/dotnetcrazy/p/11149948.html
Removing WPS netdisk and similar icons from "Devices and drives" and the Explorer sidebar

Contents: 1. The problem. 2. The solution: 2.1 removing unwanted icons from Devices and drives; 2.2 removing unwanted icons from the sidebar. 3. Conclusion. 4. Backing up the registry.

1. The problem

In File Explorer, icons such as Baidu Netdisk and WPS Cloud appear under "Devices and drives" and in the left sidebar. They are an eyesore, so you may want to hide them, but the software itself provides no right-click option to hide or remove them.

2. The solution

2.1 Removing unwanted icons from Devices and drives

Press Win + R, type regedit, and open the Registry Editor.

Navigate to Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace, where you can see the handful of non-system icon entries.

For each entry you want to remove (click the key name to see the software's name in the data column), right-click it, choose Delete, and confirm.

Go back to File Explorer and refresh (or reopen it); the deleted icon entries are gone. (If an icon such as Baidu Netdisk still shows up in Explorer, see the conclusion below.)

2.2 Removing unwanted icons from the sidebar

In the Registry Editor, navigate to Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Desktop\NameSpace.

For each entry you want to remove (click the key name to see the software's name in the data column), right-click it, choose Delete, and confirm.

3. Conclusion

Icons such as the WPS netdisk can be removed by deleting their registry entries.

When the Baidu Netdisk icon cannot be removed by deleting registry entries, let whoever tied the bell untie it: open the application's own settings screen and turn the icon off there.

After reinstalling the software, the corresponding icon entries will be written back into the registry.

4. Back up the registry

To guard against deleting the wrong entry, it is advisable to back up the registry first: select the content to back up and export it to the desktop; if you delete the wrong thing, just import the backed-up registry file.
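If you would rather script the cleanup than click through regedit, the same operation can be done from Python (on Windows) with the standard-library winreg module. This is my own sketch, not part of the original posts: the function names are invented, and because the CLSID subkey for each netdisk icon differs per installation, the script only lists the NameSpace entries and leaves the actual deletion as an explicit call you make after backing up, as advised above.

import winreg

# Per-user key that populates "Devices and drives" in File Explorer.
NAMESPACE = r"Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace"

def list_namespace_icons():
    """Print each NameSpace subkey (a CLSID) and its default value, if set."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, NAMESPACE) as ns:
        num_subkeys, _, _ = winreg.QueryInfoKey(ns)
        for i in range(num_subkeys):
            clsid = winreg.EnumKey(ns, i)
            try:
                with winreg.OpenKey(ns, clsid) as sub:
                    name, _ = winreg.QueryValueEx(sub, "")  # default value
            except OSError:
                name = "(no default value)"
            print(clsid, "->", name)

def delete_namespace_icon(clsid):
    """Delete one icon entry. Back up the registry first; note that
    DeleteKey only removes keys that have no subkeys of their own."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, NAMESPACE, 0,
                        winreg.KEY_ALL_ACCESS) as ns:
        winreg.DeleteKey(ns, clsid)

if __name__ == "__main__":
    list_namespace_icons()
    # delete_namespace_icon("{...CLSID of the unwanted entry...}")

Run list_namespace_icons() first, note the CLSID of the unwanted entry, export a backup, and only then call delete_namespace_icon().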
• Removing icons under "Devices and drives"

1. Press Win+R, type regedit, and press Enter.

2. Navigate to

HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

3. Delete all the entries under that location, or only the ones you consider removable (keeping just "(Default)" is fine).

(Remember to restart when done.)
• HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace — find the Baidu Cloud entry on the right-hand side and delete it. Done. Reposted from: https://www.cnblogs.com/xiaoshi657/p/5516505.html
• Press Win+R and open the Registry Editor, then find this path: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer. Delete the corresponding key folder inside it and the problem is solved!
• Removing icons from "Devices and drives": after installing some software, icons are added to Devices and drives for no good reason. 360 Cloud Drive and Baidu Cloud Manager are at least virtual drives, but iQIYI and PPS, why are you joining the party... To remove the Baidu Cloud, 360 Cloud Drive, PPS or iQIYI icons from Devices and drives, follow the steps below.
• Open the registry: Win+R, type regedit, press Enter. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace. Some say it is under HKEY_CURRENT_USER, but on my machine... Delete the corresponding folder under NameSpace. ...
• Removing cloud-drive icons from Devices and drives
• 2. Type "regedit" to open the registry and choose "Yes" in the Registry Editor prompt. 3. In the registry, locate: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\My...
• With the big netdisks shutting down one after another, a few months ago I... downloaded my data and removed the netdisk client, but the device/drive icon remained. How to root it out: open the registry at HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\ and delete the ...
• win10: removing unwanted icons from Devices and drives. Devices and drives may hold many things you do not want — 360 Cloud Drive, Baidu Netdisk, Weiyun… To remove the Baidu Cloud or 360 icon from Devices and drives...
• Win+R > regedit > Enter to open the registry. Open in turn: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace and clean out the subkeys of that entry. Reposted from: ...
• To remove the Baidu Cloud and 360 Netdisk icons from Devices and drives, go to the registry: run regedit, then open HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Mycomputer\NameSpace, where you can see many entries you do not want...
• Open the registry and find the following path: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\... Each subfolder below it corresponds to one icon, and the icon's name is shown in the value on the right. Delete the folder and you are done. ...
• At some point, third-party items started being planted into the "Devices and drives" section of File Explorer ("This PC" on Win10, "This PC" on Win8/8.1, "Computer" on Win7). Objectively speaking, these items offer real convenience to people who use the software frequently. But some users...
• Removing other software icons such as PPS, Baidu Cloud or 360 Cloud Drive from Devices and drives on Windows. Method: open Run from the Start menu (Win+R opens the Run box), type regedit to open the registry, and find HKEY_CURRENT_USER\SOFTWARE\Microsoft\...
• Deleting the icons by editing the registry
• Method for deleting the icons: Win+R, regedit (Registry Editor). Path: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace. Delete the directories under NameSpace. ...
• The steps for removing dead icons from Devices and drives in win10: 1. Type regedit in the Cortana search bar and press Enter. 2. Navigate to HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\Nam ...
• Removing the "Baidu Cloud Manager" icon from Devices and drives in Windows 8.1
...
https://thatsmaths.com/tag/fourier-analysis/
## Posts Tagged 'Fourier analysis'
### Making Sound Pictures to Identify Bird Songs
Top: Audio signal with three chirps. Bottom: Time-Frequency spectrogram of signal.
A trained musician can look at a musical score and imagine the sound of an entire orchestra. The score is a visual representation of the sounds. In an analogous way, we can represent birdsong by an image, and analysis of the image can tell us the species of bird singing. This is what happens with Merlin Bird ID. In a recent episode of Mooney Goes Wild, Niall Hatch of Birdwatch Ireland interviewed Drew Weber of the Cornell Lab of Ornithology, a developer of Merlin Bird ID. This phone app enables a large number of birds to be identified [TM237 or search for “thatsmaths” at irishtimes.com].
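The "sound picture" here is a time-frequency spectrogram, computed with a short-time Fourier transform. As a rough illustration of the idea — the synthetic chirp and every parameter below are my own choices, nothing from Merlin Bird ID itself — a few lines of Python with scipy produce the kind of image such an app classifies:

import numpy as np
from scipy.signal import chirp, spectrogram

fs = 22_050                                   # sample rate in Hz
t = np.linspace(0, 2.0, int(2.0 * fs), endpoint=False)

# A synthetic "bird chirp": frequency sweeping from 2 kHz up to 6 kHz.
signal = chirp(t, f0=2000.0, f1=6000.0, t1=2.0, method="quadratic")

# Short-time Fourier analysis: frequency bins (Hz), time slices (s),
# and the power in each time-frequency cell.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=1024, noverlap=768)

# `power` is the picture: one row per frequency band, one column per
# time slice. A bird-identification model classifies this 2-D array.
print(power.shape, freqs.max(), times.max())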
### Joseph Fourier and the Greenhouse Effect
Jean-Baptiste Joseph Fourier, French mathematician and physicist, was born in Auxerre 251 years ago today. He is best known for the mathematical techniques that he developed in his analytical theory of heat transfer. Over the past two centuries, his methods have evolved into a major subject, harmonic analysis, with widespread applications in number theory, signal processing, quantum mechanics, weather prediction and a broad range of other fields [TM159 or search for “thatsmaths” at irishtimes.com].
Greenhouse Effect [Image Wikimedia Commons]
### Don’t be Phased by Waveform Distortions
For many years there has been an ongoing debate about the importance of phase changes in music. Some people claim that we cannot hear the effects of phase errors, others claim that we can. Who is right? The figure below shows a waveform of a perfect fifth, with components in the ratio 3:2, for various values of the phase-shift. Despite the different appearances, all sound pretty much the same.
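A quick way to reproduce the experiment is to synthesize the two components of the fifth and shift one of them in phase. The sketch below assumes equal amplitudes and a 220 Hz fundamental — illustrative choices of mine, not values from the post. The RMS level (and hence the spectrum) is identical for every phase, while the waveform's peak shape changes, which is why the traces look different yet sound alike:

import numpy as np

fs = 44_100
t = np.linspace(0, 1.0, fs, endpoint=False)
f0 = 220.0                         # fundamental; components at 2*f0 and 3*f0

def perfect_fifth(phase):
    """Two sinusoids with frequencies in the ratio 3:2, one phase-shifted."""
    lower = np.sin(2 * np.pi * 2 * f0 * t)
    upper = np.sin(2 * np.pi * 3 * f0 * t + phase)
    return lower + upper

# Same spectrum, very different-looking waveforms:
for phase in (0.0, np.pi / 4, np.pi / 2, np.pi):
    w = perfect_fifth(phase)
    print(f"phase={phase:5.2f}  peak={np.abs(w).max():.3f}  rms={w.std():.3f}")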
https://codereview.stackexchange.com/questions/197774/slow-image-generation
# Slow image generation
I built a Tower of Hanoi solver which prints the solution as an image.
It works as expected, but generating the image is relatively slow compared to the time it takes to calculate the answer.
Here is the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import argparse

from PIL import Image


def hanoi(disks, source, helper, target, steps):
    if disks > 0:
        hanoi(disks - 1, source, target, helper, steps)
        target.append(source.pop())
        steps.append([SOURCE[:], HELPER[:], TARGET[:]])
        hanoi(disks - 1, helper, source, target, steps)


def save_image(name):
    print('\nSaving image {}.png'.format(name))
    data = []
    peg = args.disks * 2
    cells = peg * 3 + 40  # 40 is to put some spaces between pegs and the border
    for step in steps:
        for _ in range(5):  # White space
            data.append([1 for _ in range(cells)])
        src = step[0]
        hlp = step[1]
        trg = step[2]
        size = max(len(src), len(hlp), len(trg))
        for _ in range(size - len(src)):
            src.append(0)
        for _ in range(size - len(hlp)):
            hlp.append(0)
        for _ in range(size - len(trg)):
            trg.append(0)
        src.reverse()
        hlp.reverse()
        trg.reverse()
        for s, h, t in zip(src, hlp, trg):
            blanksrc = peg - 2 * s
            blankhlp = peg - 2 * h
            blanktrg = peg - 2 * t
            row = [1 for _ in range(10)]
            row += [1 for _ in range(blanksrc // 2)]
            row += [0 for _ in range(s * 2)]
            row += [1 for _ in range(blanksrc // 2)]
            row += [1 for _ in range(10)]
            row += [1 for _ in range(blankhlp // 2)]
            row += [0 for _ in range(h * 2)]
            row += [1 for _ in range(blankhlp // 2)]
            row += [1 for _ in range(10)]
            row += [1 for _ in range(blanktrg // 2)]
            row += [0 for _ in range(t * 2)]
            row += [1 for _ in range(blanktrg // 2)]
            row += [1 for _ in range(10)]
            data.append(row)
        for _ in range(5):  # White space
            data.append([1 for _ in range(cells)])
        data.append([0 for _ in range(cells)])  # Black line to separate steps
    da = [bit for row in data for bit in row]
    image = Image.new('1', (cells, len(data)))
    image.putdata(da)
    image.save('{}.png'.format(name))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # (the add_argument calls were cut off in the post; reconstructed minimally)
    parser.add_argument('-d', '--disks', type=int, default=4,
                        help='Number of disks, default 4')
    parser.add_argument('-f', '--filename', default='hanoi',
                        help='Name of the output image, without extension')
    args = parser.parse_args()
    if not args.disks > 0:
        raise ValueError('There must be at least one disk')
    SOURCE = list(reversed(range(1, args.disks + 1)))
    TARGET = []
    HELPER = []
    steps = [[SOURCE[:], HELPER[:], TARGET[:]]]
    hanoi(args.disks, SOURCE, HELPER, TARGET, steps)
    save_image(args.filename)
As I add more disks to the problem, the time taken to generate the image grows far more than the time taken to compute the answer.
How can I make it faster, and why is it so slow?
• I don't think the code is slow. It's just that there are $2^n - 1$ steps – Maarten Fabré Jul 4 '18 at 7:37
• @MaartenFabré Is it not possible to reduce the complexity of the image generation? – wqeqwsd Jul 4 '18 at 8:10
• there are certainly steps to improve the performance, readability and testability of the code, but $2^n$ remains $2^n$ – Maarten Fabré Jul 4 '18 at 8:28
Like I said before, the reason this takes a lot of time is that the number of steps grows exponentially with the number of disks: a tower of $n$ disks takes $2^n - 1$ moves.
But there are some other improvements to be made to this code.
# range
list(reversed(range(1, args.disks + 1))) can be done more easily as list(range(disks, 0, -1))
# Global variables
Your image saving algorithm uses a lot of global scope (args.filename, steps, ...), and with the src.reverse() calls it even modifies these global variables, which is a sure way to introduce difficult-to-find bugs. If your function needs those parameters, pass them as arguments, and certainly don't change them.
You can avoid needing SOURCE, HELPER, and TARGET in global scope by passing them along in a dictionary. You could also use a namedtuple:
source = list(range(disks, 0, -1))
target = list()
helper = list()
state = dict(
    source=source,
    target=target,
    helper=helper,
)
hanoi_gen(disks, source, helper, target, state)
# Variable naming
I had to look really carefully at the code to find out what certain variables were. cells, for example, is the width of the image, peg is the maximum width of a peg, etc. Name your variables clearly. s, h, t in zip(src, hlp, trg) is another example that could do with better names.
# Immutable variables
You use lists throughout your code. The advantage of lists is that they are mutable, but this is also the disadvantage. To make sure other code doesn't inadvertently change your data, use immutable containers like tuple where appropriate. So instead of SOURCE[:], I used tuple(...).
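For example, a tuple snapshot is unaffected by later mutation of the list it was taken from:

peg = [3, 2, 1]
snapshot = tuple(peg)  # immutable copy of the current state
peg.pop()
assert snapshot == (3, 2, 1)  # the snapshot did not change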
# Generator
Instead of instantiating all those lists at the same time, you can work with generators:
def hanoi_gen(disks, source, helper, target, state):
if disks:
yield from hanoi_gen(disks - 1, source, target, helper, state)
target.append(source.pop())
yield tuple(state['source']), tuple(state['target']), tuple(state['helper'])
yield from hanoi_gen(disks - 1, helper, source, target, state)
def solve_tower(disks):
source = list(range(disks, 0, -1))
target = list()
helper = list()
yield tuple(source), tuple(target), tuple(helper)
state = dict(
source=source,
target=target,
helper=helper,
)
yield from hanoi_gen(disks, source, helper, target, state)
steps = tuple(solve_tower(2))
assert steps == (
((2, 1), (), ()),
((2,), (), (1,)),
((), (2,), (1,)),
((), (2, 1), ()),
)
# Magic numbers
There are some magic numbers in your code, 10, 40, 5, ...
Better is to extract global constants from this:
BUFFER_PEG = 10
LINE_WIDTH = 1
BUFFER_STEP = 5
WHITE = 1
BLACK = 0
And use them like:
image_width = disks * 2 * 3 + 4 * BUFFER_PEG
BLACK and WHITE can also be done with an enum.IntEnum:
from enum import IntEnum
class Color(IntEnum):
BLACK = 0
WHITE = 1
# DRY
Compartmentalize!
There is a lot of repeated code, which makes this hard to maintain and test:
from itertools import chain, repeat
def whitespace(width, image_width):
return repeat(Color.WHITE, width * image_width)
def line(width, image_width):
return repeat(Color.BLACK, width * image_width)
Create easy to use generators to add whitespace or black lines.
def pad_disk(disk_width, num_disks):
blank_width = num_disks - disk_width
yield from repeat(Color.WHITE, blank_width)
yield from repeat(Color.BLACK, disk_width * 2)
yield from repeat(Color.WHITE, blank_width)
Centrally pads a disk to twice the number of disks in play. This can be easily tested: the portion of a disk of width 1 in a stack of 4 disks:
assert tuple(pad_disk(1, num_disks=4)) == (1, 1, 1, 0, 0, 1, 1, 1)
## Format a row
def buffer_peg():
return repeat(Color.WHITE, BUFFER_PEG)
def format_row(disks, num_disks):
    yield from buffer_peg()
    for disk_width in disks:
        yield from pad_disk(disk_width, num_disks)
        yield from buffer_peg()
This can be easily tested like this:
row = [2, 0, 1]
num_disks = 4
assert tuple(format_row(row, num_disks)) == tuple(chain(
buffer_peg(),
(1, 1, 0, 0, 0, 0, 1, 1,),
buffer_peg(),
(1, 1, 1, 1, 1, 1, 1, 1,),
buffer_peg(),
(1, 1, 1, 0, 0, 1, 1, 1,),
buffer_peg(),
))
# Format individual steps
Here, I use a small helper function to reverse the peg, and pad it with 0s:
def pad_left_reverse(peg, size, fillvalue=0):
yield from repeat(fillvalue, size - len(peg))
yield from reversed(peg)
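Like the other helpers, this one is easy to test (the values are illustrative):

assert tuple(pad_left_reverse([2, 1], size=4)) == (0, 0, 1, 2)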
Then all of this:
src = step[0]
hlp = step[1]
trg = step[2]
size = max(len(src), len(hlp), len(trg))
for _ in range(size - len(src)):
src.append(0)
for _ in range(size - len(hlp)):
hlp.append(0)
for _ in range(size - len(trg)):
trg.append(0)
src.reverse()
hlp.reverse()
trg.reverse()
Can be replaced with:
def format_step(step, num_disks):
    pegs = map(
        lambda peg: pad_left_reverse(peg, num_disks),
        step,
    )
And on the plus-side, this doesn't reverse the original input.
I replaced the size = max(len(src), len(hlp), len(trg)) with the number of disks, to keep all the steps equally high. If you can live with the uneven heights, size = len(max(step, key=len)) is an alternative formulation.
    for row in zip(*pegs):
        yield from format_row(row, num_disks)
Replaces the next 20 lines.
step = [2, 1], [5,4,3], []
num_disks = 5
step_data = list(format_step(step, num_disks))
Outputs something like:
1111111111111111111111111111111100000011111111111111111111111111111111
1111111111111100111111111111111000000001111111111111111111111111111111
1111111111111000011111111111110000000000111111111111111111111111111111
# Format the steps
def format_steps(steps, image_width, num_disks):
for step in steps:
yield from whitespace(BUFFER_STEP, image_width)
yield from format_step(step, num_disks)
yield from whitespace(BUFFER_STEP, image_width)
yield from line(LINE_WIDTH, image_width)
This speaks for itself.
# Context managers
If you open resources that need closing afterwards, like a file or a PIL.Image, use a with-statement.
# The main
if __name__ == '__main__':
num_disks = 5
steps = solve_tower(num_disks)
image_width = num_disks * 2 * 3 + 4 * BUFFER_PEG
data = list(format_steps(steps, image_width, num_disks))
with Image.new('1', (image_width, len(data) // image_width)) as image:
image.putdata(data)
name = 'my_hanoi.png'
image.save(name)
All in all, this code is slightly longer than yours and will not necessarily be much faster, but it is a lot clearer to me, and a lot more parts can be individually tested.
The full code can be found here, and some tests here.
• Thanks a lot for all this explainations, I learned a lot from them – wqeqwsd Jul 4 '18 at 15:10
|
2020-01-23 03:40:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35543835163116455, "perplexity": 8880.882732592449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00200.warc.gz"}
|
https://tex.stackexchange.com/tags/multicol/hot
|
# Tag Info
## Hot answers tagged multicol
3
\label{...} has to be placed after \caption or after a referable counter in a table (neither is present in your table). The document class article doesn't define \chapter{...}. For that you need to use the report or book document class (or a similar package, for example memoir) \documentclass{article} \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage{microtype} \...
1
There are many issues with the code you've shown, especially your use of \multicolumn. The multicol package is not needed for this command which is a command for tabular material; the multicol package is for making multicolumn text outside of a table. For this sort of table you don't want to use a table environment. The table environment turns its contents ...
1
You'll obtain what you want by adding \leavevmode just after \subsubsection{...} and removing the empty item: \documentclass{article} \usepackage{titlesec} \usepackage{enumitem} \usepackage{multicol} \titleformat{\subsubsection}[runin] {\bfseries} {} {0em} {} \begin{document} \subsubsection{Subsubsection}\leavevmode \begin{multicols}{2} \begin{itemize}[...
|
2020-10-23 08:56:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306397438049316, "perplexity": 9054.603274065237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880878.30/warc/CC-MAIN-20201023073305-20201023103305-00563.warc.gz"}
|
https://calendar.math.illinois.edu/?year=2017&month=10&day=25&interval=day
|
Department of
# Mathematics
Seminar Calendar
for events the day of Wednesday, October 25, 2017.
Questions regarding events or the calendar should be directed to Tori Corkery.
Wednesday, October 25, 2017
3:00 pm in 243 Altgeld Hall,Wednesday, October 25, 2017
#### Bott-Samelson varieties and combinatorics
###### Laura Escobar (UIUC)
Abstract: Schubert varieties parametrize families of linear spaces intersecting certain hyperplanes in C^n in a predetermined way. In the 1970’s Hansen and Demazure independently constructed resolutions of singularities for Schubert varieties: the Bott-Samelson varieties. In this talk I will describe their relation with associahedra. I will also discuss joint work with Pechenick-Tenner-Yong linking Magyar’s construction of these varieties as configuration spaces with Elnitsky’s rhombic tilings. Finally, based on joint work with Wyser-Yong, I will give a parallel for the Barbasch-Evens desingularizations of certain families of linear spaces which are constructed using symmetric subgroups of the general linear group.
4:00 pm in Altgeld Hall 141,Wednesday, October 25, 2017
#### Zariski tangent space to the moduli space of vector bundles on an algebraic curve
###### Jin Hyung To [email] (UIUC)
Abstract: We will show how to use deformations to find the Zariski tangent space. The moduli space of vector bundles is the GIT quotient of a Hilbert scheme. Using this, we find the Zariski tangent space of the moduli space of vector bundles.
4:00 pm in 245 Altgeld Hall,Wednesday, October 25, 2017
#### Life After Comps
###### Math Dept. PhD Students
Abstract: Current PhD students will talk about making the transition from primarily coursework to primarily research. Topics may include finding a thesis adviser, time management, talks and conferences, career preparation. There will be plenty of time for questions.
|
2019-03-22 04:27:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4505123496055603, "perplexity": 688.9054494202476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00359.warc.gz"}
|
https://www.physicsforums.com/threads/limit-of-cos-x-x-and-sec-x-x.746876/
|
# Limit of cos(x)/x and sec(x)/x
1. Apr 3, 2014
### Jhenrique
1. The problem statement, all variables and given/known data
Compute: $$\lim_{x \to 0} = \frac{\cos(x)}{x}$$ $$\lim_{x \to 0} = \frac{\cosh(x)}{x}$$ $$\lim_{x \to 0} = \frac{\sec(x)}{x}$$ $$\lim_{x \to 0} = \frac{sech(x)}{x}$$
2. Relevant equations $$\lim_{x \to x_0} = \frac{f(x)}{g(x)} = \lim_{x \to x_0} = \frac{\frac{df}{dx}(x)}{\frac{dg}{dx}(x)}$$
3. The attempt at a solution $$\lim_{x \to 0} = \frac{\cos(x)}{x} = \lim_{x \to 0} \frac{ \frac{d}{dx}\cos(x)}{\frac{d}{dx}x} = \lim_{x \to 0} \frac{- \sin(x)}{1} = 0$$ $$\\\lim_{x \to 0} = \frac{\cosh(x)}{x}=0 \\ \\\lim_{x \to 0} = \frac{\sec(x)}{x}=0 \\ \\\lim_{x \to 0} = \frac{sech(x)}{x}=0$$ Correct?
2. Apr 3, 2014
### jbunniii
You can't apply L'Hopital's rule unless you have an indeterminate form. What is $\lim_{x \rightarrow 0} \cos(x)$?
3. Apr 3, 2014
It is 1.
4. Apr 3, 2014
### micromass
Staff Emeritus
You can easily check what the solution is by simply graphing the function. That will give you a big hint. Of course a graph is not a proof, but it helps.
5. Apr 3, 2014
### Staff: Mentor
Minor point, but you're showing all your limits incorrectly.
This --
$$\lim_{x \to 0} = \frac{\cos(x)}{x}$$
-- should be written without the = between "lim" and the function you're taking the limit of.
6. Apr 4, 2014
### Jhenrique
I didn't even realize!!! LOOOOOOL
7. Apr 4, 2014
### Curious3141
L'Hopital's Rule can only be used for limits of the form 0/0 or infinity over infinity (though in the latter case, the signs of the infinities don't matter). There are other criteria, but if your limit doesn't even fit into one of those indeterminate forms, LHR doesn't apply. Here $\lim_{x \to 0}\cos(x) = 1$ while the denominator tends to $0$, so the quotient blows up: $\frac{\cos x}{x} \to +\infty$ as $x \to 0^+$ and $\frac{\cos x}{x} \to -\infty$ as $x \to 0^-$, meaning the two-sided limit does not exist.
|
2017-10-20 00:17:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5909846425056458, "perplexity": 2245.999946704908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823482.25/warc/CC-MAIN-20171019231858-20171020011858-00672.warc.gz"}
|
https://onemathematicalcat.org/Math/Precalculus_obj/graphOfSecant.htm
|
# GRAPH OF THE SECANT FUNCTION
by Dr. Carol JVF Burns (website creator)
This section discusses the graph of the secant function (shown below).
For ease of reference, some material is repeated from the section on the Trigonometric Functions.
One period of the graph of $\displaystyle y = \sec x := \frac{1}{\cos x}$ (periodic with period $\,2\pi\,$); the cosine curve is shown in red. Key ideas contributing to the graph:
• the reciprocal of $\,1\,$ is itself
• the reciprocal of $\,-1\,$ is itself
• the number $\,0\,$ has no reciprocal
• the reciprocal of a small positive number is a large positive number
• the reciprocal of a small negative number is a large negative number
• a number of size smaller than $\,1\,$ has reciprocal of size bigger than $\,1\,$
## The Secant Function: Definition and Comments
• Let $\,t\,$ be a real number (with restrictions noted below).
Think of $\,t\,$, if desired, as the radian measure of an angle.
• By definition, $\displaystyle \sec t := \frac{1}{\cos t}\,$.
• Using function notation, the number ‘$\,\sec t\,$’ is the output from the secant function when the input is $\,t\,$.
In other words, ‘$\,\sec\,$’ is the name of the function; $\,t\,$ is the input; $\,\sec t\,$ is the corresponding output.
• Recall the convention: for multi-letter function names, parentheses are usually omitted from function notation.
That is, $\,\sec(t)\,$ is usually written without parentheses, as $\,\sec t\,$, when there is no confusion about order of operations.
• However, don't ever write something like ‘$\,\sec t + 2\,$’!
Clarify as either $\,(\sec t) + 2\,$ (better written as $\,2 + \sec t\,$) or $\,\sec(t+2)\,$.
• Pronounce ‘$\,\sec t\,$’ as ‘secant (see-cant) of $\,t\,$’.
• The secant function isn't defined where the cosine is zero; this happens at the terminal points $\,(0,1)\,$ and $\,(0,-1)\,$.
Thus, secant is not defined for $\displaystyle \,t = \frac{k\pi}{2}\,$ for odd integers $\,k\,$: $\,k = \ldots, -5, -3, -1, 1, 3, 5,\, \ldots\,$
Note that the secant function has the same restrictions as the tangent.
## Where does the graph of the secant function come from?
The graph of the secant function is easy to obtain as the reciprocal of the cosine function.
The key ideas are illuminated below:
• the red curve is $\,y = \cos x\,$
• the green curve is $\displaystyle\,y = \sec x := \frac{1}{\cos x}\,$
• the reciprocal of $\,1\,$ is $\,1\,$, so the points shown do not move
• the reciprocal of $\,-1\,$ is $\,-1\,$, so the points shown do not move
• zero has no reciprocal: where the cosine is zero, the secant has a vertical asymptote
• the reciprocal of a small positive number is a large positive number
• the reciprocal of a small negative number is a large negative number
• the cosine curve is bounded between $\,\color{red}{1}\,$ and $\,\color{red}{-1}\,$; thus, the reciprocals all have size greater than or equal to $\,\color{green}{1}\,$
## Important Characteristics of the Graph of the Secant Function
• The period of the secant function is $\,2\pi\,$:
$\sec(t+2\pi) = \sec t\,$ for all real numbers $\,t\,$ in the domain.
• The domain of the secant function excludes $\,\displaystyle\frac{\pi}{2} + k\pi\,$ for all integers $\,k\,$;
these are the values where the denominator of the secant function (the cosine) is $\,0\,$.
• The range of the secant function is the set of all real numbers with size greater than or equal to $\,1\,$ (a one-line derivation is given after this list).
• Using interval notation:
$\text{range of secant } = (-\infty,-1] \cup [1,\infty)$
(Recall from Advanced Set Concepts that ‘$\,\cup\,$’ is the union connective for sets.)
• Using set-builder notation:
$\text{range of secant } = \{ x\ \ |\ \ |x| \ge 1\}$
• The secant function has vertical asymptotes every place that it is not defined.
• Let $\displaystyle\,c = \frac{\pi}{2} + 2\pi k\,$ for integers $\,k\,$. Then:
• as $\,t \rightarrow c^-\,$, $\,\sec t\rightarrow\infty\,$
• as $\,t \rightarrow c^+\,$, $\,\sec t\rightarrow -\infty\,$
• Let $\displaystyle\,c = \frac{3\pi}{2} + 2\pi k\,$ for integers $\,k\,$. Then:
• as $\,t \rightarrow c^-\,$, $\,\sec t\rightarrow -\infty\,$
• as $\,t \rightarrow c^+\,$, $\,\sec t\rightarrow \infty\,$
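The range bound follows directly from the boundedness of the cosine:
$$0 < |\cos t| \le 1 \quad\Longrightarrow\quad |\sec t| = \frac{1}{|\cos t|} \ge 1 \qquad (\cos t \ne 0)$$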
|
2023-02-06 17:00:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9676842093467712, "perplexity": 535.3454120606664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00046.warc.gz"}
|
https://www.physicsforums.com/threads/why-learn-integration-techniques.237131/
|
# Why learn integration techniques?
1. May 26, 2008
### ice109
if most integrals aren't integrable and can be evaluated numerically to any degree of accuracy, why learn these esoteric techniques at all?
2. May 26, 2008
### matt grime
Why bother learning any maths if a computer can solve it all numerically....?
3. May 26, 2008
### daniel_i_l
Math isn't just about finding solutions to equations - when it comes to proving theorems computers are rather useless in many cases. And even when computers can help us prove theorems - it's only when there're to many special cases for a person to examine, the four color theorem for example.
4. May 26, 2008
### Gib Z
Why learn how to differentiate when taking a very close finite difference will do?
5. May 26, 2008
### D H
Staff Emeritus
Be careful here. For example, it is often claimed that $$e^{-x^2}$$ is not integrable. That of course is not true. The integral of this function is very well-known:
$$\int_0^x e^{-t^2}dt = \frac{\surd \pi} 2 \text{erf}(x)$$
When mathematicians say something isn't integrable what they really mean that the solution cannot be expressed in terms of some limited set of functions, typically the elementary functions. If an integral comes up often enough mathematicians (or physicists, or whoever) will define a function based on this integral. The error function is one such special function.
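As a quick numerical sanity check of that identity, using only Python's standard library (the helper name and step count here are mine, not from the thread):

import math

def gauss_integral(x, n=100_000):
    # midpoint-rule approximation of the integral of exp(-t^2) from 0 to x
    h = x / n
    return h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))

x = 1.3
assert abs(gauss_integral(x) - math.sqrt(math.pi) / 2 * math.erf(x)) < 1e-9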
That one has to resort to numerical techniques to solve a numerical problem is not limited to the special functions. What are the exact values of $\surd 2$ and $\sin 1$? We have to use numerical techniques to evaluate $\surd 2$ and $\sin 1$, even though both the square root and sine functions are elementary functions.
6. May 26, 2008
### MathematicalPhysicist
No, integrability means exactly what it means: that the function's Riemann sums converge, and the limit is called the Riemann integral.
A function which isn't integrable on a specific domain means that its Riemann sums don't converge there.
At least this is one way of defining integrability.
7. May 26, 2008
### D H
Staff Emeritus
Loop, while that is one way of defining integrability, it obviously is not the sense meant by the original poster. What the OP really meant was (my changes to the OP are in italics):
"if most antiderivatives cannot be expressed in terms of elementary functions but can be evaluated numerically to any degree of accuracy why learn these esoteric symbolic integration techniques at all?"
Last edited: May 26, 2008
8. May 26, 2008
### ice109
yes that was a mistake on my part.
yes i don't see anything odd about using a numerical method for a numerical problem such as evaluating a definite integral.
this raises the question: what are antiderivatives used for other than elegantly evaluating definite integrals.
actually nm that question cause i'm sure there exists a use somewhere, maybe proving certain things or some such thing.
but calculus classes are for engineers, why do they need to learn these techniques? i'm not an engineering student i don't know but i would guess their integrals are simple??? even if they are why waste their time when everyone these days has access to some way of evaluating them numerically.
that's not a very good rebuke because i don't need to resort to finite differences unless i'm taking discrete data which most people don't do.
before i get a lot of people rebuking me note that i think the ideas behind the indefinite/definite integral and derivative are very important, it's the rigamarole i don't see the necessity of.
Last edited: May 26, 2008
9. May 27, 2008
### Gib Z
I don't get what you mean by having "to resort to" them, my point was that taking close finite differences can give us the numerical value for the derivative at that point to any degree of accuracy, just as numerical integration techniques do for integrals. I thought this parallel might have made the answer a tiny bit easier to see, but it obviously doesnt, my bad :(
My point was, exact answers are always nice =] And when we can't get an exact answer, call it something new.
10. May 27, 2008
### DavidWhitbeck
And I think that's a good question, to wit I have a bad answer (*cough* speculation *cough*).
I think that many students haven't fully mastered algebra when they begin learning calculus. They can do algebra, but they can't think with algebra. In the process of doing all those limits, derivatives and integrals they obviously gain an intuition for calculus, but they also only then truly become adept at thinking with algebra. And that is crucial for doing well in physical science and engineering.
I say this because my students at the beginning of the year had no problem doing algebra, but they really struggled with interpreting and understanding algebraic equations even when they understood the physical concepts. By the end of the year, that really wasn't a problem. Now certainly you can attribute it to both physics and calculus, but I have a feeling that calculus played a stronger role in that learning process.
11. May 27, 2008
### dx
Integration is not always about finding the numerical value of an integral. Just pick up any book on pure mathematics. You'll most likely find integrals on every page, but not one of them will be evaluated to give a numerical answer. In fact, you can even just look at any book on physics or engineering. Most results will involve symbolic integration as intermediate steps in the derivations.
12. May 27, 2008
### ice109
and what is exact to be exact? is $\sqrt{2}$ exact? sure but who cares because you can't do anything with that symbol except algebraic manipulation. i would say derivatives are useful because we can always take a derivative of a continuous function and it's quicker than finite differences, that was my point.
i'm going to get lynched for this one but that again is a completely pointless thing im my humble opinon. anyway then symbolic integration is a tool for research engineers and physicists not practicing engineers. meaning of course i had in mind all the phenomenology that goes on in physics when i asked this question hence i restricted it to engineering. i guesss i should've been more specific about what kind.
13. May 27, 2008
### Mute
If you don't understand how integration works, why should you have any reason to expect that the number your computer is giving you will be accurate? Black boxes that give you magical answers can be dangerous things. If you don't understand how the program does things, you run the risk of getting an erroneous answer out of it. This might not be a large problem if you're a research scientist just trying to solve an integral to use in a calculation you're doing for a paper, but if you're an engineer and a computer gives you the wrong answer you could end up with a collapsing bridge. In order to understand how programs do integration, you need to know how integration works - convergence of the integral, how many bins you need to accurately represent the area under the curve, etc.
Furthermore, there could be issues of efficiency. Some integrals can be transformed into other integrals which could be easier to solve numerically. If you're writing your own integration routine you need to know how to work with integrals if you're going to turn the integral you have into something nicer to evaluate numerically.
Lastly, what if the integral you need to do is a simple one, or one with problem points that might cause a computer grief due to singular points that are easily dealt with symbolically? Why waste time getting a program to numerically solve $\int_a^b dx~\ln x$ when you could easily find the integral to be $\left[x \ln x - x\right]_a^b$? The computer would have a much easier time evaluating [b ln b - b] - [a ln a - a] than summing up several bins.
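A small sketch of that point (the function name is illustrative):

import math

def log_integral(a, b):
    # exact antiderivative: the integral of ln x is x*ln(x) - x
    F = lambda x: x * math.log(x) - x
    return F(b) - F(a)

print(log_integral(1.0, 2.0))  # 2*ln(2) - 1, approximately 0.3863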
Last edited: May 27, 2008
14. May 27, 2008
### dx
You clearly have no idea what engineering is. Post this in the engineering forum and see what they say.
15. May 27, 2008
### ice109
how about instead of me double posting you give me an example since that is the point of this thread.
Look I'm not a polemicist, let everyone keep that in mind. For what it's worth I'm a pure math student so I'm not some lazy dunce trying to argue their way out of learning abstract concepts.
Last edited: May 27, 2008
16. May 27, 2008
### Crosson
The truth is that we teach integration techniques so that we can employ professors into mathematical research en masse.
If you want a better answer (in terms of morality, but only equally true in the world), calculation is part of the mathematical tradition and without it pure mathematics would collapse. The problem is that now that you don't care about calculations, you are one step closer to not caring about theorems. As you can see by looking at history, the criteria for what is a theorem and what is a mere example always shift in a more jaded direction over time --- towards the view that more and more is trivial --- but if we follow this trend to its logical conclusion we see that math will die of the same snob-strangulation that kills technique in other fields.
17. May 27, 2008
### daudaudaudau
Say you have to evaluate this integral 100,000 times for various values of "a" (this is a completely realistic scenario if you are simulating something)
$$\int_0^a \frac{1}{x^{0.999}}dx$$
which do you think is faster: Waiting for some numerical algorithm to converge, or to do it by hand?
Or what about a function that oscillates like crazy? It takes a while for that one to converge too.
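For reference, this particular integral has a simple closed form (the integrand is integrable at $0$ despite the near-singularity), so the by-hand answer is a single power evaluation:
$$\int_0^a \frac{dx}{x^{0.999}} = \left[\frac{x^{0.001}}{0.001}\right]_0^a = 1000\, a^{0.001}$$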
18. May 27, 2008
### uman
I think a much more interesting question would be "why bother learning integration techniques when Mathematica can usually symbolically evaluate integrals for you?"
19. May 27, 2008
### rock.freak667
In my opinion, teaching the techniques allowed me to appreciate how technology shortened the many many integrals that I have had to do by hand.(And most of those, were long and nasty looking!)
20. May 27, 2008
### DavidWhitbeck
Actually, in your case doing it by hand is still slower because you still have to evaluate the antiderivative 100,000 times. Simply use a quadrature algorithm based on rational functions (as opposed to polynomials) in C code and a home PC will beat you to the punchline. Even if you allow yourself the use of a scientific calculator when you evaluate that function 100,000 times!
|
2018-02-20 02:56:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7538937330245972, "perplexity": 594.3111839194418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812871.2/warc/CC-MAIN-20180220010854-20180220030854-00162.warc.gz"}
|
https://wiki.inspired-lua.org/image.height
|
# image.height
image.height is a function that is part of image.
This has been introduced in TI-Nspire OS 3.0 (Changes).
This function returns as an integer the height, in pixels, of the given image.
## Syntax
image.height(image)
Parameter   Type       Description
image       TI.Image   The image you want the height of.
## Example
imageHeight = image.height(theImage)
|
2019-01-19 14:21:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44271567463874817, "perplexity": 5753.464035172372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583668324.55/warc/CC-MAIN-20190119135934-20190119161934-00464.warc.gz"}
|
https://math.stackexchange.com/questions/805631/prove-identity-involving-the-tsallis-q-logarithm
|
# Prove identity involving the Tsallis q-logarithm
The natural logarithm and the exponential can both be generalized to so-called q-logarithms and q-exponentials. Those functions are defined as follows: \begin{eqnarray} \log_q(x) &:=& \frac{x^{1-q} - 1}{1 - q} \\ \exp_q(x) &:=& \left(1 + (1-q) x\right)^{\frac{1}{1-q}} \end{eqnarray} Here $q>0$. In the limit $q\rightarrow 1$ we retrieve the natural log and the exponential, respectively. The $q$-functions are mutual inverses, i.e., it holds that $\exp_q(\log_q(x)) = x$ and $\log_q(\exp_q(x)) = x$. However, what happens if we feed the $q$-log into an ordinary exponential or, vice versa, feed the $q$-exponential into an ordinary log? If $q$ is 'close' to unity we should be getting something that is close to the identity. How close? In order to quantify that, I ask to prove the following series expansion in $q$ around unity:
$$\exp\left(\imath \omega \log_q(x)\right) = x^{\imath \omega} \left[ 1 + \sum\limits_{p=2}^\infty \frac{((1-q) \log(x))^p}{p!} \left( \left.\frac{1}{1!} \left(\frac{\imath \omega}{1-q}\right)^1\right|_{p\ge 2} + \left.\frac{1}{2!} \left(\frac{\imath \omega}{1-q}\right)^2(2^p-2-2 p)\right|_{p\ge 4} + \left.\frac{1}{3!} \left(\frac{\imath \omega}{1-q}\right)^3(3^p-3(1+p/2)2^p+3(p^2+p+1))\right|_{p\ge 6} + \cdots \right) \right]$$
I do not fully understand the expansion in the OP; in the absence of further details I would like to consider the limit $q\rightarrow 1$ as follows.
Let $x>0$; then $\ln_q(x)=\frac{e^{(1-q)\ln x}-1}{1-q}$ by definition.
To consider the limit $q\rightarrow 1$, we expand $\ln_q x$ around $q=1$, i.e.
$$\ln_q(x)=\frac{e^{(1-q)\ln x}-1}{1-q}=\frac{1}{1-q}\left[\left(1+(1-q)\ln x+\frac{(1-q)^2}{2!}\ln^2 x+\dots\right)-1\right]=\\ \ln x + O(1-q).$$
In other words, in the above expansion the variable is $q$. Then
$$\exp(i\omega\ln_q x)=\exp(i\omega\ln x+ O(1-q))=\exp(O(1-q))\exp(i\omega\ln x)\rightarrow \exp(i\omega\ln x)$$
when $q\rightarrow 1$. We used $\exp(i\omega O(1-q))=\exp(O(1-q))\rightarrow 1$, in the limit $q\rightarrow 1$.
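A quick numerical illustration of this limit (a sketch; omega and x are arbitrary test values):

import cmath, math

def log_q(x, q):
    # Tsallis q-logarithm
    return (x ** (1 - q) - 1) / (1 - q)

omega, x = 0.7, 2.5
exact = cmath.exp(1j * omega * math.log(x))  # x^{i*omega}
for q in (0.9, 0.99, 0.999):
    approx = cmath.exp(1j * omega * log_q(x, q))
    print(q, abs(approx - exact))  # the gap shrinks as q -> 1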
|
2019-12-12 01:07:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9834317564964294, "perplexity": 229.65569579396853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540534443.68/warc/CC-MAIN-20191212000437-20191212024437-00362.warc.gz"}
|
https://www.math.ucdavis.edu/research/seminars?talk_id=2459
|
# Mathematics Colloquia and Seminars
### Lie Algebra Cohomology and Laplace Operators
Algebra & Discrete Mathematics
Speaker: Dmitry Fuchs, UC Davis Location: 2112 MSB Start time: Fri, Apr 17 2009, 1:10PM
This is a joint work with Connie Wilmarth.
Let $\{C_i, \partial_i : C_i \to C_{i-1}\}$ be an arbitrary complex over $\mathbb{R}$, with homology $H_i = \operatorname{Ker}(\partial_i)/\operatorname{Im}(\partial_{i+1})$. Fix arbitrary Euclidean structures on the $C_i$ and let $\delta_{i-1} : C_{i-1} \to C_i$ be the operator adjoint to $\partial_i$. The Laplace operator is $\Delta = \Delta_i = \partial_{i+1} \circ \delta_i + \delta_{i-1} \circ \partial_i : C_i \to C_i$. It is a commonplace that every harmonic ($\Delta_i c = 0$) chain/cochain $c \in C_i$ is a cycle and a cocycle ($\partial_i c = 0$, $\delta_i c = 0$), and every homology/cohomology class in $H_i$ is represented by a unique harmonic chain/cochain.
Let a (real) Lie algebra $\mathfrak{g}$ be furnished with some Euclidean structure; this makes the standard chain spaces $C_i(\mathfrak{g})$ Euclidean and lets us apply the previous definitions to the chain/cochain complexes $\{C_i(\mathfrak{g}), \partial_i\}$ and $\{C^i(\mathfrak{g}) = C_i(\mathfrak{g}), \delta_i\}$. There are a few known isolated results giving, in a nice explicit form, the full spectrum of the Laplace operators for some classical nilpotent Lie algebras. Our work extends these results to a broad class of nilpotent Lie algebras: maximal nilpotent subalgebras of arbitrary Kac-Moody Lie algebras (furnished with some special Euclidean structures).
Many things remain unclear. What is the homological/cohomological meaning of eigenvectors of the Laplace operator with non-zero eigenvalues? What is the meaning of the Euclidean structure mentioned above (it is defined uniquely)? And more.
I promise to explain the definitions of the Lie algebra homology/cohomology and of Kac-Moody. I do not promise to make the things more clear to the audience than they are clear to myself.
|
2021-07-28 14:21:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7155681252479553, "perplexity": 3421.075793477599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153729.44/warc/CC-MAIN-20210728123318-20210728153318-00175.warc.gz"}
|
https://gmatclub.com/forum/the-triangle-in-the-figure-above-has-two-sides-extended-as-shown-what-246645.html
|
# The triangle in the figure above has two sides extended as shown. What
Math Expert
Joined: 02 Sep 2009
Posts: 54436
The triangle in the figure above has two sides extended as shown. What is the value of x?
09 Aug 2017, 04:41
The triangle in the figure above has two sides extended as shown. What is the value of x?
(A) 38
(B) 78
(C) 102
(D) 106
(E) 116
Attachment: 2017-08-09_1259.png (figure: a triangle with two sides extended, showing an exterior angle of 142°, an interior angle of 64°, and an exterior angle x°)
Current Student
Joined: 23 Jul 2015
Posts: 152
Re: The triangle in the figure above has two sides extended as shown. What is the value of x?
09 Aug 2017, 08:44
angle adjacent to $$\angle x$$ is $$180 - x$$ ... (i)
External angle of a triangle = sum of the two opposite interior angles -->
$$142 = (180-x) + 64$$
solving for x, x = 102 (Choice C)
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3386
Location: India
GPA: 3.12
Re: The triangle in the figure above has two sides extended as shown. What is the value of x?
09 Aug 2017, 09:13
Since the sum of angles on a straight line is 180 degrees,
the angle inside the triangle adjacent to the 142-degree angle is 38 degrees.
Also, the third angle in the triangle is 180 - x.
(180 - x) + 38 + 64 = 180 (sum of angles in a triangle is 180 degrees)
38 + 64 = 180 - 180 + x
x = 38 + 64 = 102 (Option C)
Current Student
Joined: 12 Aug 2015
Posts: 2613
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: The triangle in the figure above has two sides extended as shown. What is the value of x?
09 Aug 2017, 15:30
Exterior angle = Sum of interior opposite angles.
Hence x = 64 + (180-142) => 64+38 => 102
Hence C.
|
2019-04-22 16:35:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6389330625534058, "perplexity": 4956.912250620841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00328.warc.gz"}
|
https://www.paperswithcode.com/paper/enhancing-cross-target-stance-detection-with
|
Enhancing Cross-target Stance Detection with Transferable Semantic-Emotion Knowledge
Stance detection is an important task, which aims to classify the attitude of an opinionated text towards a given target. Remarkable success has been achieved when sufficient labeled training data is available.
|
2020-11-28 05:25:26
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8445391058921814, "perplexity": 8614.312599487424}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195069.35/warc/CC-MAIN-20201128040731-20201128070731-00252.warc.gz"}
|
https://quantum-computing.ibm.com/composer/docs/iqx/build-circuits
|
# Build, edit, and inspect quantum circuits¶
## Overview¶
Follow these steps to build a quantum circuit:
1. Open IBM Quantum Composer.
2. Create your circuit: either drag-and-drop operations or enter OpenQASM code. Optionally, create a custom operation in OpenQASM.
3. Inspect your circuit's components using the Circuit Inspector.
## Open IBM Quantum Composer¶
1. (Optional) If you are not currently signed in to IBM Quantum, select Sign in in the upper right corner. Then, you can either sign in or Create an IBMid account.
If you don’t sign in, the visualizations automatically show simulated results for up to four qubits. If you want to run your circuit on a quantum system or simulator, or if you want to visualize a circuit that has more than four qubits, you must sign in.
2. Open IBM Quantum Composer by clicking the Application switcher on the upper left corner, then click Composer. The workspace displays an untitled empty circuit.
3. Name your circuit by clicking on the words Untitled circuit and typing in a name for your circuit. Click the checkmark to save the name.
• Use the View menu to change from the default theme to a monochrome theme. You can also select which panels to include on your workspace, then use the menu in the right corner of any panel to access options for further customization. The options to show or hide phase disks, choose the alignment of the qubits on your circuit, and reset the workspace to the default are in the View menu as well.
• Open your account settings to switch between dark and light workspace themes.
• If you are signed in, the Files panel displays by default. You can close it by clicking the Files icon.
To build a circuit, you can either drag-and-drop operations, or you can enter OpenQASM code into the code editor. Your circuit is automatically saved every time you make a change.
## Build your circuit with drag-and-drop¶
### Operations catalog¶
Drag and drop operations from the operations catalog onto the quantum and classical registers. Start typing in the search window to quickly find an operation.
Collapse and expand the operations catalog by clicking the icon in the upper right corner of the operations panel. You can also switch between an expanded or condensed view of the operations by using the icons next to the search window.
Right-click an operation icon and select Info to view the definition of an operation, along with its QASM reference.
To undo or redo, use the curved arrows in the toolbar.
### Alignment¶
Choose Freeform alignment to place operations anywhere on the circuit. For a more compact view of your circuit, choose Left alignment. To see the order in which operations will execute, choose Layers alignment, which will apply left alignment and add column delineators that indicate the execution order, from left to right and top to bottom.
Once operations are placed on your circuit, you can continue to drag and drop them to new positions.
### Copy and paste¶
Click an operation and use the icons in the contextual menu to copy and paste it.
### Select multiple operations¶
You can select several operations to copy and paste them, drag them to a new location, or group them into a custom unitary operation that displays in your operations catalog and functions as a single gate.
To select more than one operation, place your cursor just outside one of the operations, then click and drag across the area to select. Shift-click individual operations to select or deselect them. A dashed line outlines the set of operations you are selecting, and each operation that is actually part of the selection is outlined in blue.
For example, in the following image, the Hadamard gate on q1 and the CX gate are selected. The Hadamard gate on q0 is not selected.
Select Copy from the contextual menu to copy the group.
To paste the group of operations, right-click in the circuit and select Paste.
### Build a custom operation using the group feature¶
To group several operations together and save them as a custom operation, first select the operations as described above, then select Group from the contextual menu. You are prompted to name the custom operation or you can accept the default name. Click OK, and the custom operation will be represented by a single box, both in your circuit and in the operations catalog.
You can now drag and drop the new operation throughout your circuit. Note that the operation is saved to this circuit but does not appear in the operations catalog for other circuits.
You can also build a custom operation directly in the OpenQASM code editor; see Create a custom operation in OpenQASM for more information.
### Ungroup a custom or predefined operation¶
To ungroup the gates within a custom or predefined operation, click the operation on the Composer and select Ungroup from the contextual menu. You can now move the separate operations individually. When you ungroup an operation, each element in the former group executes independently, which may mean they execute in a different order from when they were grouped together.
### Expand an operation’s definition¶
To view the operations that constitute a custom or predefined operation without ungrouping them, click Expand definition from the contextual menu to see the defining gates. Click the icon again to collapse the definition.
### Rename or delete a custom operation¶
To rename or delete a custom operation, right-click the operation in the operations catalog and select Rename or Delete. Deleting a custom operation from the operations catalog also deletes any instances of it on your circuit; however, deleting a custom operation from the circuit does not delete it from the operations catalog.
To add or remove quantum or classical registers, click Edit → Manage registers. You can increase or decrease the number of qubits or bits in your circuit and rename the registers. Click Ok to apply the changes. You can also simply click the register name (e.g., q[0]) and use the options in the contextual menu to quickly add or delete registers or qubits.
To add a conditional to a gate, drag the if operation to the gate and set the parameters in the Edit operation panel that automatically opens. You can also double-click a gate to access the Edit operation panel, and set the conditional parameters that way.
A control modifier yields a gate whose original operation is now contingent on the state of the control qubit. For more details, right-click the control modifier symbol in the operations catalog, then click Info.
Drag the control modifier to a gate in order to add a control to it. A dot appears on the control qubit and a line connects it to the target qubit. To edit which qubit is the control or target, click the gate and select the Edit operation icon (or double-click the gate) to open the Edit operation panel, then specify your parameters. From the Edit operation panel, you can also remove a control from a qubit by clicking the x next to the qubit name.
### Visualize with phase disks throughout your circuit¶
To visualize the state of all qubits at any point in your circuit, drag the phase disk icon from the operations catalog and place it anywhere in your circuit. A column of barrier operations and a column of phase disks are added (one barrier operation and phase disk per qubit). Hover over each phase disk to read the state of the qubit at that point in the circuit. Note that adding the phase disks does not alter your circuit; they are merely a visualization tool.
### Export a circuit image¶
To export an image of your circuit, select File → Export. The Export options window opens, where you can choose a theme (light, dark, white on black, or black on white), a format (.svg or .png), and whether you want to apply a line wrap. After you’ve chosen your options, click Export.
### Conditional reset¶
Several hardware systems can now perform conditional reset. If you have access to these systems, either through the IBM Quantum Network or the Educators or Researchers programs, you can now initialize (or re-initialize) qubits in the |0⟩ state through measurement followed by a conditional X gate. Conditional reset will be rolled out to more systems, including open-access systems, over time.
## Build your circuit with OpenQASM code¶
Note
IBM Quantum Composer currently supports OpenQASM 2.0.
• To open the code editor, click View → Panels → Code Editor.
• See the Operations glossary for OpenQASM references to gates and other operations.
• You can define your own custom operations; see Create a custom operation in OpenQASM.
• For more information on using the OpenQASM language, including sample lines of code, see the original research paper, Open Quantum Assembly Language. The table of OpenQASM language statements from the paper is reproduced below. The OpenQASM grammar can be found in Appendix A of the paper.
OpenQASM language statements (version 2.0)
Statement | Description | Example
--- | --- | ---
OPENQASM 2.0; | Denotes a file in OpenQASM format [a] | OPENQASM 2.0;
qreg name[size]; | Declare a named register of qubits | qreg q[5];
creg name[size]; | Declare a named register of bits | creg c[5];
include "filename"; | Open and parse another source file | include "qelib1.inc";
gate name(params) qargs | Declare a unitary gate | (see text of paper)
opaque name(params) qargs; | Declare an opaque gate | (see text of paper)
// comment text | Comment a line of text | // oops!
U(theta,phi,lambda) qubit\|qreg; | Apply built-in single-qubit gate(s) [b] | U(pi/2,2*pi/3,0) q[0];
CX qubit\|qreg,qubit\|qreg; | Apply built-in CNOT gate(s) | CX q[0],q[1];
measure qubit\|qreg -> bit\|creg; | Make measurement(s) in the Z basis | measure q -> c;
reset qubit\|qreg; | Prepare qubit(s) in the \|0⟩ state | reset q[0];
gatename(params) qargs; | Apply a user-defined unitary gate | crz(pi/2) q[1],q[0];
if(creg==int) qop; | Conditionally apply a quantum operation | if(c==5) CX q[0],q[1];
barrier qargs; | Prevent transformations across this source line | barrier q[0],q[1];
[a] This must appear as the first non-comment line of the file.
[b] The parameters theta, phi, and lambda are given by parameter expressions; for more information, see page 5 of the paper and Appendix A.
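As an illustration, here is a complete OpenQASM 2.0 program that exercises several of these statements (a minimal sketch, not taken from the original page):

OPENQASM 2.0;
include "qelib1.inc";

qreg q[2];       // two qubits
creg c[2];       // two classical bits

h q[0];          // put q[0] into an equal superposition
CX q[0],q[1];    // entangle q[0] and q[1]
measure q -> c;  // measure both qubits into the classical register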
### Create a custom operation in OpenQASM¶
You can define new unitary operations in the code editor (see the figure below for an example). The operations are applied using the statement name(params) qargs; just like the built-in operations. The parentheses are optional if there are no parameters.
To define a custom operation, enter it in the OpenQASM code editor using this format: gatename(params) qargs;. If you click +Add in the operations list, you will be prompted to enter a name for your custom operation, which you can then build in the code editor.
Once you have defined your custom operation, drag it onto the graphical editor and use the edit icon to fine-tune the parameters.
(The accompanying figures show: a) the gates to be included in the custom operation; b) the code for the new operation; c) the new operation in the graphical editor.)
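A minimal sketch of such a definition (the gate name and body here are illustrative, not the ones from the original figures):

OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];

// hypothetical custom operation: a Hadamard followed by a CNOT
gate bellpair a, b {
  h a;
  cx a, b;
}

bellpair q[0],q[1];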
## Inspect your circuit with the Circuit Inspector¶
The Circuit Inspector demystifies the inner workings of the circuits you create. It steps through a simulation of your circuit, one layer at a time, so that you can see the state of the qubits as the computation evolves.
• In the View menu, select the panels for the visualizations that you want to use.
• Click the Inspect toggle in the toolbar. Note that once the Circuit Inspector is toggled on, you cannot add any more operations until it is turned off again.
• If you built your circuit with the Freeform alignment turned on, note that the Circuit Inspector automatically turns on the Left alignment.
• To move step-by-step through visualizations of your circuit’s components, control the movement of the Circuit Inspector using the forward and rewind buttons.
• To inspect only some operations, click the operations you want to be inspected, and a colored overlay appears over each that indicates they will be included when you run the Circuit Inspector. To unselect an operation, click on it again, and the overlay disappears.
|
2022-07-01 08:47:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20431743562221527, "perplexity": 2079.454818881597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00689.warc.gz"}
|
https://indico.fysik.su.se/event/2219/timetable/?view=standard_numbered
|
Conference on Frontiers in Quantum Gases Liquids and Solids
Europe/Stockholm
NORDITA
NORDITA
Roslagstullsbacken 23, 106 91 Stockholm, Sweden
Description
This conference is a part of the NORDITA program on "Quantum solids, liquids, and gases" and will focus on frontiers in physics of quantum solids, liquids and gases (defined in a broad sense). This conference is supported by NORDITA and the Swedish Research Council.
• Monday, August 9
• 1
Dr. Barnett, Ryan (Joint Quantum Institute): Quantum dynamics in ferromagnetic and antiferromagnetic condensates
TBA
• Tuesday, August 10
• 2
Prof. Dorsey, Alan (University of Florida): Low temperature properties of solid 4He: Supersolidity or quantum "metallurgy"?
A "supersolid" is a putative phase of matter possessing the distinguishing property of a solid--a nonzero shear modulus--together with Bose condensation. Numerous experiments over the last six years have yielded hints of supersolid behavior in solid 4^He, but the threads of these investigations have not produced a consistent interpretation. I'll briefly review some of the history of the subject, the recent experimental and theoretical work, and conclude with an overview of my own work on phenomenological modeling of defects in solid 4He.
• 3
Prof. Törmä, Päivi (Aalto University): Imbalanced Fermi gases: the FFLO state, polarons, and the Josephson effect
In this talk, I will discuss three topics. First, the FFLO phase in a one-dimensional optical lattice and a direct way to observe it as a narrowing of the hopping modulation spectrum. Second, I discuss our work on polaron-type physics in one dimension, where we can explain the results of exact simulations by a polaron ansatz in the weakly interacting limit and by a spinless-fermion solution given by the Bethe ansatz in the strongly interacting limit. This corresponds to the polaron-molecule crossover in three dimensions. Finally, I present a novel type of Josephson effect where the components of the Cooper pair feel a different potential (voltage), which is possible to realize in ultracold gases. We show that this leads to spin-asymmetric Josephson oscillations and provide an explanation of this intriguing phenomenon which also gives new information about the traditional Josephson effect.
• Wednesday, August 11
• 4
Dr. Parish, Meera (University of Cambridge): Polarons, molecules and trimers in polarized atomic Fermi gases
In this talk, I will consider an atomic Fermi gas in the limit of extreme spin imbalance, where one has a single spin-down impurity atom interacting attractively with a spin-up atomic Fermi gas. By constructing variational wave functions for polarons, molecules and trimers, I will explore the quantum phase transitions between each of these bound states as a function of mass ratio and interaction strength. I will show that Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) pairing is mostly superseded by the formation of a p-wave trimer, which can be viewed as a FFLO molecule that has bound an additional majority atom. When the impurity atom is sufficiently light, I find that these transitions lie outside the region of superfluid-normal phase separation in spin-imbalanced Fermi gases and should thus be observable in experiment, unlike the well-studied equal-mass case.
• 5
Prof. Zwierlein, Martin (Massachusetts Institute of Technology): TBA
TBA
• Thursday, August 12
• 6
Prof. Nikolic, Predrag (George Mason University): Unitarity in periodic potentials: a renormalization group analysis
We explore the universal properties of interacting fermionic lattice systems, mostly focusing on the development of pairing correlations from attractive interactions. Using renormalization group we identify a large number of fixed points and show that they correspond to resonant scattering in multiple channels. Pairing resonances in finite-density band insulators occur between quasiparticles and quasiholes living at different symmetry-related wavevectors in the Brillouin zone. This allows a BCS-BEC crossover interpretation of both Cooper and particle-hole pairing. We show that in two dimensions the run-away flows of relevant attractive interactions lead to charged-boson-dominated low energy dynamics in the insulating states, and superfluid transitions in bosonic mean-field or XY universality classes. Analogous phenomena in higher dimensions are restricted to the strong coupling limit, while at weak couplings the transition is in the pair-breaking BCS class. The models discussed here can be realized with ultra-cold gases of alkali atoms tuned to a broad Feshbach resonance in an optical lattice, enabling experimental studies of pairing correlations in insulators, especially in their universal regimes. In turn, these simple and tractable models capture the emergence of fluctuation-driven superconducting transitions in fermionic systems, which is of interest in the context of high temperature superconductors.
• 7
Prof. Todadri, Senthil (Massachusetts Institute of Technology): Quantum spin liquids and the Mott transition
TBA
• Dinner at Fågelängen
Dinner at Fågelängen (inside Albanova building, next to NORDITA)
• Friday, August 13
• 8
Prof. Ran, Ying (Boston College): TBA
TBA
• 9
Prof. Gurarie, Victor (University of Colorado at Boulder): SU(N) magnetism with cold atoms and chiral spin liquids
Certain cold atoms, namely the alkaline earth-like atoms whose electronic degrees of freedom are decoupled from their nuclear spin, can be thought of as quantum particles with an SU(N)-symmetric spin. These have recently been cooled to quantum degeneracy in the laboratories around the world. A new world of SU(N) physics has thus become accessible to experiment, including that described by the SU(N) Hubbard model in various dimensions as well as many others. We show that the Mott insulator of such cold atoms is a SU(N) symmetric antiferromagnet of the type not commonly studied in the literature. We further show that in 2 dimensions, this antiferromagnet is a chiral spin liquid, a long sought-after topological state of magnets, with fractional and non-Abelian excitations.
• Monday, August 16
• 10
Prof. Radzihovsky, Leo (University of Colorado): Fluctuations, stability, and phase transitions of Larkin-Ovchinnikov states: quantum liquid crystals
Motivated by polarized Feshbach-resonant atomic gases, I will discuss the nature of low-energy fluctuations in the putative Larkin-Ovchinnikov (LO) state. Because the underlying rotational and translational symmetries are broken spontaneously, this gapless superfluid is a quantum smectic liquid crystal, that exhibits fluctuations that are qualitatively stronger than in a conventional superfluid, thus requiring a fully nonlinear description of its Goldstone modes. Consequently, at nonzero temperature the LO superfluid is an algebraic phase even in 3d. It exhibits half-integer vortex-dislocation defects, whose unbinding leads to transitions to a superfluid nematic and other phases. In 2d at nonzero temperature, the LO state is always unstable to a charge-4 (paired Cooper-pairs) nematic superfluid. I expect this superfluid liquid-crystal phenomenology to be realizable in imbalanced resonant Fermi gases trapped isotropically.
• 11
Dr. Chung, Suk Bum (Stanford University): Half-quantum vortices in p_x + ip_y superconductors
Half-quantum vortices, each with flux of h/4e, are needed to realize topological quantum computation in a p+ip superconductor. However, until recently, there had not been any clear experimental observation of such vortices. We point out that, although the magnetic energy is reduced by breaking full vortices into half-quantum vortices, there is an energy cost (which diverges with system size) due to the unscreened spin current and the spin state locking. The recent observation of half-quantum vortices by the Budakian group can be best explained by the fact that the magnetic energy savings can dominate over the spin energy cost in a mesoscopic setting. A finite-density vortex lattice may have similar energetics, leading to a lattice of half-quantum vortices. Lastly, we show that there can be entropy-driven dissociation of a full vortex into two half-quantum vortices.
• 12
Mr. Mross, David (Massachusetts Institute of Technology):TBA
TBA
• Tuesday, August 17
• 13
Prof. Raghu Srinivas (Rice University/Stanford): Superconductivity in the repulsive Hubbard model: an asymptotically exact weak coupling solution
We study the phase diagram of the Hubbard model in the limit where U, the onsite repulsive interaction, is much smaller than the bandwidth. We present an asymptotically exact expression for $T_c$, the superconducting transition temperature, in terms of the correlation functions of the non-interacting system which is valid for arbitrary densities so long as the interactions are sufficiently small. Our strategy for computing $T_c$ involves first integrating out all degrees of freedom having energy higher than an unphysical initial cutoff $\Omega_0$. Then, the renormalization group (RG) flows of the resulting effective action are computed and $T_c$ is obtained by determining the scale below which the RG flows in the Cooper channel diverge. We prove that $T_c$ is independent of $\Omega_0$.
• 14
Prof. Zlatko Tesanovic (Johns Hopkins University, Bloomberg Center): Recent Developments in High-Temperature Superconductivity: Pnictides versus Cuprates
Two years ago, the discovery of high-temperature superconductivity in iron-pnictides reshaped the landscape of condensed matter physics. Until that time, for more than two decades, the copper-oxide materials were the only game in town and their mysterious properties loomed large as perhaps the greatest intellectual challenge in our field. Cuprates are strongly interacting systems, near to the so-called Mott insulating limit, in which electrons are made motionless by strong correlations, and it is currently believed that much of their unusual behavior stems from such correlations. In contrast, the newly discovered iron-based high-temperature superconductors exhibit a more moderate degree of correlations and do not appear to be near the Mott limit. Consequently, some of their properties might be easier to understand. In this talk, the basic ideas in theory of iron- pnictides will be introduced and illustrated with experimentally-relevant examples. Particular attention will be paid to the interband resonant-pairing mechanism of multiband superconductivity and the renormalization group description of the underlying physics. This will be contrasted with strongly correlated cuprates, where a thousand fancy theoretical ideas bloom, from quantum fluctuations to Berry phases, from gauge field theory to AdS/CMT duality. But we will never lose touch with reality and promise to keep a watchful eye on recent and sometimes conflicting experiments.
• 15
Kjäll, Jonas (UC Berkeley): Bound states with E8 symmetry in quantum Ising-like chains
In a recent experiment on CoNb2O6, Coldea et al. found for the first time experimental evidence of the exceptional Lie algebra E8. The emergence of this symmetry was theoretically predicted long ago for the transverse quantum Ising chain in the presence of a weak longitudinal field. We consider an accurate microscopic model of CoNb2O6 and calculate numerically the dynamical structure function using a recently developed matrix-product-state based method. The excitation spectra contain bound states which are characteristic of the E8 symmetry. We furthermore compare the observed bound states to the ones found in the transverse Ising chain in a longitudinal field.
• Wednesday, August 18
• 16
Prof. Bruun, Georg (University of Aarhus): RF spectroscopy, polarons, and dipolar interactions in cold gases
TBA
• 17
Prof. Ueda, Masahito (University of Tokyo): Topological excitations in Bose-Einstein Condensation
TBA
• Thursday, August 19
• 18
Prof. Svistunov, Boris (University of Massachusetts): Superfluid turbulence
TBA
• Dinner at Fågelängen
Dinner in the restaurant inside Albanova (the building next to Nordita)
• Friday, August 20
• 19
Prof. Shlyapnikov, Georgy (LPTMS, Universite Paris Sud): New phases of fermionic dipolar gases
TBA
|
2022-08-09 13:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774508118629456, "perplexity": 2599.6732748312334}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00758.warc.gz"}
|
http://mathhelpforum.com/differential-equations/84644-help-solving-problem.html
|
# Thread: help solving this problem
1. ## help solving this problem
hi, i need help solving this problem, i have no clue how to do it. Thank you
y(1+2xy)dx+x(1-2xy)dy=0
2. Originally Posted by uga75
hi, i need help solving this problem, i have no clue how to do it. Thank you
y(1+2xy)dx+x(1-2xy)dy=0
Try $u = xy$. Your equation should separate.
3. Originally Posted by uga75
hi, i need help solving this problem, i have no clue how to do it. Thank you
y(1+2xy)dx+x(1-2xy)dy=0
Multiply the equation by $\frac{1}{4x^2y^2}$ and it will become exact.
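For the record, here is how the hint $u = xy$ plays out (a sketch, with the integration constant absorbed into $C$): since $du = y\,dx + x\,dy$, the equation $y(1+2xy)dx + x(1-2xy)dy = 0$ regroups as

$$du + 2u\,(y\,dx - x\,dy) = 0,$$

and using $y\,dx - x\,dy = 2y\,dx - du$ with $y = u/x$ gives the separated form

$$(1-2u)\,du + \frac{4u^2}{x}\,dx = 0 \quad\Longrightarrow\quad \int \frac{1-2u}{4u^2}\,du = -\int \frac{dx}{x} \quad\Longrightarrow\quad \frac{1}{4xy} + \frac{1}{2}\ln\frac{y}{x} = C.$$

The same implicit solution follows from the integrating factor $\frac{1}{4x^2y^2}$ mentioned in the other reply, as one can check by differentiating implicitly.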
|
2016-08-27 12:40:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6696304082870483, "perplexity": 421.30059145984046}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982905736.38/warc/CC-MAIN-20160823200825-00045-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://christopherolah.wordpress.com/tag/topology/
|
## Posts Tagged ‘topology’
### Rethinking Topology (or a Personal Topologodicy)
April 18, 2011
(This document was typeset in unicode. This may cause problems for some people. A PDF is available as an alternative for them.)
When I was originally introduced to topology, I simply accepted most of its properties as generalizations of ℝⁿ. I didn't give it any serious thought until about a month ago when I read an excellent thread on math overflow about it. Since then, it's been one of the things I often find myself thinking about when I'm trying to fall asleep. Given the amount of thought I've put into it, and the fact that I feel I should be able to answer questions like this about topology, given that it's one of the areas of math I spend a lot of time on, I thought I'd write up my thoughts. They lent themselves well to being written in the form of an introduction to topology, so that's what I did.
(After finishing this essay I decided to reread the MO thread. The first comment — not answer, a comment — mentions the Kuratowski closure axioms, and 'closure axioms' sounded like what one might call what I came up with. Sure enough, they're the exact same, down to the ordering. Are all attempts to make mathematical contributions this frustrating? I'm posting this because of the amount of work I put in, but there's nothing new here.)
Consider 1 with respect to [0,1). It isn't part of the set, but in a sort of intuitive sense it almost is. And knowing which points are 'almost in' a set gives us lots of information, for example notions of boundaries and connectedness. Topology is based on us formalizing this notion of 'almost in', and once we formalize it, we can consider non-standard notions of being 'almost in' or apply these ideas to spaces that we don't typically associate them with.
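For reference, the Kuratowski closure axioms mentioned above (the standard axiomatization of a topology via a closure operator $\mathrm{cl}$; the standard statement, not quoted from the essay):

$$\mathrm{cl}(\varnothing) = \varnothing, \qquad A \subseteq \mathrm{cl}(A), \qquad \mathrm{cl}(A \cup B) = \mathrm{cl}(A) \cup \mathrm{cl}(B), \qquad \mathrm{cl}(\mathrm{cl}(A)) = \mathrm{cl}(A).$$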
### Separation Axiom Visualisations
August 14, 2010
A couple of days ago, I saw some nice visualisations of separation axioms on Wikipedia. Unfortunately, it wasn't a full set. Well, here is a full set (well, T0, T1, T2, T2 1/2, T3, T4, T5):
### The Mandelbrot Set: Compact?
March 28, 2010
Several weeks ago, I read something on Wikipedia that shocked me: “The Mandelbrot set is a compact set.”
At first I didn't believe it. How could the Mandelbrot set, in its infinite complexity, be compact?
### Compactness Graph
March 9, 2010
Here’s the first revision of a graph of the implications of topological properties that I made:
(Click on it to see a better version!)
It’s mostly based off the stuff in Counterexamples In Topology (great book, BTW) but I did add some stuff (like Baire!) and merged/reorganised it. Diagram was made by Graphviz.
Most of the implications are trivial, but there are a few I haven’t prooved yet (most of the ones involving seperation axioms).
### Limits and the Infinitesimal Number
January 4, 2010
I’ve been thinking about the infinitesimal number, $\delta = \frac{1}{\infty}$, recently. In particular, that one could use it to evaluate limits.
What is a limit, really? I’ve been reading some topology recently and I think that it really is a function that returns an accumulation point (hint: these are alternatively known as limit points). More specifically, I believe that $\lim_{x\to a} f(x)$ is an attempt to find a value $y$ such that $(a,y)$ is a limit point of the graph of the the function $f$.
But I’ve digressed since the simpler, “It’s the value as we approach the point” is perhaps more useful to us…
Consider $\lim_{x\to a} f(x)$. How is this defferent from $f(x\pm\delta)$? The difference is that we’re looking for the hypothetical value that the function is becoming (also the value of the point which any open set containing it intersected with the graph is not null), not its value when it is infinitly close. Consider $\lim_{x\to 2} x$: the difference is $2$ versus $2\pm \delta$. So, we need to get rid of the infintesimal difference. Let $\mathbb{R}(x)$ represent the rounding of anumber $x$ to the nearest real number. Then,
$\lim_{x\to a} f(x) = \mathbb{R}\cdot f(x\pm\delta)$
Does this have any applications? I believe it may provide a more elegant way to present Calculus.
Why can't I take the first-principles definition of a derivative and:
$\frac{dy}{\rlap{---}dx} = \lim _{h\to 0} \frac{f(x+h) -f(x)}{\rlap{--}h}$
? Because the definition of a limit would give us zero, that being the closest real number. But instead, we could just say that a value is close to delta and use that. Not using $h$, because I see no reason to use a random symbol when there is a logical one.
$dy = f(x+dx) -f(x)|_{dx\simeq\delta}$
And we have a differential form, let’s add vectors (for more dimensions):
$dy = f(\vec{x}+\vec{dx}) -f(\vec{x})|_{|\vec{dx}|\simeq\delta}$
and for yet more clarity:
$dy = y(\vec{x}+\vec{dx}) -y(\vec{x})|_{|\vec{dx}|\simeq\delta}$
which is far less cumbersome than
$\lim_{\vec{dx}\to \vec{0}} \frac{f(\vec{x}+\vec{dx}) -f(\vec{x})}{|\vec{dx}|}$
And it has almost identical properties (I suppose that since it is $+\delta$ it will be forward-facing, e.g., $d|x|(0) = \delta$ instead of undefined).
Just some random thoughts.
|
2013-05-24 02:46:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7529253363609314, "perplexity": 541.8814877118268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132729/warc/CC-MAIN-20130516113532-00088-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/270684/mean-value-theorem-limit-question
|
# Mean Value Theorem limit question
Suppose that $f(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Then, by the mean value theorem, $\forall x\in(a,b]$ $\exists c\in(a,x)$ such that $f '(c)=(f(x)-f(a))/(x-a)$.
It seems to me that, since $c\in(a,x)$, as $x$ goes to $a$ from the right, $c$ must also "go to" $x$ and thus $f'(c)$ goes to $f'(x)$. Is this correct? If not, where have I gone wrong?
The point c depends on x, and is different for each value of x. – Calvin Lin Jan 5 '13 at 2:42
Yes, $c$ depends on $x$ and can vary as $x$ varies. Indeed, this fact is what I was using to try to "trap" $c$. As $x$ gets very close to $a$, $c$ must also be getting very close to $x$. This is what I was relying on (rightly or wrongly) in my argument. – jim Jan 5 '13 at 3:03
Well, I think you're right if we assume say that the derivative is continuous. Applying L'Hospital we'd get
$$\lim_{x\to a^+}\frac{f(x)-f(a)}{x-a}=\lim_{x\to a^+}f'(x)=f'(a)=\lim_{c\to a^+}f'(c)$$
Ok, thanks. However, we don't know that $f'(a)$ exists. – jim Jan 5 '13 at 2:57
@Don I would agree with your equation if there was no $f'(a)$ in there. In fact, that is all we can say, since $f'(a)$ might not exist, if the limit tends to infinity, like in $f(x) = \sqrt{x}$. – Calvin Lin Jan 5 '13 at 3:14
Indeed so, @CalvinLin...something's still missing here, and I think that as the question's given we can't reach a final conclusion: if $\,f'(a)\,$ exists and $\,f'(x)\,$ is continuous we can do as shown above, yet $\,f(x)=\sqrt x\,\,,\,\,a=0\,$ is a nice example that the above can miserably fail, as $\,(\sqrt x)'\,$ doesn't exist at $\,a=0\,$ – DonAntonio Jan 5 '13 at 10:55
Two issues here. First, what does it mean that $c$ "goes to" $x$? A limit is a number, not a variable. And $x$ is a variable. It would be correct to say that $c$ goes to $a$.
If $f'(a)$ exists (which you did not assume), then using your argument we can conclude the following: there are numbers $c$ arbitrarily close to $a$ for which $f'(c)$ is arbitrarily close to $f'(a)$. This can be put more precisely in the language of sequences: there exists a sequence $c_n\to a$ such that $f'(c_n)\to f'(a)$.
However, even then it does not follow that $\lim_{c\to a} f'(c)=f'(a)$. The definition of limit requires us to investigate all values of $c$ that are close to $a$. We looked only at those $c$ which the Mean Value Theorem threw at us.
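A standard counterexample (not from the original thread) makes this concrete: take

$$f(x)=\begin{cases} x^2\sin(1/x), & x\neq 0,\\ 0, & x=0,\end{cases} \qquad f'(0)=\lim_{h\to 0} h\sin(1/h)=0,$$

yet $f'(x)=2x\sin(1/x)-\cos(1/x)$ has no limit as $x\to 0$; the Mean Value Theorem only hands us a sequence of points where $f'$ happens to be close to $f'(0)$.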
|
2014-10-31 14:35:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523347616195679, "perplexity": 158.6075203680867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900019.55/warc/CC-MAIN-20141030025820-00049-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://cms.math.ca/cjm/msc/58E50?fromjnl=cjm&jnl=CJM
|
Morse index of approximating periodic solutions for the billiard problem. Application to existence results
This paper deals with periodic solutions for the billiard problem in a bounded open set of $\mathbb{R}^N$ which are limits of regular solutions of Lagrangian systems with a potential well. We give a precise link between the Morse index of approximate solutions (regarded as critical points of Lagrangian functionals) and the properties of the bounce trajectory to which they converge.
Categories: 34C25, 58E50
|
2014-03-08 00:38:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7594060897827148, "perplexity": 334.64297184645824}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651907/warc/CC-MAIN-20140305060731-00044-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://motls.blogspot.com/2019/04/dimons-capitalism-vs-aocs-socialism.html
|
## Saturday, April 06, 2019 ... //
### Dimon's capitalism vs AOC's socialism
Many of us feel that the civilization is falling into the gutter.
Pillars of the society and nation states are being systematically attacked by numerous folks. Those of us who have been asking "why did the Roman Empire decline" see an answer in the ongoing repetition of the process. Too many people simply lose any attachment to everything that is good about the society and deliberately start to promote changes that are terrifying and destructive. In the absence of truly formidable competitors, great civilizations collapse simply because the people inside want that collapse and those who don't lose their power to prevent it.
One of the aspects of the anti-civilization movement is the increasingly widespread criticism of capitalism itself – the freedom of entrepreneurship. The young generation is increasingly absorbing pathological opinions about a great fraction of the political and societal questions. The opposition to capitalism is an example. In 2018, less than one-half of Americans between 18 and 29 years of age said they had a positive relationship to capitalism – a drop of 12 percentage points in a few years. Given these numbers, is capitalism sustainable at all?
Three days ago, these challenges were discussed by the dean of the Harvard Business School. The obvious question is whether this anti-capitalist delusion is also widespread among the HBS students. I think it is and I think it is a systemic failure. A person who can't understand why capitalism is economically superior over socialism just shouldn't be allowed in the HBS buildings – at most like a janitor. The very name indicates that the school exists to nurture business, not to decimate it. Business is a defining activity of capitalism – in socialism, we weren't quite allowed to even say "business". The understanding of the creative power of capitalism is a matter of apolitical expertise (or rudimentary knowledge), not a political issue where you should look for both sides of a "story". The story may have two sides but one side is right and the other side is wrong.
Aside from various left-wing activists, even lots of people whose positions make them the "de facto leaders of the American capitalism" have voiced not very positive things about capitalism itself. Ray Dalio, the chief of the largest hedge fund, said that "capitalism had to be reformed". Another guy, his fellow Bitcoin critic Warren Buffett, says that capitalism is still good for America. But he can't avoid all the comments that the taxation should be huge and hugely progressive and that the rich people should disperse their wealth before they die, too.
OK, the best grade may be given by a third Bitcoin critic, the boss of the largest U.S. bank JP Morgan, Jamie Dimon. In his 50-page-long annual letter to investors (full PDF), he touched lots of issues, including artificial intelligence, the essential role of stocks buybacks in capitalism, and the possibly rosy years ahead for value investors (instead of investors into the growth stocks where some reticence may be expected because of the 2008-2009 experience).
But his most catchy points were the – rare – defense of capitalism against socialism. Socialism "inevitably produces stagnation, corruption and often worse...". Exactly. He clarifies the latter: "..such as authoritarian government officials who often have an increasing ability to interfere with both the economy and individual lives – which they frequently do to maintain power. This would be as much a disaster for our country as it has been in the other places it's been tried."
Socialism just unavoidably gives too much power to people who aren't good at reproducing the well-being, who aren't motivated to do so, and who have a high enough probability to be malicious and to abuse their power. Why are they doing it? Because yes, they can. And they need to, to keep what they have accumulated (power etc.). Everything that can happen and isn't banned or discouraged by some actual mechanisms will happen, Gell-Mann's totalitarian principle says (as a rule for physics – but the whole world is a physical system). Capitalism actually discourages such bad things because it gives the – economic – power mostly to those who have done something that has been considered useful by others because they paid for it. When they cease to do useful things for others, they are running out of money, too, exactly how it should be. And it's a huge difference.
The banker keeps his "mandatory" pro-social comments reasonably contained: "I am not an advocate for unregulated, unvarnished, free-for-all capitalism. (Few people I know are.) But we shouldn't forget that true freedom and free enterprise (capitalism) are, at some point, inexorably linked." Well, I might be among the "few people". Or not. It depends. But indeed, free enterprise – a "civil right" freedom that makes capitalism unavoidable – is just another part of the freedoms and human rights and they can't really be sharply separated because you can't ever rigorously separate "financial" from "non-financial" freedoms.
Dimon also reminds everybody that the U.S. wouldn't be any exception if it tried socialism. This point is often contradicted by some kind of nationalist superstitions. Americans are exceptional so surely America would remain the most glorious land even if it switched to socialism, wouldn't it? Well, it almost certainly wouldn't. America isn't exceptional enough to beat the basic laws of mathematics – and the reasons why socialism harms the growth and the well-being of societies are basically mathematical theorems.
Germans were also skillful but you could still observe the different fates of East Germans and West Germans – which started as two completely comparable parts of a single nation, parts that were defeated by different "allies". Of course, after some 40 years, the ratio of GDP per capita could have been 10-to-1 in some meaningful measures. When this factor is translated to an annual rate, you may want to know that $10^{1/40} \approx 1.059$. It looks like capitalism is producing a whopping 6% of extra growth per year. Like me, you surely find this number implausible because the capitalist Germany doesn't even seem to grow at a 6% rate. At any rate, the degree of stagnation and corruption that you get along with socialism is enough to beat basically "all of" the growth. For a country, to suffer from socialism may be a fatal problem.
A West German and East German car, 1989 – perhaps the most widespread car models in each country. Can you tell the difference between the Mercedes and the Trabant? After 40 years of divergence, these products were made by people whose DNA was almost identical. Socialism ruins any economy and there are no racial exceptions!
And we're bombarded by advertisements promoting people who are completely ignorant about the totally basic issues deciding about the happiness and prosperity of the nations – like AOC, a Congresswoman who has nontrivial chances to become a presidential candidate soon enough because lots of the activists in the media etc. just find it "cool". So she is also signed under a "Green New Deal", a more devastating plan for the future of America than the – hopefully abolished – North Korean plans to nuclear carpet bomb the U.S. territory. Along with the ban on airplanes and reliable power plants, she wants to add some massive communist machinery as a "cherry on a pie" extending her environmentalist plans. So she impresses the folks by saying "we will have the wind turbines everywhere" so you may only watch the TV if the wind blows. And by the way, just a detail, to make the life easier for the turbines, we will copy most of the Soviet bureaucracy, too.
There are lots of contradictions in her statements. For example, she says that in 75 years, her grandkids will do something – despite the fact that the world will end in 12 years and she doesn't want to have any kids, as she announced elsewhere.
Two days ago, she broadcast a one-hour-long monologue from her apartment in D.C. to her 3 million Instagram followers – some 8,000 watched live. Search for AOC popcorn at various servers (see John's favorite parody). OK, she's been a Congresswoman for half a year, collecting at least $174,000 as her salary. But we learn from the young bartender – who is serving wine to herself, eating popcorn, and speaking with a full mouth – that she still sleeps on a mattress and doesn't have a chair or a bed. She just constructs a chair in front of the webcamera – which is claimed to be "Ikea" but it seems to be "Made in China". She doesn't use any nuts and bolts. Despite the $174,000 salary (and some staff, I guess), she doesn't have a bed that could cost some $50 – this cheap one would surely be more than enough for her who clearly has no trouble with much less luxurious furniture, as we see. I obviously relate to her approach to some extent. While moving from New Jersey to Greater Boston around my 9/11/2001 PhD defense, I also slept on a mattress for two weeks on both places. Well, not exactly six months but good enough. Lots of ordinary people may surely see her as being "just like them" which seems helpful to her politics. But should it? At the end, I have never been proud about non-representative apartments etc. And I think that no one should be proud. From any meritocratic viewpoint, "depraved" (her word) living conditions are signs of a lack of management. Those considerations are particularly relevant for a politician. You know, if she cannot reasonably quickly allocate a $50 from her annual $174,000 salary, it means that despite the $4 trillion U.S. federal budget, expenses that are as small or as large as $1 billion may also fail to work for half a year. She was born to a Puerto Rican family in Bronx, in 1989. Assuming a woman with a similar background, there is absolutely nothing remarkable about her popcorn-and-wine diet or the missing furniture. She's probably more civilized than the average Puerto Rican woman from Bronx. But what is remarkable is the suggestion that "it is cool and desirable" for folks like that to run for the highest offices in the U.S. I am just amazed how many people fail to realize the sheer insanity of the situation when folks like the AOC, who just can't get a $20 chair or a $50 bed for a very long time, despite the massive funding, are encouraged to design plans like the Green New Deal that would cost some $93 trillion, a group estimated. The total cost of the Green New Deal is some 1 trillion (1,000,000,000,000) times greater than the "exercise involving the bed" that is still well beyond her abilities.
If you're given $93 trillion to play with, you're very likely to waste most of the amount, aren't you? Well, she surely is, the furniture is one proof. A system that works well – like the U.S. economy – is rather fancy, delicately fine-tuned, and lots of people and companies must do "almost exactly" what they do for them not to go out of business and for billions not to be thrown away. The "right behavior" depends on lots of expertise, careful reactions, and more. Now, AOC is illiterate as a manager, financier, scientist, and everything else. But for some reason, we're being told by the media that it's a good idea if such a lady is encouraged to mastermind a plan what to do with some$93 trillion. And it could be more money because someone who becomes the U.S. president could increase the taxation and the size of her projects – she has explicitly expressed that desire. And she is almost certainly eager to micro-manage everything much more than other politicians do – although she's clearly incompetent at managing even the simplest things.
In some "potentially dangerous" political climate, there is still some balance between the ideologically driven and fashionable "cool" things on one side; and some common sense or meritocracy on the other side. The hype and ideology were clearly getting stronger for decades. But I think that we have reached the point that at least the world as sketched and planned by the "mainstream" media doesn't even have a tiny trace of the common sense and meritocracy anymore. A Congresswoman who fails a $50 furniture test is encouraged to think about playing analogous things with$93 trillion – construct a plan that is an incoherent mixture of some manipulated science about the climate, superstitions about the society and the money, fabricated grievances and conspiracy theories, and more. If the expansion of a budget in the wake of a failure by a factor of one trillion isn't agreed to be insane, then nothing will be agreed to be insane.
Under the popcorn videos on YouTube, most people write similar things like I do. But then you also encounter a substantial portion of the commenters who are her fans, who don't seem to think about the economical and social consequences (and logic) of any of these things, and who like that she's similar to them or they want to date her, aside from other mundane considerations. These people have the same vote in the elections as the people with the brain do. The brainless side may easily win the elections – they have already won numerous such elections. The windows to the AOC universe are wide open. Lots of people have been indoctrinated to support any "cause" that is pushed by the "mainstream media", and on top of that, lots of the "apolitical folks" may vote for such candidates because they find a chaotically speaking lady with wine and a full mouth of popcorn sexy.
So be sure that the massive expansion of her power is not universally agreed to be insane. In fact, people who just point out the obvious fact that AOC is incompetent for all issues like that and her plans are insane are being attacked by her MSM attack dogs. Their self-evidently correct criticism is "politically incorrect" because she belongs into the intersection of several privileged groups. In effect, the likes of AOC – while having a modest content of the brain – end up being politically stronger than the likes of Jamie Dimon – who knows quite a lot and has done quite a lot of successful things. Because identity politics is so powerful in raising some people above others and the "problematic" minorities are the privileged ones, we really live in an anti-meritocratic regime in which the people who are professionally worse get more influential in average and systematically.
This world is a very dangerous one. Many parts still work well because many companies, activities etc. that make it work aren't really being attacked. Jamie Dimon still has a lot of freedom to make his delicate financial decisions on behalf of JP Morgan, for example. They operate in a feasible way that isn't far from being the only feasible one. But people who have absolutely no respect towards the fine mechanisms that have been found by centuries (and billions of years) of evolution are waiting to make their revolution almost complete and destroy almost everything that we considered dear about our world. JP Morgan may face man-made existential problems, too. Why do they want to do it? Because yes, they can.
Meanwhile, even in the Democratic Party there are people who know that capitalism is obviously needed for prosperity. House Democrat Stephanie Murphy (Florida, Vietnamese American) declared herself a proud capitalist who is offended by the very existence of this conversation about socialism, a system that U.S. troops have spent blood to defeat in other countries. But such declarations are amplified or deamplified by the media and most of the media people belong among the socialism fans, not among the people with a brain.
|
2020-07-12 17:23:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24116121232509613, "perplexity": 2439.1075776373573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138752.92/warc/CC-MAIN-20200712144738-20200712174738-00088.warc.gz"}
|
https://www.simscale.com/knowledge-base/basic-geometry-requirements-for-pedestrian-wind-comfort/
|
# Basic Geometry Requirements for Pedestrian Wind Comfort
Pedestrian wind comfort analysis in SimScale is based upon the Lattice Boltzmann solver by Pacefish®$$^1$$ and therefore has few geometry requirements in terms of quality; generally speaking, open, overlapping, and coinciding geometry is acceptable. However, it is also highly automated, streamlined, and tuned specifically for wind comfort, based upon validation projects such as the AIJ cases, industry best practices in both CFD and wind tunnel methodology, and guidelines and standards. Therefore, to get the most out of the solver, a few considerations need to be given to the tested geometry.
For an overview of all CAD related topics in the context of a PWC simulation, the reader can refer to our documentation here.
## Orientation
SimScale’s PWC analysis type requires some basic orientation rules which most tools comply with by default. However, if you are creating geometry from scratch in software that is often used for 3D modeling rather than building or environment orientated it is important to consider these.
The geometry should have the Z+ direction as the up orientation, i.e. Z-axis points to the sky. Also, although not strictly required, the PWC analysis type uses the Y-axis by default to orientate to North, this can be corrected in the platform, but usually, geometries are also created with this in mind.
## Sizing and Regions
Geometry should be uploaded in real physical sizing, i.e. scaled-down models should not be uploaded. The default unit is meters, and therefore, when and if exporting STL geometry, you should ensure that this unit is used in the model beforehand.
You can also select units when uploading STL’s to the SimScale platform. This can be done by matching the unit in the CAD and the SimScale platform, but care should be taken to get this right.
Cell sizing is currently done using reference sizing based upon the building of interest and the maximum building height. Therefore, it is recommended that a region of interest of no more than 400 $$m$$ is used, to approach the target resolution of 0.5-1 $$m$$ when meshing. Click below to learn more about mesh settings in PWC analysis.
The region of interest should include, in most use cases, the newly proposed building or buildings and the region in which their presence or design might affect the pedestrian’s comfort. The area of interest should include your main buildings of interest and surrounding buildings and can be detailed if you need it.
The surrounding region is likely to affect the aerodynamics significantly in the region of interest, so it is advised that the surrounding buildings are present; they do not need any detail and can be block representations. Beyond that, the buildings and obstacles will be modeled using a standard wind profile, correcting the exposure to the terrain (ranging from open terrain to the city center).
The PWC analysis type is capable of dealing with geometry intersecting the bounding box; any geometry outside the bounds of the box representing the fluid domain will simply not be included in the solution.
City of London Guidelines
There may be specific restrictions regarding the geometry in use for wind comfort analysis based on the location. Here you can find a set of standard guidelines for the geometry and the mesh when analyzing pedestrian wind comfort in the City of London.
## Centering the Model at Origin
When uploading CAD models to the SimScale platform for the purpose of running a PWC simulation, it is recommended to have these models in an STL (.stl) type format. For more information on how to export STL formats from Rhino, kindly refer to the following documentation:
Having mentioned that, usually when dealing with 3D city models there is a very high chance that the components of the model are represented by geo-referenced data. This means, when exporting the model as an STL, the base coordinates (which are based on the geo-referenced data) would be quite large. Generally, we would like to avoid geo-referenced models for the following reason:
“Rhino scales the resolution of the STL export to the size of the model versus the origin.”
This means that Rhino export would break the model and create coarse 3D geometry. To better understand this let’s take a look at the following example, where a 3D model from an area in the city of Bristol is exported using Rhino at two different locations of the center of the model.
In Figure 2, the image on the left shows the STL model exported with its center at (0,0,0), while the other image shows the STL model exported with center coordinates at (5000 $$km$$, 5000 $$km$$, 0). It can be clearly seen that the model with coordinates very far from the origin is represented much more coarsely, which reduces the overall quality of the model for the reason explained above.
Therefore, it is recommended to ensure that the models are centered at the origin. This would also ease the overall interaction with your model when setting up the simulation parameters.
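As an illustration, re-centering an STL before upload can be scripted; the sketch below uses the numpy-stl package and a placeholder file name, and is not part of the SimScale platform itself:

```python
import numpy as np
from stl import mesh  # from the numpy-stl package

city = mesh.Mesh.from_file("city_model.stl")  # placeholder file name

# Triangle vertices live in city.vectors with shape (n_triangles, 3, 3).
# Shift x and y so the footprint is centered at the origin; keep z (height).
xy_center = city.vectors[:, :, :2].reshape(-1, 2).mean(axis=0)
city.vectors[:, :, :2] -= xy_center

city.save("city_model_centered.stl")
```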
## Geometry and Automatic Mesh Sizing Insights
This part will mainly address the question of how the highest structure in the city model affects the overall mesh size. In a PWC simulation, the wind tunnel dimensions ($$H, S, I, O$$) are directly proportional to the maximum height of the model; see Figure 3.
Therefore, it is recommended to analyze the model and identify structures that have no considerable effect on the wind flow pattern. Removing such structures reduces the computational domain size and in turn saves time and cost. For example, a moderate wind tunnel size would be scaled with the following proportions:
$$H, S, I = 3h$$
$$O = 9h$$
An example, based on a case from one of our customers, is the presence of a flag inside the 3D model. The flag itself serves no purpose and has a negligible effect on the wind flow pattern due to its small dimensions; the only effect it brings is an increase in the size of the computational domain due to a larger $$h$$ dimension. A rough estimate of this effect is sketched below.
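A back-of-the-envelope sketch of how the tallest structure drives the domain size, using the moderate proportions quoted above (how the region of interest enters the totals is our simplification, not the platform's exact sizing logic):

```python
def wind_tunnel_size(h_max, region_of_interest=400.0):
    """Estimate virtual wind tunnel dimensions (in metres) from the tallest structure."""
    H = S = I = 3.0 * h_max  # height, side, and inlet distances
    O = 9.0 * h_max          # outlet distance
    length = I + region_of_interest + O
    width = region_of_interest + 2.0 * S
    return length, width, H

# A 30 m flagpole on top of a 60 m building inflates every dimension:
print(wind_tunnel_size(90.0))  # with the flag: (1480.0, 940.0, 270.0)
print(wind_tunnel_size(60.0))  # flag removed:  (1120.0, 760.0, 180.0)
```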
Important
Please be aware that a large building, even if not in the region of interest, is relevant and important to keep, as it influences the wind flow pattern in its wake.
Our Knowledge Base articles have a great collection of articles associated with CAD issues in PWC analysis. Explore them now!
References
Last updated: December 31st, 2021
|
2022-10-04 16:09:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3776525557041168, "perplexity": 952.4301777856178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00104.warc.gz"}
|
https://docs.chainer.org/en/latest/reference/generated/chainer.datasets.get_kuzushiji_mnist_labels.html
|
# chainer.datasets.get_kuzushiji_mnist_labels¶
chainer.datasets.get_kuzushiji_mnist_labels()[source]
Provides a list of labels for the Kuzushiji-MNIST dataset.
Returns
List of labels in the form of tuples. Each tuple contains the character name in romaji as a string value and the unicode codepoint for the character.
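A minimal usage sketch (the printed layout is ours; the tuple contents follow the description above):

```python
import chainer

# Each entry is a (romaji_name, unicode_codepoint) tuple.
labels = chainer.datasets.get_kuzushiji_mnist_labels()
for romaji, codepoint in labels:
    print(romaji, codepoint)
```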
|
2020-07-13 17:57:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1923683136701584, "perplexity": 3341.0467147328263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00371.warc.gz"}
|
https://astronomy.stackexchange.com/questions/20678/finding-the-lens-diameter-of-a-telescope-by-magnitude
|
# Finding the lens diameter of a telescope by magnitude
We have a telescope that can see stars of about +14 magnitude. How do we find its lens (or mirror) diameter? (I mean the $D$ in the formula $\theta = 1.22 \lambda / D$.)
I don't know whether I can assume that the temperature of the star is equal to the temperature of the Sun or not. But if we assume that they are equal, we get this:
$m_{\mathrm{sun}} - m_{\mathrm{star}} = -2.5 \log_{10} (\frac{b_{\mathrm{sun}}}{b_{\mathrm{star}}}) = -2.5 \log (\frac{\sigma t^4 \times 4 \pi R_{\mathrm{sun}}^2 /(4 \pi r^2) }{\sigma t^4 \times 4\pi R_{\mathrm{star}}^2 / (4\pi r_{\mathrm{star}}^2)})$ If we solve this with what we know about the magnitude of the Sun (-26.83, or something close to it, between -25 and -27) and other quantities like the radii, we get:
$R/r = 3.16605 \times 10^{-11} \ \mathrm{rad} = \tan(\theta/2) \approx \theta/2$, and from that and the formula $\theta = 1.22 \lambda / D$ with $\lambda = 550\ \mathrm{nm}$, we get something like $D = 10596.8\ \mathrm{m}$, which is very big for a telescope. What's the problem? How do I solve this? Is this answer correct?
• Something looks very wrong here.... It looks like you have found the size of a mirror needed to resolve a 14 magnitude, sun-like star as a disc. You don't need to resolve a star as a disc to "see" it. Apr 9 '17 at 20:22
• @JamesK I think it's OK - the $\sigma T^4$ times $4 \pi R^2$ is the amount of light, and it's divided by $4 \pi r^2$ the total solid angle, to get the ratio of light received. Then it's... whoa - oh, I see what you mean. Ya the derivation takes a left turn and diverges into an Airy disk.
– uhoh
Apr 10 '17 at 5:56
The unaided eye can typically see mag 6 objects. With your telescope, you can see an additional 8 magnitudes. This requires a factor of $100^{8/5}$ of additional light-gathering power (since 5 magnitudes equals 100 times the brightness), i.e. about 1580 times the collecting area of the pupil of the eye. Assuming the pupil has a diameter of 7 mm, the diameter of your telescope would have to be $7\times 1580^{1/2}$ mm, which is about 280 mm.
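A quick numerical check of this estimate (a sketch in Python, with the same assumed values):

```python
pupil_d = 7e-3                       # eye pupil diameter in metres
gain = 100 ** ((14 - 6) / 5)         # extra light-gathering power, ~1585x
aperture_d = pupil_d * gain ** 0.5   # collecting area scales as diameter^2
print(f"{aperture_d:.3f} m")         # ~0.279 m, i.e. about 280 mm
```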
|
2021-10-27 23:50:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7817684412002563, "perplexity": 502.19788242916013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00323.warc.gz"}
|
https://mathhelpforum.com/threads/separation-axiom.146323/
|
# separation axiom
#### blbl
Prove that $(\mathbb{R},\tau_{\sqrt 2})$ is $T_0$ but not $T_1$, $T_2$, regular, or normal, where $\tau_{\sqrt 2}=\{u\cap\sqrt 2 : u\in\tau\}$.
#### Drexel28
MHF Hall of Honor
Prove that $(\mathbb{R},\tau_{\sqrt 2})$ is $T_0$ but not $T_1$, $T_2$, regular, or normal, where $\tau_{\sqrt 2}=\{u\cap\sqrt 2 : u\in\tau\}$.
What does this even mean? What is $$\displaystyle T_{\sqrt{2}}=\left\{U\cap\sqrt{2}:U\in T\right\}$$?? Are you saying the only open set is $$\displaystyle \sqrt{2}$$?
#### HallsofIvy
MHF Helper
For that matter, what does it mean to intersect a set with a number?
|
2019-11-14 19:32:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8673974275588989, "perplexity": 7157.230663480101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00355.warc.gz"}
|
https://math.stackexchange.com/questions/1353265/singularities-of-an-integral
|
# Singularities of an integral
We have the integral:
$$I(t)=-i\int_0^\infty \frac{\log\left[\frac{\sin(t\log\sqrt{1+ix})}{\log(1+ix)} \right ]-\log\left[\frac{\sin(t\log\sqrt{1-ix})}{\log(1-ix)} \right ]}{e^{2\pi x}-1} \, dx$$
I have tried everything to compute the integral, but it seems it's not doable in terms of elementary functions. For instance, the form of the integral suggests that the Abel–Plana formula can be used, but it can't. And closing the contour is troublesome. I have reasons to believe that the integral has logarithmic singularities, and can be expressed as: $$I(t)=f(t)+\sum_{\beta_{j}}\log\left(1-\frac{t^{2}}{\beta_{j}^{2}}\right)$$ where $f(t)$ is an even, entire function (possibly zero) and the numbers $\beta_{j}$ are positive real numbers. However, I haven't been able to prove that. A plot of the function (numerical integration) could be helpful.
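Since a numerical plot was requested, here is a rough R sketch (not a vetted computation). It uses the observation that, for principal branches, the two logarithms in the integrand are complex conjugates of one another, so the integrand reduces to the real expression $2\,\mathrm{Im}\log(\cdot)/(e^{2\pi x}-1)$; the lower limit is offset from $0$ to sidestep the removable $0/0$ there:

I_num <- function(t) {
  f <- function(x) {
    w <- sin(t * log(sqrt(1 + 1i * x))) / log(1 + 1i * x)
    2 * Im(log(w)) / expm1(2 * pi * x)
  }
  # integrand decays like exp(-2*pi*x), so a finite upper limit suffices
  integrate(f, lower = 1e-8, upper = 20)$value
}
sapply(c(0.5, 1, 2), I_num)   # sample values for a crude plot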
EDIT
We can express the integral as: $$\int_{1-i\infty}^{1+i\infty}\frac{\log \left[\frac{ \sin{\left(\frac{t}{2}\log{u}\right)}}{\log{u}} \right ]}{e^{2\pi i u}-1}du-\int_{1}^{1+i\infty}\log \left[\frac{ \sin{\left(\frac{t}{2}\log{u}\right)}}{\log{u}} \right ]du$$
EDIT 2
The derivative of the integral can be expressed as: $$\frac{d}{dt}I(t)=\frac{t}{2i}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{\frac{\log(1+ix)^{2}}{\left(\frac{t}{2}\log(1+ix) \right )^{2}-\pi ^{2}n^{2}} -\frac{\log(1-ix)^{2}}{\left(\frac{t}{2}\log(1-ix) \right )^{2}-\pi ^{2}n^{2}} }{e^{2\pi x}-1}\,dx$$ We can do the integral if we can calculate $$\int_{0}^{\infty}\frac{\log(1\pm ix)^{2}}{\left(\frac{t}{2}\log(1 \pm ix) \right )^{2}-\pi ^{2}n^{2}}e^{-2\pi mx}\,dx$$
• What is the base of the log? Try to simplify the integral. Hint: $\log(y)-\log(x)=\log(x/y)$ – haqnatural Jul 8 '15 at 0:25
• the base is $e$; your hint doesn't make sense ... the integral $$-i\int_{0}^{\infty}\frac{\log\left[\log(1+ix) \right ]-\log\left[\log(1-ix) \right ]}{e^{2\pi x}-1}dx$$ doesn't converge! – Mohammad Al Jamal Jul 8 '15 at 0:38
• The question might be easier if we had $\sin( t\sqrt{\ln(1+ix)})$ instead of $\sin(\frac{t}{2}\ln(1+ix))$ – will Jul 13 '15 at 21:01
• can you explain in more details ? – Mohammad Al Jamal Jul 14 '15 at 18:14
• Euler used an infinite product: $\sin(\pi z) = \pi z\prod(1-z^2n^{-2})$, which looks similar to the logarithmic singularities you expected. Upon further reflection, the square root is almost irrelevant. – will Jul 14 '15 at 19:02
As a discussion: this looks to me like a Bromwich-type integral.
\begin{align} I\left( t \right) &= \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right) - \log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 - ix} } \right)} \over {\log \left( {1 - ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} \\ &= \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} - \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 - ix} } \right)} \over {\log \left( {1 - ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} \\ &= \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} - \int_0^{ - \infty } {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + iy} } \right)} \over {\log \left( {1 + iy} \right)}}}} \right)}}{{e^{ - 2\pi y} - 1}}d\left( { - y} \right)} \\ &= \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} - \int_{ - \infty }^0 {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + iy} } \right)} \over {\log \left( {1 + iy} \right)}}}} \right)}}{{e^{ - 2\pi y} - 1}}dy} \\ &= \int_0^\infty {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right)}}{{e^{2\pi x} - 1}}dx} + \int_{ - \infty }^0 {\frac{{\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + iy} } \right)} \over {\log \left( {1 + iy} \right)}}}} \right)}}{{1 - e^{ - 2\pi y} }}dy} \end{align} where we used the substitution $-x=y$ in the second integral.
Now, using geometric series, we have \begin{align} \frac{1}{{e^{2\pi x} - 1}} = \frac{{e^{ - 2\pi x} }}{{1 - e^{ - 2\pi x} }} = e^{ - 2\pi x} \sum\limits_{n = 0}^\infty {e^{ - 2n\pi x} } = \sum\limits_{n = 0}^\infty {e^{ - 2\left( {n + 1} \right)\pi x} } \end{align} so that \begin{align} \int_0^\infty {\left( {\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + ix} } \right)} \over {\log \left( {1 + ix} \right)}}}} \right)} \right)\sum\limits_{n = 0}^\infty {e^{ - 2\left( {n + 1} \right)\pi x} } dx} + \int_{ - \infty }^0 {\log \left( {{\textstyle{{\sin \left( {t\log \sqrt {1 + iy} } \right)} \over {\log \left( {1 + iy} \right)}}}} \right)\sum\limits_{n = 0}^\infty {e^{ - 2n\pi y} } dy} \end{align} Let $u = \sqrt {1 + iy} \Rightarrow u^2 = 1 + iy \Rightarrow 2u\,du = i\,dy \Rightarrow dy = - 2iu\,du$. Then \begin{align} I(t) := {}& - 2i\int_1^{1 + i\infty } {\left( {\log \left( {{\textstyle{{\sin \left( {t\log u } \right)} \over {\log \left( {u^2 } \right)}}}} \right)} \right)\sum\limits_{n = 0}^\infty {e^{2i\left( {n + 1} \right)\pi \left( {u^2 - 1} \right)} } u\,du} \\ & - 2i\int_{1 - i\infty }^1 {\log \left( {{\textstyle{{\sin \left( {t\log u } \right)} \over {\log \left( {u^2 } \right)}}}} \right)\sum\limits_{n = 0}^\infty {e^{2in\pi \left( {u^2 - 1} \right)} } u\,du} \end{align} Now use the following fact. Theorem: Let $F$ be an analytic function whose singularities $z_1,z_2,\cdots,z_n$ belong to the half-plane $\{z \mid \Re(z)<c\}$, and let $\mathop {\lim }\limits_{z \to \infty } F\left( z \right) = 0$. Then \begin{align} \frac{1}{{2\pi i}}\int_{c - i\infty }^{c + i\infty } {e^{zt} F\left( z \right)dz} = \sum\limits_{k = 1}^n {\mathop {{\mathop{\rm Res}\nolimits} }\limits_{z = z_k } \left\{ {e^{zt} F\left( z \right)} \right\}} \end{align}
• there are problems in your reasoning ... $\frac{1}{e^{-2\pi y}-1}$ should be expanded when $y<0$. – Mohammad Al Jamal Jul 8 '15 at 22:03
• Since $x \in [0, \infty)$ we have assumed that $-x=y$, which is $<0$. Tell me if I am wrong? – Mohammad W. Alomari Jul 8 '15 at 23:37
|
2020-01-23 20:13:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000094175338745, "perplexity": 143.49882813716872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00261.warc.gz"}
|
https://stats.stackexchange.com/questions/78321/term-frequency-inverse-document-frequency-tf-idf-weighting
|
Term frequency/inverse document frequency (TF/IDF): weighting
I've got a dataset which represents 1000 documents and all the words that appear in it. So the rows represent the documents and the columns represent the words. So for example, the value in cell $(i,j)$ stands for the times word $j$ occurs in document $i$. Now, I have to find 'weights' of the words, using tf/idf method, but I actually don't know how to do this. Can someone please help me out?
Wikipedia has a good article on the topic, complete with formulas. The values in your matrix are the term frequencies. You just need to find the idf, $\mathrm{idf} = \log(\text{total documents} / \text{number of docs with the term})$, and multiply the two values.
In R, you could do so as follows:
set.seed(42)
# Toy corpus: 50 one-word "documents", each a random letter
d <- data.frame(w=sample(LETTERS, 50, replace=TRUE))
# One-hot document-term matrix (rows = documents, columns = terms)
d <- model.matrix(~0+w, data=d)
tf <- d
# Here each document contains its term exactly once, so column sums
# equal the number of documents containing each term
idf <- log(nrow(d)/colSums(d))
tfidf <- d
for(word in names(idf)){
tfidf[,word] <- tf[,word] * idf[word]
}
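The explicit loop works; as an aside (not part of the original answer), the same column-wise multiplication can be done in one vectorized step:

# Multiply each column of tf by its idf weight in one call
tfidf2 <- sweep(tf, 2, idf, `*`)
max(abs(tfidf - tfidf2))  # 0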
Here are the resulting datasets:
> colSums(d)
wA wC wD wF wG wH wJ wK wL wM wN wO wP wQ wR wS wT wV wX wY wZ
3 1 3 1 1 1 1 2 4 2 2 1 1 3 2 2 2 4 5 5 4
> head(tf)
wA wC wD wF wG wH wJ wK wL wM wN wO wP wQ wR wS wT wV wX wY wZ
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
3 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
5 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
> head(tfidf)
wA wC wD wF wG wH wJ wK wL wM wN wO wP wQ wR wS wT wV wX wY wZ
1 0 0 0 0 0 0.00 0 0 0 0 0.00 0 0 0.00 0 0 0 0.00 2.3 0.0 0
2 0 0 0 0 0 0.00 0 0 0 0 0.00 0 0 0.00 0 0 0 0.00 0.0 2.3 0
3 0 0 0 0 0 3.91 0 0 0 0 0.00 0 0 0.00 0 0 0 0.00 0.0 0.0 0
4 0 0 0 0 0 0.00 0 0 0 0 0.00 0 0 0.00 0 0 0 2.53 0.0 0.0 0
5 0 0 0 0 0 0.00 0 0 0 0 0.00 0 0 2.81 0 0 0 0.00 0.0 0.0 0
6 0 0 0 0 0 0.00 0 0 0 0 3.22 0 0 0.00 0 0 0 0.00 0.0 0.0 0
You can also look at the idf of each term:
> log(nrow(d)/colSums(d))
wA wC wD wF wG wH wJ wK wL wM wN wO wP wQ wR wS wT wV wX wY wZ
2.813411 3.912023 2.813411 3.912023 3.912023 3.912023 3.912023 3.218876 2.525729 3.218876 3.218876 3.912023 3.912023 2.813411 3.218876 3.218876 3.218876 2.525729 2.302585 2.302585 2.525729
• Thanks for your help! But is it possible to obtain some value for each word which represents some weighting (instead of a whole matrix)? Now we have a whole matrix of weights. I'm doing some feature selection and want to use tf/idf as a filter method... – ABC Dec 2 '13 at 17:25
• @ABC tf-idf by definition refers to the full matrix of weights. Perhaps you are interested in the idf weights alone, which you would get by log((number of docs)/(number of docs containing the term)). You could also just filter out the infrequent terms. – Zach Dec 2 '13 at 18:50
• Very clear! Really appreciated. – ABC Dec 2 '13 at 19:00
There is the package tm (text mining), http://cran.r-project.org/web/packages/tm/index.html , which should do exactly what you need:
# read 1000 txt articles from directory data/txt
# (the Corpus() line is a reconstruction; the original answer omitted it)
corpus <- Corpus(DirSource("data/txt"), readerControl = list(language = "english"))
# some preprocessing
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, stemDocument, language = "english")
# creating term matrix with TF-IDF weighting
terms <- DocumentTermMatrix(corpus, control = list(weighting = function(x) weightTfIdf(x, normalize = FALSE)))
# or compute cosine distance among documents
dissimilarity(terms, method = "cosine")
R is a functional language, so reading code can be tricky (e.g. the x in the weighting function passed when building terms).
Your code has an error: colSums computes the number of occurrences in the corpus, not the number of texts containing the word.
A version computing such would be:
tfidf = function(mat){
tf <- mat
# count documents in which each word occurs (nonzero entries per column)
id = function(col){ sum(!col==0) }
idf <- log(nrow(mat)/apply(mat, 2, id))
tfidf <- mat
for(word in names(idf)){ tfidf[,word] <- tf[,word] * idf[word] }
return(tfidf)
}
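For example, applied to the model matrix d from the first answer (where, because each toy document holds a single word, occurrence counts and document counts happen to coincide):

tfidf_mat <- tfidf(d)
head(tfidf_mat[, 1:5])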
There is a new R package which can do this: textir (Inverse Regression for Text Analysis).
The relevant command is tfidf; the example from the manual:
library(textir)  # load the package (this line is added for completeness)
data(we8there)
## 20 high-variance tf-idf terms
colnames(we8thereCounts)[
order(-sdev(tfidf(we8thereCounts)))[1:20]]
I am late to this party, but I was playing with the concepts of tc-idf (I want to emphasize the word 'concept' because I didn't follow any books for the actual calculations, so they may be somewhat off, and are definitely more easily carried out with packages such as {tm: Text Mining Package}, as mentioned), and I think what I got may be related to this question. In any event, this may be a good place to post it.
SET-UP: I have a corpus of 5 long paragraphs, text 1 through text 5, taken from printed media such as The New York Times. Allegedly, it is a very small "body", a tiny library, so to speak, but the entries in this "digital" library are not random: the first and fifth entries deal with football (or 'soccer' for 'social club' (?) around here), and more specifically with the greatest team today. So, for instance, text 1 begins as...
"Over the past nine years, Messi has led F.C. Barcelona to national and international titles while breaking individual records in ways that seem otherworldly..."
Very nice! On the other hand you would definitely want to skip the contents in the three entries in between. Here's an example (text 2):
"In the span of a few hours across Texas, Mr. Rubio suggested that Mr. Trump had urinated in his trousers and used illegal immigrants to tap out his unceasing Twitter messages..."
So what can one do to avoid at all costs "surfing" from text 1 to text 2, while continuing to rejoice in the literature about the almighty F.C. Barcelona in text 5?
TC-IDF: I isolated the words in every text into long vectors. Then I counted the frequency of each word, creating five vectors (one for each text) in which only the words encountered in the corresponding text were counted; all the other words, belonging to other texts, were valued at zero. In the first snippet of text 1, for instance, its vector would have a count of 1 for the word "Messi", while "Trump" would have 0. This was the tc part.
The idf part was also calculated separately for each text, and resulted in 5 "vectors" (I think I treated them as data frames) containing the logarithmic transformations of the counts of documents (sadly, just from zero to five, given our small library) containing a given word, as in
$\log\left(\frac{\text{No. documents}}{1\, +\, \text{No. docs containing a word}}\right)$. The number of documents is 5. Here comes the part that may answer the OP: for each idf calculation, the text under consideration was excluded from the tally. But if a word appeared in all documents, its idf was still $0$ thanks to the $1$ in the denominator; e.g. the word "the" had importance 0, because it was present in all texts.
The entry-wise multiplication of $\text{tc} \times \text{idf}$ for every text was the importance of every word for each one of the library items - locally prevalent, globally rare words.
COMPARISONS: Now it was just a matter of performing dot products among these "vectors of word importance".
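As a minimal, self-contained sketch of that comparison step (the vectors below are toy stand-ins for the per-text importance vectors described above):

# Toy word-importance vectors over a shared vocabulary
imp1 <- c(messi = 2.1, barcelona = 1.8, trump = 0,   senate = 0)
imp2 <- c(messi = 0,   barcelona = 0,   trump = 2.3, senate = 1.9)
imp5 <- c(messi = 1.7, barcelona = 2.0, trump = 0,   senate = 0)
sum(imp1 * imp5)  # high: texts on the same topic
sum(imp1 * imp2)  # zero here: no shared important words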
Predictably, the dot product of text 1 with text 5 was 13.42645, while text 1 v. text2 was only 2.511799.
The clunky R code (nothing to imitate) is here.
Again, this is a very rudimentary simulation, but I think it is very graphic.
|
2019-07-16 20:36:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5925394296646118, "perplexity": 968.2206387759478}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524879.8/warc/CC-MAIN-20190716201412-20190716223412-00321.warc.gz"}
|
https://arrow.apache.org/docs/r/reference/write_parquet.html
|
Parquet is a columnar storage file format. This function enables you to write Parquet files from R.
write_parquet(
x,
sink,
chunk_size = NULL,
version = NULL,
compression = default_parquet_compression(),
compression_level = NULL,
use_dictionary = NULL,
write_statistics = NULL,
data_page_size = NULL,
use_deprecated_int96_timestamps = FALSE,
coerce_timestamps = NULL,
allow_truncated_timestamps = FALSE,
properties = NULL,
arrow_properties = NULL
)
## Arguments
x: data.frame, RecordBatch, or Table.

sink: A string file path, URI, or OutputStream, or path in a file system (SubTreeFileSystem).

chunk_size: chunk size in number of rows. If NULL, the total number of rows is used.

version: parquet version, "1.0" or "2.0". Default "1.0". Numeric values are coerced to character.

compression: compression algorithm. Default "snappy". See details.

compression_level: compression level. Meaning depends on compression algorithm.

use_dictionary: Specify if we should use dictionary encoding. Default TRUE.

write_statistics: Specify if we should write statistics. Default TRUE.

data_page_size: Set a target threshold for the approximate encoded size of data pages within a column chunk (in bytes). Default 1 MiB.

use_deprecated_int96_timestamps: Write timestamps to INT96 Parquet format. Default FALSE.

coerce_timestamps: Cast timestamps to a particular resolution. Can be NULL, "ms" or "us". Default NULL (no casting).

allow_truncated_timestamps: Allow loss of data when coercing timestamps to a particular resolution. E.g. if microsecond or nanosecond data is lost when coercing to "ms", do not raise an exception.

properties: A ParquetWriterProperties object, used instead of the options enumerated in this function's signature. Providing properties as an argument is deprecated; if you need to assemble ParquetWriterProperties outside of write_parquet(), use ParquetFileWriter instead.

arrow_properties: A ParquetArrowWriterProperties object. Like properties, this argument is deprecated.
## Value
the input x invisibly.
## Details
Due to features of the format, Parquet files cannot be appended to. If you want to use the Parquet format but also want the ability to extend your dataset, you can write to additional Parquet files and then treat the whole directory of files as a Dataset you can query. See vignette("dataset", package = "arrow") for examples of this.
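For instance, a minimal sketch of that multi-file pattern (directory name and toy data assumed for illustration):

library(arrow)
dir.create("my_dataset", showWarnings = FALSE)
write_parquet(data.frame(x = 1:5),  "my_dataset/part-1.parquet")
write_parquet(data.frame(x = 6:10), "my_dataset/part-2.parquet")
ds <- open_dataset("my_dataset")  # query both files as one Dataset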
The parameters compression, compression_level, use_dictionary and write_statistics support various patterns:
• The default NULL leaves the parameter unspecified, and the C++ library uses an appropriate default for each column (defaults listed above)
• A single, unnamed, value (e.g. a single string for compression) applies to all columns
• An unnamed vector, of the same size as the number of columns, to specify a value for each column, in positional order
• A named vector, to specify the value for the named columns, the default value for the setting is used when not supplied
The compression argument can be any of the following (case insensitive): "uncompressed", "snappy", "gzip", "brotli", "zstd", "lz4", "lzo" or "bz2". Only "uncompressed" is guaranteed to be available, but "snappy" and "gzip" are almost always included. See codec_is_available(). The default "snappy" is used if available, otherwise "uncompressed". To disable compression, set compression = "uncompressed". Note that "uncompressed" columns may still have dictionary encoding.
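To illustrate the named-vector pattern described above (column names here are just an example; gzip availability is assumed):

df <- data.frame(x = 1:5, y = letters[1:5])
# gzip-compress column x; column y keeps the default codec
write_parquet(df, tempfile(fileext = ".parquet"),
              compression = c(x = "gzip"))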
## Examples
tf1 <- tempfile(fileext = ".parquet")
write_parquet(data.frame(x = 1:5), tf1)
# using compression
if (codec_is_available("gzip")) {
tf2 <- tempfile(fileext = ".gz.parquet")
write_parquet(data.frame(x = 1:5), tf2, compression = "gzip", compression_level = 5)
}
|
2021-01-26 09:13:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3888677954673767, "perplexity": 10878.939221618311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704799711.94/warc/CC-MAIN-20210126073722-20210126103722-00260.warc.gz"}
|
https://mathoverflow.net/questions/300575/is-there-a-connected-t-2-topology-on-mathbbq-that-is-coarser-than-the-euc
|
# Is there a connected $T_2$-topology on $\mathbb{Q}$ that is coarser than the Euclidean one?
Let $\mathbb{Q}$ be the rationals, and let $\tau$ be the Euclidean topology on $\mathbb{Q}$. Is there a topology $\tau' \subseteq \tau$ such that $(\mathbb{Q},\tau')$ is connected and $T_2$?
Using Sierpinski's topological characterization of $\mathbb Q$ (as the unique countable regular second-countable space without isolated points), it can be shown that $\mathbb Q$ is homeomorphic to the set $\mathbb N$ endowed with the Furstenberg topology $\tau$ generated by the base consisting of all possible arithmetic sequences $a+\mathbb N_0b:=\{a+bn:n\ge 0\}$ with $a,b\in\mathbb N$.
The Furstenberg topology contains the Golomb topology $\tau'$ on $\mathbb N$, which is generated by the base consisting of the arithmetic sequences $a+\mathbb N_0b$ where $a,b$ are coprime. Since the Golomb topology is known to be connected and Hausdorff, it gives a coarser connected $T_2$ topology, as required.
|
2019-12-09 09:37:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9339873194694519, "perplexity": 58.74991294919467}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00111.warc.gz"}
|
https://brilliant.org/problems/an-electricity-and-magnetism-problem-by-rishabh-2/
|
# An electricity and magnetism problem by Rishabh Deep Singh
If the equivalent capacitance between the points $$A$$ and $$B$$ is of the form $$\dfrac pq$$, where $$p$$ and $$q$$ are coprime positive integers, find $$p+q$$.
|
2017-01-23 06:48:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148819446563721, "perplexity": 298.84602695609976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00316-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://study.com/academy/answer/in-a-manufacturing-process-a-random-sample-of-36-manufactured-bolts-has-a-mean-length-of-3-cm-with-a-standard-deviation-of-0-3-cm-what-is-the-99-percent-confidence-interval-for-the-true-mean-length.html
|
In a manufacturing process, a random sample of 36 manufactured bolts has a mean length of 3 cm...
Question:
In a manufacturing process, a random sample of 36 manufactured bolts has a mean length of 3 cm with a standard deviation of 0.3 cm. What is the 99 percent confidence interval for the true mean length of the manufactured bolt?
A). 2.802 to 3.198
B). 2.228 to 3.772
C). 2.864 to 3.136
D). 2.902 to 3.098
E). 2.884 to 3.117
Confidence Interval:
In this question, we will use the t distribution to construct the 99% confidence interval for the true mean length of the manufactured bolt. The t distribution is a sampling distribution with (n-1) degrees of freedom.
Given that,
• Sample size, $n = 36$
• Mean, $\bar{x} = 3$
• Standard deviation, $s = 0.3$
Degrees of freedom: $n-1 = 36-1 = 35$
The 99% confidence interval for the population mean is defined as:
$\bar{x} \pm t_{0.01/2}\times \frac{s}{\sqrt{n}}$
Excel function for the confidence coefficient:
=TINV(0.01,35)
Now,
$3 \pm 2.724\times \frac{0.3}{\sqrt{36}} \;\Rightarrow\; 2.864 < \mu < 3.136$
Therefore, Option (C) is correct.
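As a quick cross-check in R (values taken directly from the question):

n <- 36; xbar <- 3; s <- 0.3
t_crit <- qt(1 - 0.01/2, df = n - 1)    # ~2.724, matches TINV(0.01, 35)
xbar + c(-1, 1) * t_crit * s / sqrt(n)  # 2.864 3.136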
|
2019-08-19 20:20:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8528481721878052, "perplexity": 2092.8547062223633}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00224.warc.gz"}
|
http://mathoverflow.net/feeds/user/5309
|
User makhalan duff - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-21T06:25:39Z http://mathoverflow.net/feeds/user/5309 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/130770/can-group-cohomology-be-interpreted-as-an-obstruction-to-lifts Can group cohomology be interpreted as an obstruction to lifts? Makhalan Duff 2013-05-15T22:58:17Z 2013-05-16T06:27:41Z <p>The standard way to view the first and second group cohomologies is this:</p> <h3>The Standard Story</h3> <p>Let $G$ be a group, and let $M$ be a commutative group with a $G$-action. Then the first cohomology has the following interpretation: $H^1(G,M)$ is bijective with sections (modulo conjugation by $M$) of the short exact sequence $$1\rightarrow M\rightarrow M\rtimes G\rightarrow G\rightarrow 1.$$ Furthermore, the group of $1$-cocycles, $Z^1(G,M)$ is bijective with the sections of this short exact sequence. (Not modulo anything.) In fact, this holds even if $M$ is non-abelian.</p> <p>The second cohomology $H^2(G,M)$ is bijective with the set of isomorphism classes of group extensions $$1\rightarrow M\rightarrow H\rightarrow G\rightarrow 1$$ for which there exists a (or equivalently for every) set theoretic section $s:G\rightarrow H$ such that $g\cdot m=s(g)m(s(g))^{-1}$. (Here $g\cdot m$ denotes the action of $g$ on $m$ coming from the $G$-module structure of $M$.)</p> <h3>Liftings</h3> <p>In a paper I have been reading, they have given an entirely different interpretation to the first cohomology. Namely:</p> <p>Let $A$ and $B$ be groups, and let $C$ be a normal abelian subgroup of $B$. Let $\bar \phi:A\rightarrow B/C$ be a homomorphism. Assume $\phi$ has a lift $\alpha:A\rightarrow B$. Then $Z^1(A,C)$ is bijective with the set of lifts of $\phi$ to homomorphisms from $A$ to $B$.</p> <p>The bijection goes like this: $\theta\in Z^1(A,C)$ goes to $\alpha\theta$.</p> <p>My question is: can one give an interpretation in terms of lifts to the second cohomology, or to the group of $2$-cocycles?</p> <p>More precisely:</p> <h3>Question</h3> <p>Let $A$ and $B$ be groups, and let $C$ be a normal abelian subgroup of $B$. Let $\bar \phi:A\rightarrow B/C$ be a homomorphism.</p> <p>Is it true that there exists a lift of $\bar \phi$ to a homomorphism from $A$ to $B$ if and only if $H^2(A,C)$ is trivial? Or is there a $2$-cocycle one can define (how would one define it?) such that there exists a lift of $\bar \phi$ if and only if it is trivial in $H^2(A,C)$? Or perhaps the right group to look at is the group of $2$-cocycles $Z^2(A,C)$ rather than the cohomology group?</p> <p>I don't know if such an interpretation exists, so this is just wishful thinking. Since this is the first time I've seen the interpretation of $Z^1(A,C)$ in terms of lifts, I was curious whether such an interpretation extends to the second cohomology.</p> http://mathoverflow.net/questions/58507/how-was-the-importance-of-the-zeta-function-discovered How was the importance of the zeta function discovered? Makhalan Duff 2011-03-15T07:11:27Z 2013-05-08T17:05:34Z <p>This question is similar to <a href="http://mathoverflow.net/questions/1880/why-do-zeta-functions-contain-so-much-information" rel="nofollow">http://mathoverflow.net/questions/1880/why-do-zeta-functions-contain-so-much-information</a> , but is distinct. 
If the answers to that question answer this one also, I don't understand why.</p> <p>The question is this: with the benefit of hindsight, the zeta function had become the basis of a great body of theory, leading to generalizations of CFT, and the powerful Langlands conjectures. But what made the 19th century mathematicians stumble on something so big? After all $\sum \frac{1}{n^s}$ is just one of many possible functions one can define that have to do with prime numbers. How and why did was the a priori fancifully defined function recognized as being of fundamental importance?</p> http://mathoverflow.net/questions/119439/does-the-proof-of-gaga-use-the-axiom-of-choice Does the proof of GAGA use the axiom of choice? Makhalan Duff 2013-01-21T03:05:06Z 2013-01-21T04:46:52Z <p>Serre's GAGA result roughly states the following. Let $X$ be a complex projective algebraic variety. Then the natural functor from the category of coherent sheaves over the algebraic structure sheaf of $X$ to the category of coherent sheaves over the analytic structure sheaf of $X$ is an equivalence of categories.</p> <p>This theorem always seemed to have the air of magic to me. Things that are analytic must come from algebra. I want to dust away some of this magic, and get a clearer picture. With this goal in mind, I have skimmed the proof of GAGA.</p> <p>The proof of GAGA is rather involved. It uses Cartan's theorem A for both the algebraic and analytic cases, the isomorphism of the completions of the stalks of the structure sheaf in the algebraic case and the analytic case, and a variety of technical results. After having done that for a few days, I still remain with a sense of amazement and a basic lack of understanding about what makes this work. This brings me to the precise phrasing of my question: (which will hopefully help me find the precise step where the magic happens)</p> <h3>Question</h3> <p>Does the proof of Serre's GAGA theorem use the axiom of choice? If so, at what step does this happen?</p> http://mathoverflow.net/questions/38649/online-math-history-lectures Online math history lectures Makhalan Duff 2010-09-14T05:08:09Z 2012-11-20T04:49:26Z <p>This question is somewhat similar to this: <a href="http://mathoverflow.net/questions/1714/best-online-math-videos" rel="nofollow">http://mathoverflow.net/questions/1714/best-online-math-videos</a></p> <p>I'm using the word "history" loosely here. What I'm looking for are those lectures that put various mathematical developments in perspective by explaining their origins. There's something very insightful about seeing someone talk about the origins of a concept, that makes things click. Especially if he or she partook in the inception of that development.</p> <p>So: where can I find such lectures online?</p> http://mathoverflow.net/questions/62682/centralizers-of-elements-in-free-profinite-groups Centralizers of elements in free profinite groups Makhalan Duff 2011-04-23T00:39:49Z 2012-09-20T02:12:42Z <p>I believe, although I can't say that I've given a rigorous proof, that for a free group $F_r$, and an element of it $a$, $C_{F_r}(\langle a \rangle)=$ the group generated by the elements $b \in F_r$ such that $a=b^n$ for some integer $n$ (I will say: the group of powers and roots of $a$).</p> <p>One may similarly ask (and this is my interest in this), given $a\in \hat{F_r}$ (the free profinite group on $r$ generators), what is $C_{\hat{F} _r}(\langle a \rangle)$? 
In particular, is it the profinite completion of the group of powers and roots of $a$?</p> http://mathoverflow.net/questions/102839/what-is-the-relationship-between-motivic-cohomology-and-the-theory-of-motives What is the relationship between motivic cohomology and the theory of motives? Makhalan Duff 2012-07-21T20:58:09Z 2012-07-22T01:24:39Z <p>I will begin by giving a rough sketch of my understanding of motives.</p> <p>In many expositions about motives (for example, www.jmilne.org/math/xnotes/MOT102.pdf), the category of motives is defined to be a category such that every Weil cohomology (viewed as a functor) factors through it. This does not define the category uniquely, nor does it imply that it exists.</p> <p>There are two concrete candidates that we can construct. The category of Chow motives, which is well-defined, is trivially a category of motives. However, it has some bad properties. For example, it is not Tannakian. The second candidate is the category of numerical motives. It too is well-defined, however it is only conjectured that it is category of motives (i.e., that every Weil cohomology factors through this category). This conjecture is closely related to (or perhaps even equivalent to?) Grothendieck's standard conjectures. That would be desirable, because the category of numerical motives is very well-behaved.</p> <p>Furthermore, the original motivation for motives is that Grothendieck has proven that if the category of numerical motives is indeed a category of motives, then the Weil conjectures are correct.</p> <p>So far, even though I a murky on many of the details, I follow the storyline.</p> <h3>Question</h3> <p>Where does "motivic cohomology" (in the sense of, for example, www.claymath.org/library/monographs/cmim02.pdf) fit into this story? </p> <p>I know that motivic cohomology has something to do with Milnor K-theory, but that is more or less where my understanding of the context of motivic cohomology ends. If motives are already an abstract object that generalizes cohomology, what does motivic cohomology signify? What is the motivation for defining it? What is the context in which it arose?</p> http://mathoverflow.net/questions/100798/what-makes-geometric-cft-easier-than-cft What makes Geometric CFT easier than CFT? Makhalan Duff 2012-06-27T18:49:47Z 2012-07-05T13:39:54Z <p>I've been reading: math.stanford.edu/~conrad/249BPage/handouts/geomcft.pdf</p> <p>in an attempt to shed some geometric light on class field theory. The last paragraph there reads:</p> <p><i> In case the ground field $k$ is perfect, the essential difficulty in the proof of class field theory – proving that the Artin map kills certain principal ideals – becomes easy to prove geometrically by means of the interpretation of geometric points of generalized Jacobians in terms of generalized ideal class groups. (More precisely, one has $J_m (k) = Cl_m (K)$ when $Br(k) = 1$, as happens when $k$ is finite but not when $k$ is a number field.) </i></p> <p>What precisely is he referring to? What particular part of class field theory ("that the Artin map kills certain principal ideals") becomes easier to prove, and why ("by means of the interpretation of geometric points of generalized Jacobians in terms of generalized ideal class groups.")?</p> http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity How does one understand geometric CFT in terms of modularity? 
Makhalan Duff 2012-06-29T21:38:38Z 2012-07-01T23:50:26Z <p>I have recently asked a question in a similar vein: <a href="http://mathoverflow.net/questions/100798/what-makes-geometric-cft-easier-than-cft" rel="nofollow">http://mathoverflow.net/questions/100798/what-makes-geometric-cft-easier-than-cft</a></p> <p>but I'm afraid I wasn't quite ripe to ask it yet. I have since consulted with the following sources:</p> <p><a href="http://jmilne.org/math/Books/ADTnot.pdf" rel="nofollow">http://jmilne.org/math/Books/ADTnot.pdf</a>, <a href="http://math.stanford.edu/~conrad/249BPage/handouts/geomcft.pdf" rel="nofollow">http://math.stanford.edu/~conrad/249BPage/handouts/geomcft.pdf</a> and <a href="http://arxiv.org/abs/hep-th/0512172" rel="nofollow">http://arxiv.org/abs/hep-th/0512172</a></p> <p>As well as some sources refreshing my memory on classic CFT:</p> <p><a href="http://people.maths.ox.ac.uk/gounelas/projects/bmo.pdf" rel="nofollow">http://people.maths.ox.ac.uk/gounelas/projects/bmo.pdf</a> and <a href="http://www.math.dartmouth.edu/~trs/expository-papers/tex/CFT.pdf" rel="nofollow">http://www.math.dartmouth.edu/~trs/expository-papers/tex/CFT.pdf</a> among others.</p> <p>My motivation for studying geometric class field theory was first and foremost to solidify my understanding of classic class field theory. I thought that perhaps the geometric intuition will shed some light on the rather massive apparatus of CFT.</p> <p>In order to be concrete, I will specify a version of geometric CFT:</p> <p>Theorem A: Let $C$ be a smooth projective, geometrically irreducible curve over a finite field $k$, and let $K$ be its function field. Then the (Artin reciprocity) map $\Phi_K:Div(C)\rightarrow \pi_1^{ab}(C)$ given by $p\mapsto Frob_p$ factors through $Pic(C)$, and induces an isomorphism between the profinite completion of $Pic(C)$ and $\pi_1^{ab}(C)$.</p> <p>This appears in Toth's master thesis (linked above) as Theorem 1.1.4. In order to understand this as analogous to classic CFT, one need only notice that $K^{\times}\backslash \mathbb{I}_K/ \prod _{p \mbox{ closed point in } C}\hat{O_p}$ (where $\mathbb{I}_K$ is the ideles, and $\hat{O_p}$ is the completion of the stalk at the closed point $p$) is isomorphic to $Pic(C)$.</p> <p>In other words the theorem above can be viewed as analogous to the adelic point of view of CFT (i.e. the isomorphism between the profinite completions of $K^{\times}\backslash \mathbb{I}_K/\prod_v O_v^{\times}$ with $Gal(K^{ab}/K)$, where $K$ is a number field). I am interested in understanding what the analogous picture in Geometric CFT to modular formulations (rather than adelic ones) of classic CFT.</p> <h3>Question</h3> <p>How does one understand geometric CFT (as described in the theorem above) in terms of modularity results? In other words, what is the analogous statement to the fact that (up to finitely many primes) the splitting of primes in an abelian extension of $\mathbb{Q}$ is determined by what those primes are conjugate to modulo some conductor? 
Is there a geometric intuition behind the analogous statement in geometric CFT?</p> <h3>EDIT</h3> <p>After reading the comments carefully, and going back to my old notes on Class Field Theory to refresh my mind about a few things that I got wrong in the comments, I have come to the conclusion that the following is the question that I really wanted to ask:</p> <p>We have:</p> <p>Theorem B: or $K$ a number field, we have that $Gal(K^{ab}/K)$ is isomorphic to the profinite completion of $K^{\times}\backslash \mathbb{I}_K/D_K$ where $D_K$ is the connected component of $1$.</p> <p>Theorem A above is not analogous to Theorem B because, as Felipe pointed out, Theorem A is only about abelian extensions that are unramified, whereas Theorem B allows ramification everywhere. My question is: what is the analogous statement to Theorem A in the Number Theory case?</p> <p>I am in particular confused by the subtlety regarding the distinction between $D_K$ and the product $\prod _{p \mbox{ closed point in } C}\hat{O_p}$. Are they the same?</p> http://mathoverflow.net/questions/79132/is-the-integrality-of-the-zeta-function-easy Is the integrality of the zeta function easy? Makhalan Duff 2011-10-25T22:39:46Z 2012-05-22T02:14:05Z <p>I'm trying to get the gist of the proof of the Weil conjectures. Let $X$ be a variety over $\mathbb{F}_{p^n}$. A priori $Z(X,t)\in \mathbb{Q}[[t]]$. Due to the Grothendieck-Lefschetz fixed point theorem, $Z(X,t)=\prod P_i(t)^{(-1)^{i+1}}$, where $P_i(t)$ is the characteristic polynomial of the Frobenius acting on $H^i(X,\mathbb{Q}_l)$ where $l$ is a fixed prime different from $p$. This implies that $Z(X,t)\in \mathbb{Q}_l(t)\cap \mathbb{Q}[[t]]$ for every prime $l$ different from $p$.</p> <p>Does this suffice to determine that it is in $\mathbb{Q}(t)$? If not, then how was it proven that it is in $\mathbb{Q}(t)$?</p> http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits Are semi-direct products categorical limits? Makhalan Duff 2012-05-05T17:39:45Z 2012-05-07T19:35:33Z <p>Products, are very elementary forms of categorical limits. My question is whether in the category of groups, semi-direct products are categorical limits.</p> <p>As was pointed in: <a href="http://unapologetic.wordpress.com/2007/03/08/split-exact-sequences-and-semidirect-products/" rel="nofollow">http://unapologetic.wordpress.com/2007/03/08/split-exact-sequences-and-semidirect-products/</a></p> <p>Bourbaki (General Topology, Prop. 27) gives a universal property:</p> <p>Let $f \colon N \to G$, $g \colon H \to G$ be two homomorphisms into a group $G$, such that $f(\phi_h(n)) = g(h)f(n)g(h^{-1})$ for all $n \in N$, $h \in H$. Then there is a unique homomorphism $k \colon N \rtimes H \to G$ extending $f$ and $g$ in the usual sense.</p> <p>However, I remain unsatisfied. The condition $f(\phi_h(n)) = g(h)f(n)g(h^{-1})$ is a condition on elements of groups, rather than a condition that says that some diagram is commutative.</p> <p>So the question remains: are semi-direct products in the category of groups categorical limits?</p> http://mathoverflow.net/questions/95938/are-all-henselian-fields-algebraic-over-complete-fields Are all henselian fields algebraic over complete fields? Makhalan Duff 2012-05-04T00:08:20Z 2012-05-04T03:19:33Z <h3>Motivations and Terminology</h3> <p>The term "henselian field" is ambiguous. What I mean when I say that $K$ is a henselian field is that there exists a henselian DVR $R$, such that $K=Frac(R)$. 
What I mean when I say that $L$ is a complete field is that there exists a complete DVR $S$ such that $L=Frac(S)$.</p> <p>Note that every complete field is henselian. Examples of complete fields are $\mathbb{Q}((t))$ (the field of formal Laurent series with coefficients in $\mathbb{Q}$), $\mathbb{Q}_p$ (the $p$-adics), and so forth.</p> <p>When I try to think of henselian fields that are not complete, the ones that immediately come to mind are algebraic over some complete field. For example $\mathbb{Q}((t))(\sqrt{2})$, $\mathbb{Q}_p^{un}$ (the maximal unramified extension of $\mathbb{Q}_p$), and so forth.</p> <h3>Question</h3> <p>Is it true that for every henselian field $K$ there exists a subfield $L\subset K$ such that $L$ is complete and such that $K/L$ is an algebraic extension?</p> <p>EDIT: I've re-emphasized this in the comments, but I think it is important to put this in the body of the question: both the term "henselian field" and "complete field" are used in many different contexts to mean different things.</p> <p>Note that under the definitions above $\mathbb{R}$ does not constitute as a complete field. (This is because $\mathbb{R}$ is not the fraction field of a complete DVR.)</p> <p>Also note that I do not consider $\mathbb{Q}((t^{1/n}))_{n\in\mathbb{N}}$ to be a henselian field. (This is because my definition of a henselian field requires it to be the fraction field of a henselian DVR, not a general henselian ring.)</p> http://mathoverflow.net/questions/91676/is-ramification-of-number-fields-first-order Is ramification of number fields first order? Makhalan Duff 2012-03-20T00:51:03Z 2012-03-20T18:07:07Z <p>Fix a prime number $p$. Is there a first order sentence $\phi_p$ in the language of fields such that $\phi_p$ holds in a number field $K$ if and only if the prime $p$ is unramified in the field extension $K/\mathbb{Q}$?</p> http://mathoverflow.net/questions/90100/for-which-fields-is-the-inverse-galois-problem-known For which fields is the inverse Galois problem known? Makhalan Duff 2012-03-03T02:13:07Z 2012-03-03T08:15:55Z <p>The inverse Galois problem is known for (or in Jarden's and Fried's terminology, the following fields are universally admissible) function fields over henselian fields (like $\mathbb{Q}_p(x)$); function fields over large fields (like $\mathbb{C}(x)$); and large Hilbertian fields (conjecturally $\mathbb{Q}^{ab}$, although I'm not certain that any field is known to be in this category).</p> <h2>Clarification:</h2> <p>A large field $K$ (a.k.a. an ample field) is a field such that if $V$ is a variety of dimension $\geq 1$ over $K$ with at least one smooth $K$-rational point, then it has infinitely many smooth $K$-rational points. For example any algebraically closed field is large.</p> <p>A Hilbertian field is more difficult to explain, but it suffices to say that any number field and any function field (over any field) is Hilbertian.</p> <h2>My question is:</h2> <p>Is there a proof (not a conjecture) that there exists a field $K$ which is neither a function field over a henselian field, nor a function field over a large field, nor a large Hilbertian field, such that the inverse Galois problem is true over that field? (i.e. that every finite group is realizable as a Galois group over that field)</p> http://mathoverflow.net/questions/70059/help-motivating-log-structures Help motivating log-structures Makhalan Duff 2011-07-11T21:46:26Z 2012-03-02T17:17:05Z <p>I'm currently reading a thesis that uses log-structures. 
I should mention that this is my first encounter with them, and the thesis (as well as my expertise) is scheme-theoretic (in fact stack-theoretic) and so the original geometric motivations are lost on me.</p> <p>Here is my meek understanding. For any scheme, we can give a log-structure. This is a sheaf , $M$, fibered in monoids, on the etale site over a scheme $S$; together with a morphism of sheaves fibered in moinoids $\alpha:M\rightarrow O_S$ such that when it is restricted to $\alpha^{-1}(O_S^{\times})$ it is an isomorphism.</p> <p>This $\alpha$ is called the exponential map, and for any $t\in O_S(U)$ (for some $U$), a preimage of it via $\alpha$ is called $log(t)$($\in M(U)$).</p> <p>I am curious about a few things, and puzzled about others. First, in terms of the notation, surely it's no coincidence that these are called exponential maps and log-structures. What is the geometric motivation for it?</p> <p>Second, these come up in the thesis I'm reading in the context of tame covers. I am puzzled about what, precisely, log-structures contribute. It seems to me, in extremely vague terms (commensurate with my understanding), that the point of log-structures in this context is that if you add this <em>extra information</em> to tame covers it somehow helps you construct <em>proper</em> moduli spaces of covers.</p> <p>On top of everything I'm also confused about the role of minimal log-structures' in all of this.</p> <p>In conclusion, if you can say anything at all about the motivations of log-structures in the geometric setting, or more importantly in the context of tame covers, I would extremely appreciate it. The plethora of notationally different texts on the subject is making it hard to understand the gist of what's going on.</p> <p>Also, if you have examples that I should have in mind when thinking about it, that would be ideal.</p> http://mathoverflow.net/questions/88793/are-there-polynomials-almost-all-of-whose-intersection-numbers-are-divisible-by Are there polynomials (almost) all of whose intersection numbers are divisible by some integer? Makhalan Duff 2012-02-18T01:21:59Z 2012-02-20T18:20:10Z <p>I've been playing around with some basic intersection theory, and I've wondered the following:</p> <p>For every two integers $n$ and $m$, and complex numbers $a_1,...,a_n$, are there polynomials $f_1(x),...,f_n(x)$ with coefficients in $\mathbb{C}$ such that the following holds:</p> <ol> <li>$f_i(0)=a_i$.</li> <li>For every complex number $b$, $v_{(x-b)}(f_i(x)-f_j(x))$ is divisible by $m$ (in other words all of the intersection numbers away from $0$ are divisible by $m$).</li> <li>$f_i\neq f_j$ for $i\neq j$.</li> </ol> <p>(The $a_i$'s needn't be different from one another)</p> <p>This is clearly true if $n\leq 2$ and every $m$ and $a_1,a_2$, but I can't think of a general way to do it for every $n$. Is it impossible?</p> http://mathoverflow.net/questions/80717/why-should-the-anabelian-geometry-conjectures-be-true Why should the anabelian geometry conjectures be true? Makhalan Duff 2011-11-11T22:56:01Z 2012-02-04T01:57:46Z <p>I had probed friends of mine about Grothendieck's motivation for making the anabelian geometry conjectures, and they gave me the following explanation:</p> <p>If $X$ is a hyperbolic curve over some field $K$ (think projective and of genus $\geq 2$), then, intuitively, its universal cover is the upper half plane. 
This means that to distinguish between any two hyperbolic curves, it suffices to distinguish between the actions on the upper-half plane that induce those two hyperbolic curves. In some vague way, this should be the same as distinguishing between their fundamental groups.</p> <p>This seems a little tenuous to me. Is there a modification of the above argument that gives a moral reason for why anabelian geometry should be correct? Is there a completely different moral reason for anabelian geometry? If so, what is it? What intuitive reason should I have to believe anabelian geometry (<em>beside</em> the mounting evidence that it is indeed true)?</p> http://mathoverflow.net/questions/86874/is-every-finite-group-a-quotient-of-the-grothendieck-teichmuller-group Is every finite group a quotient of the Grothendieck-Teichmuller group? Makhalan Duff 2012-01-28T01:33:23Z 2012-01-28T10:50:19Z <p>The Grothendieck-Teichmuller conjecture asserts that the absolute Galois group $Gal(\mathbb{Q})$ is isomorphic to the Grothendieck-Teichmuller group. I was wondering, would this conjecture imply the Inverse Galois Problem? I.e. is every finite group a quotient of the Grothendieck-Teichmuller group?</p> http://mathoverflow.net/questions/85705/is-the-other-extreme-of-hilbert-irreducibility-true Is the other extreme of Hilbert Irreducibility true? Makhalan Duff 2012-01-15T02:15:34Z 2012-01-15T02:48:07Z <p>Let $K$ be a number field (or perhaps more generally a Hilbertian field). Let $X_K\rightarrow \mathbb{P}^1_K$ be a regular (i.e. without extension of scalars) $G$-Galois branched cover. Hilbert's Irreducibility implies that there are infinitely many $K$-rational points on $\mathbb{P}^1_K$ such that the fiber product $Spec(K)\times_{\mathbb{P}^1_K}X_K$ is connected. (This is frequently stated as: there are infinitely many $K$-rational points on $\mathbb{P}^1_K$ such that specializing to them gives a $G$-Galois extension of fields over $K$.)</p> <p>My question is whether the other extreme of this is true. I.e., is there a $K$-rational point on $\mathbb{P}^1_K$ such that $Spec(K)\times_{\mathbb{P}^1_K}X_K$ has $|G|$ connected components (each one isomorphic to $Spec(K)$)? In other words, is there a $K$-rational point on $\mathbb{P}^1_K$ that "splits completely"?</p> <p>This reminds me of the statement that in a Galois extension of number fields there are infinitely many primes that split. This is a far from trivial statement that comes from Class Field Theory. However the analogy isn't perfect, so I don't immediately see how the same methods can be used here.</p> http://mathoverflow.net/questions/82533/a-question-related-to-hilberts-irreducibility-theorem A question related to Hilbert's Irreducibility Theorem Makhalan Duff 2011-12-03T03:54:18Z 2011-12-03T04:22:12Z <p>My question is whether for every extension of number fields $L\subset K$, and for every $f_0(x),...,f_n(x)$ in $K[x]$, there is some $\alpha\in L$ such that $$f_n(\alpha)T^n+...+f_1(\alpha)T+f_0(\alpha)$$ is irreducible as a polynomial in $K[T]$.</p> <p>If $L=K$ this is known from Hilbert's Irreducibility Theorem. I find it hard to believe that there is a counter-example to this, but on the other hand I can't seem to conjure up a proof.</p> http://mathoverflow.net/questions/58901/what-are-the-pillars-of-langlands What are the pillars of Langlands? 
Makhalan Duff 2011-03-19T03:14:35Z 2011-11-30T04:38:05Z <p>I had previously asked: <a href="http://mathoverflow.net/questions/47943/narratives-in-modular-curves" rel="nofollow">http://mathoverflow.net/questions/47943/narratives-in-modular-curves</a></p> <p>Since then, I've read quite a bit more (but not nearly enough) and I have a few follow up questions about the big picture. As you will soon see, I'm confused about how to think about things, and seeing the big picture will help me a lot in learning the specifics (learning in the dark is difficult!).</p> <p>As I understand it, the story goes like this. First, one defined for every number field $\zeta_K(s)=\sum_{\mathfrak{a}} \frac{1}{(N\mathfrak{a})^s}$. One then defines a Dirichlet character, and for any such one defines $L(\chi,s)$. Further, for any $1$-dimensional Galois representation, $\rho: Gal(K/\mathbb{Q}) \rightarrow \mathbb{C}$, one defines $L(\rho,s)$. Now, in the $1$-dimensional case, the main two theorems that comprise class field theory are: if $K$ is abelian over $\mathbb{Q}$ with group $G$, then $\zeta_K(s)=\prod_{\rho \in \hat{G}} L(\rho,s)$; and for any such $\rho$ there exists a unique primitive Dirichlet character $\chi$ such that $L(\rho,s)=L(\chi,s)$. So far I follow the story perfectly.</p> <p>There is also the issue of what if the base field is not $\mathbb{Q}$, which, admittedly, I don't fully have down.</p> <p>Already in dimension $2$ I have a hard time figuring out what generalizes what corresponding thing from dimension $1$. For Galois representations, one continues to define $L(\rho,s)$ in a similar manner: as the product over $p$ of the characteristic polynomials of the action of the corresponding Frobenius (whenever defined for that $p$! It is still a little murky to me what happens at the bad primes). But now we have modular forms coming in to the picture, and the whole theory of modular curves. So how does this fit in as a generalization of the 1-dimensional case? Here's my best guess, you can tell me if I'm right. For a modular form $f$, one defines the $L$-function for it by the $q$-expansion of $f$: If $f(z)=\sum a(n)e(nz)$ then $L(f,s)=\sum \frac{a(n)}{n^s}$. Then various things that I do not fully understand come into play, claiming things like: $L(s,f)=\prod_{q|N}(1-a(q)q^{-s})^{-1} \prod_{p\not |N} (1-a(p)p^{-s}+f(p)p^{k-1-2s})^{-1}$ (probably just for $f$'s with some property, akin to being primitive). It seems (is this true?) that Hecke theory implies that these $L$'s are `nice'' in the sense that they generalize Dirichlet $L$ functions. Is this the right way to see it? How? What is the $1$-dimensional analogue of modular functions, and modular curves?</p> <p>Then I imagine that one has the modularity theorem, one of whose versions is(?) that for every $2$-dimensional Galois representation there's a modular function for which $L(\rho,s)=L(f,s)$.</p> <p>You will notice that at no point did I talk about the adelic aspect. This is because I don't know where to put it. Is the adelic side easily equivalent to the (Dirichlet characters)-(modular functions) side?(are these two even on the same side?) Is it another pillar with which equivalence is far from trivial with both the Galois representations side AND the (Dirichlet characters)-(modular functions) side? In short -- I'm not sure what the pillars of Langlands are!</p> <p>Further, let us assume that we have some version of Langlands. 
Is there a conjectural equivalent form of $\zeta_K(s)=\prod_{\rho \in \hat{G}} L(\rho,s)$ for $K/\mathbb{Q}$ not abelian?

http://mathoverflow.net/questions/21122/intuition-behind-existence-of-moduli-space-of-stable-curves Intuition behind existence of moduli space of stable curves Makhalan Duff 2010-04-12T16:39:03Z 2011-11-15T21:56:38Z I'm not entirely sure that the title is what I'm looking for. What I'm really asking is for intuition as to why $\bar{\mathcal{M}_g}$ is the compactification of $\mathcal{M}_g$. I'm sure this is covered in the more classic papers (like Deligne and Mumford), but I still find those hard to penetrate.

http://mathoverflow.net/questions/80840/what-is-the-intuition-behind-the-proof-of-the-algebraic-version-of-cartans-theor What is the intuition behind the proof of the algebraic version of Cartan's theorem A? Makhalan Duff 2011-11-13T18:46:04Z 2011-11-15T02:31:52Z I am trying to understand the idea behind the proof of GAGA. A crucial step is the following:

Theorem: Let $X=\mathbb{P}^r_{\mathbb{C}}$ (either as a variety or as an analytic space), and let $\mathcal{M}$ be a coherent sheaf on $X$. Then for $n\gg 0$, the twisted sheaf $\mathcal{M}(n)$ is generated by finitely many global sections.

In the algebraic case, this is Theorem 5.17 in Hartshorne, chapter II. If one tries to read the proof of Theorem 5.17, one sees that it depends on Lemma 5.14, which in turn is a generalization of Lemma 5.3. Lemma 5.3 seems to me to be a completely algebraic lemma with no geometric intuition. It would disrupt the flow of the question to state it here, so I will put it at the end. My point is that I don't see any intuition in this statement.

In the analytic case this is equivalent to Cartan's Theorem A (http://en.wikipedia.org/wiki/Cartan_theorem_A). To quote the Wikipedia page: "Naively, they imply that a holomorphic function on a closed complex submanifold $Z$ of a Stein manifold $X$ can be extended to a holomorphic function on all of $X$". I must confess that I have not read a proof of Cartan's Theorem A itself. But I would like to get some intuition about why it is true, and how it translates to the nilpotent proof found in Hartshorne...

Appendix

Lemma 5.3 in Hartshorne, chapter II: Let $X=Spec(A)$ be an affine scheme, let $f\in A$, let $D(f)\subset X$ be the corresponding open set, and let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.

a. If $s\in \Gamma(X,\mathcal{F})$ is such that its restriction to $D(f)$ is $0$, then for some $n>0$, $f^ns=0$.

b. Given a section $t\in \mathcal{F}(D(f))$ of $\mathcal{F}$ over the open set $D(f)$, then for some $n>0$, $f^nt$ extends to a global section of $\mathcal{F}$ over $X$.

The reason I call this a "nilpotent method" is that in complex algebraic geometry $f^ns=0$ would imply that either $f=0$ or $s=0$.

http://mathoverflow.net/questions/80770/reference-request-riemanns-existence-theorem Reference Request: Riemann's Existence Theorem Makhalan Duff 2011-11-12T19:40:36Z 2011-11-13T03:46:20Z By Riemann's existence theorem I mean this:

Let $X$ be some variety defined over $\mathbb{C}$, and let $Y$ be a *topological* covering space of $X$. Then $Y$ can be given the structure of a variety over $\mathbb{C}$, and furthermore this can be done so that the covering map will be algebraic.

It is frequently said that this theorem is not constructive.
That is to say, it is impossible to predict the polynomials that define $Y$ and the polynomial map that defines $Y\rightarrow X$. I want to read the proof, and understand for myself why this is true. Where can I find a good, preferably succinct, proof of this theorem in English?

http://mathoverflow.net/questions/80784/does-grothendieck-teichmuller-tell-us-something-about-galois-actions-or-just-abo Does Grothendieck-Teichmuller tell us something about Galois actions, or just about Gal(Q)? Makhalan Duff 2011-11-12T23:22:44Z 2011-11-12T23:22:44Z Taking the approach in: http://www.msri.org/realvideo/ln/msri/1999/vonneumann/schneps/1/

I view the Grothendieck-Teichmuller conjecture as saying that $Gal(\mathbb{Q})$ is isomorphic to a well understood object. That is, it is isomorphic to $Out^*$ of the fundamental group of the Teichmuller lego.

This seems to indeed be informative about $Gal(\mathbb{Q})$! My question is whether the Grothendieck-Teichmuller philosophy has predictions about how $Gal(\mathbb{Q})$ acts on varieties defined over $\mathbb{Q}$ (for example $\mathbb{P}^1_{\mathbb{Q}}\smallsetminus\{0,1,\infty\}$). From the way that I formulated the conjecture, it is not obvious to me that it does; but I think I am missing the greater picture.

http://mathoverflow.net/questions/79575/how-do-brauer-groups-relate-to-zeta-functions How do Brauer groups relate to zeta functions? Makhalan Duff 2011-10-31T02:30:32Z 2011-10-31T06:16:23Z There are two approaches to class field theory that I was taught. The first is the theory of $L$-functions, Dirichlet characters and so forth (which I described succinctly in the question http://mathoverflow.net/questions/58901/what-are-the-pillars-of-langlands that I asked a long time ago), and the other is through Brauer groups.

To be precise, I was taught that the following sketch is class field theory:

Let $K$ be a number field. Let $v$ be any non-archimedean place. Then there is some explicit isomorphism $inv_v:Br(K_v)\rightarrow \mathbb{Q}/\mathbb{Z}$. If $v$ is an archimedean place, then there is an explicit isomorphism $inv_v:Br(K_v)\rightarrow \mathbb{Z}/2\mathbb{Z}$ if $K_v$ is $\mathbb{R}$, and $0$ if $K_v$ is $\mathbb{C}$.

Furthermore, the following sequence is exact:

$1\rightarrow Br(K)\rightarrow \bigoplus_v Br(K_v)\rightarrow \mathbb{Q}/\mathbb{Z}\rightarrow 1$

where the first morphism is obvious, and the last morphism is $\sum_v inv_v$.

I take it that both of these approaches have a lot of content, but it is not clear how people think of them as *equivalent*. How does one build a dictionary between the one approach and the other?

http://mathoverflow.net/questions/79554/what-is-the-general-statement-of-hilbert-90 What is the general statement of Hilbert 90?
Makhalan Duff 2011-10-30T22:56:52Z 2011-10-30T23:06:24Z I know two generalizations of Hilbert 90, but I don't know if there is a statement that contains both:

The first statement

Let $K$ be a field; then $H^1(Gal(K), GL_n(K^{sep}))=0$.

The second statement

Let $X$ be a scheme; then $H^1(X,\mathbb{G}_m)=Pic(X)$.

Question

Is there a nice characterization of $H^1(X,GL_n)$ (where $GL_n$ is treated as a sheaf in the etale topology) for a general scheme $X$?

http://mathoverflow.net/questions/79011/the-etale-fundamental-group-and-etale-cohomology-with-compact-support The etale fundamental group and etale cohomology with compact support Makhalan Duff 2011-10-24T19:00:20Z 2011-10-25T00:02:36Z Before me, the following was asked: http://mathoverflow.net/questions/16566/etale-fundamental-group-and-etale-cohomology-of-curves

However, that question dealt only with projective curves.

Question

Let $X$ be any scheme (or if you prefer something more concrete, a variety over some field), and let $l$ be some prime different from the characteristics of the residue fields of $X$ (respectively, the characteristic of the field over which the variety is defined); then is there an isomorphism $Hom_{cont}(\pi_1^{et}(X),\mathbb{Q}_l)\cong H^1_c(X,\mathbb{Q}_l)$?

http://mathoverflow.net/questions/74600/is-the-set-of-all-curves-that-have-a-galois-map-to-the-projective-line-zariski-cl Is the set of all curves that have a Galois map to the projective line Zariski closed in M_g? Makhalan Duff 2011-09-05T17:26:27Z 2011-09-11T20:42:33Z Not all Riemann surfaces have branched Galois maps to the Riemann sphere. One way to see this is that if $C\rightarrow\mathbb{P}^1$ is Galois, this implies that $C$ is defined over its field of moduli (as a curve).

What is the shape of all genus $g$ Riemann surfaces that have a Galois map to $\mathbb{P}^1$? Is it Zariski closed in the coarse moduli space of genus $g$ curves $M_g$?

http://mathoverflow.net/questions/74815/is-there-for-every-variety-x-an-abelian-variety-a-such-that-their-1st-l-adic-coho Is there for every variety X an abelian variety A such that their 1st l-adic cohomologies are isomorphic? Makhalan Duff 2011-09-08T01:03:43Z 2011-09-09T12:05:39Z This question is somewhat inspired by Kevin Buzzard's answer to http://mathoverflow.net/questions/74776/what-is-the-interpretation-of-complex-multiplication-in-terms-of-langlands and somewhat by my own curiosity about such topics.

Let $X$ be a variety over $\mathbb{Q}$. This variety induces a pure motive of weight $1$ (it induces pure motives of other weights, but I will focus on the one with weight $1$). I understand that weight $1$ motives come (conjecturally, of course) from weight $2$ newforms.

Okay, now let's trace it back. Let's start with a weight $2$ newform. Then, as James D.
Taylor alluded to (and from what I know from http://staff.science.uva.nl/~bmoonen/MTGps.pdf), the corresponding newform must be the pure motive of weight 1 that is induced by an abelian variety (this is special to the weight $2$ newforms).

If so, then it seems that this proves that any pure motive of weight 1 is equal to the pure motive of weight 1 coming from some abelian variety.

Put back in words that are not conjectural: Is it true that for every variety $X$ over $\mathbb{Q}$ there is an abelian variety $A$ over $\mathbb{Q}$ such that $H^1_{et}(X,\mathbb{Q}_l) \cong H^1_{et}(A,\mathbb{Q}_l)$ as $Gal(\mathbb{Q})$-representations?

Or perhaps is the following weaker statement true (if I somehow managed to get something wrong in the above): for every variety $X$ over $\mathbb{Q}$, $L(X,s)$ (coming from the action on the pure motive of weight 1 -- which is well defined even without motives, since one can create it using $l$-adic cohomology) $= \prod_i L(A_i,s)$, where the $A_i$'s are (finitely many) abelian varieties over $\mathbb{Q}$ and the $L$'s are coming from their pure motives of weight $1$.

I would very much like to know, if the above is wrong, where exactly the fallacy was. But if everything above is right, then is this known without assuming crazy conjectures like the standard conjectures or forms of Langlands?

http://mathoverflow.net/questions/63964/given-a-branched-cover-with-branch-cycle-description-g-1-g-r-does-g-i Given a branched cover with branch cycle description $(g_1,...,g_r)$, does $g_i$ generate some decomposition group? Makhalan Duff 2011-05-05T01:27:55Z 2011-08-25T20:22:12Z Classically: Let $a_1,...,a_r$ be points in $\mathbb{P}^1_{\mathbb{C}}$, and let $\alpha_1,...,\alpha_r$ be simple loops around the $a_i$, all counterclockwise, and none touching (so $\alpha_1...\alpha_r=1$ in the fundamental group of the projective line minus those points). A (pointed, to be pedantic) unramified $G$-cover (meaning a normal covering space with deck transformations $=G$) of $\mathbb{P}^1_{\mathbb{C}}-\{a_1,...,a_r\}$ is given by a surjection $\pi_1(\mathbb{P}^1_{\mathbb{C}}-\{a_1,...,a_r\}) \rightarrow G$. Let $g_i$ be the image of $\alpha_i$. We say that this $G$-Galois branched cover has branch cycle description $(g_1,...,g_r)$ (note that this depends on our choice of the $\alpha_i$'s). This covering map of curves can be extended to a map of (smooth) projective curves. It can then be shown by a simple topological argument that $g_i$ generates the inertia group (=decomposition group in this case) of some point above $a_i$.

My question is whether (and if so, how?) this is also true in the $\overline{\mathbb{F}_p}$ case.

Let me be precise. It is known via Grothendieck that $\pi_1^{(p)}(\mathbb{P}^1_{\overline{\mathbb{F}_p}}-\{a_1,...,a_r\})=\widehat{\langle \alpha_1,...,\alpha_r|\prod \alpha_i =1 \rangle}^{(p)}$ (the $^{(p)}$ indicates that we're taking the inverse limit of all prime-to-$p$ finite quotients). Since these $\alpha_i$'s are given in SGA1 through a rather mysterious method, I wonder if the phenomenon described in the first paragraph is still true.

My question, therefore, is: let $G$ be a prime-to-$p$ group, and let $X\rightarrow \mathbb{P}^1_{\overline{\mathbb{F}_p}}$ be a (pointed, to be pedantic) branched $G$-cover with branch points $a_1,...,a_r$.
Let $\alpha_1,...,\alpha_r$ be such that $\pi_1^{(p)}(\mathbb{P}^1_{\overline{\mathbb{F}_p}}-\{a_1,...,a_r\})=\widehat{\langle \alpha_1,...,\alpha_r|\prod \alpha_i =1 \rangle}^{(p)}$ (I'm almost positive that what I'm about to say is false if you're allowed to choose any such $\alpha_i$'s, so let's assume that we're taking the ones from Grothendieck's construction. If you see a better way of saying what the condition should be on the $\alpha_i$'s, I would be very interested in that). Let the branch cycle description of this cover be $(g_1,...,g_r)$ (with respect to these $\alpha_i$'s). Is it true that $g_i$ generates the inertia group (=decomposition group in this case) of some point of $X$ above $a_i$?

The topological argument that we were able to use for the $\mathbb{C}$ case seems to no longer apply...

http://mathoverflow.net/questions/130770/can-group-cohomology-be-interpreted-as-an-obstruction-to-lifts/130792#130792 Comment by Makhalan Duff Makhalan Duff 2013-05-16T14:13:06Z 2013-05-16T14:13:06Z Thanks! Makes perfect sense. http://mathoverflow.net/questions/119439/does-the-proof-of-gaga-use-the-axiom-of-choice Comment by Makhalan Duff Makhalan Duff 2013-01-21T04:47:25Z 2013-01-21T04:47:25Z Mariano, it is! Think for example of the case of curves. There it does reduce to saying "topological". http://mathoverflow.net/questions/119439/does-the-proof-of-gaga-use-the-axiom-of-choice/119441#119441 Comment by Makhalan Duff Makhalan Duff 2013-01-21T04:46:23Z 2013-01-21T04:46:23Z I'm not so sure that it's as straightforward as you say. If I give you an analytic coherent sheaf, would you be able to give me the algebraic coherent sheaf that induces it? http://mathoverflow.net/questions/102839/what-is-the-relationship-between-motivic-cohomology-and-the-theory-of-motives/102842#102842 Comment by Makhalan Duff Makhalan Duff 2012-07-22T01:26:05Z 2012-07-22T01:26:05Z It does indeed help! http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-07-02T02:19:09Z 2012-07-02T02:19:09Z $\mathbb{I}_{\mathbb{Q}}/\mathbb{Q}^{\times}$ by $D_{\mathbb{Q}}$, which is the product of the infinite places, and by $\prod \mathbb{Z}_p^{\times}$ (to account for not allowing ramification anywhere). But I wonder where you're using something special about $\mathbb{Q}$, because this isn't true for all number fields, is it? http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-07-02T02:19:02Z 2012-07-02T02:19:02Z @Dror: that's very helpful. Let me ask you a few questions. 1. Is $D_K$ always the product over all infinite places? 2. Let $L$ be the maximal abelian extension of $K$ that is unramified over a specific set of primes $S$ of $O_K$. Are you saying that $L^{\times}N_{L/K}(\mathbb{I}_L)$ is equal to the product $\prod_{\mathfrak{p} \not \in S} O_{\mathfrak{p}}^{\times}$? I guess what I'm trying to ask is: it seems that you're saying that the "reason" that there are no abelian unramified extensions of $\mathbb{Q}$ is that you get $Gal(\mathbb{Q}^{un,ab}/\mathbb{Q})$ by quotienting http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-06-30T16:03:14Z 2012-06-30T16:03:14Z $K^{\times}\backslash \mathbb{I}_K/\prod_{v\not \in S} O^{\times}_v$ is isomorphic to $Gal(K^{ab}/K)$. What is the correct way to think about this?
http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-06-30T16:02:15Z 2012-06-30T16:02:15Z @Felipe: sorry for asking so many questions, but I'm confused again. Serre's book would be easier to read with a little motivation, so hopefully you'll indulge me. My perception was that if $S$ is a finite set of places, then $K^{\times}\backslash \mathbb{I}_K/\prod_{v\not \in S} O^{\times}_v$ would have a profinite completion which is isomorphic to $Gal(L/K)$ where $L$ is the maximal abelian cover of $K$ that is ramified only in $S$. But it seems that what you're saying is that if S is the set of infinite places, then http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-06-30T13:06:40Z 2012-06-30T13:06:40Z I'm sorry, my last comment was silly. You're right that $\pi_1^{ab}(C)\cong Gal(K^{ab,un}/K)$. My question is, why is the analogous statement for $K$ a number field that $K^{\times}\backslash \mathbb{I}_K/\prod_v O_v^{\times}$ is isomorphic (after taking profinite completions) to $Gal(K^{ab}/K)$ rather than $Gal(K^{ab,un}/K)$? Or am I wrong about that? For example $\mathbb{Q}^{\times}\backslash \mathbb{I}_{\mathbb{Q}}/\prod_p \mathbb{Z}_p^{\times}$ is far from trivial, isn't it? I guess I'm just finding it hard to draw the analogy. http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-06-30T01:24:31Z 2012-06-30T01:24:31Z In what sense? For example it is not true that every abelian cover of $\mathbb{P}^1_{\mathbb{F}_p}$ is unramified! (E.g., $y^2=x$ will define an abelian cover ramified over $0$ and $\infty$.) I feel like I'm missing the crucial point you're trying to get across... Can you tell me what I'm missing? http://mathoverflow.net/questions/100977/how-does-one-understand-geometric-cft-in-terms-of-modularity Comment by Makhalan Duff Makhalan Duff 2012-06-29T22:56:32Z 2012-06-29T22:56:32Z @Felipe: I am confused by your statement. The analogous theorem to the theorem I cited in the number theory case describes $Gal(K^{ab}/K)$ in terms of (the profinite completion of) a double quotient of the ideles of $K$. If the theorem I cited is only unramified class field theory, wouldn't it describe $Gal(K^{ab,un}/K)$ instead? http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits Comment by Makhalan Duff Makhalan Duff 2012-05-05T19:27:03Z 2012-05-05T19:27:03Z Mark, I'm still trying to figure out whether it's true even in the context of that question. Yiftach Barnea didn't explain why that isomorphism is true. http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits Comment by Makhalan Duff Makhalan Duff 2012-05-05T19:26:20Z 2012-05-05T19:26:20Z Let's put it in the simplest possible terms: is it possible to show that for $G$ and $H$ finitely generated, with an action $\phi_1:H\rightarrow Aut(G)$ which extends (by assumption) to an action $\phi_2:\hat{H}\rightarrow Aut(\hat{G})$ (where hat denotes profinite completion), it is true that $\hat{G}\rtimes \hat{H}$ is a profinite group? http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits Comment by Makhalan Duff Makhalan Duff 2012-05-05T19:11:46Z 2012-05-05T19:11:46Z Hmm... 
I'm looking at the question: http://mathoverflow.net/questions/60750/profinite-completion-of-a-semidirect-product in which you also participated. In Yiftach Barnea's answer, I believe that he assumes that inverse limits do commute with semi-direct products when he says that $\hat{G}\rtimes \hat{H}\cong \varprojlim (G\rtimes H)/(G_n\rtimes N)$. Would you agree with his usage, or do you think he is wrong? http://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits Comment by Makhalan Duff Makhalan Duff 2012-05-05T18:58:13Z 2012-05-05T18:58:13Z Mark, it is indeed! Is it true? Do you have a reference, or can you give a reason for it to be true?
|
2013-05-21 06:25:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9108710289001465, "perplexity": 437.197421821707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699730479/warc/CC-MAIN-20130516102210-00004-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://theinfolist.com/html/ALL/s/Group_isomorphism.html
|
TheInfoList
In abstract algebra, a group isomorphism is a function between two groups that sets up a one-to-one correspondence between the elements of the groups in a way that respects the given group operations. If there exists an isomorphism between two groups, then the groups are called isomorphic. From the standpoint of group theory, isomorphic groups have the same properties and need not be distinguished.
# Definition and notation
Given two groups $\left(G, *\right)$ and $\left(H, \odot\right),$ a *group isomorphism* from $\left(G, *\right)$ to $\left(H, \odot\right)$ is a bijective group homomorphism from $G$ to $H.$ Spelled out, this means that a group isomorphism is a bijective function $f : G \to H$ such that for all $u$ and $v$ in $G$ it holds that $f(u * v) = f(u) \odot f(v).$ The two groups $\left(G, *\right)$ and $\left(H, \odot\right)$ are isomorphic if there exists an isomorphism from one to the other. This is written: $(G, *) \cong (H, \odot)$

Often shorter and simpler notations can be used. When the relevant group operations are unambiguous they are omitted and one writes: $G \cong H$ Sometimes one can even simply write $G = H.$ Whether such a notation is possible without confusion or ambiguity depends on context. For example, the equals sign is not very suitable when the groups are both subgroups of the same group. See also the examples.

Conversely, given a group $\left(G, *\right),$ a set $H,$ and a bijection $f : G \to H,$ we can make $H$ a group $\left(H, \odot\right)$ by defining $f(u) \odot f(v) = f(u * v).$ If $H = G$ and $\odot = *$ then the bijection is an automorphism (*q.v.*).

Intuitively, group theorists view two isomorphic groups as follows: For every element $g$ of a group $G,$ there exists an element $h$ of $H$ such that $h$ 'behaves in the same way' as $g$ (operates with other elements of the group in the same way as $g$). For instance, if $g$ generates $G,$ then so does $h.$ This implies in particular that $G$ and $H$ are in bijective correspondence. Thus, the definition of an isomorphism is quite natural.

An isomorphism of groups may equivalently be defined as an invertible morphism in the category of groups, where invertible here means having a two-sided inverse.
# Examples
In this section some notable examples of isomorphic groups are listed.

* The group of all real numbers with addition, $\left(\R, +\right),$ is isomorphic to the group of positive real numbers with multiplication $\left(\R^+, \times\right)$: $\left(\R, +\right) \cong \left(\R^+, \times\right)$ via the isomorphism $f\left(x\right) = e^x$ (see exponential function).
* The group $\Z$ of integers (with addition) is a subgroup of $\R,$ and the factor group $\R/\Z$ is isomorphic to the group $S^1$ of complex numbers of absolute value 1 (with multiplication): $\R/\Z \cong S^1$
* The Klein four-group is isomorphic to the direct product of two copies of $\Z_2 = \Z/2\Z$ (see modular arithmetic), and can therefore be written $\Z_2 \times \Z_2.$ Another notation is $\operatorname{Dih}_2,$ because it is a dihedral group.
* Generalizing this, for all odd $n,$ $\operatorname{Dih}_{2n}$ is isomorphic with the direct product of $\operatorname{Dih}_n$ and $\Z_2.$
* If $\left(G, *\right)$ is an infinite cyclic group, then $\left(G, *\right)$ is isomorphic to the integers (with the addition operation). From an algebraic point of view, this means that the set of all integers (with the addition operation) is the 'only' infinite cyclic group.

Some groups can be proven to be isomorphic, relying on the axiom of choice, but the proof does not indicate how to construct a concrete isomorphism. Examples:

* The group $\left(\R, +\right)$ is isomorphic to the group $\left(\Complex, +\right)$ of all complex numbers with addition.
* The group $\left(\Complex^*, \cdot\right)$ of non-zero complex numbers with multiplication as operation is isomorphic to the group $S^1$ mentioned above.
# Properties
The kernel of an isomorphism from $\left(G, *\right)$ to $\left(H, \odot\right)$ is always $\{e_G\},$ where $e_G$ is the identity of the group $\left(G, *\right).$

If $\left(G, *\right)$ and $\left(H, \odot\right)$ are isomorphic, then $G$ is abelian if and only if $H$ is abelian.

If $f$ is an isomorphism from $\left(G, *\right)$ to $\left(H, \odot\right),$ then for any $a \in G,$ the order of $a$ equals the order of $f\left(a\right).$

If $\left(G, *\right)$ and $\left(H, \odot\right)$ are isomorphic, then $\left(G, *\right)$ is a locally finite group if and only if $\left(H, \odot\right)$ is locally finite.

The number of distinct groups (up to isomorphism) of order $n$ is given by sequence A000001 in the OEIS. The first few numbers are 0, 1, 1, 1 and 2, meaning that 4 is the lowest order with more than one group.
# Cyclic groups
All cyclic groups of a given order are isomorphic to $\left(\Z_n, +_n\right),$ where $+_n$ denotes addition modulo $n.$ Let $G$ be a cyclic group and $n$ be the order of $G.$ $G$ is then the group generated by $x$: $\langle x \rangle = \left\{e, x, \ldots, x^{n-1}\right\}.$ We will show that $G \cong \left(\Z_n, +_n\right).$ Define $\varphi : G \to \Z_n = \left\{0, 1, \ldots, n-1\right\},$ so that $\varphi\left(x^a\right) = a.$ Clearly, $\varphi$ is bijective. Then $\varphi\left(x^a \cdot x^b\right) = \varphi\left(x^{a+b}\right) = a +_n b = \varphi\left(x^a\right) +_n \varphi\left(x^b\right),$ which proves that $G \cong \left(\Z_n, +_n\right).$
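As a concrete instance (my addition): the subgroup of $(\Z/7\Z)^\times$ generated by $3$ is cyclic of order $6$, so by the above it is isomorphic to $(\Z_6, +_6)$. A short Python sketch checking the map $a \mapsto 3^a \bmod 7$ in the direction $(\Z_6, +_6) \to \langle 3 \rangle$:

```python
# Check that phi(a) = 3^a mod 7 is an isomorphism from (Z_6, +_6)
# onto the cyclic group generated by 3 inside (Z/7Z)*.
n, g, p = 6, 3, 7
phi = {a: pow(g, a, p) for a in range(n)}

assert len(set(phi.values())) == n  # bijective onto <3>
for a in range(n):
    for b in range(n):
        # phi(a +_n b) must equal phi(a) * phi(b) modulo 7
        assert phi[(a + b) % n] == (phi[a] * phi[b]) % p
print("a -> 3^a mod 7 is an isomorphism from (Z_6, +_6)")
```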
# Consequences
From the definition, it follows that any isomorphism $f : G \to H$ will map the identity element of $G$ to the identity element of $H,$ $f\left(e_G\right) = e_H,$ that it will map inverses to inverses, and more generally, $n$th powers to $n$th powers, and that the inverse map $f^{-1} : H \to G$ is also a group isomorphism.

The relation "being isomorphic" satisfies all the axioms of an equivalence relation. If $f$ is an isomorphism between two groups $G$ and $H,$ then everything that is true about $G$ that is only related to the group structure can be translated via $f$ into a true ditto statement about $H,$ and vice versa.
# Automorphisms
An isomorphism from a group $\left(G, *\right)$ to itself is called an automorphism of this group. Thus it is a bijection $f : G \to G$ such that $f(u) * f(v) = f(u * v).$

An automorphism always maps the identity to itself. The image under an automorphism of a conjugacy class is always a conjugacy class (the same or another). The image of an element has the same order as that element.

The composition of two automorphisms is again an automorphism, and with this operation the set of all automorphisms of a group $G,$ denoted by $\operatorname{Aut}\left(G\right),$ forms itself a group, the automorphism group of $G.$

For all abelian groups there is at least the automorphism that replaces the group elements by their inverses. However, in groups where all elements are equal to their inverse this is the trivial automorphism, e.g. in the Klein four-group. For that group all permutations of the three non-identity elements are automorphisms, so the automorphism group is isomorphic to $S_3$ and $\operatorname{Dih}_3.$

In $\Z_p$ for a prime number $p,$ one non-identity element can be replaced by any other, with corresponding changes in the other elements. The automorphism group is isomorphic to $\Z_{p-1}.$ For example, for $p = 7,$ multiplying all elements of $\Z_7$ by 3, modulo 7, is an automorphism of order 6 in the automorphism group, because $3^6 \equiv 1 \pmod 7,$ while lower powers do not give 1. Thus this automorphism generates $\Z_6.$ There is one more automorphism with this property: multiplying all elements of $\Z_7$ by 5, modulo 7. Therefore, these two correspond to the elements 1 and 5 of $\Z_6,$ in that order or conversely.

The automorphism group of $\Z_6$ is isomorphic to $\Z_2,$ because only each of the two elements 1 and 5 generates $\Z_6,$ so apart from the identity we can only interchange these.

The automorphism group of $\Z_2 \oplus \Z_2 \oplus \Z_2 = \operatorname{Dih}_2 \oplus \Z_2$ has order 168, as can be found as follows. All 7 non-identity elements play the same role, so we can choose which plays the role of $\left(1,0,0\right).$ Any of the remaining 6 can be chosen to play the role of $(0,1,0).$ This determines which element corresponds to $\left(1,1,0\right).$ For $\left(0,0,1\right)$ we can choose from 4, which determines the rest. Thus we have $7 \times 6 \times 4 = 168$ automorphisms. They correspond to those of the Fano plane, of which the 7 points correspond to the 7 non-identity elements. The lines connecting three points correspond to the group operation: $a, b, \text{ and } c$ on one line means $a + b = c,$ $a + c = b,$ and $b + c = a.$ See also general linear group over finite fields.

For abelian groups all automorphisms except the trivial one are called outer automorphisms. Non-abelian groups have a non-trivial inner automorphism group, and possibly also outer automorphisms.
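The order-168 count can be verified by brute force. A small Python sketch (my addition) that enumerates all bijections of $\Z_2 \oplus \Z_2 \oplus \Z_2$ fixing the identity and counts those that respect addition:

```python
from itertools import permutations, product

# Elements of Z_2 + Z_2 + Z_2 as bit-tuples, with componentwise addition mod 2.
G = list(product((0, 1), repeat=3))
zero = (0, 0, 0)
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

others = [g for g in G if g != zero]
count = 0
for image in permutations(others):
    f = dict(zip(others, image))
    f[zero] = zero
    # An automorphism must satisfy f(u + v) = f(u) + f(v) for all u, v.
    if all(f[add(u, v)] == add(f[u], f[v]) for u in G for v in G):
        count += 1
print(count)  # 168
```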
|
2022-07-05 19:03:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 134, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8757728934288025, "perplexity": 491.49493644947313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104597905.85/warc/CC-MAIN-20220705174927-20220705204927-00343.warc.gz"}
|
https://www.tutorialspoint.com/serverless/serverless_packaging_dependencies.htm
|
# Serverless - Packaging Dependencies
In the previous chapter, we saw how to use plugins with serverless. We specifically looked at the Python Requirements plugin and saw how it can be used to bundle dependencies like numpy, scipy, pandas, etc., with your lambda function's application code. We even saw an example of deploying a function requiring the numpy dependency. It ran well locally, but on the AWS Lambda console you would have encountered an error if you deployed from a Windows or Mac machine. Let's understand why the function runs locally but doesn't run after deployment.
If you look at the error message, you get some hints. I'm specifically referring to one line − 'Importing the numpy C-extensions failed.' Many important python packages like numpy, pandas, scipy, etc., require the compilation of C-extensions. If we compile them on a Windows or a Mac machine, then Lambda (a Linux environment) will throw an error when trying to load them. So the important question is: what can be done to avoid this error? Enter docker!
## What is docker?
According to Wikipedia, docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. If you scan the Wikipedia page of docker a bit more, you will come across some more relevant statements − Docker can package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer. This enables the application to run in a variety of locations, such as on-premises, in a public cloud, and/or in a private cloud. I think it should be very clear after the above statements. We have an error coming up because C-extensions compiled on Windows/Mac don't work in Linux.
We can simply bypass that error by packaging the application in a container that can run on any OS. What docker does in the background to achieve this OS-level virtualization is beyond the scope of this chapter.
## Installing docker
You can head over to https://docs.docker.com/engine/install/ for the installation of Docker Desktop. If you are using Windows 10 Home Edition, the Windows version should be at least 1903 (May 2019 update). Therefore, you may want to upgrade your Windows 10 OS before installing Docker Desktop. No such limitations apply to Windows Pro or Enterprise versions.
## Using dockerizePip in serverless
Once Docker Desktop has been installed on your machine, you need to make only the following addition to your serverless.yml file to package your applications and dependencies using docker −
custom:
  pythonRequirements:
    dockerizePip: true
Please note that if you have been following along since the previous chapter, it is likely that you have already deployed code to lambda once. This would have created a static cache in your local storage. By default, serverless would use that cache to bundle dependencies, and therefore the docker container won't be created. To force serverless to use docker, we will add another statement to pythonRequirements −
custom:
  pythonRequirements:
    dockerizePip: true
    useStaticCache: false # not necessary if you will be deploying the code to lambda for the first time
This last statement is not necessary if you are deploying to lambda for the first time. In general, you should set useStaticCache to true, since that will save you some packaging time when you haven't made any changes to the dependencies or the way they have to be packaged.
With these additions, the serverless.yml file now looks like −
service: hello-world-python

provider:
  name: aws
  runtime: python3.6
  profile: yash-sanghvi
  region: ap-south-1

functions:
  hello_world:
    handler: handler.hello
    timeout: 6
    memorySize: 128

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true
    useStaticCache: false # not necessary if you will be deploying the code to lambda for the first time
Now, when you run the sls deploy -v command, make sure that docker is running in the background. On Windows, you can just search for Docker Desktop in the Start menu and double click on the app. You will soon get a message that it is running. You can also verify this through the small popup near the battery icon in Windows. If you can see the docker icon there, it is running.
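If you want to automate that check, here is a small optional Python sketch (my addition, not part of the tutorial or the plugin). It shells out to docker info, which exits with a non-zero code when the Docker daemon is not reachable:

```python
import subprocess

def docker_is_running() -> bool:
    """Return True if the Docker daemon answers `docker info`."""
    try:
        result = subprocess.run(["docker", "info"], capture_output=True, timeout=30)
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

if __name__ == "__main__":
    if docker_is_running():
        print("Docker is running -- safe to run `sls deploy -v`.")
    else:
        print("Start Docker Desktop before deploying.")
```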
Now when you run your function on the AWS Lambda console, it would work. Congratulations!!
However, in your 'Function Code' section on the AWS Lambda console, you would be seeing a message saying 'The deployment package of your Lambda function "hello-world-python-dev-hello_world" is too large to enable inline code editing. However, you can still invoke your function.'
Seems like the addition of the Numpy dependency has made the bundle size too large and as a result, we cannot even edit our application code in the lambda console. How do we solve that problem? Head on to the next chapter to find out.
|
2022-11-30 01:57:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1787821650505066, "perplexity": 2775.3890426793537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00034.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/proc.2015.0185
|
# Construction of highly stable implicit-explicit general linear methods
• This paper deals with the numerical solution of systems of differential equations with a stiff part and a non-stiff one, typically arising from the semi-discretization of certain partial differential equation models. We illustrate the construction and analysis of highly stable, high-stage-order implicit-explicit (IMEX) methods based on diagonally implicit multistage integration methods (DIMSIMs), a subclass of general linear methods (GLMs). Some examples of methods with optimal stability properties are given. Finally, numerical experiments confirm the theoretical expectations.
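For readers unfamiliar with the implicit-explicit idea, here is a minimal first-order IMEX Euler sketch in Python. It is my illustration of the general splitting principle, deliberately much simpler than the high-stage-order DIMSIM-based schemes constructed in the paper: the stiff linear term is treated implicitly and the non-stiff term explicitly, so the step stays stable at step sizes where fully explicit Euler would diverge.

```python
# IMEX Euler for y' = lam*y + f(y), with the stiff linear part lam*y implicit:
#   y_{n+1} = y_n + h*f(y_n) + h*lam*y_{n+1}
# Because the implicit part is linear, each step solves in closed form.
def imex_euler(f, lam, y0, h, n_steps):
    y = y0
    for _ in range(n_steps):
        y = (y + h * f(y)) / (1.0 - h * lam)
    return y

# Stiff decay plus a mild nonlinearity; with lam = -1000 and h = 0.01,
# explicit Euler would diverge (|1 + h*lam| = 9 > 1), but this step is stable.
lam = -1000.0
f = lambda y: y * (1.0 - y)
print(imex_euler(f, lam, y0=0.5, h=0.01, n_steps=200))
```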
Mathematics Subject Classification: Primary: 65L05; Secondary: 65L20.
|
2023-03-20 20:03:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5747409462928772, "perplexity": 2614.2293661959557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00630.warc.gz"}
|
https://www.hackmath.net/en/math-problem/6634
|
# Curved surface area CSA
A cylinder 5 cm high has a base radius of (7/2) cm. Calculate the curved surface area.
Correct result:
S = 109.956 cm²
#### Solution:
$h=5\ \text{cm},\quad r=\dfrac{7}{2}=3.5\ \text{cm}$
$S=2 \pi r h=2 \cdot 3.1416 \cdot 3.5 \cdot 5 = 109.956\ \text{cm}^2$
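The same computation as a short Python check (my addition; it uses the exact value of π rather than the rounded 3.1416, and the result still matches to three decimals):

```python
import math

# Curved (lateral) surface area of a cylinder: S = 2*pi*r*h
h = 5.0        # height in cm
r = 7.0 / 2.0  # base radius in cm
S = 2 * math.pi * r * h
print(f"S = {S:.3f} cm^2")  # S = 109.956 cm^2
```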
## Next similar math problems:
• Greenhouse
A garden plastic greenhouse is shaped like a half cylinder with a diameter of 6 m and a base length of 20 m. At least how many m² of plastic are needed to cover it?
• Closed drum
Find the total surface area of a closed cylindrical drum if its diameter is 50 cm and its height is 45 cm. (π = 3.14)
• Circle - easy 2
The circle has a radius 6 cm. Calculate:
• Circle area
Calculate the circle area with a radius of 1.2 m.
• 22/7 circle
Calculate approximately the area of a circle with radius 20 cm. When calculating, use π = 22/7.
• Circle from string
Martin has a 628 mm long string. He makes a circle from it. Calculate the radius of the circle.
• Circle
What is the radius of the circle whose perimeter is 6 cm?
• Bicycle wheel
After driving 157 m, a bicycle wheel rotates 100 times. What is the radius of the wheel in cm?
• Athlete
How long a distance does an athlete run when the track is a circle of radius 120 meters and the athlete runs five times around the circuit?
• The diameter
The diameter of a circle is 4 feet. What is the circle's circumference?
• Clock hands
The second hand has a length of 1.5 cm. How far does the endpoint of this hand travel in one day?
• Circle - simple
The circumference of a circle is 198 mm. How long in mm is its diameter?
• Coal mine
The towing wheel has a diameter of 1.7 meters. How many meters is the elevator cage lowered when the wheel turns 32 times?
• Bicycle wheel
A bicycle wheel has a diameter of 62 cm. How many times does the wheel turn on a road 1 km long?
• Velocipede
The front wheel of a velocipede from the year 1880 had a diameter of 1.8 m. When the front wheel turned once, the rear wheel turned 6 times. What was the diameter of the rear wheel?
• Well
A rope with a bucket is fixed on a shaft with a wheel. The shaft has a diameter of 50 cm. How many meters will the bucket drop when the wheel turns 15 times?
• Mine
A wheel in a traction tower has a diameter of 4 m. How many meters will an elevator cabin travel if the wheel rotates 89 times in the same direction?
|
2020-08-15 04:11:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6800124645233154, "perplexity": 1562.210926238765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00000.warc.gz"}
|
https://physics.stackexchange.com/questions/524703/integration-and-average-in-physics
|
# Integration and average in physics? [closed]
Many applications of physics theory involve computations of integrals. Examples are voltage, force due to liquid pressure, surfaces...
In some cases (when, I guess, the integrand depends linearly on the integration variable) the average of the integrand over the interval is exactly the mean of its values at the extrema of integration. However, I still feel uncomfortable with such a procedure. How can I recognize when it is possible to compute an average instead of an integral? Is there an example of the reasoning I should go through?
Thanks very much!
Edit: I've added here an example, to be clearer:
When computing the capacitance of a spherical capacitor, at the end we get $$\frac{4\pi\epsilon_0r_ar_b}{r_b-r_a},$$ where $$4\pi r_ar_b$$ is the geometric mean of the two surfaces $$4\pi{r_a^2}$$ and $$4\pi{r_b^2}.$$
The result, therefore, is $$C=\frac{\epsilon_0A_{av}}{d},$$ where $$A_{av}$$ is this mean surface area and $$d=r_b-r_a$$ is the gap between the spheres.
• The integral is the general solution. In some specific cases, you will find that the integral is solvable in closed form and alternative, simpler expressions can be used (which may or may not include an average). You seem to know that. So what is the question? – Brick Jan 13 at 14:31
• @Brick I'm aware it is possible, but usually I see that later, a posteriori. So, I'm asking for a simple clarification of why that is true (i.e. which are the circumstances where we can use an alternative to an integral). – Shootforthemoon Jan 13 at 14:38
Average of a given function, say $$f(x)$$ from $$x_1$$ to $$x_2$$ is found via the following integral:
$$\langle f(x) \rangle =\frac {\int_{x_1}^{x_2} f(x) dx}{x_2-x_1}$$
Now when the function you are considering is a linear function i.e., $$f(x)=x$$ (say) then you get:
$$\langle f(x) \rangle =\frac {\int_{x_1}^{x_2} x\, dx}{x_2-x_1}=\frac {\tfrac{1}{2}\left(x_2^{2}-x_1^{2}\right)}{x_2-x_1}$$
$$\langle f(x) \rangle =\frac { x_2+x_1}{2}$$
since $$x_2^{2}-x_1^{2}=(x_2-x_1)(x_2+x_1)$$.
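A worked check against the capacitor example from the question (added for clarity; this is not part of the original answer). The integrand there is $$1/r^{2}$$, which is not linear in $$r$$, so the arithmetic mean does not appear; the "average" area turns out to be the geometric mean of the two surfaces:
$$\frac{1}{C}=\frac{1}{4\pi\epsilon_0}\int_{r_a}^{r_b}\frac{dr}{r^{2}}=\frac{1}{4\pi\epsilon_0}\,\frac{r_b-r_a}{r_a r_b} \quad\Longrightarrow\quad C=\frac{4\pi\epsilon_0\,r_a r_b}{r_b-r_a}=\frac{\epsilon_0\sqrt{A_a A_b}}{d},$$
with $$A_a=4\pi r_a^{2}$$, $$A_b=4\pi r_b^{2}$$ and $$d=r_b-r_a$$.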
|
2020-02-17 19:59:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9261809587478638, "perplexity": 249.78819543867135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00544.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/120637-exam-study-guide-help.html
|
# Thread: Exam study guide help
1. ## Exam study guide help
This is from my final study guide, and I'm confused by the question, please help....
For $H, K \leq G$, set $H \sim K$ when $K = g^{-1}Hg$ for some $g \in G$.
Finish the statement: "$\left|[H]_\sim\right| = 1$ $\iff$ $H$ is...." and justify your answer
2. Originally Posted by ElieWiesel
This is from my final study guide, and I'm confused by the question, please help....
For $H, K \leq G$, set $H \sim K$ when $K = g^{-1}Hg$ for some $g \in G$.
Finish the statement: "$\left|[H]_\sim\right| = 1$ $\iff$ $H$ is...." and justify your answer
What does $\left|\left[H\right]_\sim\right|$ mean?
3. $\left|\left[H\right]_\sim\right|=1\iff \text{ for all } g\in G, H=g^{-1}Hg \iff H\text{ is normal subgroup of }G$
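To spell out the equivalence (an added sketch, not part of the original thread): the class $[H]_\sim$ is the set of conjugates of $H$,
$[H]_\sim = \{\, g^{-1}Hg : g \in G \,\}$,
so $\left|[H]_\sim\right| = 1$ holds exactly when every conjugate equals $H$ itself, i.e. $g^{-1}Hg = H$ for all $g \in G$, which is precisely the definition of $H$ being a normal subgroup of $G$.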
|
2016-12-07 10:51:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6841947436332703, "perplexity": 5251.7784463885055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542060.60/warc/CC-MAIN-20161202170902-00218-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://scipost.org/submissions/2007.11711v2/
|
# Quantum hypothesis testing in many-body systems
### Submission summary
As Contributors: Victor Godet · Jani Kastikainen Arxiv Link: https://arxiv.org/abs/2007.11711v2 (pdf) Code repository: https://github.com/victorgodet/quantum-hypothesis-testing Date submitted: 2020-08-21 09:30 Submitted by: Godet, Victor Submitted to: SciPost Physics Academic field: Physics Specialties: Condensed Matter Physics - Theory High-Energy Physics - Theory Quantum Physics
### Abstract
One of the key tasks in physics is to perform measurements in order to determine the state of a system. Often, measurements are aimed at determining the values of physical parameters, but one can also ask simpler questions, such as "is the system in state A or state B?". In quantum mechanics, the latter type of measurements can be studied and optimized using the framework of quantum hypothesis testing. In many cases one can explicitly find the optimal measurement in the limit where one has simultaneous access to a large number $n$ of identical copies of the system, and estimate the expected error as $n$ becomes large. Interestingly, error estimates turn out to involve various quantum information theoretic quantities such as relative entropy, thereby giving these quantities operational meaning. In this paper we consider the application of quantum hypothesis testing to quantum many-body systems and quantum field theory. We review some of the necessary background material, and study in some detail the situation where the two states one wants to distinguish are parametrically close. The relevant error estimates involve quantities such as the variance of relative entropy, for which we prove a new inequality. We explore the optimal measurement strategy for spin chains and two-dimensional conformal field theory, focusing on the task of distinguishing reduced density matrices of subsystems. The optimal strategy turns out to be somewhat cumbersome to implement in practice, and we discuss a possible alternative strategy and the corresponding errors.
###### Current status:
Editor-in-charge assigned
### Submission & Refereeing History
Submission 2007.11711v2 on 21 August 2020
## Reports on this Submission
### Report
Review "Quantum hypothesis testing in many-body systems"
========================================================
Techniques and concepts from quantum information theory have been receiving increasing interest in the field of many-body physics and quantum gravity over the last years. In this submission the authors discuss the task of quantum hypothesis testing for one-parameter families of quantum states, assuming that the parameter (e.g., temperature) is perturbatively close. Their main results are a new bound between the relative entropy variance and the relative entropy in this perturbative limit (which is important in this setting) and an evaluation of the respective quantities in some exemplary models of qubits, free fermionic (Gaussian) many-particle systems and conformal field theory. In the latter two examples they apply the results to reduced density matrices of the many-body system in question. While these calculations are interesting from a technical point of view, a clear discussion of what one actually learns from these computations and how to interpret the results physically is lacking.
Besides deriving their results, the authors give a relatively thorough and nice review of quantum hypothesis testing. This review may in particular be useful for researchers outside the field of quantum information theory as it collects various results (but in itself does not yield new results). It does, however, have the down-side of making the paper rather long. To put it a bit strongly, at times one has the feeling that the authors want to report all that they have learned about hypothesis testing and quantum information theory, but it's less clear what the overall aim is.
The discussion mentions some natural open problems, which I agree should be studied, such as whether in many-body systems one can study, instead of taking many independent copies of the system, many disconnected subsystems of the single large many-body system. (Given the title of the work, it was my hope that some of them would have been tackled in this work.)
The authors also mention the connection of their work to research in quantum gravity which indeed seems like an interesting avenue for future work.
Overall I think the submission fulfills the "General acceptance criteria", but does not provide a breakthrough result. One may argue, though, that it
"opens a new pathway in an existing or a new research direction, with clear potential for multipronged follow-up work;"
or
"provides a novel and synergetic link between different research areas."
Hence it may be suitable for publication in scipost physics.
Remarks
-------
- The authors study parametrically close many-body states. However, there is a subtle issue of (non-)commuting limits here, namely taking the limit λ->0 and taking the thermodynamic limit. For example, even if two many-body systems have arbitrarily close temperatures (independent of system size), in the thermodynamic limit their global states will generically be perfectly distinguishable in a single shot by a global energy measurement (their trace distance will approach 1).
I think it would be important to discuss this issue in this paper.
- There are two different kind of expansions in this paper: One with respect to the parameter value, one with respect to the number of copies one takes in the hypothesis test. The expressions "first order" or "second order" are used differently in the two settings. It may therefore be useful to somehow distinguish these (for example by saying "leading order in n" vs. "first order in λ" or so).
- In secs. 3.2.1--3.2.3 the authors demonstrate that their rigorously proven theorem holds in particular simple examples. It's not clear to me what the use of these sections is, since the examples don't seem to deliver additional insight.
- As someone not very well versed in CFT techniques, I found the discussion of the CFT computations extremely dense and hard to follow (especially in contrast to the very explicit, detailed and pedagogical calculations in the rest of the paper). It would be very useful to be more explicit in the calculations, in particular if the aim of the paper is (as it seems to be) to bring together different communities.
Smaller points, typos etc.
--------------------------
- In the introduction it says that subsystem reduced density matrices of fermion chains are determined by two-point functions. This is only true for Gaussian states (free fermions) as studied later in the paper and should be specified.
- After eq. (2.6) it says that "[...] -log Q_s(ρ,σ) are the relative Rényi entropies [...]".
The quantity -log Q_s(ρ,σ) is only proportional to the relative Rényi entropy of order s?!
- At the bottom of page 5 it says that Q satisfies the data-processing inequality. Given the pedagogical aim of the paper, it may be useful to state what the data-processing inequality is.
- Footnote 9 on p. 12: ΔK is not yet defined.
- What the authors call "capacity of entanglement" is known under various names in information theory, such as variance of surprisal, varentropy etc.
- After (3.49) the authors say that if "β_2 -> ∞, ρ_2 reduces to the ground state, and the relative entropy variance vanishes (along with C(β_2)->0)." This is only true if the ground state of the Hamiltonian is non-degenerate.
- In sec 4.2 shouldn't the acceptance condition be | |E| - |\tilde E| |>= \mathcal E? I.e. with absolute value sign? Similarly for (4.23) etc.?
- After (4.33) it should probably read "completely positive" instead of "positive".
- In (6.28) and later, ΔA is not defined.
- Why is the sandwiched Rényi relative entropy suddenly mentioned after (6.33)?
- In sec. 6.3.3. it says that the XY spin chain can be mapped to a free fermion chain in the thermodynamic limit. This is also possible for finite chains (care must be taken with regard to the boundary condition) and indeed done in (6.85).
- I didn't understand what is meant with the last sentence before Sec. 7: "It is interesting that the breaking of translation invariance leads to non-commutativity from the perspective of two fermions"
- In (7.3), L_0 and c hasn't been defined.
- (7.9, 7.10) are referred to as giving the "entanglement spectrum", but they give states
(the eigenstates, not the eigenvalues, of the modular Hamiltonians).
- While maybe not directly relevant for the questions studied here (and hence does not need to be cited here), the authors may be interested in the recent preprint arxiv:2009.08391, which discusses mathematical properties of the relative entropy variance from an information theoretic point of view.
• validity: high
• significance: good
• originality: ok
• clarity: high
• formatting: excellent
• grammar: excellent
|
2021-05-09 17:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7430317997932434, "perplexity": 686.3990359655895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989006.71/warc/CC-MAIN-20210509153220-20210509183220-00333.warc.gz"}
|
https://daijiang.name/en/2014/06/05/phylogenetic-signal-of-trait/
|
# Phylogenetic signal of functional trait
These are my reading notes for Functional and Phylogenetic Ecology in R by Nathan Swenson. This post will be updated later after I learn more about this topic. Data can be found here
## Phylogenetic signal
Niche Conservatism: Closely related species are found to be ecologically similar.
Phylogenetic niche conservatism: Closely related species are found to be ecologically similar; the tendency of lineages to retain their niche-related traits through speciation events and over macroevolutionary time.
Because all species are related they cannot be treated as independent observations. As a result, the assumption of independence is broken for traditional statistical analysis.
### Trait correlations
#### Independent contrasts
The phylogenetic independent contrasts (PICs) method is one of the most common approaches for quantifying the correlation between two traits while accounting for the phylogenetic nonindependence of species. For each internal node in the phylogeny, a contrast is calculated for a trait. A contrast is the difference in a trait between the two daughter nodes weighted by their branch lengths. The estimated trait value for a node is calculated as the mean of its daughter nodes weighted by their branch lengths. As a result, contrasts can be calculated from the tips of the phylogeny toward the root. If the traits are correlated after accounting for phylogenetic nonindependence, it is expected that the contrast values themselves, which are now statistically independent, are correlated. The PICs can be calculated with the pic() function in the ape package.
traits = read.table("data/comparative.traits.txt", sep = "\t", header = T, row.names = 1)
library(ape)
# calculate contrasts for each node (11-19) for trait 1
(pic.x = pic(traits[my.phylo$tip.label, 1], my.phylo))
##      11      12      13      14      15      16      17      18      19
## -1.0614  0.7670  0.1739 -1.1430 -0.6850  0.4511  0.7431 -1.0035  0.3361
(pic.y = pic(traits[my.phylo$tip.label, 3], my.phylo))
##      11      12      13      14      15      16      17      18      19
## -0.3116  0.1698  0.3117 -0.2539 -0.1701 -5.0199 11.3501 -4.0144  0.1614
summary(lm(pic.y ~ pic.x - 1))
##
## Call:
## lm(formula = pic.y ~ pic.x - 1)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.954 -1.418 -0.048 1.886 9.811
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## pic.x 2.07 1.85 1.12 0.29
##
## Residual standard error: 4.29 on 8 degrees of freedom
## Multiple R-squared: 0.136, Adjusted R-squared: 0.0277
## F-statistic: 1.26 on 1 and 8 DF, p-value: 0.295
# plot(pic.y~pic.x) abline(lm(pic.y~pic.x-1))
#### Phylogenetic generalized least squares
A more general method for quantifying the correlation of two traits is to perform a phylogenetically informed regression. Phylogenetic generalized least squares (PGLS) regression is one of the most common. PGLS incorporates phylogenetic nonindependence into generalized linear models in the form of a phylogenetic variance-covariance (VCV) matrix. The regression model uses an assumed model of trait evolution to generate an expected correlation structure (i.e., nonindependence) in the data. The most commonly assumed model of trait evolution is the Brownian motion model, which can be applied using the corBrownian() function in the ape package.
cor.bm = corBrownian(phy = my.phylo)
trait.1 = traits[my.phylo$tip.label, 1]
trait.3 = traits[my.phylo$tip.label, 3]
library(nlme)
pgls = gls(trait.3 ~ trait.1, correlation = cor.bm)
summary(pgls)
#### Phylogenetic eigenvector regression
The above two methods assume a model of trait evolution. An alternative method, which uses Euclidean distances from the phylogeny in the form of a phylogenetic distance matrix and assumes no model of trait evolution (which makes it controversial), is called phylogenetic eigenvector regression. It uses the same general class of eigenvector-based statistics designed to account for spatial autocorrelation in data, with the exception that the goal now is to account for phylogenetic autocorrelation. The distance matrix is used in a principal components analysis to derive spatial or phylogenetic eigenvectors. The scores from the PCA, as well as one trait, will be independent variables; the other trait will be the dependent variable in the regression. How many axes from the PCA should be included as independent variables in the model? We can plot the axes of the PCA and then decide.
Distance matrix — PCA — trait2 ~ score1 + score2 + … + trait1
p.dist.mat = cophenetic(my.phylo)
phylo.pca = princomp(p.dist.mat)
summary(phylo.pca)
# to visualize the load with the tree
library(phylobase)
obj4d = phylo4d(my.phylo, phylo.pca$scores[, 1:2])
table.phylo4d(obj4d)

# fit the model
pev.mod = lm(trait.3 ~ trait.1 + phylo.pca$scores[, 1] + phylo.pca$scores[, 2])
summary(pev.mod)
Because the statistical power of phylogenetic eigenvector regression is low, and because it does not assume a model of trait evolution, most comparative methods researchers and biologists usually DO NOT use this method.
### Quantifying phylogenetic signal
Regard phylogenetic signal as the degree to which variation in species trait values is predicted by the relatedness of species.
#### Mantel test
The Mantel test quantifies the correlation between two distance matrices. Here, these two matrices are the phylogenetic distance matrix and a univariate or multivariate trait distance matrix. Although it is easy to understand, this method is now infrequently used because of its lower statistical power and higher type I error rate compared with newer metrics. We can use the mantel() function from the vegan package to do this.
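A minimal sketch of how this could look with vegan, assuming the my.phylo and traits objects defined earlier in this post (mantel() and cophenetic() are real functions; the glue code is my own, not the book's):
library(vegan)
library(ape)

# phylogenetic distance matrix between species
p.dist = cophenetic(my.phylo)

# Euclidean distance matrix between species in (multivariate) trait space
t.dist = dist(traits[my.phylo$tip.label, ])

# Mantel test: correlation between the two distance matrices,
# with significance assessed by permutation
mantel(as.dist(p.dist), t.dist, permutations = 999)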
#### Blomberg’s K and significance tests
This method seeks to quantify the degree to which variation in a trait is explained by the structure of a given phylogeny. This value is then standardized by an expectation derived from Brownian motion trait evolution on the observed phylogeny. Since the statistic, K, is standardized, it allows for comparison across studies.
n = length(trait.1) # trait.1 is X parameter in Blomberg et al. n is parameter n.
my.vcv = vcv.phylo(my.phylo) # parameter V
inverse.vcv = solve(my.vcv) # parameter V^-1
root.value = sum(inverse.vcv %*% trait.1)/sum(inverse.vcv) # parameter a hat
# phylogenetic corrected trait mean
MSEo = (t(trait.1 - root.value) %*% (trait.1 - root.value))/(n - 1)
# the mean squared error of the trait
MSE = (t(trait.1 - root.value) %*% inverse.vcv %*% (trait.1 - root.value))/(n -
1)
# the mean squared error of the trait given the VCV
ratio.obs = MSEo/MSE # observed ratio of mean squared errors
ratio.expected = (sum(diag(my.vcv)) - (n/sum(inverse.vcv)))/(n - 1)
k = ratio.obs/ratio.expected # standardized and can be compared across studies
If K = 1, then it indicates that the observed variation in the trait is predicted by the structure of the phylogeny under a Brownian motion model of trait evolution. If K > 1, it suggests more phylogenetic signal than expected from Brownian motion. If K < 1, it suggests less phylogenetic signal than expected from Brownian motion.
Of course, there are R packages we can use to calculate K instead of hand-coding it. The null hypothesis will be: K equals the null expectation.
library(phytools)
phylosig(tree = my.phylo, x = trait.1, method = "K", test = T)
#### Pagel’s Lambda
This method seeks the transformation of the original phylogeny that best predicts the distribution of our traits on the phylogeny under a Brownian motion model of trait evolution. The phylogeny is transformed using a parameter called lambda (lambda's range: 0 to ~1; one retains the original tree, zero produces a star tree where all species are equally related, i.e. a single polytomy). We can use the transform() function of the geiger package to transform a tree.
As lambda decreases from one to zero, internal nodes are pushed toward the root. As a result, the most recent common ancestor (MRCA) moves far away and trait variation is expected to be larger under a Brownian motion model.
The general idea of this method is to search for the lambda value that transforms the original phylogeny such that the observed distribution of traits on the tips of the phylogeny is mirrored by that expected under Brownian motion on the transformed phylogeny. A low lambda indicates very little phylogenetic signal in the trait data given the original tree and a high lambda indicates relatively more phylogenetic signal in the trait data given the original tree.
The null hypothesis is: lambda = 0.
phylosig(tree = my.phylo, x = trait.1, method = "lambda", test = T)
#### Standardized contrast variance, unstandardized contrast means and randomization test
Recall that the contrast for a node describes the magnitude of the difference of the trait values for its daughter nodes. This difference can be standardized by weighting by branch lengths, or unstandardized, where all branch lengths are set to one. Large contrast values thus indicate that daughters are very divergent in the trait, i.e. a lack of phylogenetic signal. The mean contrast value has been used in the past to quantify phylogenetic signal.
my.phylo.2 = my.phylo
# set all branch length to 1, and calculate unstandardized contrast.
my.phylo.2 = compute.brlen(my.phylo.2, method = 1)
mean(pic(trait.1, my.phylo.2)) # mean of unstandardized contrast of nodes
# after that, we can reshuffle the tips of the tree to get a null
# distribution of the mean of pic(), and then test the significance
# of the observed value.
Another way is to use the variance of the standardized contrasts to quantify phylogenetic signal. The smaller the variance, the more similar closely related species are.
var(pic(trait.1, my.phylo)) # uses the branch lengths
# again, we can reshuffle the tips n times to get the null distribution
# and from it the p-value.
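As a sketch, the tip-shuffling null distribution mentioned in the comments above could be built like this (my own code, assuming trait.1 is ordered by my.phylo$tip.label as earlier in the post):
# observed variance of the standardized contrasts
obs = var(pic(trait.1, my.phylo))

# null distribution: shuffle trait values across the tips 999 times
nulls = replicate(999, var(pic(sample(trait.1), my.phylo)))

# one-tailed p-value: small variances indicate phylogenetic signal
p.value = (sum(nulls <= obs) + 1) / (length(nulls) + 1)
p.value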
These two methods are not used as commonly as the K and lambda methods.
### Quantifying the timing and magnitude of trait divergences
The phylogenetic signal we detected above is phylogeny-wide. However, we are often interested in node-level signal or antisignal and how this signal changes with the position of the node in the phylogeny, i.e. whether large/small divergences tend to be correlated with depth in the phylogeny.
We can calculate the pic of each node, then use randomization to get a null distribution for each node; from this we can get the rank of the observed value for each node and plot the rank on the tree (nodelabels(ranking)).
Check dtt() function of geiger package for more details.
|
2021-09-23 14:43:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6902638077735901, "perplexity": 2412.8396249787816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00572.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1100/2/a/e/
|
# Properties
Label: 1100.2.a.e
Level: 1100
Weight: 2
Character orbit: 1100.a
Self dual: yes
Analytic conductor: 8.784
Analytic rank: 0
Dimension: 1
CM: no
Inner twists: 1
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$1100 = 2^{2} \cdot 5^{2} \cdot 11$$
Weight: $$k$$ $$=$$ $$2$$
Character orbit: $$[\chi]$$ $$=$$ 1100.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$8.78354422234$$
Analytic rank: $$0$$
Dimension: $$1$$
Coefficient field: $$\mathbb{Q}$$
Coefficient ring: $$\mathbb{Z}$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 220)
Fricke sign: $$-1$$
Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
$$f(q)$$ $$=$$ $$q + 2q^{3} + 4q^{7} + q^{9} + O(q^{10})$$ $$q + 2q^{3} + 4q^{7} + q^{9} - q^{11} + 4q^{13} - 4q^{19} + 8q^{21} + 6q^{23} - 4q^{27} - 6q^{29} + 8q^{31} - 2q^{33} - 2q^{37} + 8q^{39} + 6q^{41} - 8q^{43} - 6q^{47} + 9q^{49} + 6q^{53} - 8q^{57} - 12q^{59} + 2q^{61} + 4q^{63} + 10q^{67} + 12q^{69} - 12q^{71} + 16q^{73} - 4q^{77} + 8q^{79} - 11q^{81} - 12q^{87} + 6q^{89} + 16q^{91} + 16q^{93} - 14q^{97} - q^{99} + O(q^{100})$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0 0 2.00000 0 0 0 4.00000 0 1.00000 0
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 1100.2.a.e 1
3.b odd 2 1 9900.2.a.bd 1
4.b odd 2 1 4400.2.a.e 1
5.b even 2 1 220.2.a.a 1
5.c odd 4 2 1100.2.b.a 2
15.d odd 2 1 1980.2.a.a 1
15.e even 4 2 9900.2.c.m 2
20.d odd 2 1 880.2.a.j 1
20.e even 4 2 4400.2.b.f 2
40.e odd 2 1 3520.2.a.d 1
40.f even 2 1 3520.2.a.bd 1
55.d odd 2 1 2420.2.a.b 1
60.h even 2 1 7920.2.a.o 1
220.g even 2 1 9680.2.a.bb 1
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
220.2.a.a 1 5.b even 2 1
880.2.a.j 1 20.d odd 2 1
1100.2.a.e 1 1.a even 1 1 trivial
1100.2.b.a 2 5.c odd 4 2
1980.2.a.a 1 15.d odd 2 1
2420.2.a.b 1 55.d odd 2 1
3520.2.a.d 1 40.e odd 2 1
3520.2.a.bd 1 40.f even 2 1
4400.2.a.e 1 4.b odd 2 1
4400.2.b.f 2 20.e even 4 2
7920.2.a.o 1 60.h even 2 1
9680.2.a.bb 1 220.g even 2 1
9900.2.a.bd 1 3.b odd 2 1
9900.2.c.m 2 15.e even 4 2
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$5$$ $$1$$
$$11$$ $$1$$
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(1100))$$:
$$T_{3} - 2$$ $$T_{7} - 4$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ 1
$3$ $$1 - 2 T + 3 T^{2}$$
$5$ 1
$7$ $$1 - 4 T + 7 T^{2}$$
$11$ $$1 + T$$
$13$ $$1 - 4 T + 13 T^{2}$$
$17$ $$1 + 17 T^{2}$$
$19$ $$1 + 4 T + 19 T^{2}$$
$23$ $$1 - 6 T + 23 T^{2}$$
$29$ $$1 + 6 T + 29 T^{2}$$
$31$ $$1 - 8 T + 31 T^{2}$$
$37$ $$1 + 2 T + 37 T^{2}$$
$41$ $$1 - 6 T + 41 T^{2}$$
$43$ $$1 + 8 T + 43 T^{2}$$
$47$ $$1 + 6 T + 47 T^{2}$$
$53$ $$1 - 6 T + 53 T^{2}$$
$59$ $$1 + 12 T + 59 T^{2}$$
$61$ $$1 - 2 T + 61 T^{2}$$
$67$ $$1 - 10 T + 67 T^{2}$$
$71$ $$1 + 12 T + 71 T^{2}$$
$73$ $$1 - 16 T + 73 T^{2}$$
$79$ $$1 - 8 T + 79 T^{2}$$
$83$ $$1 + 83 T^{2}$$
$89$ $$1 - 6 T + 89 T^{2}$$
$97$ $$1 + 14 T + 97 T^{2}$$
|
2020-09-21 19:29:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295380711555481, "perplexity": 13322.121690297114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00559.warc.gz"}
|
https://blog.shahadmahmud.com/byte-pair-encoding-and-subword-tokenization/
|
# Byte Pair Encoding (BPE) and Subword Tokenization
In almost every application related to NLP, we use text as part of the data. To the models, the input is generally a list of words or sentences like "We will live on Mars soon". We feed the text to a model as a sequence of tokens. The tokens can be characters, space-separated words, a group of words, or even a part of a word (subword). Over time, different tokenization approaches have been taken. In the early stages, one-hot-encoding-like tokenization techniques were used. Later we tried to preserve contextual information during tokenization. Thus the word2vec model gradually became the approach of choice.
In word2vec, semantically similar tokens have closer positions on the vector plane. For example, the words ["old", "older" and "oldest"] will have closer positions in the word2vec plane. But as discussed in this post, this representation does not tell anything about the relationship of ["tall", "taller", and "tallest"]. If we use subtokens like 'er' or 'est', however, the model can learn this relationship for other words as well.
As the name suggests, in subword tokenization we break a word into frequent subwords and tokenize those subwords. For example, we may break the word "tallest" into "tall" and "est". Now the question arises: how can we get these subwords, i.e. how will we know into which subwords to break a word? Byte Pair Encoding or BPE comes into play here!
## Byte Pair Encoding
BPE was first described as a data compression algorithm in 1994 by Philip Gage. It first finds the most frequent byte pair and replaces all occurrences of the pair with a byte that is not used in the data. Let's consider a text: aaabdaaabac. The byte pair aa is most frequent in the text. If we replace this pair with Z, i.e. Z = aa, the text becomes ZabdZabac. We keep track of this in a replacement table. So, in the table, we will put Z = aa. Now, ab is the most frequent pair. So we can replace it with Y = ab. Thus the text becomes ZYdZYac. Further, ZY can be replaced by X, resulting in the text XdXac and the replacement table Z = aa, Y = ab, X = ZY.
In NLP, Byte Pair Encoding was first used in machine translation. In the paper titled "Neural Machine Translation of Rare Words with Subword Units", the authors used the subword tokenization technique by adapting the BPE algorithm. For this, instead of replacing the most common pair, they merged those tokens and created a new subword token. Let's dive a bit deeper into the understanding and implementation of subword tokenization.
## BPE for Subword Tokenization
For the implementation, I have followed this post‘s code, which is a modification of the code released by the authors of the previously mentioned paper. Well, let’s start. We start with a text corpus. It can be a text from a book (like this) or for now, a simple paragraph.
At first, we count the frequencies of each word in the corpus. For each word, we add a special token (like <\w>) indicating the end of the word. So, a word this becomes this<\w>. Initially, we split the word into characters. The end token is treated as a single character. So, for this, we will get [t, h, i, s, <\w>]. Finally, after counting all the words of the corpus, we will get a vocabulary of space-separated words with the end token and their frequencies. This may look like: {'t h i s <\w>': 10, 'b o o k <\w>': 2, 't h o r <\w>': 7, 's o o n <\w>': 3}. We can implement this step like the following code. Here I have assumed that we are creating the vocab from a text file.
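The original code block did not survive on this page, so here is a sketch of what it could look like (my reconstruction in the spirit of the referenced implementation; I write the stop token as '</w>' so it stays a literal Python string, where the post writes <\w>):
import collections

def build_vocab(corpus_path, stop_token='</w>'):
    """Count word frequencies, storing each word as space-separated
    characters followed by the stop token: 'this' -> 't h i s </w>'."""
    vocab = collections.defaultdict(int)
    with open(corpus_path, 'r', encoding='utf-8') as f:
        for line in f:
            for word in line.strip().split():
                vocab[' '.join(word) + ' ' + stop_token] += 1
    return dict(vocab)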
Now we start the training. At each iteration, we will count the frequency of each consecutive token pair. The pairs can be (t, h), (i, s), (s, <\w>), (o, o), etc. If we look, the pair (t, h) is present in 't h i s <\w>' with count 10 and 't h o r <\w>' with count 7. So the frequency of (t, h) is 10 + 7 = 17. We can get the pairs with frequencies with the following lines of code.
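A sketch of the pair counting (this routine closely follows the reference implementation released with the paper):
def get_stats(vocab):
    """Count the frequency of every pair of consecutive tokens."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs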
After counting the frequency of all pairs, we choose the most frequent pair. Then we merge this pair of tokens into a single token. For example, if the most frequent token pair is (t, h), after merging the tokens we will get a new token 'th' and will replace all 't h' with 'th'. With the following function we can do the merging:
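Again following the reference implementation (the regex only matches the pair where both tokens stand alone, not inside longer tokens):
import re

def merge_vocab(pair, vocab_in):
    """Replace every occurrence of e.g. ('t', 'h') by the merged token 'th'."""
    vocab_out = {}
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    for word, freq in vocab_in.items():
        vocab_out[pattern.sub(''.join(pair), word)] = freq
    return vocab_out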
We will continue this iteration a fixed number of times. For each iteration, we will find the most frequent token pair, merge those tokens and update the vocabulary. After all iterations, we will sort the tokens according to their lengths and create a 'word to tokens' map. We can do the training with the following function:
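A sketch of the training loop and the bookkeeping at the end (the names train, word2tokens and sorted_tokens are my own, not necessarily the post's):
def train(vocab, num_merges):
    """Run BPE merges, then sort tokens longest-first and build a
    'word to tokens' map for fast lookup of known words."""
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
    tokens = collections.defaultdict(int)
    word2tokens = {}
    for word, freq in vocab.items():
        subwords = word.split()
        word2tokens[''.join(subwords)] = subwords
        for tok in subwords:
            tokens[tok] += freq
    sorted_tokens = sorted(tokens, key=len, reverse=True)
    return vocab, sorted_tokens, word2tokens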
### Why that ‘<\w>’?
You might have been wondering why we are adding the <\w> or stop token. In subword tokenization, this stop token plays a significant role: it helps to understand whether a subword is the end of a word or not. Without the stop token, say there is a subword token 'sis'. This can occur in the word 'sis ter' or in the word 'the sis'. Both of these words have quite different meanings. But the token with <\w> ('sis<\w>') will indicate to the model that this token belongs to a word like 'the sis<\w>', not to 'sis ter'.
### Tokenizing a word
While tokenizing a word, we first add a stop token at the end. So, if we want to tokenize the word 'python', we convert it to 'python<\w>'. Then we check if it is a known word, i.e. present in the vocabulary. If so, we return the corresponding tokens.
Now, if the vocabulary does not contain the word, we iterate over the tokens in the vocabulary and check whether a token is a substring of the given word. We start from the longest token in the vocabulary, and for this we sorted the tokens at the end of training. The tokenization can be done with the following lines of code.
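A greedy sketch of the lookup (longest known token first; '</u>' marks an unknown piece; again my reconstruction, not the post's exact code):
def tokenize(word, sorted_tokens, word2tokens, stop_token='</w>'):
    """Tokenize one word into known subword tokens."""
    word = word + stop_token
    if word in word2tokens:        # known word: reuse the stored tokens
        return word2tokens[word]
    return _match(word, sorted_tokens)

def _match(string, sorted_tokens, unknown='</u>'):
    if string == '':
        return []
    for token in sorted_tokens:    # longest tokens are tried first
        idx = string.find(token)
        if idx == -1:
            continue
        left = _match(string[:idx], sorted_tokens, unknown)
        right = _match(string[idx + len(token):], sorted_tokens, unknown)
        return left + [token] + right
    return [unknown]               # nothing in the vocabulary matched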
Generally, tokenization in this approach is computationally costly. That's why we created the 'word to tokens' map, which lets us skip the expensive substring search for words already in the vocabulary and only pay the full cost for rare, unseen words.
You can find the full code with examples and sample data here. Don’t forget to leave a star! 😀
That’s all for this post on Byte Pair Encoding and subword tokenization. I am a learner who is learning new things and trying to share with others. Let me know your thoughts on this post. Get more machine learning-related posts here.
I'm a Computer Science and Engineering student who loves to learn and spreading what is learnt. Besides I like cooking and photography!
## 1 Comment
1. October 31, 2021
Loved the way you explained it
|
2022-01-22 20:22:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5413933992385864, "perplexity": 1344.5459852630552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00056.warc.gz"}
|
http://accesscardiology.mhmedical.com/content.aspx?bookid=1782§ionid=121398132
|
Chapter 14
### INTRODUCTION
For decades, bipedal lymphangiography was the standard imaging test for nonsurgical assessment of the lymphatic system. At Stanford University (Stanford, CA), several thousands of lymphangiographies were performed over 20 to 25 years beginning in the 1960s.1
The technique allows detection of enlarged lymph nodes in lymphadenopathy but may also demonstrate internal architectural derangements within normal-sized lymph nodes. It may further demonstrate the lymphatic origin of a known fluid collection as in cases with chylaskos, lymphocele, chylothorax, or lymphatic fistula. Using an iodinated glycerol ester (lipiodol) as a contrast agent, the technique has also been shown to induce granulomatous reactions at the site of lymphatic leakage and thereby may support its successful treatment.
### ALTERNATIVE IMAGING TECHNIQUES
Because lymphangiography is invasive and technically challenging and requires an experienced investigator as well as a compliant patient, its application has continuously decreased with the introduction of cross-sectional imaging.1
Today sonography often serves as a first test for depiction of lymph nodes.2 Image characteristics, such as a rounded shape, loss of the hyperechoic hilus reflex, node enlargement, and an enhanced cortical Doppler signal from increased vascularity, have been shown to indicate malignancy.3,4 As drawbacks, sonography is investigator dependent, may be time consuming, is hindered by bowel gas, and often fails to precisely specify the origin of a fluid collection.
Indirect lymphangiography by intradermal pump injection of nonionic, water-soluble, dimeric, hexaiodinated contrast agents has been proposed but did not find its way into clinical routine.5,6
Lymphangioscintigraphy is valuable in the assessment of lymphedema and the detection of the sentinel node as the first tumor-draining lymph node in patients with melanoma and breast cancer.7,8,9,10,11,12,13,14,15,16,17,18
With cross-sectional imaging being optimized and broadly available, criteria for computed tomography (CT) and magnetic resonance (MR) detection of lymph nodes and lymphadenopathy have been described.19,20,21,22,23 These techniques offer three-dimensional capabilities, depict both lymphatic structures and the surrounding tissues and organs, and may even allow visualization of lymphatic structures not depictable by pedal lymphangiography (e.g., hypogastric and mesenteric nodes). On the other hand, these techniques often do not allow precise evaluation of the origin of a fluid collection and its leakage site and furthermore have no direct therapeutic potential.
Because CT and standard MR imaging (MRI) mainly apply size criteria for the assessment of lymph nodes, both techniques fail to detect lymphatic micrometastases within normal-sized lymph nodes. Therefore, the intravenous (IV) application of ultrasmall particles of iron oxide (USPIO) has been investigated for MR lymphography.24 These particles are absorbed by the reticuloendothelial system (RES), which leads to a signal loss in normal nodes in T2-weighted sequences. Metastatic ...
|
2017-03-27 08:37:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22602860629558563, "perplexity": 6270.105814086031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189466.30/warc/CC-MAIN-20170322212949-00228-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://konfou.xyz/posts/nixos-without-display-manager/
|
# NixOS without display manager
A display manager or login manager is a graphical interface shown at the end of boot. It presents the user with a login screen, and when credentials are entered it starts a session on an X server. Examples of such software can be found in the Debian, Arch, and Gentoo wikis. The default on a NixOS installation is LightDM, a relatively lightweight and highly customizable display manager with various front-ends (greeters) written in a variety of toolkits (NixOS defaults to the GTK one). Though a good choice, for a one-user system where the user just wants to run their session without much else, its features are mostly unneeded.
## Disabling the display manager
NixOS allows declaratively setting various options, among them the default session and autologin, which are dependent on the display manager. For example, if the user named user (cleverly thought, right?) wants to log in automatically to their i3 session, the following configuration may be used.
...
services.xserver = {
  displayManager = {
    defaultSession = "none+i3";
    autoLogin = {
      enable = true;
      user = "user";
    };
  };
  windowManager.i3.enable = true;
};
...
In order to suppress the display manager, the following option exists. According to the documentation, this enables a dummy pseudo-display manager. Basically it disables LightDM and the display-manager systemd service, and sets Xorg's log file to null (see <nixpkgs/…/start.nix>). Most importantly, it enables the xinit and startx (a wrapper to xinit) commands in the global environment.
...
services.xserver = {
displayManager = {
startx.enable = true;
};
};
...
Unfortunately it also means losing autologin functionality and some set-up done by the display manager. In case anyone wonders, autologin is useful when full-disk encryption is used. As a password is entered during boot, the need for a second password just a few seconds later is mostly pointless.
Automatic login of a user can be done by creating a systemd (system) service. For this the following configuration can be used, adapted from a gist by @caadar. Specifically, a new target is created, kernel logging is suppressed, and the service logs in the user named user after the multi-user target has been reached.
...
systemd.targets = {
  autologin-tty1 = {
    requires = [ "multi-user.target" ];
    after = [ "multi-user.target" ];
    unitConfig.AllowIsolate = "yes";
  };
};
systemd.services = {
  "autologin@tty1" = {
    enable = true;
    restartIfChanged = false;
    description = "autologin service at tty1";
    after = [ "suppress-kernel-logging.service" ];
    serviceConfig = {
      ExecStart = builtins.concatStringsSep " " ([
        "@${pkgs.utillinux}/sbin/agetty"
        "agetty --login-program ${pkgs.shadow}/bin/login"
        "--autologin user --noclear %I $TERM" ]);
      Restart = "always";
      Type = "idle";
    };
  };
  "suppress-kernel-logging" = {
    enable = true;
    restartIfChanged = false;
    description = "suppress kernel logging to the console";
    after = [ "multi-user.target" ];
    wantedBy = [ "autologin-tty1.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.utillinux}/sbin/dmesg -n 1";
      Type = "oneshot";
    };
  };
};
...
The restartIfChanged is set to false so, if a nixos-rebuild takes place and the service has changed, the session won't absurdly restart. Nevertheless it will restart if the user decides to exit the session.
## Autostarting X
There're two ways to autostart X on console login: either using the profile, or as a systemd user service. Theoretically the optimal would be the latter, since the profile is for shell configuration and environment set-up rather than running services. That is for the service manager to do (systemd in NixOS' case). But it is also substantially more complicated (see Pitt's slides) and moreover requires running an Xorg server as root (though the session will be run as user). In contrast, the former way is mostly trivial and also runs Xorg as user.
First, we need to source ~/.profile. This isn't done by default; rather, the following configuration has to be added. It creates an /etc/profile.local which is sourced by /etc/profile and in turn sources ~/.profile.
...
environment.etc = {
"profile.local".text = ''
# /etc/profile.local: DO NOT EDIT -- this file has been generated automatically.
if [ -f "$HOME/.profile" ]; then . "$HOME/.profile"
fi
'';
};
...
The startx will run the ~/.xinitrc file. Therefore to run i3 adding the following will be enough.
exec i3
Then at the end of ~/.profile the following will run startx if no display has been set, and only when logging in at tty1. Therefore it won't run on other consoles or when the shell opens in a terminal.
if [ -z "$DISPLAY" ] && [$TTY == "/dev/tty1" ]; then
exec startx
fi
Someone may argue that it could be added in /etc/profile.local directly. But in that case X will run before any user configuration takes place.
## Set-up X
The display manager loads ~/.xprofile which is used to execute commands at the beginning of the user session, and ~/.Xresources which sets parameters for X applications. Therefore simply add the following before executing the window manager.
[ -f ~/.xprofile ] && . ~/.xprofile
[ -f ~/.Xresources ] && xrdb -merge ~/.Xresources
A display manager also loads ~/.pam_environment. The variables used in it have to be moved to ~/.profile before starting up X. It should be noted that on NixOS there's no /etc/environment or /etc/security/pam_env.conf file; rather, environment variables set in configuration.nix will be added to /etc/profile. This is where most of the set-up takes place, making a display manager even less necessary.
Also the user’s dbus daemon has to be set. This is done adding the following to ~/.xinitrc, taken from nixos.wiki.
if test -z "$DBUS_SESSION_BUS_ADDRESS"; then eval$(dbus-launch --exit-with-session --sh-syntax)
fi
systemctl --user import-environment DISPLAY XAUTHORITY
if command -v dbus-update-activation-environment >/dev/null 2>&1; then
dbus-update-activation-environment DISPLAY XAUTHORITY
fi
The wiki also shows how to run X without installing it system-wide. This could be useful in a multi-user environment and/or if running Nix on another system (non-NixOS). For both cases some modifications are required.
Finally start the graphical-session target in systemd.
systemctl --user start graphical-session.target
The graphical-session target was made for services that require a graphical session to be running. Many services start at that run level, and without the previous command some of them will never run.
|
2021-04-15 13:02:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2010132074356079, "perplexity": 7310.935994404978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00076.warc.gz"}
|
https://www.gamedev.net/forums/topic/270902-taking-input-too-fast/
|
# taking input too fast
## Recommended Posts
hi, i seem to come across this common problem, which i "hack" my way around. im wondering if there is a general solution, or if what im doing is totally wrong, or what.
anyway, im using SDL for input, and i have this problem. ill use the left mouse button as an example, but it happens with all keys and mouse buttons:
- check directly if the left mouse button is pressed
- if so, do something
- the next frame comes, and that flag is still set that the mouse button is pressed, even though i only clicked it once; at 30 or 800 FPS a single click registers over several frames
- the mouse state is still registered as being pressed down, so all of a sudden im clicking something i shouldnt be clicking.
i found there are 2 ways around this. the first only allows the button to be pressed a single time, even if its held down forever. the code looks like this:
static bool pressed = false;
if( some button is pushed)
{
if(!pressed)
DO_SOMETHING_HERE();
pressed = true;
}
else pressed = false;
this basically makes it so input is only taken that single time. the other way i do it is using a delay. i mainly need this in 2 cases: 1) i want the user to be able to hold down the button, just not as fast as SDL can take it, and 2) i have a situation where its taking the input too fast in 2 different pieces of code. basically it looks something like...
static Uint32 delay = SDL_GetTicks();
if(some button is pushed)
{
    if(SDL_GetTicks() - delay > 200)
    {
        DO_SOMETHING_HERE();
        delay = SDL_GetTicks(); // reset only when the action fires; resetting
                                // every frame would keep pushing the timestamp
                                // forward and nothing would ever happen
    }
}
this only takes the input every 200 ms. anyway, these are both ugly hacks, and i dont like them.
im having some weird input problems pop up in my code that i cant figure out and no one seems to have experienced before. mainly, for some reason, at some point, my mouse button gets stuck down. its as if im pushing the mouse button, but im not. it doesnt happen a lot, but it does happen. i notice that if i click somewhere, anywhere, it goes away. i also noticed that i cant click and drag the window while this is happening. pretty weird.
i posted about my input problem on the SDL newsgroup, and the guy was wondering why i take input directly. he says i should just take it while polling. i dont get it! how could you take ALL the input for your entire game in your polling loop? how is that even possible? wouldnt it be a big sloppy mess? not to mention it would break encapsulation; to move the Player for example, you would then need to move him from your input polling code, instead of inside the Player class somewhere. does anyone do this, or is this guy crazy? thanks a lot for any help!
I don't know how SDL deals with input, but the situation you are describing is very common. Generally, it is handled something like this:
down = get_input();              // assuming 1 means down and 0 means up
up = ~down;                      // 1 means up and 0 means down
changed = down ^ previous_down;  // 1 means changed up or down
pressed = down & changed;        // 1 means pressed this frame
released = up & changed;         // 1 means released this frame
previous_down = down;            // save for next frame
This is much like your first example. Note some of the lines above are not always needed.
When you take input, check to make sure you didn't do SOME_THING last frame also. That's all.
As far as the input OOPing, basically, what you want to do is have a class that encapsulates an action or a message like MOVE_LEFT or JUMP or whatever. So each frame, you loop through all the queued SDL events and process each one. If you get an SDLK_RETURN key press, for example, and you want to shoot your gun, you create a new gun-firing event object and send it somewhere else (either to the player or an intermediate class that knows about the player) or just "activate" it:
ActionClass action = ActionFireGun( some parameters );
SomeGlobalActionHandler.Queue( action );
-or-
Player.RespondToEvent( action );
or for polymorphic goodness, just try:
action.Act();
and just let it figure out what to do on its own! ... or something sorta like that. The nice thing about encapsulating the action itself in an object is that you can fire off these objects from anywhere and let them figure out what to do without your knowledge. For instance, you could register an action object with a button press and the button will have no idea what the action it contains actually does. It shouldn't matter either. This approach lets you rearrange the button config and also trigger these actions from other sources as well. With the gun-firing example, you can have the letter F, ENTER, and right mouse click all fire the gun. All they really do is fire off the same action object though (however you choose to implement that).
^^^ dangit, that was me.
hmmmm... so you think i should set up some sort of event handling system instead of taking input directly? it seems like a lot of work compared to just doing if(keys[SDLK_whatever]), you know?
but, is it sloppy to take input directly? it just seems like there are a million places where i take input, so it would be a lot of work if each time i wanted to add something that was affected by input, i had to go and make another enum or whatever.
hey John, sorry but i didnt really understand that. i guess i have to freshen up on the bitwise operations.
thanks for any more help.
Here's what I have come up with, and I'm pretty happy with it, finally.
I will only talk about the keyboard, but the same applies to the mouse or whatever input device you'd like.
I have an IInput interface and a DXInput class deriving from IInput (I can make an SDLInput class or whatever, as long as it derives from IInput).
At the start of each frame I "poll" the status of the keyboard and keep it in an integer array. Let's say it's declared as "int m_Keys[256]".
I also keep another array handy to store the states I'm interested in. *Not* the states as returned by polling the device, but my own states (int m_States[256]).
After I poll the keyboard state, I run a loop through all 256 elements of m_Keys, and by checking each key's current status against its previous frame's state I come to some very nice conclusions about the relative state changes, like whether the key has just been pressed, or has just been released, etc. The most interesting part is that using some bit manipulation, I "flag" this state in the states array.
If it seems confusing, here's example pseudocode:
// some flags for m_States
#define RELEASED      0x00
#define JUST_PRESSED  0x01
#define JUST_RELEASED 0x02
#define PRESSED       0x04

for (int i = 0; i < 256; ++i)
{
    if (m_Keys[i])
    {
        // the key was "polled" as "pressed"; let's check it out
        if ((m_States[i] & PRESSED) == 0) // this key was *not* pressed last frame
        {
            // see? not only is it marked "pressed" but "just pressed" too...
            m_States[i] = PRESSED | JUST_PRESSED;
        }
        // similar logic follows for the other states
        else if (...) { }
        else if (...) { }
    }
}
See what's going on here? I only have to use IInput::Key(int key, int flags) to check a key out. E.g. input->Key(SDLK_C, JUST_PRESSED) will return true only if the "C" key was *just* pressed.
I have gone a step further and have added some timing and generate KEY_REPEAT too (for text input in my GUI classes). The point is you can add whatever flags in the m_States array. Anything that helps you in your game...
You can use this input handling class as a singleton or pass it as a parameter to your ProcessInput() function.
That's it. If something seems confusing I'd be glad to clarify it.
HTH,
Yiannis.
// to simplify:
bool key_down[256];
bool key_just[256];

// each frame:
for (int i = 0; i < 256; i++) {
    key_just[i] = false;
    if (key_down[i] != REALKEYTESTER(i)) {
        key_down[i] = REALKEYTESTER(i);
        key_just[i] = true;
    }
}
// REALKEYTESTER is the way you find out if a key was pressed!
// remember to clear everything at start!

// now you can easily test for:
bool KeyDown(int i)     { return key_down[i]; }
bool KeyUp(int i)       { return !key_down[i]; }
bool KeyJustDown(int i) { return key_down[i] && key_just[i]; }
bool KeyJustUp(int i)   { return (!key_down[i]) && key_just[i]; }

// KeyJustDown is what you look for if you want something to
// happen when you push the key...
// Windows buttons (GUI) only react when you release the mouse:
// KeyJustUp()
// I also implemented a timer, counting while the key is down and
// zeroing when the key is up; then you can set a limit of 0.5 secs
// for holding the button down (scrolling faster through a list, maybe)
The solution to the problem is called debouncing. Just set a flag to true in the frame when something is pressed, and if in any of the following frames the button is released, turn the flag off. If the flag is already true in a following frame, just don't do anything.
So you only act on the button on the frame where it is pressed and the flag has not yet been set.
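A minimal sketch of that debounce flag (my illustration; FireGun is a placeholder, and keys is the array returned by SDL_GetKeyState):

bool fire_latched = false;              // "the flag"

void PollFireKey(Uint8 *keys)
{
    if (keys[SDLK_SPACE]) {             // button is down this frame
        if (!fire_latched) {            // first frame we have seen it down
            FireGun();                  // act exactly once
            fire_latched = true;
        }
    } else {
        fire_latched = false;           // button released: re-arm the flag
    }
}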
ace
This is the method I use, it works very well in most situations:
class Input
{
protected:
    int keyCount;
    unsigned char *keys, *oldKeys;
public:
    Input();
    bool inline curKey(int index) { return keys[index] != 0; }
    bool inline oldKey(int index) { return oldKeys[index] != 0; }
    // Utility functions
    bool inline keyDown(int index)      { return ( curKey(index)) && (!oldKey(index)); }
    bool inline keyStillDown(int index) { return ( curKey(index)) && ( oldKey(index)); }
    bool inline keyUp(int index)        { return (!curKey(index)) && ( oldKey(index)); }
    bool inline keyStillUp(int index)   { return (!curKey(index)) && (!oldKey(index)); }
};

Input::Input()
{
    unsigned char *tempKeys = SDL_GetKeyState(&keyCount);
    keys = (unsigned char *)malloc(sizeof(unsigned char) * keyCount);
    memcpy(keys, tempKeys, sizeof(unsigned char) * keyCount);
    oldKeys = (unsigned char *)malloc(sizeof(unsigned char) * keyCount);
}
Then every time through the event loop, you call this function of Input:
void Input::Update()
{
    SDL_PumpEvents();
    memcpy(oldKeys, keys, sizeof(unsigned char) * keyCount);
    unsigned char *tempKeys = SDL_GetKeyState(&keyCount);
    memcpy(keys, tempKeys, sizeof(unsigned char) * keyCount);
}
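Typical per-frame usage of the class above (my sketch; running, player, and MoveRight are placeholders):

Input input;
while (running) {
    input.Update();                         // snapshot previous and current key states
    if (input.keyDown(SDLK_ESCAPE))         // true only on the frame the key goes down
        running = false;
    if (input.keyStillDown(SDLK_RIGHT))     // true every frame the key is held
        player.MoveRight();
}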
Credit SuperPig with this code; I shamelessly copied it from the Enginuity articles ;)
Hope this helps,
SwiftCoder
|
2018-12-12 14:21:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17206653952598572, "perplexity": 3318.6280252287715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823895.25/warc/CC-MAIN-20181212134123-20181212155623-00530.warc.gz"}
|
http://mathematica.stackexchange.com/questions/20071/wolframalpha-selective-subpod-content/20103
|
# WolframAlpha Selective SubPod Content
I am sending queries to Wolfram | Alpha from Mathematica V9 and and importing the results back into Mathematica, but I am running into a challenge in formatting the request. Specifically, I would like a listing of all of the departing flights from LAX for the current day and, then, I want to select only those flights using Airbus A380 aircraft (there should be about 5 per day).
When I query for LAX departures it returns a subpod of the 680+/- departures for the current 24 hour window, but I am not able to retrieve the aircraft type. If I search for a specific flight, such as "American Airlines flight 1867", Wolfram | Alpha returns a subpod with the aircraft/equipment details. I checked Wolfram | Alpha's sources; both queries are going to the FAA ASDI database, so the data should be available. Any suggestions?
One other question: The subpod of LAX departures displays five results, but indicates 680 or so results are returned. Is there a way in Mathematica to tell Wolfram | Alpha to return all results without having to code "More" repeatedly? See the PodStates option below in the code.
WolframAlpha["Flights departing LAX",
{{"FlightsBetween:Scheduled:From:FlightData", 1}, "Content"},
PodStates ->
{"FlightsBetween:Scheduled:From:FlightData__More",
"FlightsBetween:Scheduled:From:FlightData__More",
"FlightsBetween:Scheduled:From:FlightData__More"}]
-
You could try playing around with PodStates->{"n@FlightsBetween:Scheduled:From:FlightData__More"} where n is some number, I tried setting it to something crazy high but the request just timed out. Found in Pod States section EDIT: The time it takes seems to scale linearly with n :( – ssch Feb 24 '13 at 14:58
Well, let's look at the limited good news first. While I don't believe there's any way to specify a single simple parameter indicating that you want to return all the data available, it is possible to programmatically specify the number of pushes that you want using Table["More ...",{k}]. Typically, you can simulate lots of pushes and the output will be automatically truncated at the maximum number of pushes that the API allows. For example, you can push the "More digits" button up to six times when asking for an approximation to Pi. If you ask for more, the result is as if you asked for just six pushes.
Column[Table[{Row[{"number of pushes", " == ", k}],
Length[First[RealDigits[WolframAlpha["pi",
{{"DecimalApproximation", 1}, "ComputableData"},
"PodStates" -> Table["More digits", {k}]]]]]}, {k, 0, 10}]]
Also, note that I specified that I wanted the output returned as "ComputableData"; that way the result is returned in an accessible format, as opposed to as a WolframAlpha pod. Putting this all together, we can obtain a fair amount of your flight information as follows.
flightInfo = WolframAlpha["Flights departing LAX",
{{"FlightsBetween:Scheduled:From:FlightData", 1}, "ComputableData"},
"PodStates" -> Table["More", {6}]];
Since we asked for "ComputableData", we can further search this for flights headed to Chicago O'Hare. (Of course, this could also be accomplished using a better query.)
Select[flightInfo, StringMatchQ[#[[2]], __ ~~ "(KORD)" ~~ __] &]
(* Out: {
{"China Cargo Airline flight 227",
"estimated to depart for Chicago O'Hare (KORD) at 8:10 am EST (today)"},
{"United Airlines flight 1215",
"estimated to depart for Chicago O'Hare (KORD) at 10:08 am EST (today)"},
{"American Airlines flight 1636",
"estimated to depart for Chicago O'Hare (KORD) at 10:40 am EST (today)"}}
*)
Unfortunately, the command times out when we attempt to "push" the button more than six times and the "TimeConstraint" option doesn't seem to help. Also, there appears to be no way to access the aircraft type information that you want. Even though the data source provides it, Alpha still needs to be programmed to access it. You might consider leaving a comment.
-
Thanks for the help. I have noticed in other postings that this is a common problem. I think perhaps people want to use the WA link in M9 a bit more intensively than Wolfram are prepared to allow. – Nguyen Van Falk Mar 12 '13 at 3:32
|
2015-04-26 06:31:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.254835307598114, "perplexity": 2246.8948293717885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246653426.7/warc/CC-MAIN-20150417045733-00191-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/16897/trivial-line-bundles
|
# Trivial line bundles
Let $X$ be a variety and $\mathbb{C}$ the field of complex numbers. Then $L = X \times \mathbb{C}$ is a trivial line bundle. The set of sections of this line bundle is $\Gamma(X, L)$, consisting of all functions $s: X \to L$ such that $\pi \circ s = id$, where $\pi: L \to X$ is the projection map. Why is $\Gamma(X, L)$ the same as the set of all regular functions on $X$? Thank you very much.
• Is the representation-theory tag appropriate? Jan 9 '11 at 17:29
• no, i got rid of it. Jan 9 '11 at 17:38
This is true for any trivial bundle. By the categorical definition of product we know that all maps $X \to X \times \mathbb{C}^n$ are in bijective correspondence with pairs of maps $X \to X$ and $X \to \mathbb{C}^n$. Now since we know it is a section and that $\pi$ is just the projection onto the $X$ factor, the map $X \to X$ is forced to be the identity. So what is left? Just our map $X \to \mathbb{C}^n$, which must be a morphism in whatever category you are working in, hence regular. Note that the case you asked about is when $n=1$ and that in fact $\mathbb{C}^n$ could be replaced by any object $Y$ in whatever category you are working in. This is a fact about trivial bundles, or products rather, not line bundles.
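Spelled out as a chain of identifications (my notation; $\mathcal{O}(X)$ denotes the ring of regular functions on $X$):
$$\Gamma(X, L) = \{\, s : X \to X \times \mathbb{C} \;\mid\; \pi \circ s = \mathrm{id}_X \,\} \;\cong\; \{\, (\mathrm{id}_X, F) \;\mid\; F : X \to \mathbb{C} \text{ regular} \,\} \;\cong\; \mathcal{O}(X).$$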
• Hi Sean, thank you very much. Is $\Gamma(X, L)$ in my question isomorphic to $\mathbb{C}$?
– user
Jan 9 '11 at 18:30
• Is $\Gamma(X, L)$ (or the space of all regular functions on $X$) a finite dimensional vector space? What is the basis of $\Gamma(X, L)$ (or the space of all regular functions on $X$)?
– user
Jan 9 '11 at 18:41
• I doubt that $\Gamma (X,L)$ is isomorphic to $\mathbb{C}$, I also don't really know any algebraic geometry. Usually though, regardless of whether or not the bundle is trivial, the sections of a bundle are some sort of module over the ring of functions on the base space. So in particular, yes, this space of sections is a vector space (that is true for any vector bundle). Jan 9 '11 at 20:22
• Hi Sean, thank you very much.
– user
Jan 9 '11 at 22:52
• @user: It depends. If your variety is compact then the only regular functions on it are constant maps, so $\Gamma(X,L) = \mathbb C$. If your variety is not compact, then you can have lots of functions, see for example if you take $X = \mathbb C$. Jul 18 '11 at 11:06
|
2021-09-20 02:52:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8672747015953064, "perplexity": 119.78673475878036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00467.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-8-section-8-1-linear-equations-with-two-variables-exercises-page-296/3
|
## Elementary Technical Mathematics
First solve for y, then substitute each given value for x to complete the ordered pairs.
$6x+2y=10$
$6x+2y-6x=10-6x\longrightarrow$ subtract 6x from each side
$2y=10-6x$
$2y\div2=(10-6x)\div2\longrightarrow$ divide each side by 2
$y=5-3x$
$y=5-3(2)=5-6=-1$
$y=5-3(0)=5-0=5$
$y=5-3(-2)=5-(-6)=5+6=11$
|
2021-05-18 15:27:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7813782095909119, "perplexity": 445.46565620931494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989637.86/warc/CC-MAIN-20210518125638-20210518155638-00392.warc.gz"}
|
https://uen.pressbooks.pub/introductorychemistry/chapter/oxidation-reduction-reactions/
|
69 Oxidation-Reduction Reactions
LumenLearning
Redox Reactions Introduction
LEARNING OBJECTIVES
Explain the processes involved in a redox reaction and describe what happens to their various components.
KEY TAKEAWAYS
Key Points
• Oxidation and reduction reactions are defined by the movement of electrons
Key Terms
• redox: a shorthand term for "reduction-oxidation," two methods of electron transfer that always occur together
Redox reactions are all around us. In fact, much of our technology, from fire to laptop batteries, is largely based on redox reactions. Redox (reduction-oxidation) reactions are those in which the oxidation states of the reactants change. This occurs because in such reactions, electrons are always transferred between species. Redox reactions take place through either a simple process, such as the burning of carbon in oxygen to yield carbon dioxide ($\text{CO}_2$), or a more complex process such as the oxidation of glucose ($\text{C}_6\text{H}_{12}\text{O}_6$) in the human body through a series of electron transfer processes.
The term “redox” comes from two concepts involved with electron transfer: reduction and oxidation. These processes are defined as follows:
• Oxidation is the loss of electrons or an increase in oxidation state by a molecule, atom, or ion.
• Reduction is the gain of electrons or a decrease in oxidation state by a molecule, atom, or ion.
A simple mnemonic for remembering these processes is “OIL RIG”—Oxidation Is Losing (electrons), Reduction Is Gaining (electrons).
Redox reactions are matched sets: if one species is oxidized in a reaction, another must be reduced. Keep this in mind as we look at the five main types of redox reactions: combination, decomposition, displacement, combustion, and disproportionation.
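For instance (an illustrative combination reaction added here, not from the original text): in
$$2\,\text{Mg} + \text{O}_2 \rightarrow 2\,\text{MgO}$$
magnesium is oxidized ($0 \rightarrow +2$, losing electrons) while oxygen is reduced ($0 \rightarrow -2$, gaining electrons).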
|
2023-04-01 10:29:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5648109316825867, "perplexity": 2044.7320721485112}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00457.warc.gz"}
|
https://ncatlab.org/nlab/show/exact+couple
|
# nLab exact couple
Contents
### Context
#### Homological algebra
homological algebra
Introduction
diagram chasing
# Contents
## Idea
Exact couples are a way to encode data that makes a spectral sequence, specially adapted to the case that the underlying filtering along which the spectral sequence proceeds is induced from a tower of homotopy fibers, such as a Postnikov tower or Adams tower (see also at Adams spectral sequence).
## Definition
### Exact couples
###### Definition
Given an abelian category $\mathcal{C}$, an exact couple in $\mathcal{C}$ is a cyclic long exact sequence of three morphisms among two objects of the form
$\cdots \stackrel{k}{\longrightarrow} E \overset{j}{\to} D \overset{\varphi}{\longrightarrow} D \overset{k}{\to} E \overset{j}{\longrightarrow} \cdots \,.$
###### Remark
This being cyclic, it is usually depicted as a triangle
$\array{ D && \stackrel{\varphi}{\longrightarrow} && D \\ & {}_{\mathllap{j}}\nwarrow && \swarrow_{\mathrlap{k}} \\ && E }$
The archetypical example from which this and the following definition draw their meaning is example below.
### Spectral sequences from exact couples
###### Definition
A cohomology spectral sequence $\{E_r^{p,q}, d_r\}$ is
1. a sequence $\{E_r^{\bullet,\bullet}\}$ $r \in \mathbb{Z}$, $r \geq 2$ of bigraded abelian groups;
2. a sequence of differentials $\{d_r \colon E_r^{\bullet,\bullet} \longrightarrow E_r^{\bullet+r, \bullet-r+1}\}$
such that
• $E_{r+1}^{\bullet,\bullet}$ is the cochain cohomology of $d_r$, i.e. $E_{r+1}^{\bullet, \bullet} = H(E_r^{\bullet,\bullet},d_r)$.
Given a $\mathbb{Z}$-graded abelian group $C^\bullet$ equipped with a decreasing filtration
$C^\bullet \supset \cdots \supset F^s C^\bullet \supset F^{s+1} C^\bullet \supset \cdots \supset 0$
such that
$C^\bullet = \underset{s}{\cup} F^s C^\bullet \;\;\;\; and \;\;\;\; 0 = \underset{s}{\cap} F^s C^\bullet$
then the spectral sequence is said to converge to $C^\bullet$, denoted,
$E_2^{\bullet,\bullet} \Rightarrow C^\bullet$
if
1. in each bidegree $(s,t)$ the sequence $\{E_r^{s,t}\}_r$ eventually becomes constant on a group
$E_\infty^{s,t} \coloneqq E_{\gg 1}^{s,t}$;
2. $E_\infty^{\bullet,\bullet}$ is the associated graded of the filtered $C^\bullet$ in that
$E_\infty^{s,t} \simeq F^s C^{s+t} / F^{s+1}C^{s+t}$.
The converging spectral sequence is called multiplicative if
1. $\{E_2^{\bullet,\bullet}\}$ is equipped with the structure of a bigraded algebra;
2. $F^\bullet C^\bullet$ is equipped with the structure of a filtered graded algebra ($F^p C^k \cdot F^q C^l \subset F^{p+q} C^{k+l}$);
such that
1. each $d_{r}$ is a derivation with respect to the (induced) algebra structure on ${E_r^{\bullet,\bullet}}$, graded of degree 1 with respect to total degree;
2. the multiplication on $E_\infty^{\bullet,\bullet}$ is compatible with that on $C^\bullet$.
###### Definition
(derived exact couples)
An exact couple is three homomorphisms of abelian groups of the form
$\array{ D && \stackrel{g}{\longrightarrow} && D \\ & {}_{\mathllap{f}}\nwarrow && \swarrow_{\mathrlap{h}} \\ && E }$
such that the image of one is the kernel of the next.
$im(h) = ker(f)\,,\;\;\; im(f) = ker(g)\,, \;\;\; im(g) = ker(h) \,.$
Given an exact couple, then its derived exact couple is
$\array{ im(g) && \stackrel{g}{\longrightarrow} && im(g) \\ & {}_{\mathllap{f}}\nwarrow && \swarrow_{\mathrlap{h \circ g^{-1}}} \\ && H(E, h \circ f) } \,.$
Here and in the following we write $g^{-1}$ etc. for the operation of choosing a preimage under a given function $g$. In each case it is left implicit that the given expression is independent of which choice is made.
###### Proposition
(cohomological spectral sequence of an exact couple)
Given an exact couple, def. ,
$\array{ D_1 && \stackrel{g_1}{\longrightarrow} && D_1 \\ & {}_{\mathllap{f_1}}\nwarrow && \swarrow_{\mathrlap{h_1}} \\ && E_1 }$
its derived exact couple
$\array{ D_2 && \stackrel{g_2}{\longrightarrow} && D_2 \\ & {}_{\mathllap{f_2}}\nwarrow && \swarrow_{\mathrlap{h_2}} \\ && E_2 }$
is itself an exact couple. Accordingly there is induced a sequence of exact couples
$\array{ D_r && \stackrel{g_r}{\longrightarrow} && D_r \\ & {}_{\mathllap{f_r}}\nwarrow && \swarrow_{\mathrlap{h_r}} \\ && E_r } \,.$
If the abelian groups $D$ and $E$ are equipped with bigrading such that
$deg(f) = (0,0)\,,\;\;\;\; deg(g) = (-1,1)\,,\;\;\; deg(h) = (1,0)$
then $\{E_r^{\bullet,\bullet}, d_r\}$ with
\begin{aligned} d_r & \coloneqq h_r \circ f_r \\ & = h \circ g^{-r+1} \circ f \end{aligned}
is a cohomological spectral sequence, def. .
(As before in prop. , the notation $g^{-n}$ with $n \in \mathbb{N}$ denotes the function given by choosing, on representatives, a preimage under $g^n = \underset{n\;times}{\underbrace{g \circ \cdots \circ g \circ g}}$, with the implicit claim that all possible choices represent the same equivalence class.)
If for every bidegree $(s,t)$ there exists $R_{s,t} \gg 1$ such that for all $r \geq R_{s,t}$
1. $g \colon D^{s+R,t-R} \stackrel {\simeq}{\longrightarrow} D^{s+R -1, t-R-1}$;
2. $g\colon D^{s-R+1, t+R-2} \stackrel{0}{\longrightarrow} D^{s-R,t+R-1}$
then this spectral sequence converges to the inverse limit group
$G^\bullet \coloneqq \underset{}{\lim} \left( \cdots \stackrel{g}{\to} D^{s,\bullet-s} \stackrel{g}{\longrightarrow} D^{s-1, \bullet - s + 1} \stackrel{g}{\to} \cdots \right)$
filtered by
$F^p G^\bullet \coloneqq ker(G^\bullet \to D^{p-1, \bullet - p+1}) \,.$
(e.g. Kochmann 96, lemma 2.6.2)
###### Proof
We check the claimed form of the $E_\infty$-page:
Since $ker(h) = im(g)$ in the exact couple, the kernel
$ker(d_{r-1}) \coloneqq ker(h \circ g^{-r+2} \circ f)$
consists of those elements $x$ such that $g^{-r+2} (f(x)) = g(y)$, for some $y$, hence
$ker(d_{r-1})^{s,t} \simeq f^{-1}(g^{r-1}(D^{s+r-1,t-r+1})) \,.$
By assumption there is for each $(s,t)$ an $R_{s,t}$ such that for all $r \geq R_{s,t}$ then $ker(d_{r-1})^{s,t}$ is independent of $r$.
Moreover, $im(d_{r-1})$ consists of the image under $h$ of those $x \in D^{s-1,t}$ such that $g^{r-2}(x)$ is in the image of $f$, hence (since $im(f) = ker(g)$ by exactness of the exact couple) such that $g^{r-2}(x)$ is in the kernel of $g$, hence such that $x$ is in the kernel of $g^{r-1}$. If $r \gt R$ then by assumption $g^{r-1}|_{D^{s-1,t}} = 0$ and so then $im(d_{r-1}) = im(h)$.
(Beware this subtlety: while $g^{R_{s,t}}|_{D^{s-1,t}}$ vanishes by the convergence assumption, the expression $g^{R_{s,t}}|_{D^{s+r-1,t-r+1}}$ need not vanish yet. Only the higher power $g^{R_{s,t}+ R_{s+1,t+2}+2}|_{D^{s+r-1,t-r+1}}$ is again guaranteed to vanish. )
It follows that
\begin{aligned} E_\infty^{p,n-p} & = ker(d_R)/im(d_R) \\ & \simeq f^{-1}(im(g^{R-1}))/im(h) \\ & \simeq f^{-1}(im(g^{R-1}))/ker(f) \\ & \underoverset{\simeq}{f}{\longrightarrow} im(g^{R-1}) \cap im(f) \\ & \simeq im(g^{R-1}) \cap ker(g) \end{aligned}
where in last two steps we used once more the exactness of the exact couple.
(Notice that the above equation means in particular that the $E_\infty$-page is a sub-group of the image of the $E_1$-page under $f$.)
The last group above is that of elements $x \in G^n$ which map to zero in $D^{p-1,n-p+1}$ and where two such are identified if they agree in $D^{p,n-p}$, hence indeed
$E_\infty^{p,n-p} \simeq F^p G^n / F^{p+1} G^n \,.$
## Examples
### Exact couple of a tower of (co)-fibrations
###### Definition
A filtered spectrum is a spectrum $X$ equipped with a sequence $X_\bullet \colon (\mathbb{N}, \gt) \longrightarrow Spectra$ of spectra of the form
$\cdots \longrightarrow X_3 \stackrel{f_2}{\longrightarrow} X_2 \stackrel{f_1}{\longrightarrow} X_1 \stackrel{f_0}{\longrightarrow} X_0 = X \,.$
###### Remark
More generally a filtering on an object $X$ in (stable or not) homotopy theory is a $\mathbb{Z}$-graded sequence $X_\bullet$ such that $X$ is the homotopy colimit $X\simeq \underset{\longrightarrow}{\lim} X_\bullet$. But for the present purpose we stick with the simpler special case of def. .
###### Remark
There is no condition on the morphisms in def. . In particular, they are not required to be n-monomorphisms or n-epimorphisms for any $n$.
On the other hand, while they are also not explicitly required to have a presentation by cofibrations or fibrations, this follows automatically: by the existence of model structures for spectra, every filtering on a spectrum is equivalent to one in which all morphisms are represented by cofibrations or by fibrations.
This means that we may think of a filtration on a spectrum $X$ in the sense of def. as equivalently being a tower of fibrations over $X$.
The following remark unravels the structure encoded in a filtration on a spectrum, and motivates the concepts of exact couples and their spectral sequences from these.
###### Remark
Given a filtered spectrum as in def. , write $A_k$ for the homotopy cofiber of its $k$th stage, such as to obtain the diagram
$\array{ \cdots &\stackrel{}{\longrightarrow}& X_3 &\stackrel{f_2}{\longrightarrow}& X_2 &\stackrel{f_1}{\longrightarrow} & X_1 &\stackrel{f_0}{\longrightarrow}& X \\ && \downarrow && \downarrow && \downarrow && \downarrow \\ && A_3 && A_2 && A_1 && A_0 }$
where each stage
$\array{ X_{k+1} &\stackrel{f_k}{\longrightarrow}& X_k \\ && \downarrow^{\mathrlap{cofib(f_k)}} \\ && A_k }$ is a homotopy cofiber sequence, exhibiting $A_k$ as the homotopy cofiber of $f_k$.
To break this down into invariants, apply the stable homotopy groups-functor. This yields a diagram of $\mathbb{Z}$-graded abelian groups of the form
$\array{ \cdots &\stackrel{}{\longrightarrow}& \pi_\bullet(X_3) &\stackrel{\pi_\bullet(f_2)}{\longrightarrow}& \pi_\bullet(X_2) &\stackrel{\pi_\bullet(f_1)}{\longrightarrow} & \pi_\bullet(X_1) &\stackrel{\pi_\bullet(f_0)}{\longrightarrow}& \pi_\bullet(X_0) \\ && \downarrow && \downarrow && \downarrow && \downarrow \\ && \pi_\bullet(A_3) && \pi_\bullet(A_2) && \pi_\bullet(A_1) && \pi_\bullet(A_0) } \,.$
Here each hook at stage $k$ extends to a long exact sequence of homotopy groups via connecting homomorphisms $\delta_\bullet^k$
$\cdots \to \pi_{\bullet+1}(A_k) \stackrel{\delta_{\bullet+1}^k}{\longrightarrow} \pi_\bullet(X_{k+1}) \stackrel{\pi_\bullet(f_k)}{\longrightarrow} \pi_\bullet(X_k) \stackrel{}{\longrightarrow} \pi_\bullet(A_k) \stackrel{\delta_\bullet^k}{\longrightarrow} \pi_{\bullet-1}(X_{k+1}) \to \cdots \,.$
If we understand the connecting homomorphism
$\delta_k \colon \pi_\bullet(A_k) \longrightarrow \pi_\bullet(X_{k+1})$
as a morphism of degree -1, then all this information fits into one diagram of the form
$\array{ \cdots &\stackrel{}{\longrightarrow}& \pi_\bullet(X_3) &\stackrel{\pi_\bullet(f_2)}{\longrightarrow}& \pi_\bullet(X_2) &\stackrel{\pi_\bullet(f_1)}{\longrightarrow} & \pi_\bullet(X_1) &\stackrel{\pi_\bullet(f_0)}{\longrightarrow}& \pi_\bullet(X_0) \\ && \downarrow &{}_{\mathllap{\delta_2}}\nwarrow & \downarrow &{}_{\mathllap{\delta_1}}\nwarrow & \downarrow &{}_{\mathllap{\delta_0}}\nwarrow & \downarrow \\ && \pi_\bullet(A_3) && \pi_\bullet(A_2) && \pi_\bullet(A_1) && \pi_\bullet(A_0) } \,,$
where each triangle is a rolled-up incarnation of a long exact sequence of homotopy groups (and in particular is not a commuting diagram!).
If we furthermore consider the bigraded abelian groups $\pi_\bullet(X_\bullet)$ and $\pi_\bullet(A_\bullet)$, then this information may further be rolled-up to a single diagram of the form
$\array{ \pi_\bullet(X_\bullet) & \stackrel{\pi_\bullet(f_\bullet)}{\longrightarrow} & \pi_\bullet(X_\bullet) \\ & {}_{\mathllap{\delta}}\nwarrow & \downarrow^{\mathrlap{\pi_\bullet(cofib(f_\bullet))}} \\ && \pi_\bullet(A_\bullet) }$
where the morphisms $\pi_\bullet(f_\bullet)$, $\pi_\bullet(cofib(f_\bullet))$ and $\delta$ have bi-degree $(0,-1)$, $(0,0)$ and $(-1,1)$, respectively.
Here it is convenient to shift the bigrading, equivalently, by setting
$\mathcal{D}^{s,t} \coloneqq \pi_{t-s}(X_s)$
$\mathcal{E}^{s,t} \coloneqq \pi_{t-s}(A_s) \,,$
because then $t$ counts the cycles of going around the triangles:
$\cdots \to \mathcal{D}^{s+1,t+1} \stackrel{\pi_{t-s}(f_s)}{\longrightarrow} \mathcal{D}^{s,t} \stackrel{\pi_{t-s}(cofib(f_s))}{\longrightarrow} \mathcal{E}^{s,t} \stackrel{\delta_s}{\longrightarrow} \mathcal{D}^{s+1,t} \to \cdots$
Data of this form is called an exact couple, def. below.
###### Definition
An unrolled exact couple (of Adams-type) is a diagram of abelian groups of the form
$\array{ \cdots &\stackrel{}{\longrightarrow}& \mathcal{D}^{3,\bullet} &\stackrel{i_2}{\longrightarrow}& \mathcal{D}^{2,\bullet} &\stackrel{i_1}{\longrightarrow} & \mathcal{D}^{1,\bullet} &\stackrel{i_0}{\longrightarrow}& \mathcal{D}^{0,\bullet} \\ && \downarrow^{\mathrlap{}} &{}_{\mathllap{k_2}}\nwarrow & {}^{\mathllap{j_2}}\downarrow &{}_{\mathllap{k_1}}\nwarrow & {}^{\mathllap{j_1}}\downarrow &{}_{\mathllap{k_0}}\nwarrow & {}_{\mathllap{j_0}}\downarrow \\ && \mathcal{E}^{3,\bullet} && \mathcal{E}^{2,\bullet} && \mathcal{E}^{1,\bullet} && \mathcal{E}^{0,\bullet} }$
such that each triangle is a rolled-up long exact sequence of abelian groups of the form
$\cdots \to \mathcal{D}^{s+1,t+1} \stackrel{i_s}{\longrightarrow} \mathcal{D}^{s,t} \stackrel{j_s}{\longrightarrow} \mathcal{E}^{s,t} \stackrel{k_s}{\longrightarrow} \mathcal{D}^{s+1,t} \to \cdots \,.$
The collection of this “un-rolled” data into a single diagram of abelian groups is called the corresponding exact couple.
###### Definition
An exact couple is a diagram (non-commuting) of abelian groups of the form
$\array{ \mathcal{D} &\stackrel{i}{\longrightarrow}& \mathcal{D} \\ & {}_{\mathllap{k}}\nwarrow & \downarrow^{\mathrlap{j}} \\ && \mathcal{E} } \,,$
such that this is exact sequence exact in each position, hence such that the kernel of every morphism is the image of the preceding one.
The concept of exact couple so far just collects the sequences of long exact sequences given by a filtration. Next we turn to extracting information from this sequence of sequences.
###### Remark
The sequence of long exact sequences in remark is inter-locking, in that every $\pi_{t-s}(X_s)$ appears twice:
$\array{ && & \searrow && \nearrow \\ && && \pi_{t-s-1}(X_{s+1}) \\ && & {}^{\mathllap{\delta_{t-s}^s}}\nearrow && \searrow^{\mathrlap{\pi_{t-s-1}(cofib(f_{s+1}))}} && && && \nearrow \\ && \pi_{t-s}(A_s) && \underset{def: \;\;d_1^{s,t}}{\longrightarrow} && \pi_{t-s-1}(A_{s+1}) && \stackrel{def: \; d_1^{s+1,t}}{\longrightarrow} && \pi_{t-s-2}(A_{s+2}) \\ & \nearrow && && && {}_{\mathllap{\delta_{t-s-1}^{s+1}}}\searrow && \nearrow_{\mathrlap{\pi_{t-s-2}(cofib(f_{s+2}))}} \\ && && && && \pi_{t-s-2}(X_{s+2}) \\ && && && & \nearrow && \searrow }$
This gives rise to the horizontal composites $d_1^{s,t}$, as show above, and by the fact that the diagonal sequences are long exact, these are differentials: $d_1^2 = 0$, hence give a chain complex:
$\array{ \cdots & \stackrel{}{\longrightarrow} && \pi_{t-s}(A_s) && \overset{d_1^{s,t}}{\longrightarrow} && \pi_{t-s-1}(A_{s+1}) && \stackrel{d_1^{s+1,t}}{\longrightarrow} && \pi_{t-s-2}(A_{s+2}) &&\longrightarrow & \cdots } \,.$
We read off from the interlocking long exact sequences what these differentials mean: an element $c \in \pi_{t-s}(A_s)$ lifts to an element $\hat c \in \pi_{t-s-1}(X_{s+2})$ precisely if $d_1 c = 0$:
$\array{ &\hat c \in & \pi_{t-s-1}(X_{s+2}) \\ && & \searrow^{\mathrlap{\pi_{t-s-1}(f_{s+1})}} \\ && && \pi_{t-s-1}(X_{s+1}) \\ && & {}^{\mathllap{\delta_{t-s}^s}}\nearrow && \searrow^{\mathrlap{\pi_{t-s-1}(cofib(f_{s+1}))}} \\ & c \in & \pi_{t-s}(A_s) && \underset{d_1^{s,t}}{\longrightarrow} && \pi_{t-s-1}(A_{s+1}) }$
This means that the cochain cohomology of the complex $(\pi_{\bullet}(A_\bullet), d_1)$ produces elements of $\pi_\bullet(X_\bullet)$ and hence of $\pi_\bullet(X)$.
In order to organize this observation, notice that in terms of the exact couple of remark , the differential
$d_1^{s,t} \;\coloneqq \; \pi_{t-s-1}(cofib(f_{s+1})) \circ \delta_{t-s}^s$
is a component of the composite
$d \coloneqq j \circ k \,.$
Some terminology:
###### Definition
Given an exact couple, def. ,
$\array{ \mathcal{D}^{\bullet,\bullet} &\stackrel{i}{\longrightarrow}& \mathcal{D}^{\bullet,\bullet} \\ & {}_{\mathllap{k}}\nwarrow & \downarrow^{\mathrlap{j}} \\ && \mathcal{E}^{\bullet,\bullet} }$
its page is the chain complex
$(E^{\bullet,\bullet}, d \coloneqq j \circ k) \,.$
###### Definition
Given an exact couple, def. , then the induced derived exact couple is the diagram
$\array{ \widetilde {\mathcal{D}} &\stackrel{\tilde i}{\longrightarrow}& \widetilde {\mathcal{D}} \\ & {}_{\mathllap{\tilde k}}\nwarrow & \downarrow^{\mathrlap{\tilde j}} \\ && \widetilde{\mathcal{E}} }$
with
1. $\tilde{\mathcal{E}} \coloneqq ker(d)/im(d)$;
2. $\tilde {\mathcal{D}} \coloneqq im(i)$;
3. $\tilde i \coloneqq i|_{im(i)}$;
4. $\tilde j \coloneqq j \circ (im(i))^{-1}$;
5. $\tilde k \coloneqq k|_{ker(d)}$.
###### Proposition
A derived exact couple, def. , is again an exact couple, def. .
###### Definition
Given an exact couple, def. , then the induced spectral sequence, def. , is the sequence of pages, def. , of the induced sequence of derived exact couples, def. , prop. .
###### Example
Consider a filtered spectrum, def. ,
$\array{ \cdots &\stackrel{}{\longrightarrow}& X_3 &\stackrel{f_2}{\longrightarrow}& X_2 &\stackrel{f_1}{\longrightarrow} & X_1 &\stackrel{f_0}{\longrightarrow}& X \\ && \downarrow && \downarrow && \downarrow && \downarrow \\ && A_3 && A_2 && A_1 && A_0 }$
and its induced exact couple of stable homotopy groups, from remark
$\array{ \mathcal{D} &\stackrel{i}{\longrightarrow}& \mathcal{D} \\ &{}_{\mathllap{k}}\nwarrow& \downarrow^{\mathrlap{j}} \\ && \mathcal{E} } \;\;\;\;\;\,\;\;\;\;\;\; \array{ \mathcal{D} &\stackrel{(-1,-1)}{\longrightarrow}& \mathcal{D} \\ &{}_{\mathllap{(1,0)}}\nwarrow& \downarrow^{\mathrlap{(0,0)}} \\ && \mathcal{E} }$
with bigrading as shown on the right.
As we pass to derived exact couples, by def. , the bidegree of $i$ and $k$ is preserved, but that of $j$ increases by $(1,1)$ in each step, since
$deg(\tilde j) = deg( j \circ im(i)^{-1}) = deg(j) + (1,1) \,.$
Therefore the induced spectral sequence has differentials of the form
$d_r \;\colon\; \mathcal{E}_r^{s,t} \longrightarrow \mathcal{E}_r^{s+r, t+r-1} \,.$
## References
The original article is
• W. S. Massey, Exact Couples in Algebraic Topology (Parts I and II), Annals of Mathematics, Second Series, Vol. 56, No. 2 (Sep., 1952), pp. 363-396 (pdf)
A class of examples leading to what later came to be known as the Atiyah-Hirzebruch spectral sequence is discussed in section XV.7 of
• H. Cartan, S. Eilenberg, Homological Algebra, Princeton University Press (1956)
Textbook accounts include
• S. O. Kochman, Bordism, Stable Homotopy and Adams Spectral Sequences, AMS (1996)
Another review with an eye towards application to the Adams spectral sequence is in
Last revised on September 1, 2016 at 16:04:10. See the history of this page for a list of all contributions to it.
|
2019-01-19 04:05:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 158, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730529189109802, "perplexity": 592.9893246438329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662124.0/warc/CC-MAIN-20190119034320-20190119060320-00114.warc.gz"}
|
http://openstudy.com/updates/504f9766e4b03b79a332d962
|
## anonymous 4 years ago What is the simplest way to do this? 49^-3/2
1. anonymous
hello
2. anonymous
Well, you can simplify $49^{-\frac{ 3 }{ 2 }}$ to $\left( 49^{\frac{ 1 }{ 2 }} \right)^{-3}$. And $49^{\frac{ 1 }{ 2 }}=\sqrt{49}=7$, so $\left( 49^{\frac{ 1 }{ 2 }} \right)^{-3}=\left( 7 \right)^{-3}=\frac{ 1 }{ 7\times7\times7 }$. This means that $49^{-\frac{ 3 }{ 2 }}=\frac{ 1 }{ 7\times7\times7 }$. If anything about this seems too complicated, let me know. The bottom line is that you're splitting the exponent into a power of 1/2 and a power of -3.
3. anonymous
Yes I was able to get to the answer with help of another system actually but I thought it was too complicated and I was wondering if there was a much simpler way to do this?
4. anonymous
In 49^1/2, the denominator tells you if it's a cube root or a square root or a quad root?
5. anonymous
For fractional exponents, it's power-over-root.
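In symbols, the power-over-root rule is this general identity (for $a>0$):
$$a^{-\frac{m}{n}} = \frac{1}{\left(\sqrt[n]{a}\right)^{m}}, \qquad\text{e.g. } 49^{-\frac{3}{2}} = \frac{1}{(\sqrt{49})^{3}} = \frac{1}{343}.$$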
6. anonymous
Oh alright. Thank you. That clarifies it.
|
2016-10-26 04:13:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7029306888580322, "perplexity": 853.1662883476164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720615.90/warc/CC-MAIN-20161020183840-00243-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://math.meta.stackexchange.com/questions/12805/new-math-teaching-stackexchange-is-looking-for-38-more-people-to-register-for-a
|
New Math Teaching Stackexchange is looking for 38 more people to register for a private beta
Edit: Thanks to the 162 people who have signed up so far! You've made this proposal one of the fastest-growing proposals of all time.
The math teaching SE proposal has reached the commitment phase, where it needs 200 people to register for a private beta. Many proposals stall here because they can't find 200 people to participate in a private beta; there is large initial interest, and then signups die out. If you are at all interested, please sign up at http://area51.stackexchange.com/proposals/64216/mathematics-learning-studying-and-education
The users in the private beta are supposed to ask/answer 10 questions over a period of about six months (although you can easily do this in a couple of weeks or even days).
• I signed up, but it created another account for me. How do I get it merged? – Calvin Lin Feb 21 '14 at 8:53
• Only moderators can merge accounts, I think. Thanks for signing up! Try contacting a moderator or searching meta for 'merge account'. – Brian Rushton Feb 21 '14 at 12:53
• @CalvinLin You created the account as an unregistered user, if you register the account (via the "register" option at the top middle on the profile page while logged in) with the same login as you use here, it should be linked. If you have any trouble, you can reach us through the "contact us" form, and I'll be able to work directly with you to fix that. – Grace Note Feb 21 '14 at 22:28
• What does "ask/answer" mean? We have to ask ten and answer ten? Or just satisfy asked + answered $\geq$ 10? – Jack M Feb 22 '14 at 17:08
• the second option – Brian Rushton Feb 22 '14 at 18:36
• "Proposed Q&A site for math educators, enthusiasts, students, professors." There seems to be overlap with the people who generally post on this Math forum. Is the intent to be more explicitly geared towards k-12 classroom math? Education issues and debates? – JackOfAll Feb 27 '14 at 12:54
• @JackOfAll It's geared towards questions about the process of teaching at any level. So questions would be like "How can I assign meaningful homework if all the solutions are online?" – Brian Rushton Feb 27 '14 at 13:44
• That isn't a teaching question, but a question of academic administration, institutional bureaucracy and the like. Construed as a teaching question the answer is trivial: the exercises are just as meaningful, and potentially more instructive, when solutions are available. The question you appear to have in mind is, "how can I continue to effectively pressure and manipulate students to perform mathematics exercises when solutions are easily found online", which is a question of bureaucracy, methods of control and other things irrelevant to mathematics. @BrianRushton – zyx Feb 27 '14 at 17:39
• Brian slowly backs away without making eye contact. His hands drop his list of possible questions as he disappears around the corner. @zyx picks up the list. "#2: Is electroshock therapy available for use in American academic institutions as a means of motivation?" He crumples the paper. "This is not the last time we will meet, Rushton." – Brian Rushton Feb 27 '14 at 17:45
• @zyx You're right, there is a serious point to be made here. You can browse the complete list of sample questions (the one I mentioned was my own, and not representative of all questions). An answer like you've begun here, backed up with psychological studies or other references, belongs on some stackexchange, whether math-specific or not. – Brian Rushton Feb 27 '14 at 17:54
• @zyx There is actually a large group of people involved in the proposal who want to make sure to avoid exactly what you're talking about. Specifically, they want to make sure that self-learners and other people in non-traditional teaching situations are represented and welcomed. You can see some of their comments in the discussion on names. quid in particular is a strong advocate of this viewpoint. My question got upvoted because some people agree with the student-control viewpoint (as you call it), and everyone else was in a rush to get the proposal started. – Brian Rushton Feb 27 '14 at 18:02
@zyx I would recommend starting such a discussion in the area51 site and opening it up to discussion. I'm sure that some people would be very interested, including Bill Dubuque and others. – Brian Rushton Feb 27 '14 at 18:13
• I may do that, but since the site is being promoted here (and I don't necessarily intend to register for the private beta there), this is as good a place as any for the time being. – zyx Feb 27 '14 at 18:15
• @quid, there is a critical problem at the very core of the current proposal: it defines the new site in terms of Who (a site "for educators ... students and professors") rather than What (the subject matter). As long as that is the case, it will tend to prejudice the types of users and the content posted on the site toward a mentality of institutional administration, pressure and "incentives" as discussed above, that has little connection (except, perhaps, an adverse one) with teaching mathematics. Mixing some genuine teaching discussion with that does not remove the problem. – zyx Feb 28 '14 at 5:02
• Now 40 more and then it's beta-time. Looking forward to it. – JTP - Apologise to Monica Mar 1 '14 at 12:14
|
2020-08-09 03:30:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.295690655708313, "perplexity": 1607.2369434814148}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738380.22/warc/CC-MAIN-20200809013812-20200809043812-00557.warc.gz"}
|
https://matchmaticians.com/questions/qsondh/find-conditions-for-all-chips-to-become-of-the-same-color-in
|
# Find conditions for all chips to become of the same color in this game
There are $a$ white, $b$ black, and $c$ red chips on a table. In one step, you may choose two chips of different colors and replace each one by a chip of the third color. Find conditions for all chips to become of the same color. Suppose you have initially $13$ white, $15$ black, and $17$ red chips. Can all chips become of the same color? What states can be reached from these numbers?
|
2023-01-29 16:09:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40793439745903015, "perplexity": 307.0984594534584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00797.warc.gz"}
|
https://rdrr.io/cran/rrBLUP/man/GWAS.html
|
# GWAS: Genome-wide association analysis In rrBLUP: Ridge Regression and Other Kernels for Genomic Selection
## Description
Performs genome-wide association analysis based on the mixed model (Yu et al. 2006):
y = X β + Z g + S τ + ε
where β is a vector of fixed effects that can model both environmental factors and population structure. The variable g models the genetic background of each line as a random effect with Var[g] = K σ^2. The variable τ models the additive SNP effect as a fixed effect. The residual variance is Var[ε] = I σ_e^2.
## Usage
GWAS(pheno, geno, fixed=NULL, K=NULL, n.PC=0, min.MAF=0.05, n.core=1, P3D=TRUE, plot=TRUE)
## Arguments
pheno: Data frame where the first column is the line name (gid). The remaining columns can be either a phenotype or the levels of a fixed effect. Any column not designated as a fixed effect is assumed to be a phenotype.
geno: Data frame with the marker names in the first column. The second and third columns contain the chromosome and map position (either bp or cM), respectively, which are used only when plot=TRUE to make Manhattan plots. If the markers are unmapped, just use a placeholder for those two columns. Columns 4 and higher contain the marker scores for each line, coded as {-1,0,1} = {aa,Aa,AA}. Fractional (imputed) and missing (NA) values are allowed. The column names must match the line names in the "pheno" data frame.
fixed: An array of strings containing the names of the columns that should be included as (categorical) fixed effects in the mixed model.
K: Kinship matrix for the covariance between lines due to a polygenic effect. If not passed, it is calculated from the markers using A.mat.
n.PC: Number of principal components to include as fixed effects. Default is 0 (equals K model).
min.MAF: Specifies the minimum minor allele frequency (MAF). If a marker has a MAF less than min.MAF, it is assigned a zero score.
n.core: Setting n.core > 1 will enable parallel execution on a machine with multiple cores (use only at UNIX command line).
P3D: When P3D=TRUE, variance components are estimated by REML only once, without any markers in the model. When P3D=FALSE, variance components are estimated by REML for each marker separately.
plot: When plot=TRUE, qq and Manhattan plots are generated.
## Details
For unbalanced designs where phenotypes come from different environments, the environment mean can be modeled using the fixed option (e.g., fixed="env" if the column in the pheno data.frame is called "env"). When principal components are included (P+K model), the loadings are determined from an eigenvalue decomposition of the K matrix.
The terminology "P3D" (population parameters previously determined) was introduced by Zhang et al. (2010). When P3D=FALSE, this function is equivalent to EMMA with REML (Kang et al. 2008). When P3D=TRUE, it is equivalent to EMMAX (Kang et al. 2010). The P3D=TRUE option is faster but can underestimate significance compared to P3D=FALSE.
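For instance, a hypothetical call combining the options above: a categorical "env" column in pheno as a fixed effect, a P+K model with 3 principal components, and exact per-marker REML. This is an illustration, not from the package documentation, and it assumes pheno actually contains an "env" column:

scores <- GWAS(pheno, geno, fixed="env", n.PC=3, P3D=FALSE, plot=FALSE)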
The dashed line in the Manhattan plots corresponds to an FDR rate of 0.05 and is calculated using the qvalue package (Storey and Tibshirani 2003). The p-value corresponding to a q-value of 0.05 is determined by interpolation. When there are no q-values less than 0.05, the dashed line is omitted.
## Value
Returns a data frame where the first three columns are the marker name, chromosome, and position, and subsequent columns are the marker scores (-log10 p) for the traits.
## References
Kang et al. 2008. Efficient control of population structure in model organism association mapping. Genetics 178:1709-1723.
Kang et al. 2010. Variance component model to account for sample structure in genome-wide association studies. Nat. Genet. 42:348-354.
Storey and Tibshirani. 2003. Statistical significance for genome-wide studies. PNAS 100:9440-9445.
Yu et al. 2006. A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nat. Genet. 38:203-208.
Zhang et al. 2010. Mixed linear model approach adapted for genome-wide association studies. Nat. Genet. 42:355-360.
## Examples
# random population of 200 lines with 1000 markers
M <- matrix(rep(0, 200*1000), 1000, 200)
for (i in 1:200) {
  M[,i] <- ifelse(runif(1000) < 0.5, -1, 1)
}
colnames(M) <- 1:200
geno <- data.frame(marker=1:1000, chrom=rep(1,1000), pos=1:1000, M, check.names=FALSE)

QTL <- 100*(1:5)   # pick 5 QTL
u <- rep(0, 1000)  # marker effects
u[QTL] <- 1
g <- as.vector(crossprod(M, u))
h2 <- 0.5
y <- g + rnorm(200, mean=0, sd=sqrt((1-h2)/h2*var(g)))

pheno <- data.frame(line=1:200, y=y)
scores <- GWAS(pheno, geno, plot=FALSE)
### Example output
[1] "GWAS for trait: y"
[1] "Variance components estimated. Testing markers."
rrBLUP documentation built on Jan. 29, 2018, 1:04 a.m.
|
2018-11-13 01:06:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7360503673553467, "perplexity": 4445.827042032695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741176.4/warc/CC-MAIN-20181113000225-20181113022225-00286.warc.gz"}
|
http://math.stackexchange.com/questions/112041/how-to-solve-for-x
|
How to solve for x
$W(4d^2 (1-x^2)^2) = abc^3x \sqrt{(\pi^2 (i-x^2)^2 + 16 x^2) }$
I have to find $x$; I have the values of all the other constants. I tried to separate it using partial fractions but I am stuck. $a=3$, $b=4$, $c=7$, $d=9$, $W=19$.
-
I'm just curious: what's the value of $\mathrm{pie}$? – Rasmus Feb 22 '12 at 14:23
@user1067252, I edited you question. Is that how the equation should look like? – Pedro Tamaroff Feb 22 '12 at 14:23
Assuming that W is not a special function, you may want to consider the value of $x$ that makes $(\pi^2 (i-x^2)^2 + 16 x^2) >= 0$. This may give you a start. – Emmad Kareem Feb 22 '12 at 16:00
Expanding the terms, you have $x^4$ on the left, which will go to $x^8$ when you square both sides to get rid of the square root. But all terms in $x$ will have an even exponent, so you can define $y=x^2$ to get a quartic. Quartics have a messy closed-form solution. Without knowing the constants I don't think we can help further. You might look for rational roots using the rational root theorem.
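For the record, a numerical sketch of that substitution (Python; it treats W as an ordinary constant and assumes the i in (i-x^2) is a typo for 1, mirroring the left-hand side):

```python
from numpy.polynomial import Polynomial as P
import numpy as np

a, b, c, d, W = 3, 4, 7, 9, 19
k = (a * b * c**3) ** 2            # squared right-hand coefficient

one_minus_y = P([1, -1])           # 1 - y, where y = x^2
lhs = (4 * W * d**2) ** 2 * one_minus_y**4                      # squared LHS
rhs = k * P([0, 1]) * (np.pi**2 * one_minus_y**2 + P([0, 16]))  # squared RHS

for y in (lhs - rhs).roots():      # the difference is a quartic in y
    if abs(y.imag) < 1e-9 and y.real >= 0:
        print(np.sqrt(y.real))     # candidate x = +sqrt(y); verify each in the
                                   # original equation (squaring can add roots)
```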
|
2014-08-23 15:33:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883953869342804, "perplexity": 309.24739339904863}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826259.53/warc/CC-MAIN-20140820021346-00235-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-11-quadratic-functions-and-equations-11-1-quadratic-equations-11-1-exercise-set-page-707/65
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
The solutions are $x=-\displaystyle \frac{4}{3}$ and $x=-\displaystyle \frac{2}{3}$.
$9x^{2}+18x=-8\qquad$ ...divide both sides by $9$.
$x^{2}+2x=-\displaystyle \frac{8}{9}\qquad$ ...add $1$ to both sides to complete the square ($\displaystyle \frac{1}{2}(2)=1$, and $(1)^{2}=1$).
$x^{2}+2x+1=-\displaystyle \frac{8}{9}+1\qquad$ ...simplify by applying the perfect-square formula $(x+a)^{2}=x^{2}+2ax+a^{2}$ and adding like terms.
$(x+1)^{2}=\displaystyle \frac{1}{9}$
According to the general principle of square roots, for any real number $k$ and any algebraic expression $x$: $\text{If }x^{2}=k,\text{ then }x=\sqrt{k}\text{ or }x=-\sqrt{k}$.
$x+1=\pm\sqrt{\frac{1}{9}}\qquad$ ...add $-1$ to each side and simplify.
$x=-1\pm\sqrt{\frac{1}{9}}\qquad$ ...apply the quotient rule for radicals, $\displaystyle \sqrt{\frac{m}{n}}=\frac{\sqrt{m}}{\sqrt{n}}$, and simplify.
$x=-1+\displaystyle \frac{1}{3}$ or $x=-1-\displaystyle \frac{1}{3}$
$x=-\displaystyle \frac{2}{3}$ or $x=-\displaystyle \frac{4}{3}$
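As a quick sanity check (a small Python sketch, not part of the textbook solution), both roots satisfy the original equation:

```python
# Both roots should satisfy 9x^2 + 18x = -8 (up to float rounding).
for x in (-2/3, -4/3):
    print(x, 9 * x**2 + 18 * x)   # prints -8.0 for each root
```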
|
2021-04-22 03:14:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8301254510879517, "perplexity": 764.378249929558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00278.warc.gz"}
|
https://subdomain.codeforces.cc/blog/entry/105896
|
### nor's blog
By nor, 2 months ago,
This blog was initially meant to request updates to the C compiler on Codeforces (given the large number of blogs complaining about "mysterious" issues in their submissions in C). While writing it, I realized that the chances of the said updates would be higher if I mentioned a few reasons why people would like to code in C instead of C++ (some of the reasons are not completely serious, as should be evident from the context).
Before people come after me in the comments saying that you should use Rust for a few reasons I will be listing below, please keep in mind that I agree that Rust is a great language with reasonably good design choices (most of them better than C++, at least), and using C is a personal choice.
### Commonly encountered issues in C on Codeforces
1. The C compiler on Codeforces is currently 32-bit, and hence it suffers from the same disadvantages that 32-bit C++ faces compared to the 64-bit version. For instance, 64-bit arithmetic is slower than it could be (and leads to a ~2x slowdown in many cases).
2. There is an issue with slow IO using stdio, as I mentioned in this comment. Paraphrasing the issue here, this blog mentioned a bug that was making the earlier MinGW implementation of IO very slow, and it was patched to fix this issue. This is also why the newer C++ compilers on Codeforces have a decently fast implementation of scanf/printf compared to the older benchmarks in the blog. However, this is not the case for C. My hypothesis is that this issue wasn't patched for the C compiler. There is, however, a quick fix for this issue: add this line to the top of your submission, and it will revert to the faster implementation (at least as of now): #define __USE_MINGW_ANSI_STDIO 0. Thanks to ffao for telling me about this hack.
3. It seems that there is an issue with the ancient GCC version used for C11. For instance, when using the timespec struct to get high precision time, there is a compilation error (and it seems it is because of limited C11 support in GCC 5.1). Update: a similar issue arises (timespec_get not found) when I submit with C++20 (64) as well, so I think this is an issue with the C library rather than the compiler.
A possible fix to the above issues that works most of the time is to submit your code in C++20 (64 bit) instead of C11. You might face problems due to different behavior of specific keywords or ODR violations due to naming issues, but it should be obvious how to fix those errors manually.
### Some update requests
Coming to the original point of the blog, I have a couple of update requests (if any other updates can make C easier to use, feel free to discuss them in the comments). Tagging MikeMirzayanov so these updates can be considered in the immediate future.
• Adding C17 (64 bit) alongside the old C11 option: This should be doable using msys2 as done for C++. I am somewhat confident that not using MinGW for this purpose would make the slow IO issue disappear. Also, the reason why I recommend C17 instead of C11 is that C17 is the latest version that addresses defects in C11. Note that C17 and C18 are two names for the same thing. However, I still believe that the current C11 option should be available in the future as well, for the sake of completeness/more options/other reasons mentioned in the comments below.
• Fixing slow IO: If the above doesn't fix the slow IO issue, perhaps there could be a way to fix it in MinGW itself. I am not too familiar with this, though.
• Fixing the timespec_get issue (and other C11/C17 support issues): I am not exactly sure on how to go about fixing this, but since this seems to be a C library issue, it could be a good idea to try installing a C library that has this functionality (and is C11 compliant). The issue probably boils down to Windows not having libraries that support everything in the C standard, since C11/C17 works perfectly on my Linux system.
Now I'll discuss why C is relevant for competitive programming.
### Why use C?
#### Why C++ is preferred over C
Of course, while choosing programming languages for competitive programming, C++ is usually recommended to beginners (over C and other languages) because of reasons that include the following:
• It is fast: writing assembly manually almost always leads to a slower program than writing an equivalent program in a high-level language and letting the compiler generate the assembly. However, with C, this is a non-issue most of the time since common compilers support both C and C++ and often use the same optimizations.
• STL: there is (in my opinion) a reasonably well-designed standard library that contains handy data structures and algorithms with a fairly generic API. C has a much more limited library but is minimal in the true sense with a non-prohibitive overhead.
• Not just imperative/object-oriented: Using lambdas can make code much cleaner and life much easier. There is support for function pointers in the C standard library (for example, you can use comparators in qsort and so on, with some overhead, though it is generally better to write your own sort to avoid the overhead of using function pointers completely).
Some of its less-noticed advantages over C in the context of competitive programming (apart from the obvious benefits of having a good set of data structures and algorithms) are:
• Good random number generation: If you are writing a randomized solution, it is imperative that you randomize the seed using something that can't be predicted easily. Seeding rand with time(NULL) is pretty bad, so in C, the way to do this is very platform specific until C11 (and people are not aware of how significant a revision C11 was, compared to C99 and other revisions before it). Also, rand is terrible in general, so using a C port of a C++ STL random number generator is probably a better bet. For reference, this is a good random number generator.
• constexpr (and template) support: If you enjoy constant-factor optimization, you would probably miss this. In certain fast IO implementations, for instance, a constexpr array is built at compile time to use chunks of bytes at a time rather than a single byte. Templates are important because writing your own library comes in handy in contests. However, using _Generic, it is possible to write generic code to a certain extent as well.
#### Why C is actually not that bad
Despite these disadvantages, it is possible (and might also be helpful in training) to use C for contests. C11 (which should be abandoned in favor of C17 due to C17 having fixes to C11 defect reports) has a standard library that is seldom talked about and has some of the following features which make life simpler:
• tgmath.h: Type generic math for floating point numbers and complex numbers
• stdint.h and inttypes.h: Types like int64_t are also available in C, contrary to popular belief.
• As mentioned earlier, there is some support for generics and function pointers in the standard library.
• There is support for booleans (rather than having to use integer types for everything) as well. C23 changes that a bit, but that is not going to be an issue unless you're doing fairly advanced things.
If you are using GCC and glibc, there are more surprises in store for you that make C all the more usable:
• Most pragmas and builtins carry over from C++ to C, making optimizing things almost as easy as in C++.
• Contrary to popular belief, the glibc implementation uses mergesort for qsort if sufficient space is available (which is pretty much always the case in competitive programming), so no worrying about quicksort being hackable!
#### Why you would want to code in C
Now that I have explained why C isn't as bad to code in as you might have thought earlier, you might wonder what the point of using C is at all. After all, isn't C++ meant to be a more supposedly "user-friendly" language than C? (In fact, it is not, with all its language complexities -- for instance, I haven't heard of a popular language that has as many "value-categories" as C++ and is as easy to shoot yourself in the foot with, with the sole exception of JS -- but that is a topic for another day).
Straight to the point, competitive programming in C has the following benefits:
• It forces you to devise a more straightforward implementation and think the problem through in more detail, improving your problem-solving (at least marginally) and implementation skills. If you find yourself using horrible stuff like map<int, pair<set<int, greater<int>>, FenwickTree<double>>> for a problem whose editorial uses a single Fenwick tree or sorting + two pointers, you might want to consider using a language which is not this powerful (of course, using these complicated data structures can also lead to unnecessary TLEs and overcomplicate your implementation). C is the best example; in terms of algorithms bundled in the standard library, there are only two algorithms: sorting and binary search, which are well-known to be the only "hard-to-write-for-beginners" algorithms that you would need before you hit GM. This is, of course, a bit exaggerated, but if you want to hit GM, you would probably just want to get better at implementing easier problems first, after which it is only natural to try to write your own library. Writing a treap/hash table for simple data types is probably all you would ever need before that.
• It is much easier to "completely" learn C than to learn C++ (or most other popular modern languages), in the sense that you can go through the C standard in a couple of sittings and apply it as well. The cppreference page for C (yes, cppreference also has a C reference) is a good resource for learning C if you already know a bit of C and/or C++, and much less daunting compared to that for C++.
• It is a great way to kill time and/or challenge yourself, as well as learn about how stuff works under the hood; more modern languages tend to get lost under too many layers of abstraction, making it hard to reason about which parts of your program aren't working as you expected and which parts could be much faster if written differently.
• Depending on how you look at it, you could also use it to gain a false sense of superiority over people who abuse language features to get AC in the same time that you did.
Coding in C is probably not a big deal for you at all if you are rainboy, anyway.
• +136
» 2 months ago, # | -26 Bullshit.
• » » 2 months ago, # ^ | +42 The least you can do is act like an adult and explain what's wrong.
» 2 months ago, # | 0 Great blog.
» 2 months ago, # | 0 The moment I read the title of the blog, I was anticipating rainboy pun.
» 2 months ago, # | +32 While I really like C, and I have been using it along with (or instead of) C++, it must be said that C is basically worse than C++ for competitive programming. People who use it instead of C++ are just doing it for the extra challenge. That being said, wouldn't a worse version of C (C11) provide even more challenge, compared to a version without defects (C17)? There are solutions for the defects in C11, so it's not like those are impossible to live with. Personally I support the addition of C17 or newer C in the future, but I think C11 should not be lost for that to happen ("abandoned in favor of C17"). Once upon a time (pre 2017), C++98 and C++11 were available here, but they are now gone. From C++11 to C++14 there might not be much of a change, but from C++98 to C++11 there is a massive difference; some C++98 code wouldn't compile under C++11, for example. My reasons for preserving old versions of languages in general are:
- It's better to have more options.
- Bureaucracy in onsite contests often means that newer versions of the language might not be available; personally, I have participated in onsite contests (in 2016-2017 or so) that only supported C++98.
- Some contests also don't say anything about the version of the language (no feedback until the contest is over), so the best option is to write code that will compile and work under all the standards.
Tl;dr: Sometimes people are forced to use old versions of the languages, and it's good to have those versions available for practicing.
• » » 2 months ago, # ^ | ← Rev. 3 → +26 Thanks for the detailed comment. Regarding your first point, the main reason why I say that C is relevant is basically getting some sort of a handicap while coding, in the hope that it will force you to think more towards the solution rather than starting to implement it right away -- I find it very easy to do in C++ but not in C. Of course, in actual competitions (and training to make implementation faster), it is strictly better to use C++. The C11 defect reports I mentioned are mostly concerned with some rare weird behavior (and weird edge/ambiguous cases that needed to be ironed out), and are almost irrelevant for competitive programming, so it should be fine to have C11 (64 bit) in most cases. The reason I suggested using C17 for the new option on CF was that C17 has a more sane set of defaults in those edge cases (and fixing defect reports is in fact the only change that happens; no new features whatsoever). C is a much more stable language than C++, and the changes from C11 to C17 are nowhere near as significant as those from any revision of C++ to another. I agree with you on not removing C11 (but only adding C17 (64 bit) as a new option) -- I should have phrased that in a better manner.
» 2 months ago, # | +25 Joke on you I'll just compete in Pascal.
» 3 weeks ago, # | 0 Having the compiler on a Linux machine would be helpful too. For example, in Codeforces C11, %lf or %Lf is invalid formatting.
» 3 weeks ago, # | +45 Reading your "Why you would want to code in C" completely convinced me. That is, convinced me that I made the correct choice going for C++
|
2022-10-07 09:28:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32231053709983826, "perplexity": 1078.555051386993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00217.warc.gz"}
|
https://www.physicsforums.com/threads/surface-integral-problem-dont-need-to-use-jacobian-for-polar.557636/
|
# Surface integral problem - don't need to use Jacobian for polar?
1. Dec 6, 2011
### ishanz
1. The problem statement, all variables and given/known data
Evaluate the surface integral.
$\iint_S x^2 z^2 \, dS$
S is the part of the cone z^2 = x^2 + y^2 that lies between the planes z = 1 and z = 3.
2. Relevant equations
$\iint_{S}F \, dS = \iint_D F(\mathbf{r}(u,v))\,|\mathbf{r}_u\times \mathbf{r}_v|\,dA$
$x=r\cos(\theta)$
$y=r\sin(\theta)$
3. The attempt at a solution
First, I parametrized the cone.
$z^2=x^2+y^2\Rightarrow z=\sqrt{x^2+y^2} \Rightarrow z=r$
Therefore, the cone's vector equation should be
${\bf R}(r,\theta)=r\cos(\theta){\bf i}+r\sin(\theta){\bf j}+r{\bf k}$
${\bf R}_r=\cos(\theta){\bf i}+\sin(\theta){\bf j}+{\bf k}$
${\bf R}_\theta=-r\sin(\theta){\bf i}+r\cos(\theta){\bf j}$
$|{\bf R}_r \times {\bf R}_\theta| = r\sqrt{2}$
$\int_0^{2\pi}\int_1^3 (r\cos(\theta))^2\,(r)^2\,(r\sqrt{2})\,dr\,d\theta=\frac{364\sqrt{2}\pi}{3}$
Now, this is the right answer as per the book. My question is, when we go from $dA$ to $dr\,d\theta$, why don't we use the Jacobian for polar coordinates, $r$? Had we included the Jacobian, the degree of $r$ in the final double integral would have been six instead of five, giving us a completely different answer. My confusion here: Why is $dA$ not equal to $r\,dr\,d\theta$?
Last edited by a moderator: Dec 7, 2011
2. Dec 6, 2011
### ishanz
Sorry, the final step of the integration process didn't come out right. Here it is:
$\int_0^{2\pi}\int_1^3 (r\cos(\theta))^2\,(r)^2\,(r\sqrt{2})\,dr\,d\theta=\frac{364\sqrt{2}\pi}{3}$
3. Dec 6, 2011
### Dick
The $|{\bf R}_r \times {\bf R}_\theta|$ factor already includes the $r$ in $dA$. If the surface were just the x-y plane and they wanted you to find the area by integrating 1, think about what that factor would be.
Last edited: Dec 6, 2011
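To spell out that hint with a worked example (added for clarity): parametrize the plane $z=0$ the same way, ${\bf R}(r,\theta)=r\cos(\theta){\bf i}+r\sin(\theta){\bf j}$. Then ${\bf R}_r\times{\bf R}_\theta=r\,{\bf k}$, so $|{\bf R}_r\times{\bf R}_\theta|=r$ and $dS=|{\bf R}_r\times{\bf R}_\theta|\,dr\,d\theta=r\,dr\,d\theta$. The cross-product magnitude already supplies the Jacobian factor $r$, so inserting another $r$ would double-count it.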
4. Dec 7, 2011
### ishanz
I don't quite understand. I don't know if it's because of the totally methodical and unintuitive way that our professor has taught us (i.e., simply equating $dA$ to $r\,dr\,d\theta$ in all scenarios) or because of my own negligence. Does the $|{\bf R}_r\times{\bf R}_\theta|$ factor always take care of the Jacobian for surface integrals, then?
5. Dec 7, 2011
### ishanz
God, I'm bad at this whole Latex + forum thing. I'm so sorry about the double posts... I think I accidentally hit submit or something.
I don't quite understand. I don't know if it's because of the totally methodical and unintuitive way that our professor has taught us (i.e., simply equating $dA$ to $r\,dr\,d\theta$ in all scenarios) or because of my own negligence. Does the $|{\bf R}_r\times{\bf R}_\theta|$ factor always take care of the Jacobian for surface integrals, then? In something akin to spherical coordinate integrals, would that factor take care of the whole $\rho^2\sin(\phi)$ factor for me?
6. Dec 7, 2011
### Dick
$r$ and $\theta$ here are really just a convenient parametrization of the surface. $dS=|\mathbf{r}_u\times \mathbf{r}_v|\, du\, dv$. I wouldn't substitute $dA$ for $du\,dv$, if it's going to confuse you.
7. Dec 7, 2011
### ishanz
I see, that makes sense. Are there any situations in which I would ever have to actually worry about a Jacobian factor when doing a surface integral similar to the one I've described above? Or is it something I should only worry about when explicitly executing coordinate transforms?
8. Dec 7, 2011
### Dick
You don't have to worry about it if you use that formula. Like I said, the 'jacobian' part is the $|r_u\times r_v|$ factor.
9. Dec 7, 2011
### ishanz
Got it. Thanks very much, Dick.
|
2017-09-19 13:37:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6867628693580627, "perplexity": 893.478497958026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685698.18/warc/CC-MAIN-20170919131102-20170919151102-00209.warc.gz"}
|
http://diepio.wikia.com/wiki/User_blog:Banarama/How_Many_Builds_are_in_Diep.io%3F
|
One of the core features of Diep.io is the ability to customize individual aspects of each tank, beyond the premade classes. One can upgrade health and body damage to ram enemy players, or bullet damage and reload to become a hurricane of death. However, just how many different builds are there? It's a difficult question to answer, with seemingly endless configurations of 33 points into 8 different stats. However, there is an elegant way to calculate it formulaically through mathematics. In this blog post, I'll explain this method in detail, and why it works. Note, however, that this method assumes knowledge of basic algebra, utilizing polynomials and exponents.
There isn't an easy way to count the ways to pick stats with typical grade-level math — after all, there are not only dozens of points to arrange, but every stat has a maximum number of points, so not every basic configuration is possible. However, through the use of a technique called generating functions, one can represent the possibilities elegantly through polynomials. Here's an explanation of how they work:
Suppose for a moment that we have a much simpler case with just 2 different stats, say "Bullet Power" and "Ram Power", each with a maximum of 2 points, and 3 skill points. In each stat individually, there are three possibilities — it could have 2 points, 1 point, or no points at all. We can represent this — and read carefully, this is the lynchpin of the method — as a polynomial where the exponents represent the number of skill points put in. For the example I just gave, each stat could be represented as $x^0 + x^1 + x^2$ — think of each $x$ as a skill point, and the exponent represents how many points are put in.
Now, however, we have to consider both stats together. How do you do this? You just multiply — just like how multiple skill points were represented by multiplying the $x$'s, we represent multiple stats by multiplying each stat's polynomial equivalent together. For the scenario of having two stat bars, this gives us $(x^0 + x^1 + x^2)(x^0 + x^1 + x^2)$, or $(x^0 + x^1 + x^2)^2$. To get the number of different builds, we can just expand it, but doing the expansion by hand doesn't sound fun at all. Luckily, Wolfram|Alpha, created by the genius Stephen Wolfram who wrote three books on particle physics by the age of 14, lets us easily calculate it with the magic of digital technology — the result is $x^4 + 2x^3 + 3x^2 + 2x + 1$. Instead of each exponent showing how many skill points are used in each stat, when multiplied together each exponent represents how many skill points are used overall, combining the two stats. Meanwhile, the coefficients, or the number of terms for each power, represent the number of ways to use that many skill points, since each term is a different way to combine the upgrades in each stat.
Since we want to look at the number of ways that use all the skill points, but not going over the maximum, let's look at the $x^3$ term, which is all the ways to use up 3 skill points. The coefficient is $2$, meaning there are 2 ways to use 3 skill points. This makes sense, too, since the only way to use all 3 is to put 2 points into either one of the 2 stats and put 1 point in the other. For a better explanation of generating functions in general, see here.
Now that we've gone over a quick primer on how to use polynomials, we can start on the real stat table. There are 8 stats in the game, each of which can have up to 7 skill points normally. We can represent this as $(x^0 + x^1 + x^2 + x^3 + x^4 + x^5 + x^6 + x^7)^8$ like earlier, but with 8 stat bars that each can have anywhere from 0 to 7 points instead. Expanding this either through sheer masochism or the help of the Interwebs, we get a monstrous polynomial:
```
x^56 + 8 x^55 + 36 x^54 + 120 x^53 + 330 x^52 + 792 x^51 + 1716 x^50 + 3432 x^49 + 6427 x^48 + 11376 x^47 + 19160 x^46 + 30864 x^45 + 47748 x^44 + 71184 x^43 + 102552 x^42 + 143088 x^41 + 193705 x^40 + 254808 x^39 + 326124 x^38 + 406568 x^37 + 494166 x^36 + 586056 x^35 + 678588 x^34 + 767544 x^33 + 848443 x^32 + 916896 x^31 + 968976 x^30 + 1001568 x^29 + 1012664 x^28 + 1001568 x^27 + 968976 x^26 + 916896 x^25 + 848443 x^24 + 767544 x^23 + 678588 x^22 + 586056 x^21 + 494166 x^20 + 406568 x^19 + 326124 x^18 + 254808 x^17 + 193705 x^16 + 143088 x^15 + 102552 x^14 + 71184 x^13 + 47748 x^12 + 30864 x^11 + 19160 x^10 + 11376 x^9 + 6427 x^8 + 3432 x^7 + 1716 x^6 + 792 x^5 + 330 x^4 + 120 x^3 + 36 x^2 + 8 x + 1
```
That polynomial is utterly ridiculous, but we only need to look at the ways to use 33 skill points — $767544 x^{33}$. From the coefficient, we see that there are 767,544 different ways — now all we have to do is multiply by the number of tanks…wait, we forgot Smashers! There are three smashers (the Smasher, Landmine, Spike) that have 4 stats with up to 10 points in each, and one (the Auto Smasher) that has all 8 stats but with 10 points in each. These can be represented as $(x^0 + x^1 + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9 + x^{10})^4$ and $(x^0 + x^1 + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9 + x^{10})^8$ respectively — looking at the $x^{33}$ coefficients, we see that most Smasher classes have only 120 different ways to upgrade, but the Auto Smasher has 7,048,336 different ways — the most of any tank, thanks to its higher max upgrades in each stat.
Now it's time to put the numbers we have together to have the total number of builds in the game. There are 44 "regular" tanks, with a typical stat table. These each have 767,544 ways to upgrade. There are 3 Smasher-branch tanks with no bullet-related stats, each with a mere 120 ways to upgrade due to the paucity of stats on which to spend skill points. And finally, there's the Auto Smasher, which has more of everything and a whopping 7,048,336 different builds thanks to it. Doing the calculations, there are $44 \times 767,544 + 3 \times 120 + 1 \times 7,048,336$ or $\bold{40,820,632}$ different builds in the game! That's a shit ton of builds…but remember, most of those are probably garbage with random points in random stats in random classes that don't go together in the slightest :P
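For readers who want to reproduce these coefficients, here is a small sketch (Python, my own addition) that expands the generating functions by repeated polynomial multiplication:

```python
import numpy as np

def ways(points, n_stats, max_per_stat):
    """Coefficient of x**points in (1 + x + ... + x**max_per_stat)**n_stats."""
    base = np.ones(max_per_stat + 1, dtype=np.int64)  # one stat's polynomial
    poly = np.array([1], dtype=np.int64)
    for _ in range(n_stats):
        poly = np.convolve(poly, base)                # polynomial multiplication
    return int(poly[points])

regular = ways(33, 8, 7)    # 767544  (the 44 regular tanks)
smasher = ways(33, 4, 10)   # 120     (Smasher, Landmine, Spike)
auto    = ways(33, 8, 10)   # 7048336 (Auto Smasher)
print(44 * regular + 3 * smasher + auto)  # 40820632
```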
If you want to see a sampling of some actually good builds for every playable tank in the game, check out the Builds page!
|
2019-01-19 01:03:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5671454071998596, "perplexity": 636.4835791852393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00113.warc.gz"}
|
https://cssplice.github.io/peml/tutorial.html
|
# Tutorial: Get Started Writing PEML
The Programming Exercise Markup Language (PEML) is designed to provide an ultra-human-friendly authoring format for describing automatically graded programming assignments.
## Core Syntax
You probably already know a little about what PEML is and why it was created. Some of the very rudimentary basics of PEML's syntax are shown here:
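At its core, a PEML document is a series of key/value pairs, one per line. A minimal sketch (the exact example originally shown here may have differed):

```
exercise_id: edu.example.hello-world
title: Hello, World!
```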
OK, enough with the syntax. Now start with this minimal version of an exercise and fill in your own content (you can download a slightly longer version here):
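A sketch of such a minimal exercise, using the fields described below (the dashed-fence notation for the multi-line instructions value follows PEML's documented convention; treat the key names and values as illustrative):

```
exercise_id: edu.example.cs1.sum-two-numbers
title: Sum Two Numbers
license.id: cc-by-sa-4.0
author.email: author@example.edu
author.name: Alyssa P. Hacker
instructions:----------
Write a program that reads two integers from standard
input and prints their sum.
----------
```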
You can edit your own PEML files locally using any text editor, or you can edit PEML right in your web browser by opening our live editing/validation site in a separate tab:
PEML Live!
The fields in this minimal document are:
exercise_id
Provide a globally unique identifier for your exercise. Any sequence of non-whitespace characters is OK, but you may wish to stick to existing naming conventions used in other domains. For example, you could use a unique URL (perhaps where this exercise's home definition lives), or something like a Java package name (a dotted name, perhaps including a university's domain name as a prefix), or your email combined with a unique exercise suffix of your own devising.
title
Provide a descriptive title that will be used as a human-readable label for the exercise. The intent is for this to be the "title" shown to students in various contexts, either when viewing a single exercise or when viewing lists of exercises. While there is no specific length limit, ideally titles should be no more than "one line" in size, because of the various contexts where they might be displayed.
license
While a license isn't strictly required, it is strongly recommended. The id can be specified by a URL that identifies the license, or by a name (or abbreviated name) that is in common use, such as any of the license keywords used by github (an excellent source for potential license choices). You can even use "(C) 2021 , All rights reserved". While you could just specify an author email using author:, listing a license is better (everyone must assume "all rights reserved" if you do not). See author or license for more details.
author
Probably you. For an individual, either specify a unique, identifying email address, or, as shown here, an email address along with a name using separate keys. For an organization, you can specify your information as author:, and then provide license.owner: to specify the organization owning the copyright.
instructions
Provide your exercise's instructions here. This isn't required in all contexts (for example, if providing an auto-grading setup), but you probably want to.
Now you have a PEML description!
## Identifying the Programming Language
If your exercise is a programming exercise, you probably will find it useful to identify the programming language it supports. PEML does this by allowing you to define the "programming system".
The language: key is the required one--specify the language that is supported using its common name (be careful of capitalization, since some tools processing descriptions might not treat the name case-insensitively), or its MIME type (to reduce ambiguity). Optionally, you can also specify a version, if your supporting files are version-dependent. Feel free to use gem-style version dependency constraints, although check with your educational tool to determine what is supported.
Note that the language: and version: keys are listed inside a [systems] array. There may be only one element in the array, but PEML does allow authors to express exercises that support multiple programming systems. You can ignore that for now. However, this is a good chance to recap PEML's array notation.
In short, arrays (lists) in PEML have keys that are surrounded by square brackets instead of using a colon. The end of the list is marked with a pair of empty square brackets (which can be omitted if they are at the end of the file). All the keys between these two markers are part of the list. Like ArchieML, if you look at the keys inside, as soon as a key is duplicated, that is taken as the start of a new entry in the list. So an array with multiple entries might look like this:
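For instance (a sketch; the gem-style version constraints are illustrative):

```
[systems]
language: Java
version: >= 11

language: C++
version: >= 17
[]
```

The second language: key starts a new entry, so this array describes two programming systems.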
## Associating Supporting Files
OK, so where's all the cool stuff? Like auto-grader inputs and all that?
PEML provides a very rich model for structuring this information for tool use. However, PEML relies heavily on convention over configuration to simplify the way those things are managed and to make it easier for authors to learn the minimal amount they need, and gradually add onto that core over time as more advanced situations arise.
### Starting Files Provided to the Student
Suppose you want to provide some file(s) to the student as the starting point for their solution. Just add them in the src/starter folder next to the PEML description itself. We recommend placing each exercise that uses additional resources in its own directory. This approach works whether the PEML description is located in a folder on the local machine, is packaged in a zip file or another form of archive, or is hosted in a repository. While you could also use the [src.starter.files] array key to provide this information in the PEML description itself, implicitly providing the files by co-location is often simpler.
# Providing starter files for the user
dir
|-- exercise.peml
+-- src
+-- starter
|-- file1.ext
|-- file2.ext
+-- file3.ext
### Images Used in the Instructions
Suppose you want to provide images for use in your instructions. Instructions are typically written in Markdown or vanilla HTML, but can certainly refer to supplemental files, whether they be images, separate pages describing APIs, examples students can download, etc. While you could host these resources on your own website and use absolute URLs, you may wish for them to be packaged with the exercise. You can use the [public_html] key to specify these explicitly, or simply place them in a public_html folder.
# Providing images files for the instructions
dir
|-- exercise.peml
+-- public_html
| |-- image1.png
| |-- image2.png
+-- src
+-- starter
|-- file1.ext
|-- file2.ext
+-- file3.ext
### Test Case Files for Auto-grading
Suppose you want to provide some file(s) specifying the tests you want to use to check the behavior of answers to the exercise. These could be in the form of compilable program code, scripts, data files, or whatever notation/format is used by the auto-grading tool reading your PEML description. You can provide these as separate files under the suites folder, which corresponds to the [suites] array key.
# Providing test case files
dir
|-- exercise.peml
+-- public_html
| |-- image1.png
| |-- image2.png
+-- src
| +-- starter
| |-- file1.ext
| |-- file2.ext
| +-- file3.ext
+-- suites
|-- TestClass1.java
+-- TestClass2.java
### Other Supplemental Files for Auto-grading
Suppose your auto-grading tests use additional data files or other resources that need to be available during execution of your tests. Not all grading tools support such resources, but if they do, you can provide them in the environment/test folder, which corresponds to the [environment.test.files] key.
Note: If you'd rather use a docker image to provide the environmental setup for environments (for building, running, testing, or even the student's starting environment) and your tool supports this kind of usage, you probably want to just specify image information directly in the PEML file (see Environments in the data model).
# Providing data resources for use in testing
dir
|-- exercise.peml
+-- public_html
| |-- image1.png
| |-- image2.png
+-- src
| +-- starter
| |-- file1.ext
| |-- file2.ext
| +-- file3.ext
+-- suites
| |-- TestClass1.java
| +-- TestClass2.java
+-- environment
+-- test
|-- file4.ext
|-- file5.ext
+-- file6.ext
### System-specific Files
Top-level keys like src.*, environment.*, or [suites] affect the whole exercise, which means they apply to all programming systems that the exercise supports. When the exercise only targets one programming language, that's probably fine. If, however, you write your exercise so that it supports multiple programming systems, you may wish to provide different resources for each system. You can do that by using the same directory structure for src/, environment/, and suites/, but placing them underneath systems/<language> (using the language name as specified in the PEML file--if a MIME type is used, replace the '/' in the MIME type with a '-').
Note: Most tools do not support exercises that support multiple programming systems. However, they should still support the first system listed, including system-specific settings provided in the way described here.
# Providing system-specific resources
dir
|-- exercise.peml
+-- systems
+-- Java
| +-- src
| | +-- starter
| | |-- Class1.java
| | +-- Class2.java
| +-- suites
| | |-- TestClass1.java
| | +-- TestClass2.java
| +-- environment
| +-- test
| |-- file3.ext
| +-- file4.ext
+-- python
+-- src
| +-- starter
| |-- class1.py
| +-- class2.py
+-- suites
| |-- test_class1.py
| +-- test_class2.py
+-- environment
+-- test
|-- file3.ext
+-- file4.ext
If files are specified at both the global and system-specific levels, the files available are the union of both, where files with the same path names in both locations are overridden by the system-specific contents.
## Using Data-driven Test Suites for Simple Cases
For exercises that have relatively simple scaffolding requirements for testing--that is, most or all of the tests follow the same basic format, but vary in some standardized ways such as different inputs or outputs--you may find writing test data directly into your PEML description to be more convenient than providing separate test cases.
For example, maybe you are working with a tool that only tests standard input/standard output behaviors, so every test you run consists only of a pair of input/output values. Then you might be able to describe your tests this way:
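A sketch of what that might look like (the nested-array notation here follows ArchieML-style subarrays, written as [.cases]; consult the PEML definition for the exact form your tool expects):

```
[suites]
[.cases]
stdin: 2 3
stdout: 5

stdin: 10 20
stdout: 30
[]
[]
```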
Here, the [suites] consist of a single test suite that contains two test cases. Each test case here contains two values, named "stdin" and "stdout". The tool translating this input into test cases would need to recognize those names and know how to use them, so check with your tool regarding support for this format and required naming conventions for variables. However, this is a fairly simple way to mark up the values when the situation permits.
In fact, the same content could be written in CSV, YAML, JSON, etc., depending on what your grading tool supports.
Here, the MIME types for the files were specified, but they can also be deduced from the file names, so the MIME type is redundant (not required). Also, because of the error-prone nature of manually quoting data in CSV format, PEML supports an alternative "text/x-unquoted-csv" where values are written in the same notation that would be used in the target programming system, allowing more natural use of expressions, native literal constructs, escapes, etc.
If you are lucky enough to have a tool that supports it, you may be able to provide a template used to translate a single test case record into executable code. You might even be able to designate tests as privately visible to instructors only, or publicly visible to students. For example:
## What Next?
If you haven't read through the whole PEML Introduction, that is your next step.
If you want to know more about how PEML came to be, why we're not using straight YAML or JSON, PEML's design goals, and its influences, read our About PEML page.
Finally, start digging into the PEML definition.
|
2021-04-15 04:12:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2805250585079193, "perplexity": 3108.5885910494676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00112.warc.gz"}
|
https://socratic.org/questions/the-angles-of-a-triangle-have-the-ratio-3-2-1-what-is-the-measure-of-the-smalles
|
# The angles of a triangle have the ratio 3:2:1. What is the measure of the smallest angle?
Jun 9, 2018
${30}^{\circ}$
#### Explanation:
$\text{the sum of the angles in a triangle } = {180}^{\circ}$
$\text{sum the parts of the ratio: } 3+2+1=6 \text{ parts}$
${180}^{\circ} / 6 = {30}^{\circ} \leftarrow \textcolor{blue}{\text{1 part}}$
$3 \text{ parts } = 3 \times {30}^{\circ} = {90}^{\circ}$
$2 \text{ parts } = 2 \times {30}^{\circ} = {60}^{\circ}$
$\text{the smallest angle } = {30}^{\circ}$
Jun 9, 2018
The smallest angle is $\angle C=30^{\circ}$
#### Explanation:
Let the triangle be $\Delta A B C$ and angles be $\angle A , \angle B , \angle C$
Now, we know that the 3 angles of a triangle sum to $180^{\circ}$, by the Triangle Sum Property.
$\therefore \angle A + \angle B + \angle C = 180$
$\therefore 3 x + 2 x + x = 180$ ... [Given that the ratio of angles is $3 : 2 : 1$]
$\therefore 6 x = 180$
$\therefore x = \frac{180}{6}$
$\therefore x = 30^{\circ}$
Now assigning the angles their values,
$\angle A=3x=3(30)=90^{\circ}$
$\angle B=2x=2(30)=60^{\circ}$
$\angle C=x=30^{\circ}$
Now, as we can clearly observe, the smallest angle is $\angle C$
which is $30^{\circ}$.
Hence, the smallest angle is $30^{\circ}$.
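More generally, when the three angles are in the ratio $a:b:c$, the smallest angle is $\frac{\min(a,b,c)}{a+b+c}\times 180^{\circ}$; here that is $\frac{1}{3+2+1}\times 180^{\circ}=30^{\circ}$.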
|
2019-03-26 10:20:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6311746835708618, "perplexity": 1033.8118873280475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204969.39/warc/CC-MAIN-20190326095131-20190326121131-00059.warc.gz"}
|
https://gmatclub.com/forum/a-rectangle-is-plotted-on-the-standard-coordinate-plane-105678.html
|
# A rectangle is plotted on the standard coordinate plane,
Intern
Joined: 23 Nov 2010
Posts: 8
A rectangle is plotted on the standard coordinate plane, [#permalink]
Updated on: 03 Dec 2010, 11:19
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
Originally posted by JenniferClopton on 02 Dec 2010, 13:13.
Last edited by JenniferClopton on 03 Dec 2010, 11:19, edited 1 time in total.
Intern
Joined: 23 Nov 2010
Posts: 8
Updated on: 03 Dec 2010, 11:20
***OA INCORRECTLY LISTED AS "A." CORRECT ANSWER IS "E" AS CONFIRMED BELOW.***
I chose E. Does "standard coordinate plane" imply Q1? If not, couldn't the rectangle lie in either Q1 or Q4?
Furthermore, if Q1 is implied, then why is the answer not D?
Originally posted by JenniferClopton on 02 Dec 2010, 13:16.
Last edited by JenniferClopton on 03 Dec 2010, 11:20, edited 2 times in total.
Math Expert
Joined: 02 Sep 2009
Posts: 59622
02 Dec 2010, 13:27
JenniferClopton wrote:
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
So we have two vertices O(0,0) and A(12,0). First note that OA may be either one of the sides or a diagonal.
(1) The length of the diagonal is 13 units --> clearly OA is not a diagonal (its length is 12, not 13), so it's one of the sides. Now, if we take the length of the other side to be x, then x^2+12^2=13^2 --> x=5. But from this we cannot get the coordinates of the other vertices. As you correctly noted, the rectangle can be in quadrant I with the other two vertices at (0, 5) and (12, 5), OR in quadrant IV with the other two vertices at (0, -5) and (12, -5). Not sufficient.
(2) The distance between the origin and one of the other vertices is 5 units. Clearly insufficient.
(1)+(2) Still two answers are possible: (0, 5) and (12, 5) OR (0, -5) and (12, -5). Not sufficient.
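A quick numeric sketch (Python, added here for illustration) makes the ambiguity explicit:

```python
import math

x = math.sqrt(13**2 - 12**2)     # side length forced by the 13-unit diagonal: 5.0
for s in (1, -1):                # quadrant I or quadrant IV placement
    print((0, s * x), (12, s * x))
# Two distinct vertex pairs satisfy both statements, hence answer E.
```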
Intern
Joined: 23 Nov 2010
Posts: 8
02 Dec 2010, 13:38
OK, while I had not considered the additional possibilities of a rotated position, I concur that the answer is E.
Unless I have missed something, this question is wrong?
Are we absolutely sure that Q1 is not implied?
Attachments
File comment: Official Question
Untitled-2.jpg [ 102.97 KiB | Viewed 4898 times ]
Math Expert
Joined: 02 Sep 2009
Posts: 59622
02 Dec 2010, 13:43
JenniferClopton wrote:
OK, while I had not considered the additional possibilities of a rotated position, I concur that the answer is E.
Unless I have missed something, this question is wrong?
Are we absolutely sure that Q1 is not implied?
Yes, I think answer A is wrong. Can you please post OE for it to see where they went wrong?
Manager
Joined: 30 Aug 2010
Posts: 80
Location: Bangalore, India
03 Dec 2010, 05:02
JenniferClopton wrote:
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
The answr must be "E". Not "A". Please post the correct OA.
Here is the simle explanation.
Just imagine the rectangle in the co-ordinate place. You get the answer. No need to use any calculations/values.
The below shows the picture with two rectangles, one in red and another in green, which can be drawn for statement given and which have different co-ordinates except the two given in the question.
Attachment:
rectangles.JPG [ 7.93 KiB | Viewed 4829 times ]
Regards,
Murali.
Kudos?
Intern
Joined: 07 Sep 2010
Posts: 6
GMAT 1: 730 Q50 V38
03 Dec 2010, 08:59
Even if it is implied, the answer should be D, not A, because 5 can't be the diagonal: one side is already 12, and the diagonal must be longer than both sides.
Cheers,
Jaxis.
Intern
Joined: 23 Nov 2010
Posts: 8
03 Dec 2010, 11:25
Bunuel wrote:
Yes, I think answer A is wrong.
muralimba wrote:
The answr must be "E". Not "A". Please post the correct OA.
The answer has been corrected above. Also, for any Grockit subscribers, I have reported the error to the Grockit support team.
Intern
Joined: 04 May 2009
Posts: 41
Location: Astoria, NYC
07 Dec 2010, 08:17
Wow, good question. I didn't see the other quadrant possibility.
Intern
Joined: 14 Feb 2013
Posts: 19
Re: A rectangle is plotted on the standard coordinate plane, [#permalink]
22 Jul 2014, 06:44
Bunuel wrote:
JenniferClopton wrote:
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
So we have two vertices O(0,0) and A(12,0). First note that OA may be either one of the sides or a diagonal.
(1) The length of the diagonal is 13 units --> clearly OA is not a diagonal (its length is 12, not 13), so it's one of the sides. Now, if we take the length of the other side to be x, then x^2+12^2=13^2 --> x=5. But from this we cannot get the coordinates of the other vertices. As you correctly noted, the rectangle can be in quadrant I with the other two vertices at (0, 5) and (12, 5), OR in quadrant IV with the other two vertices at (0, -5) and (12, -5). Not sufficient.
(2) The distance between the origin and one of the other vertices is 5 units. Clearly insufficient.
(1)+(2) Still two answers are possible: (0, 5) and (12, 5) OR (0, -5) and (12, -5). Not sufficient.
How can a rectangle have two unequal parallel sides (12 & 13)?
Beats me.
Math Expert
Joined: 02 Sep 2009
Posts: 59622
Re: A rectangle is plotted on the standard coordinate plane, [#permalink]
22 Jul 2014, 06:48
hamzakb wrote:
Bunuel wrote:
JenniferClopton wrote:
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
So we have two vertices O(0,0) and A(12,0). First note that OA may be either one of the sides or a diagonal.
(1) The length of the diagonal is 13 units --> clearly OA is not a diagonal (its length is 12, not 13), so it's one of the sides. Now, if we take the length of the other side to be x, then x^2+12^2=13^2 --> x=5. But from this we cannot get the coordinates of the other vertices. As you correctly noted, the rectangle can be in quadrant I with the other two vertices at (0, 5) and (12, 5), OR in quadrant IV with the other two vertices at (0, -5) and (12, -5). Not sufficient.
(2) The distance between the origin and one of the other vertices is 5 units. Clearly insufficient.
(1)+(2) Still two answers are possible: (0, 5) and (12, 5) OR (0, -5) and (12, -5). Not sufficient.
How can a rectangle have two unequal parallel sides (12 & 13)?
beats me
It cannot. Where in my solution is that written?
Intern
Joined: 14 Feb 2013
Posts: 19
Re: A rectangle is plotted on the standard coordinate plane, [#permalink]
22 Jul 2014, 07:05
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
So we have two vertices O(0,0) and A(12,0). First note that OA may be either one of the sides or a diagonal.
(1) The length of the diagonal is 13 units --> clearly OA is not a diagonal (its length is 12, not 13), so it's one of the sides. Now, if we take the length of the other side to be x, then x^2+12^2=13^2 --> x=5. But from this we cannot get the coordinates of the other vertices. As you correctly noted, the rectangle can be in quadrant I with the other two vertices at (0, 5) and (12, 5), OR in quadrant IV with the other two vertices at (0, -5) and (12, -5). Not sufficient.
(2) The distance between the origin and one of the other vertices is 5 units. Clearly insufficient.
(1)+(2) Still two answers are possible: (0, 5) and (12, 5) OR (0, -5) and (12, -5). Not sufficient.
You say clearly OA is not a diagonal, it is one of the sides. I think you are referring to two sets of parallel lines having lengths 12 & 13?
Also, how can 13 not be a diagonal, given that it has been explicitly mentioned in the question that it IS a diagonal?
Math Expert
Joined: 02 Sep 2009
Posts: 59622
Re: A rectangle is plotted on the standard coordinate plane, [#permalink]
22 Jul 2014, 07:31
hamzakb wrote:
You say clearly OA is not a diagonal, it is one of the sides. I think you are referring to two sets of parallel lines having lengths 12 & 13?
Also, how can 13 not be a diagonal, given that it has been explicitly mentioned in the question that it IS a diagonal?
You are not reading the question and the solution carefully. Also, with geometry and coordinate geometry questions, it's always a good idea to make a sketch:
Attachment:
Untitled.png [ 8.01 KiB | Viewed 3623 times ]
OA is NOT the diagonal; it's one of the sides. The diagonal = 13. Two possible rectangles.
Hope it's clear now.
Intern
Joined: 14 Feb 2013
Posts: 19
Re: A rectangle is plotted on the standard coordinate plane, [#permalink]
22 Jul 2014, 08:27
Bunuel wrote:
hamzakb wrote:
You say clearly OA is not a diagonal, it is one of the sides. I think you are referring to two sets of parallel lines having lengths 12 & 13?
Also, how can 13 not be a diagonal, given that it has been explicitly mentioned in the question that it IS a diagonal?
You are not reading the question and the solution carefully. Also, with geometry and coordinate geometry question it's always a good idea to make a sketch:
Attachment:
Untitled.png
OA is NOT the diagonal it's one of the sides, diagonal = 13. Two possible rectangles.
Hope it's clear now.
I get it. I didn't understand properly before.
Thanks a lot!! Your posts are the most helpful I've found on the net
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 8245
GMAT 1: 760 Q51 V42
GPA: 3.82
Re: A rectangle is plotted on the standard coordinate plane
22 Oct 2015, 13:27
Forget conventional ways of solving math questions. In DS, the Variable Approach is the easiest and quickest way to find the answer without actually solving the problem. Remember: an equal number of variables and independent equations ensures a solution.
A rectangle is plotted on the standard coordinate plane, with vertices at the origin and (12,0). What are the coordinates of the other two vertices?
(1) The length of the diagonal is 13 units.
(2) The distance between the origin and one of the other vertices is 5 units.
There is one variable (b), and 2 equations are given; the answer is likely to be (D).
From condition 1, b = 5 or -5. So this is an insufficient condition, as it does not give a unique answer.
Condition 2 similarly gives b = 5 or -5, so it is also insufficient for the same reason.
Even if we combine the two conditions, b = 5 or -5, so as a whole they are insufficient, and the answer is going to be (E).
For cases where we need 1 more equation, such as original conditions with “1 variable”, or “2 variables and 1 equation”, or “3 variables and 2 equations”, we have 1 equation each in conditions 1) and 2). Therefore, there is a 59% chance that D is the answer, about a 38% chance for A or B, and about a 3% chance for C or E, so D is the most likely answer when 1) and 2) are checked separately, per the DS definition. Obviously there may be cases, like this one, where the answer is A, B, C or E.
|
2019-12-10 01:51:08
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8698578476905823, "perplexity": 1176.1853516156093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00401.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/129855-homeomorphic-spaces.html
|
# Math Help - Homeomorphic spaces
1. ## Homeomorphic spaces
I came up with this nice little theorem, can anyone verify or deny it?
I can show my (very very long...need like six lemmas) proof, but it goes like this:
Theorem: Let $\left\{X_j\right\}_{j\in\mathcal{J}}$ and $\left\{Y_j\right\}_{j\in\mathcal{J}}$ be two collections of topological spaces such that $X_j\approx Y_j$ for all $j\in\mathcal{J}$ ($\approx$ means homeomorphic). Then $\prod_{j\in\mathcal{J}}X_j\approx\prod_{j\in\mathcal{J}}Y_j$ under the product topology.
Any comments would be nice.
2. Originally Posted by Drexel28
It turns out this is correct.
If anyone wants to see a proof, take a look at my blog at the bottom of the page.
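For anyone who cannot reach the blog, here is a standard short proof sketch (a generic argument, not necessarily the six-lemma version mentioned above). For each $j$ choose a homeomorphism $f_j : X_j \to Y_j$ and define
$f : \prod_{j\in\mathcal{J}}X_j \to \prod_{j\in\mathcal{J}}Y_j, \qquad f\big((x_j)_{j\in\mathcal{J}}\big) = \big(f_j(x_j)\big)_{j\in\mathcal{J}}.$
Then $f$ is a bijection with inverse $(y_j)_{j\in\mathcal{J}} \mapsto (f_j^{-1}(y_j))_{j\in\mathcal{J}}$. For every $j$ we have $\pi_j^Y \circ f = f_j \circ \pi_j^X$, which is continuous, so by the universal property of the product topology $f$ is continuous; applying the same argument to $f^{-1}$ (with $f_j^{-1}$ in place of $f_j$) shows that $f^{-1}$ is continuous as well. Hence $f$ is a homeomorphism.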
|
2014-04-18 15:17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7438939213752747, "perplexity": 578.509412756792}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.rdocumentation.org/packages/lasso2/versions/1.2-20/topics/gcv.l1ce
|
gcv.l1ce
`gcv()` Methods for `l1ce` and `l1celist` Objects
This is a method for the function gcv() for objects inheriting from class l1ce or l1celist.
Keywords
models
Usage
# S3 method for l1ce
gcv(object, type = c("OPT", "Tibshirani"),
gen.inverse.diag = 0, …)
# S3 method for l1celist
gcv(object, type = c("OPT", "Tibshirani"),
gen.inverse.diag = 0, …)
Arguments
object
an object of class l1ce or l1celist.
type
character (string) indicating whether to use the covariance formula of Osborne, Presnell and Turlach or the formula of Tibshirani.
gen.inverse.diag
if Tibshirani's formula for the covariance matrix is used, this value is used for the diagonal elements of the generalised inverse that appears in the formula that corresponds to parameters estimated to be zero. The default is 0, i.e. use the Moore-Penrose inverse. Tibshirani's code uses gen.inverse.diag = 1e11.
…
further potential arguments passed to methods.
Details
See documentation.
See Also
gcv for the general behavior of this function; l1ce.object and l1celist.object for a description of the `object` argument.
|
2019-10-21 05:43:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31109946966171265, "perplexity": 4460.350110693228}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00242.warc.gz"}
|
https://www.doxygen.nl/helpers.html
|
Helper tools and scripts
Since the number of tools/scripts is ever growing, I've divided them into a number of categories:
# Doxygen extensions
#### Nassi-Shneiderman diagrams
Eckard Klotz has started a project called Moritz. It generates Nassi-Shneiderman diagrams of functions and methods in C/C++ sources as HTML files, which could be included in software documentation or simply viewed in a web browser.
# Filters to add support for other languages
## Perl
Bret Jordan has written a filter. Another filter is written by Thomas Aeby.
## Javascript
For those working with JavaScript it may be good to know that Jörg Schaible has written a perl script to let doxygen deal with it. Unfortunately his site at berlios.de has been removed. If you google for js2doxy.pl you can still find copies of the script however.
## Object Pascal
Darren Bowles has started a project Pas2dox which converts Object Pascal into a C++ style syntax, that doxygen can then happily parse. The project's goal is to allow doxygen to be used for Delphi/Kylix projects.
## Visual Basic
Mathias Henze wrote an awk script that converts Visual Basic code into something doxygen understands. The package includes a small batch-wrapper. To use it, one has to put the following line for the config option INPUT_FILTER:
C:/path/to/filter/tools/vbfilter.bat C:\path\to\filter\tools
Some Unix tools like sh.exe, gawk.exe and tee.exe are required to be available under the supplied path. They can be downloaded here.
An alternative filter for Visual Basic is provided by Basti Grembowietz. It requires python and comes with the following usage instructions:
The VB-Files have to be prepared the following way:
'* <comments directly handed to doxygen>
<vb- Function / Sub / Member>
For shorter usage also this is allowed:
<vb-member (recommended)> '* <doxygen-comment>
For comments addressing a class / module, '! must be used.
Giovanni De Cicco wrote an Add-in called VBDoxyAddin for VB6 developers based on the script of Mathias.
For the newer VB.NET (and also for classic VB) an awk based filter was written by Vsevolod Kukol.
## MatLab
Ben Heasly has written a filter for doxygen that converts MatLab scripts (including the new object oriented features) into something doxygen can understand. An alternative filter is provided by Ian Martins. There is also one by Fabrice which can be found on Matlab Central.
## Pro*C
Darren Bowles has created Proc2Dox, which is a pre-processor addon for doxygen to add support for Pro*C code.
## Assembly
Bogdan Drozdowski and Rick Foos have written a filter to convert assembly into C like code which doxygen can then parse.
## Lua
Alec Chen has written a filter to make doxygen parse Lua code. Simon Dales has another lua filter to do the same.
## GLSL
Sebastian Schaefer has written a filter to make doxygen parse the GLSL shader language.
## Qt QML
Aurélien Gâteau has written a filter for Qt's QML files.
## GOB-doc
The GOB-doc filter parses *.gob files and produces C++ class definitions.
## Prolog
Dr Beco wrote a filter for Prolog.
## CAPL
Bretislav Rychta wrote a filter for CAPL, the CAN Access Programming Language.
## STX
Jan Lochmatter has written a filter for the STX language, which is used for PLCs (programmable logic controllers) from the German manufacturer Jetter.
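All of the filters above share one simple contract: doxygen runs the program named in INPUT_FILTER with the source file as its only argument and parses whatever the filter writes to standard output in place of the file's real contents. As a rough illustration (a minimal sketch, not one of the filters listed here), such a filter could look like this in Python:
#!/usr/bin/env python3
# Minimal doxygen input-filter skeleton (illustrative only).
# Doxygen invokes it as: filter.py <source-file> and reads its stdout.
import re
import sys

def filter_source(text):
    # Toy transformation: rewrite lines starting with "'*" (a VB-style doc
    # comment, as in the convention described above) into C++-style "///"
    # comments that doxygen understands natively.
    return re.sub(r"^\s*'\*", "///", text, flags=re.MULTILINE)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: filter.py <source-file>")
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        sys.stdout.write(filter_source(fh.read()))
It would be wired up in the Doxyfile with a line such as INPUT_FILTER = "python filter.py" (doxygen appends the file name itself).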
# Increase ease of use
## DoxyAssist
ChEeTaH wrote DoxyAssist. This tool can use a template doxygen configuration file and adjust its settings for specific projects and sub projects it manages. For each (sub) project doxygen is run separately, resulting in separate documentation sets for each sub project. It will combine these parts back for the Qt Assistant, making sure the appropriate filters are available. Qt Assistant can then be used to view the documentation as a whole, or easily limit the view to a subproject. There is special support for Drupal, for which all contributed modules are automatically discovered and added as subproject (see this demonstration).
## Eclipse
Eclox is a doxygen frontend plugin for Eclipse.
## Visual studio
If you use Visual Studio .NET have a look at Steve King's set of addins. Greg Engelstad has written a perl script to parse a Visual Studio .NET solution file (.sln) and run doxygen for each separate project contained therein.
Jason Williams has written an Addin for Visual Studio 2005 & 2008 which is able to auto-generate doxygen (or DocXml) style comments from most code elements (file, namespace, class, struct, enum, function, etc). It parses C, C++, C# and Java code to produce fully formed doxygen comments, and can update those comments if the code element is changed, and word-wrap the descriptions to keep them tidy. It uses a set of user-editable rules to provide automatic descriptions of elements, parameters and return codes, minimizing the effort involved in generating doc comments.
jgallardo has also written a Addin for Visual Studio that eases browsing the documentation generated by doxygen.
An addin for Visual Studio 2005 called DoxyComment was created by Troels Gram. It is designed to assist you in inserting context sensitive comment blocks into C/C++ source files. DoxyComment also comes with an xslt template that lets you generate documentation like the MSDN library.
If you are using Microsoft's Developer Studio 6.0, an add-in called DoxBar is available that can be used to run doxygen from within Developer Studio and to search through the generated HTML help files.
Note: I do not have enough time to maintain DoxBar myself anymore, so I moved DoxBar to sourceforge. Olivier Sannier has introduced a number of improvements to DoxBar. If you too want to join the development team, please register as a user at sourceforge and mail me your user name.
Bernhard Nowara has written a profile editor, which is a doxywizard-like tool for Windows. He also created an enhanced version of DoxBar that includes his editor and some macros for Visual Studio to ease the preparation of the source code for doxygen. These changed have been merged into more recent version of doxygen by Olivier Sannier.
FeinSoftware has released a development tool for Microsoft Visual Studio .NET (Visual C++) called CommentMaker, which creates customizable function header that developers can adjust to most specific documentation requirements. By default it generates doxygen compatible comments.
## TechPubs Tools
Glenn Maxey has released The TechPubs Tools (TPT) which wraps around any number of mini-HTML systems and creates a comprehensive HTML system complete with table of contents and an auto-generated index/concordance. TPT consists of Perl programs, UNIX shell scripts, and master template files (HTML).
## Tree view applet
Those having performance problems with the TREEVIEW option, could try this script written by Glenn Maxey.
## MSDN integration
For those using Windows and wanting to integrate the compressed HTML generated by doxygen into MSDN look at this MSDN integrator utility.
## Configure via Perl
Richard Y. Kim has written a perl module to use/configure doxygen more easily from perl scripts.
## Ant
If you're into Ant, have a look at Karthik A Kumar's Ant task objects for doxygen.
## Automake/autoconf integration
Oren Ben-Kiki shows how to integrate doxygen with Automake and Autoconf.
## Embed HTML
Wilfred Nilsen has written a tool (for windows) to combine multiple HTML files and embed them in a single executable. This could be used in combination with the doxygen's HTML output.
## Trac plugin
If you are using Trac to track your issue then have a look at this plugin to embed doxygen documentation.
## CMake integration
Stefan Majewsky has written a blog entry about integrating doxygen in a CMake based project.
## DoxyGrouper
Raja Kajiev wrote a tool called DoxyGrouper which can help include items into groups on a directory-structure basis.
## qtres2dox
Markus Schwartz wrote a tool called qtres2dox which can generate an input file for doxygen from .ui and .qrc files.
## Doxynum
Doxynum is a program for automatic numbering of sections and drawings, as well as for creating contents in documentation generated using the doxygen software.
## Visual Studio
Alexander Manenko wrote EnhancedCommentsCpp which is a Visual Studio Editor extension that highlights doxygen documentation tags inside C++ comments.
## VIM
If VIM is your favorite editor (it is mine!), Michael Geddes wrote a syntax highlighting script on top of C/C++/IDL/Java. Ralf Schandl also has some macros and syntax highlighting files for you. Emilio Riva sent yet another vim highlighting file.
## Emacs
If you are using the Emacs editor, take a look at epydoc-el which is a lisp script to simplify writing doxygen comments.
Ryan Sammartino maintains a project called Doxymacs at sourceforge, which produced an elisp package to make using doxygen from within {X}Emacs easier.
## UltraEdit
If UltraEdit is your editor of choice, take a look at Dominik Stadler's script for information on how to enable syntax highlighting for doxygen comment blocks.
## TSE Pro/32
For SemWare's TSE Pro/32 editor Howard Kapustein has provided syntax definition files for doxygen style comment blocks.
## Delphi/C++ Builder
If you use Delphi or C++ Builder in combination with GExperts you can use this XML macro written by Miguel A. Richard to serve as a template for comments in your code.
# Migration from other doc tools
## Cocoon
If you want to convert Cocoon (or C++) style comments into Qt-style comments on the fly you might want to try the filterComments.pl script written by Paul S. Strauss. Use it in combination with doxygen's INPUT_FILTER configuration option.
## AutoDuck
Martin Slater wrote a python script duck2dox that can be used to convert AutoDuck style comments to doxygen comments. Steven Blackburn has written an alternative filter written in C++. Brian Szuter has written yet another filter after trying the other filters and finding them inadequate for his needs.
# XML/XSLT examples
## Breathe
Michael Jones has written an extension to reStructuredText and Sphinx to be able to read and render the doxygen XML output.
## C# doxmlparser
Baneu Mihai has written a wrapper around doxygen's xml parser (found in addon/doxmlparser) to make it accessible from C#.
## Doxyclean
Matt Ball has written a script called doxyclean to convert doxygen's output into something that closer resembles Apple's own documentation.
## Doxygen.NET
Thomas Hansen and Kariem Ali have written Doxygen.NET which provides .NET object wrappers for the XML output generated by doxygen.
## Dox
Narech Koumar has written a tool called Dox which reads the XML output produced by doxygen and turns that into formatted HTML whose style resembles that of Javadoc's.
## XSLT examples
If you want to see how you can use XSLT to transform doxygen's XML output into something else (HTML/CHM in this case), have a look at Chelpanov's example. It has some limitations though:
• C++ is not supported
• It is slower than using doxygen directly or writing a SAX based transformation in C from XML to HTML.
• It support the CHM format only.
• It works on Windows platform only.
• Only some of the features of doxygen are supported.
and some additional requirements:
• HTML Help WorkShop
• Microsoft .Net Framework 1.1
If you have comments or suggestions please send them to Chelpanov (remove the NO_SPAM part from the mail address).
Bo Peng wrote a small XSLT script to extract information for SWIG/Python interfaces.
|
2020-07-16 17:22:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4391227662563324, "perplexity": 10692.831434237116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657172545.84/warc/CC-MAIN-20200716153247-20200716183247-00276.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=310006
|
# Difference between voltage and voltage drop?
by VenaCava
Tags: difference, voltage
1. The problem statement, all variables and given/known data
A power station delivers 440 kW of power through 3 ohm lines. How much power is wasted if it is delivered at 12000 V?
2. Relevant equations
V = IR, P = I^2R, P = IV
3. The attempt at a solution
I believe you are supposed to solve it like this, but I do not understand why:
I = P/V = 440000/12000 = 36.67 A
P lost = I^2R = (36.67)^2 (3) = 4033 W
But my gut instinct tells me to do this, which I believe is wrong from what I've read:
P = V^2/R = (12000)^2/3 = 4.8 x 10^7 W
P lost = P - P used = 4.8 x 10^7 - 440000 = 4.756 x 10^7 W
I think I'm getting confused with what "V" is. I keep googling it and all I can tell is that I don't understand the difference between voltage and voltage drop. I'm not clear on what either is. Could anyone please explain? Why does P = I^2R give you the power lost rather than the original power (440 kW) or the power used?
You cannot talk about voltage at a single point; you can only talk about a change in voltage. You'll notice that V = Ed: this is the difference in potential over the distance d. Power is a change in energy, and an obvious way to look at it is to say that if there is a change in energy it has to be negative, since the resistor will heat up.
Quote by VenaCava: But my gut instinct tells me to do this, which I believe is wrong from what I've read: P = V^2/R = (12000)^2/3 = 4.8 x 10^7 W; P lost = P - P used = (4.8 x 10^7 - 440000) = 4.756 x 10^7 W
No, you are wrong. You cannot use V = 12000 V for calculating the power loss $P_{loss}$ in the line. You have to use the voltage drop across the line to find $P_{loss}$.
We don't know that drop directly: it is equal to the difference between the voltage at the supply end ($V_{s}$) and the voltage at the load end (12000 V). Since we don't know $V_{s}$, but we do know the current through the line, we can find the power lost directly from the relation
$P_{loss} = I^{2}R$
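To make the distinction concrete with the numbers above (a quick consistency check): the voltage drop across the line itself is $V_{drop} = IR = 36.67 \times 3 \approx 110$ V, so $P_{loss} = V_{drop}^2/R = 110^2/3 \approx 4033$ W, which matches $I^2R$. Plugging the full 12000 V into $V^2/R$ is wrong because 12000 V is the voltage delivered to the load, not the potential difference across the 3 ohm line.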
|
2013-12-21 16:24:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4797379970550537, "perplexity": 706.7384737124219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776257/warc/CC-MAIN-20131218054936-00052-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://patientparadise.com/making-machine-learning-matter-to-clinicians-model-actionability-in-medical-decision-making/
|
We propose a metric that measures a model’s ability to potentially augment medical decision-making by reducing uncertainty in specific clinical scenarios. Practically, we envision this metric being used during the early phases of model development (i.e., before calculating net benefit) for multiclass models in dynamic care environments like critical care, which are becoming increasingly common in healthcare [19,20,21,22,23].
To introduce our metric mathematically, we first contend that reducing uncertainty in medical decision-making might mirror the considerations of a partially observable Markov Decision Process (POMDP). In a POMDP framework, the clinician seeks to determine the “correct” diagnosis (in their belief state) and “optimal” treatment by predicting outcomes given a particular action taken. As such, there are two key probability distributions involved: one at the diagnosis phase where the clinician seeks to clarify the distribution of possible diagnoses, and a second at the treatment phase where the clinician seeks to clarify the distribution of future states given actions (i.e., treatments) chosen. Actionable ML should reduce the uncertainty of these distributions.
The degree of uncertainty reduction in these key distributions can be quantified on the basis of entropy. Entropy is a measurable concept from information theory that quantifies the level of uncertainty for a random variable’s possible outcomes [24]. We propose that clinicians may value entropy reduction, and our actionability metric is therefore predicated on the principle that actionability increases with ML’s ability to progressively decrease the entropy of probability distributions central to medical decision-making (Fig. 1).
Returning to the multiclass model that predicts the diagnosis in a critically unwell patient with fever (among a list of possible diagnoses such as infection, malignancy, heart failure, drug fever, etc.), an ML researcher might use the equation below. The equation is for illustration purposes, acknowledging that additional data are needed to determine the reasonable diagnoses in the differential diagnosis list and their baseline probabilities. This “clinician alone” model might be obtained by asking a sample of clinicians to evaluate scenarios in real-time or retrospectively to determine reasonable diagnostic possibilities and their probabilities based on available clinical data.
For each sample in a test dataset, the entropy of the output from the candidate model (i.e., the probability distribution of predicted diagnoses) is calculated and compared to the entropy of the output from the reference model, which by default is the clinician alone model but can also be other ML models. The differences are averaged across all samples to determine the net reduction in entropy (ML—reference) as illustrated below using notation common to POMDPs:
(1) Clinician Alone Model:
$$H^s_c = -\sum\limits_{s_t \in S} p_c(s_t|o_t)\log p_c(s_t|o_t)$$
(2) With ML Model 1:
$$H^s_{m1} = -\sum\limits_{s_t \in S} p_{m1}(s_t|o_t)\log p_{m1}(s_t|o_t)$$
(3) With ML Model 2:
$$H^s_{m2} = -\sum\limits_{s_t \in S} p_{m2}(s_t|o_t)\log p_{m2}(s_t|o_t)$$
Whereby, $$s_t \in S$$ is the patient’s underlying state (e.g., infection) at time t within a domain S corresponding to a set of all reasonable possible states (e.g., different causes of fever, including but not limited to infection) and $$o_t \in O$$ are the clinical observations (e.g., prior diagnoses and medical history, current physical exam, laboratory data, imaging data, etc.) at time t within a domain O corresponding to the set of all possible observations.
Therefore, the actionability of the candidate ML model at the diagnosis (i.e., current state) phase ($$\Delta^s$$) can be quantified as: $$\Delta^s = H^s_0 - H^s_m$$, where $$H^s_0$$ is the entropy corresponding to the reference distribution (typically the clinician alone model, corresponding to $$H^s_c$$).
Basically, the model learns the conditional distribution of the various possible underlying diagnoses given the observations (see example calculation in supplemental Fig. 1). The extent of a model’s actionability is the measurable reduction in entropy when one uses the ML model versus the reference model.
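As a concrete illustration of the diagnosis-phase calculation, a minimal Python sketch is shown below; the four-way distributions and disease labels are hypothetical, not data from the paper:
import math

def entropy(dist):
    # Shannon entropy H = -sum p*log(p) (natural log), skipping zero terms.
    return -sum(p * math.log(p) for p in dist if p > 0)

# Hypothetical distributions over four fever etiologies for one test sample:
# [infection, malignancy, heart failure, drug fever]
p_clinician = [0.40, 0.25, 0.20, 0.15]  # "clinician alone" reference model
p_ml = [0.80, 0.10, 0.05, 0.05]         # candidate ML model output

delta_s = entropy(p_clinician) - entropy(p_ml)
print(f"H_c={entropy(p_clinician):.3f}  H_m={entropy(p_ml):.3f}  delta_s={delta_s:.3f}")
Averaging delta_s over every sample in the test dataset gives the net entropy reduction (ML versus reference) described above.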
Continuing with the clinical example above, the clinician must then choose an action to perform, for example, which antibiotic regimen to prescribe among a choice of many reasonable antibiotic regimens. Each state-action pair maps probabilistically to different potential future states, which therefore have a distribution entropy. Acknowledging that additional data are needed to define the relevant transition probabilities $$p^{\ast}(s_{t+1}|s_t, a_t)$$ (i.e., benefit:risk ratios) for each state-action pair (which ideally can be estimated by clinicians or empirically derived data from representative retrospective cohorts), an ML researcher might perform an actionability assessment of candidate multiclass models. The actionability assessment hinges on comparing the entropies of the future state distributions with and without ML and is calculated in a similar fashion to the diagnosis phase, where differences in distribution entropy (reference model – candidate ML model) are calculated for each sample in the test dataset and then averaged. The following equation, or a variation of it, might be used to determine actionability during the treatment phase of care:
Future state probability distribution $$p(s_{t+1}|s_t)$$
(4) Without ML (e.g., clinician alone action/policy):
$$p_c(s_{t+1}|s_t) = \sum\limits_{a_t \in A} p^{\ast}(s_{t+1}|s_t, a_t)\,\pi_c(a_t|s_t)$$
(5) With ML (e.g., the trained model recommended action/policy):
$$p_m(s_{t+1}|s_t) = \sum\limits_{a_t \in A} p^{\ast}(s_{t+1}|s_t, a_t)\,\pi_m(a_t|s_t)$$
Whereby, $$s_{t+1}$$ is the desired future state (e.g., infection resolution), $$s_t$$ is the current state (e.g., fever) at time t, $$a_t \in A$$ is the action taken at time t within a domain A corresponding to a set of reasonable possible actions (i.e., different antibiotic regimens), $$\pi_c(a_t|s_t)$$ is the policy chosen by the clinician at time t (e.g., treat with antibiotic regimen A) and $$\pi_m(a_t|s_t)$$ is the policy recommended by ML at time t (e.g., treat with antibiotic regimen B).
Entropy (H) of the future state probability distribution
Each future state probability distribution comes from a distribution of possible future states with associated entropy, which we illustrate as:
(6) Without ML:
$$H^a_0 = -\sum\limits_{s_{t+1} \in S} p_0(s_{t+1}|s_t)\log p_0(s_{t+1}|s_t)$$
(7) With ML:
$$H^a_m = -\sum\limits_{s_{t+1} \in S} p_m(s_{t+1}|s_t)\log p_m(s_{t+1}|s_t)$$
Therefore, the actionability of the candidate ML model at the action (i.e., future state) phase ($$\Delta^a$$) can be quantified as $$\Delta^a = H^a_0 - H^a_m$$, where $$H^a_0$$ is the entropy corresponding to the reference distribution (typically the clinician alone model).
The model essentially learns the conditional distribution of the future states given actions taken in the current state, and actionability is the measurable reduction in entropy when one uses the ML model versus the reference (typically clinician alone) model.
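The treatment-phase calculation can be sketched the same way; the transition probabilities and both policies below are invented for illustration (the paper assumes such transition probabilities are elicited from clinicians or estimated from representative retrospective cohorts):
import math

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def future_state_dist(policy, transitions):
    # p(s_{t+1}|s_t) = sum over actions a of p*(s_{t+1}|s_t, a) * pi(a|s_t)
    n_states = len(next(iter(transitions.values())))
    mixed = [0.0] * n_states
    for action, p_action in policy.items():
        for s_next, p_trans in enumerate(transitions[action]):
            mixed[s_next] += p_trans * p_action
    return mixed

# Hypothetical p*(s'|s_t, a) for two antibiotic regimens; future states are
# [infection resolved, unchanged, worse].
transitions = {"regimen_A": [0.50, 0.30, 0.20],
               "regimen_B": [0.70, 0.20, 0.10]}
pi_clinician = {"regimen_A": 0.5, "regimen_B": 0.5}  # uncertain human policy
pi_ml = {"regimen_A": 0.1, "regimen_B": 0.9}         # model-recommended policy

delta_a = (entropy(future_state_dist(pi_clinician, transitions))
           - entropy(future_state_dist(pi_ml, transitions)))
print(f"delta_a = {delta_a:.3f}")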
|
2023-02-07 14:10:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6521992683410645, "perplexity": 1793.4228271460759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00095.warc.gz"}
|
https://motls.blogspot.com/2020/07/anti-sjw-allies-who-arent-real-anti-sjw.html?m=0
|
## Monday, July 27, 2020 ... //
### Anti-SJW "allies" who aren't real anti-SJW allies
While many Western (and especially Anglo-Saxon) countries are decaying and events from the U.S. and U.K. look increasingly surreal to everyone in countries like mine where people haven't gone nuts yet, lots of people tell me that they're allies. Sadly, they only tell it to me – and they mostly use anonymous nicknames, anyway. On top of that, I am often told things like the following:
I am against the SJWs. But: Shut up. The whole future belongs to the SJWs. The future under their leadership is unavoidable unless some miracle happens, or unless we introduce a full-blown fascist society.
Or:
George Soros may be shorting the stock market and paying the fake news journalists for bad news in order to bring the U.S. economy to the knees (and to hurt Donald Trump and everyone aligned with his fate along the way). Incidentally, I just shorted Dow Jones for the third time because it will surely go to 18,000. Isn't it exciting? I will earn XY dollars when it's at 18,000.
Cool.
First, it's just wrong that they need to use anonymous nicknames – this shows their cowardliness; and it deservedly lowers their influence. Second, we may ask: How do these people differ from the SJWs themselves? Or more specifically: How does the effect of these people's proclamations differ from the effect of the SJWs' proclamations?
The first assertion about the future is exactly isomorphic to the claims by the SJWs themselves, who bully everyone who isn't as brain-dead as they are and who paint themselves as the progress, the future, the correct side of history, and so on. If you are bullying me in the same way as they are, in what sense are you not an SJW? How can I distinguish you from them at all?
The second quote is about the shortselling. Once again, this guy (this is a real example, not one that I invented) hates George Soros for his positions on the stock market combined with his propaganda-related activities. People like Soros (or the warming alarmists – these sets of people are almost identical, anyway) have the plan of ruining the whole economy (or whole crucial industries of the economy) and make a much tinier profit for themselves at the same moment. But in what sense does the self-described anti-Soros warrior differ from Soros himself? Like the previous one, he is spreading the – totally unjustified – idea that the likes of Soros must be the ultimate winners who will bring the economy to the knees and ruin the Western civilization completely.
I just don't see any material difference here. I sincerely hope that all the people who want to create some runaway collapse of the U.S. or global stock market (which is what is needed for their substantial profits, so it's clear that they want to cooperate on this scenario) will lose their bets. I hope that such people will be miserable and suicidal. If some of them also tell me that they are actually agreeing with me, it just doesn't make a sufficient difference from my viewpoint. I see no reason to invent an exception for them. I would really find it immoral to maintain such an exception. So I want them to lose everything and be miserable, too. And yes, I think it is more likely that they will lose.
Another question is "who is responsible for the distortion of news and propaganda". So there is no doubt that e.g. Google manipulates the searches (of news and generic websites) so as to make the far left-wing lies more visible and the truth and conservative opinions less visible or invisible. That's surely evil and lots of people in that company unquestionably deserve the death penalty. The same comment applies to many similar companies in Silicon Valley.
But there's also the other side of these mechanisms: the consumers. The people who can be manipulated in this simple way are cowardly sheep and gullible morons and they are perhaps a more important part of the problem than the far left terrorists who are working against both shareholders and users and who are undermining the integrity of Google's algorithms. A sensible person verifies a sufficient fraction of the information he is getting so he will notice that some searches have collectively become biased or manipulative. In most cases, he will be able to localize the individual manipulative texts, too.
So the real point is that it is probably more important to walk the walk and not just talk the talk. Especially if you only "talk the talk" in a private conversation – and even that is often under anonymous nicknames – it is simply not enough; you are not an ally. What I find insulting is when these would-be anti-SJW warriors treat people like me as if we were equal to the SJWs (if not less than them!). I am sorry but it is not the case. The life of a person with integrity (and knowledge) matters much more than the life of many SJWs or other people without integrity (and knowledge). Why don't you just ignore the dishonest people such as the SJWs? Why do you cooperate on spreading the lie that their lives or even their opinions matter? If you keep on doing these things, you won't be a victim of the destruction of the Western system in your country; you will be an accomplice.
Self-fulfilling prophecies are a part of the picture. It may happen in some cases that events occur because people have believed that they would occur, which caused them to behave in a way that brings the events closer. But every human being has free will – the ability to decide whether he wants to participate in a self-fulfilling prophecy. If someone promotes the idea of a self-fulfilling prophecy, the idea that others should consider a very bad future to be nearly unavoidable, then they are helping to establish this bad future.
I must also mention that some of these fake allies join the bad people in spreading distorted or downright untrue interpretations of events in the past. For example, some of them incorrectly say that XY was convicted of UV (even though he was convicted of something else or nothing at all). Sometimes, they even add that XY should be careful now because of these (distorted or non-existent) events – which simply means that they fully join the SJWs in the intimidation of XY. I could give you lots of examples that I have experienced. Another example is that self-described anti-SJWs often parrot the insulting lie that Viktor Orbán nurtures "illiberalism" when he removes Women's Studies from university-worthy subjects or when he demands transparent funding of NGOs. There is nothing "illiberal" about these things at all.
These countries are decaying not because the SJWs are strong or powerful or clever – they are neither – but because those who market themselves as the advocates of the actual values that have passed the test of time are so weak, cowardly, incoherent, and hypocritical.
|
2021-04-19 08:55:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3872152268886566, "perplexity": 1457.9291675842862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00607.warc.gz"}
|
http://baileysoriginalirishtech.blogspot.com/2016/09/script-kitties-early-trick-or-treat.html
|
Wednesday, September 14, 2016
Script Kitties Early Trick or Treat, Part 1
Some of my old sysadmin tricks became useful again when I analyzed some malware targeting Windows Scripting Host (WSH). In this article I'll share a trick, and in the next, I'll share a treat.
When logic gets hairy, both developers and malware analysts open a debugger to get more information. But what can be done when the target platform is WSH? As it happens, there are debuggers for this, too, and they can be had by installing either Microsoft Office or Microsoft Visual Studio in your dynamic analysis VM. To invoke the debugger, use the /X switch of either cscript.exe or wscript.exe, e.g.:
wscript.exe /X rat3ie.vbs
Here's the Visual Studio debugger, halting on line 1 of a craptacular VBScript RAT:
This gives the ability to view local variables in the Locals tab (at bottom), set breakpoints, and step through code.
That's all for this little nugget. Next time, I'll post a tool I wrote in 2006 that came in handy for conveniently and interactively evaluating VBScript and JScript to de-obfuscate strings and experiment with malware functionality.
|
2021-06-22 04:41:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3756513297557831, "perplexity": 6944.507012883135}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00129.warc.gz"}
|
https://gmatclub.com/forum/if-the-least-common-multiple-of-integers-x-and-y-is-105745.html
|
# If the least common multiple of integers x and y is 840, what is the value of x?
Director
Joined: 07 Jun 2004
Posts: 604
Location: PA
If the least common multiple of integers x and y is 840, what is the value of x?
03 Dec 2010, 18:09
If the least common multiple of integers x and y is 840, what is the value of x?
(1) The greatest common factor of x and y is 56.
(2) y = 168
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8204
Location: Pune, India
Re: GCF LCM DS
04 Dec 2010, 07:08
gettinit wrote:
Karishma, is this a known property? I have never run across it before. I am not sure I understand why it works either. Can you please explain?
It is taught at school (though curriculums across the world vary)
Let us take an example to see why this works:
$$x = 60 = 2^2 * 3 * 5$$
$$y = 126 = 2*3^2*7$$
Now GCF here will be $$6 (= 2*3)$$ (because that is all that is common to x and y)
LCM will be whatever is common taken once and the remaining i.e. $$(2*3) * 2*5 * 3*7$$
When you multiply GCF with LCM, you get
$$(2*3) * (2*3 *2*5 * 3*7)$$ i.e. whatever is common comes twice and everything else that the two numbers had.
I can re-arrange this product to write it as $$(2*3 * 2*5) * (2*3 * 3*7)$$ i.e. 60*126
This is the product of the two numbers.
Since GCF has what is common to them and LCM has what is common to them written once and everything else, GCF and LCM will always multiply to give the two numbers.
Can you now think what will happen in case of 3 numbers?
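In symbols (a compact restatement of the example above): if $$x = \prod_i p_i^{a_i}$$ and $$y = \prod_i p_i^{b_i}$$, then $$GCF(x,y) = \prod_i p_i^{\min(a_i,b_i)}$$ and $$LCM(x,y) = \prod_i p_i^{\max(a_i,b_i)}$$, and since $$\min(a_i,b_i) + \max(a_i,b_i) = a_i + b_i$$ for every prime, $$GCF(x,y)*LCM(x,y) = \prod_i p_i^{a_i+b_i} = x*y$$.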
##### General Discussion
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8204
Location: Pune, India
Re: GCF LCM DS
03 Dec 2010, 19:12
Remember a property of LCM and GCF:
If x and y are two positive integers, LCM * GCF = x* y
Stmnt 1: Just the GCF will give you the product of the two numbers. Not their individual values.
Stmnt 2: Knowing the value of y and LCM is not sufficient to get x.
x could be 840 or 105 or 5 etc.
Using both together, you get x = 840*56/168 = 280
Math Expert
Joined: 02 Sep 2009
Posts: 48110
Re: GCF LCM DS
04 Dec 2010, 07:17
The property Karishma used is often tested on GMAT. So, it's a must know property:
for any positive integers $$x$$ and $$y$$, $$x*y=GCD(x,y)*LCM(x,y)$$.
xy-a-multiple-of-102540.html?hilit=most%20important#p797667
data-sufficiency-problem-95872.html?hilit=most%20important#p737970
Hope it helps.
Manager
Joined: 13 Jul 2010
Posts: 148
Re: GCF LCM DS
04 Dec 2010, 10:12
Thank you Karishma, makes sense. If I use three numbers I don't think this property will work based on the following example:
36 - 2^2*3^2
90 - 2*5*3^2
72- 2^3 * 3^2
GCF - 2*3^2 = 18
LCM - 2^3*3^2*5 = 360
so gcf*lcm=360*18=6480 which does not equal 36*90*72.
Bunuel thanks for the examples, helpful in reinforcing.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8204
Location: Pune, India
Re: GCF LCM DS
04 Dec 2010, 14:03
Yes, that's right. It works only for two numbers.
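(Side note, beyond what the GMAT tests: for three positive integers there is an analogous identity, $$LCM(a,b,c)*GCF(a,b)*GCF(b,c)*GCF(a,c) = a*b*c*GCF(a,b,c)$$, which follows from the exponent identity $$\max(x,y,z) = x+y+z - \min(x,y) - \min(y,z) - \min(x,z) + \min(x,y,z)$$. With the numbers above: $$360*18*18*36 = 4199040 = 36*90*72*18$$.)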
Intern
Joined: 23 Nov 2010
Posts: 6
Location: India
Re: GCF LCM DS
18 Dec 2010, 20:44
Karishma,
Thanks for the detailed explanation.
Shanif.
Intern
Joined: 13 May 2012
Posts: 30
Schools: LBS '16 (A)
GMAT 1: 760 Q50 V41
Re: If the least common multiple of integers x and y is 840,
27 Sep 2012, 18:41
The question does not say the integers are positive. Is it implied? Factors are always positive, as are GCDs and LCMs, but x could be a positive or a negative integer?
Intern
Joined: 13 May 2012
Posts: 30
Schools: LBS '16 (A)
GMAT 1: 760 Q50 V41
Re: If the least common multiple of integers x and y is 840,
27 Sep 2012, 18:43
In that case the answer is E
Manager
Joined: 26 Sep 2013
Posts: 200
Concentration: Finance, Economics
GMAT 1: 670 Q39 V41
GMAT 2: 730 Q49 V41
Re: GCF LCM DS
15 Oct 2013, 17:50
You've got to be kidding me....that makes perfect sense. I'm glad I'm taking the GMAT, I'm learning all these fascinating formulas I've never used before.
Manager
Joined: 26 Sep 2013
Posts: 200
Concentration: Finance, Economics
GMAT 1: 670 Q39 V41
GMAT 2: 730 Q49 V41
Re: GCF LCM DS
15 Oct 2013, 18:01
Is there a sheet with the 'must-know' type stuff on it? I know there's the PDF you guys put together, but is there anything that is just a one or two page listing of the commonly used formulas like this?
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8204
Location: Pune, India
Re: GCF LCM DS
15 Oct 2013, 21:32
The list of must-know formulas would be really short and you would know most of the formulas on it (e.g. Distance = Speed*Time, Sum of first n positive integers = n(n+1)/2, area of a circle = pi*r^2 etc). Even if there are a couple that you don't know, you will come across them while preparing so just jot them down.
There will be many more formulas that you could find useful in particular questions but you can very easily manage without them. Also, learning too many formulas creates confusion about their usage - when to use which one - and hence they should be avoided.
Senior Manager
Joined: 08 Apr 2012
Posts: 395
Re: If the least common multiple of integers x and y is 840,
18 Jun 2014, 23:39
Hi Karishma,
Statement 2: if we know the value of y and the least common multiple, I can't think of a number other than 5 that x could be.
Can you give an example?
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 8204
Location: Pune, India
Re: If the least common multiple of integers x and y is 840,
18 Jun 2014, 23:50
I have given some values that x can take from statement 2 - "x could be 840 or 105 or 5 etc"
Note that we don't know the value of the GCF from statement 2 alone. Hence, x could take many different values. Only when we know the GCF too can we find the value of x.
Intern
Joined: 13 May 2015
Posts: 4
Location: India
GMAT Date: 09-12-2015
Re: If the least common multiple of integers x and y is 840,
18 Jun 2015, 22:41
Hi Karishma:
Can you please help me understand how one can find other values of x whose LCM with y = 168 is 840?
Retired Moderator
18 Jun 2015, 23:52
raj4ueclerx wrote:
Hi Karishma:
Can you please help me understand how one can find other values of x whose LCM with y will be 840?
Hello raj4ueclerx.
Here is a nice thread exclusively about finding the LCM and GCF:
having-issues-with-finding-lcm-and-gcf-can-someone-help-146965.html
Intern
19 Jun 2015, 01:13
Thanks for the suggestion, but I am still unable to see how to come up with the various values of x given that LCM(x, y=168) = 840. Can someone please help me or show me the reverse calculation to derive x?
Retired Moderator
19 Jun 2015, 03:42
raj4ueclerx wrote:
Thanks for the suggestion, but I am still unable to see how to come up with the various values of x given that LCM(x, y=168) = 840. Can someone please help me or show me the reverse calculation to derive x?
Hello raj4ueclerx
First, you should find all prime factors of both numbers:
168 = 2*2*2*3*7
840 = 2*2*2*3*5*7
So we see that 840 has 5 as a prime factor and 168 does not; this prime is the only difference between the two factorizations.
So any number equal to a product of the primes 2, 2, 2, 3, 5, 7 (any combination that includes 5) will give LCM = 840 with 168.
for example
2*5 = 10 LCM (10, 168) = 840
3*5 = 15 LCM (15, 168) = 840
2*2*2*5 = 40 LCM (40, 168) = 840
and so on
Does that make sense?
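To make the enumeration concrete, here is a short Python sketch (my addition, not from the thread; it assumes Python 3.9+ for math.lcm) that lists every positive integer x with LCM(x, 168) = 840:

```python
import math

# Any such x is a divisor of 840 carrying the factor 5, i.e. x = 5*d with d | 168.
solutions = [x for x in range(1, 841) if math.lcm(x, 168) == 840]
print(solutions)
# [5, 10, 15, 20, 30, 35, 40, 60, 70, 105, 120, 140, 210, 280, 420, 840]
```

This confirms that statement 2 alone allows many values of x, including 5, 105, 280, and 840.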
Intern
19 Jun 2015, 04:45
Million thanks... this helps.
|
2018-08-21 22:16:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6711122393608093, "perplexity": 2278.3713786554495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219109.94/warc/CC-MAIN-20180821210655-20180821230655-00373.warc.gz"}
|
https://math.stackexchange.com/questions/3159054/entire-function-which-preserves-unit-disk-and-fixes-0-and-1
|
# Entire function which preserves unit disk and fixes $0$ and $1$
Suppose $$f \colon \mathbb{C} \to \mathbb{C}$$ is an entire function such that $$f(0) = 0, f(1) = 1$$ and $$\vert f(z) \vert \leq 1$$ if $$\vert z \vert \leq 1$$. I want to show that then $$f'(1)$$ is real and $$f'(1) \geq 1$$. The Schwarz lemma comes to mind, but it doesn't seem to be of any help here. I tried showing $$f'(1) = \overline{f'(1)}$$ by considering $$g(z) = \overline{f(\overline{z})}$$, but this didn't help either. I'm not really sure how to use the hypotheses here. Any hints?
|
2019-10-23 23:39:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475799560546875, "perplexity": 38.919380064408735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00365.warc.gz"}
|
https://lqp2.org/node/332
|
# The extended algebra of observables for Dirac fields and the trace anomaly of their stress-energy tensor
Claudio Dappiaggi, Thomas-Paul Hack, Nicola Pinamonti
April 03, 2009
We discuss from scratch the classical structure of Dirac spinors on an arbitrary globally hyperbolic, Lorentzian spacetime, their formulation as a locally covariant quantum field theory, and the associated notion of a Hadamard state. Eventually, we develop the notion of Wick polynomials for spinor fields, and we employ the latter to construct a covariantly conserved stress-energy tensor suited for back-reaction computations. We shall explicitly calculate its trace anomaly in particular.
@article{Dappiaggi:2009xj,
  author        = "Dappiaggi, Claudio and Hack, Thomas-Paul and Pinamonti, Nicola",
  title         = "{The Extended algebra of observables for Dirac fields and the trace anomaly of their stress-energy tensor}",
  journal       = "Rev. Math. Phys.",
  volume        = "21",
  year          = "2009",
  pages         = "1241-1312",
  doi           = "10.1142/S0129055X09003864",
  eprint        = "0904.0612",
  archivePrefix = "arXiv",
  primaryClass  = "math-ph",
  reportNumber  = "DESY-09-049, ZMP-HH-09-9",
  SLACcitation  = "%%CITATION = ARXIV:0904.0612;%%"
}
Keywords:
Dirac fields, QFT on curved spacetimes, trace anomaly
|
2019-07-22 16:27:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330842852592468, "perplexity": 3886.503322625488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00055.warc.gz"}
|
http://www.h-net.org/~mac/textures.html
|
A Review of Blue Sky Research's Textures Typesetting Program
By William D. Walker
Agricultural Economics, Michigan State University
walker5@pilot.msu.edu
CONTENTS
INTRODUCTION TO TeX
WHO USES TEXTURES?
THE VIRTUES OF TeX
EXTENSIONS TO TeX: LaTeX and AMSLaTeX
INTRODUCTION TO TEXTURES
HARDWARE REQUIREMENTS
WHAT'S IN THE BOX
THE EDITOR
FONTS
FLASH MODE
GRAPHICS
TECHNICAL SUPPORT
MISCELLANEOUS COMPLAINTS
SUMMARY
APPENDIX A: A SAMPLE LaTeX INPUT FILE
BIBLIOGRAPHY
BLUE SKY CONTACT INFORMATION
INTRODUCTION TO TeX
This is a review of Blue Sky Research's Textures typesetting program. Textures is a Macintosh implementation of Donald Knuth's TeX program. Textures (and TeX) is an extraordinarily powerful typesetting program. It is programmable, employs sophisticated algorithms for line breaking, hyphenation and so on, employs a text markup language that is of particular value in representing mathematics, is available on scores of platforms (TeX, that is, not Textures), and is multilingual. Textures improves on standard TeX (while remaining compatible with it) in font handling, file organization, and previewing. I am impressed with Textures (not to mention TeX). As a user of other TeX systems, I found Textures' clean interface, font handling, and previewing features extraordinary. It was faster than other systems as well.
Before discussing Textures itself, I will give a too brief description of TeX. Interested readers are encouraged to consult the references. TeX is versatile and powerful and has been employed for innumerable types of documents in countless disciplines. I can but scratch the surface.
Donald Knuth, a mathematician and computer scientist, now emeritus professor at Stanford University, began developing TeX (pronounced 'techhh') and its companion font generation program METAFONT in 1978. (Knuth 1979.) Knuth designed TeX for the "creation of beautiful books---and especially for books that contain a lot of mathematics." (Knuth 1986.) Knuth was disturbed by the increasing expense and decreasing quality of hand set type. METAFONT produces pleasing fonts from design parameters. (It was designed before PostScript or TrueType.)
TeX was written using the WEB system, allowing it to be ported to different systems. There are TeX implementations for Amiga, Atari, Macintosh, MSDOS, Windows NT, OS/2, Unix and VMS, among others. Unlike with other programs, there is little difficulty transferring files among systems. This is for three reasons. (1) there is a clear standard for a program to be "TeX"; (2) TeX input files are marked up ASCII text---easily transferable; and (3) TeX is built from the ground up and does not rely on system-specific features. A TeX input file will produce identical output on all TeX systems. This has made TeX popular in the academic community.
TeX has been frozen by Knuth. Apart from major bug corrections, he will make no changes to the program. (Knuth 1990.) TeX's version number now converges to pi (it is currently at 3.14159). METAFONT has become less important as other font technologies emerge. In particular, Textures uses PostScript versions of Knuth's Computer Modern fonts (as well as other PostScript and TrueType fonts).
WHO USES TEXTURES?
According to Blue Sky Tech Support, about half of their direct sales are to academics (they cannot track resales by vendors). Other major users are scientific publishing houses and scientific government agencies and laboratories. LaTeX's (see below) language abilities and format-switching abilities have an appeal in the humanities as well. According to Blue Sky, they are developing an Xtension for QuarkXpress to integrate Textures into traditional desktop publishing. I personally expect many of the design concepts in TeX to be incorporated into other publishing tools. (I have read, for instance, that AmiPro's equation editor uses TeX markup internally.)
THE VIRTUES OF TeX
TeX is flexible. Indeed it is more accurate to describe TeX as a programming language than a typesetter. (A BASIC compiler has been written in TeX!) (Goossens 1994.) This makes TeX extraordinarily versatile and possibly daunting. Fortunately, basic typesetting is relatively simple. Users who find themselves battling against their wordprocessors' limitations may find TeX's extendability appealing. Whereas WordPerfect, Nisus and others provide macro languages to modify their basic functions, TeX is in some sense only a programming language. Ordinary use of TeX does not require programming knowledge but even amateurs can "tweak" fairly easily.
TeX is international. There are at least 12 TeX users groups around the world. A partial list of languages supported by TeX is arabtex; chinese; devanagari; english; ethiopian; french; german; greek; hebrew; icelandic; indian; italian; japanese; korean; malayalam; oriental; polish; portuguese; scyrillic; swedish; tamil; telugu; turkish and vietnamese. The Babel system for LaTeX (see below) supports the following languages: breton; catalan; croatian; czech; danish; dutch; english; esperant; estonian; finnish; francais; galician; german; irish; italian; isorbian; magyar; norsk; polish; portuges; romanian; scottish; slovak; slovene; spanish; swedish; turkish; and usorbian.
(I have not used TeX's language capabilities. The above lists were culled from the Comprehensive TeX Archive Network. I do not claim they are exhaustive. Interested readers should consult the references (listed below).)
TeX is precise. TeX calculates text placement in RSU's (Ridiculously Small Units) which are approximately 100 times smaller than the wavelength of visible light. (There are 65536 RSU's to a point.) TeX employs very good algorithms for line breaking, hyphenation and other typesetting tasks. TeX output looks good.
TeX is mature. This system has been in use for a long time, as far as programs go. It has an extremely professional underlying design (the sort of work that comes from love of quality rather than love of profits). There are hundreds of extensions available and interested users can design their own.
Though TeX is a powerful typesetting system, it has shortcomings. The most important in my opinion are in usability, incorporation of graphics, and compatibility with other wordprocessors' file formats. The file formats problem is largely unsolvable (though there are TeX to RTF, TeX to WordPerfect, and other translators available) since TeX is vastly more versatile and changeable than wordprocessors. (Though remember that TeX files are marked-up ASCII and therefore readable by any system---with the commands intact.) Graphics will be discussed in the review of Textures proper. There is controversy over the usability issue. TeX is clearly not WYSIWYG and one must learn TeX's syntax. On the other hand, the markup commands employed by TeX (and LaTeX, see below) are intuitive. In particular, the mathematics commands are vastly easier to use than graphical equation editors. Some feel that extensions such as LaTeX (see below) make TeX easier to use by providing "canned" formats. In any event, TeX is a powerful, and therefore complicated, system. For people who write short letters, memos and reports (without much mathematics), TeX is overkill. (In my opinion Word, WordPerfect and other major wordprocessors are also overkill for these tasks.) People who write longer, more technical documents, or whose publishers prefer the TeX format, can benefit from TeX's flexibility and quality.
EXTENSIONS TO TeX: LaTeX and AMSLaTeX
LaTeX, a package which extends (and modifies) TeX, was developed by Leslie Lamport in the early 1980's. It adopts the document-markup philosophy. In general, documents have a logical structure which should be represented to the computer. For instance, quotations, headers, emphasized text, and lists are all structured pieces of a document. The way in which these pieces are formatted may change (e.g. the indentation of a quotation may change) but the logical structure remains. LaTeX provides standardized formats for letters, books, articles, reports, and slides. (See the sample in Appendix A.) The typist marks logical pieces of the document (for instance, quotations are marked as "\begin{quote} ... [quotation] ... \end{quote}"). If a different appearance is desired for quotations, the quotation environment is changed globally. This results in uniform appearance, and the ability to quickly reformat a document for different settings (e.g. transforming a journal article into a format for inclusion in a book). In the mathematics community in particular, documents are marked-up generically and then given the appropriate style (e.g. for American Mathematical Society journals) at publication time. Though TeX is frozen, LaTeX continues to undergo development. The LaTeX3 project is working on, among other things, allowing easier changes to style files (book, report, etc.) and improving bibliography handling.
Included in the LaTeX world are programs to manage and format bibliographies (e.g. BibTeX, the Camel Citator, etc.) and indices (e.g. MakeIndex). The bibliography tools for LaTeX allow reformatting of a general bibliography file into forms suitable for hundreds of different publications. (There are tools available for plain TeX bibliographies as well.) The standard approach in LaTeX is to mark citations in the document with the "\cite" command, give LaTeX the name of a separate ASCII file which contains bibliographic data (and keys to connect the entries to the "\cite" commands), have the program BibTeX do the work of formatting the bibliography (in the selected bibliography style), then have LaTeX combine the formatted bibliography with the document. The bibliography data files must be in a particular format. If one wanted to use a different bibliography manager, one would need to dump the contents of a data file into an ASCII file with the proper format. There is at least one extension to the BibTeX approach (the Camel citator, visit http://rumple.soas.ac.uk/camel-link/camel.html) which can directly use ProCite, EndNotes Plus, Reference Manager and Papyrus.
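As a concrete illustration of that workflow (a sketch of mine, not from the original review; the key, file name, and entry are made up for the example), a minimal entry in the bibliography data file looks like this:

```latex
% refs.bib --- one entry in the format the BibTeX program expects
@book{knuth86,
  author    = {Donald E. Knuth},
  title     = {The {\TeX}book},
  publisher = {Addison-Wesley},
  year      = {1986}
}
```

In the document one writes \cite{knuth86} where the citation should appear and, at the point where the bibliography should be typeset, \bibliographystyle{plain} followed by \bibliography{refs}; running LaTeX, then BibTeX, then LaTeX twice more resolves the citations.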
One more package deserves mention. TeX has a strong heritage in mathematical typesetting. It was so good at typesetting math that it was adopted by the American Mathematical Society (AMS). The AMS has developed an extension to LaTeX, called AMSLaTeX (as well as an earlier extension to plain TeX: AMS-TeX) which further improves LaTeX's already powerful mathematical typesetting abilities. For writers whose documents contain some mathematics, and who care about output quality, LaTeX and AMSLaTeX are vital. Though other programs such as WordPerfect and MS Word offer equation editors, they are clumsy to use, produce relatively poor output, and, most importantly, cannot be extended by the user to handle new typesetting requirements. AMSLaTeX explicitly allows the creation of new mathematics symbols which obey mathematical typesetting rules relating to spacing, line length and height, placement of limits and other things.
INTRODUCTION TO TEXTURES
As a skillful programmer (TeX embodies Knuth's philosophy of "elegant programming"), Knuth created a program that does its job exceptionally well but without much concern for the user's comfort. Many of TeX's users, particularly in the early years, were programmers and mathematicians---the sort of people who love to tinker with algorithms. This was also the era of command-line operating systems. As a result, TeX is primarily a command-line program whose Unix-like flavor shows through.
Textures does its best to transform TeX into a comfortable Macintosh program. Though the basic TeX system is there (including, of course, TeX's document markup syntax), Textures cleans up and groups the scattered files which make up the typical TeX system, adds a nice system for graphics, incorporates Macintosh font handling, integrates printing and provides an integrated (though simple) Macintosh text editor which includes a customizable macros menu to simplify typing of [La]TeX commands. The show piece of Textures is its "flash mode" in which the document is continually typeset while you edit it. (In ordinary TeX implementations, the document is written, saved, typeset, previewed or printed, edited and retypeset.) In addition, Textures has recompiled some of the TeX code to increase speed.
HARDWARE REQUIREMENTS
Textures is a full TeX implementation (version 3.1415). Textures runs on all Macs. A PowerPC Native version is available for a higher price. Blue Sky specifies a 4 meg memory partition for running the LaTeX format. The key variables for memory usage are the document size and format (Plain TeX is easier than LaTeX). Textures runs under System 6.04 or later (including System 7.5---my installation). It currently does not like Quickdraw GX. One needs Adobe Type Manager for clean screen display and for printing on QuickDraw (non PostScript) printers. ATM is not included but is available from Blue Sky for $15. (The ATM included with Quickdraw GX (System 7.5) works on my system though I do not have any other parts of Quickdraw GX installed.) Textures prints through the Chooser like any well-behaved Mac program and consequently can print to any Chooser extension (printers, fax machines, etc.). (This is only surprising to people familiar with TeX on other systems.)
WHAT'S IN THE BOX
The Textures package includes eight disks (five for Textures and the PostScript Computer Modern fonts, three for LateX2e), the Textures manual, an installation pamphlet, an order form for TeX and LaTeX reference books, and a pamphlet depicting the Computer Modern typefaces. (As a nice touch, the fonts pamphlet has concealed within it Mark Twain's essay on the evils of watch repair.) When registering, users are sent a TeX or LaTeX reference manual of their choice (less-than-full-price customers, including students and software reviewers, do not get the free book). The Blue Sky manual is very well done. Its index is excellent and it covers installation and technical topics clearly. It does not tell you how to use TeX or LaTeX. If you do not have one of the [La]TeX reference manuals when you install Textures you will feel powerless.
Installation is straightforward. Total disk size is around 8 megabytes for the (full) 680x0 version including BibTeX, MakeIndex and the Latex2e source code (for those whizzes who want to recompile it). The PostScript fonts are around 2 megabytes. (My installation is 14 megabytes but includes AMSLaTeX and the AMSLaTeX fonts.)
THE EDITOR
Textures includes a text editor for creating TeX input files. It features ordinary Mac cut and paste, find and change, word wrapping, an autoindent feature, the ability to mark places in the file and the standard Mac cursor and highlighting commands. It is minimalist to say the least. The lack of wordprocessor features like bold and centering is obviously valid since TeX is not a WYSIWYG wordprocessor. More disturbing is the lack of editing tools like twiddling (switching characters), delete word, line and paragraph commands, capitalize word commands and others. A recent files menu would be useful.
The shortcomings of the editor are less serious since Textures receives Apple Events and can be used with other text editors. Among the many available for the Mac are the freeware BBEdit Light and Tex-Edit Plus (available on InfoMac among others) and the Shareware Alpha (ftp://www.cs.umd.edu/pub/faculty/keleher/Alpha/). There is also a GNU Emacs for the Mac (believe it or not). ("Tex-Edit" refers to the state of Texas, not TeX.)
The Textures editor includes a Macros menu. This is not to be confused with the macro languages of other wordprocessors. The Textures editor macros have a simple purpose: the insertion of canned [La]TeX text. Each TeX format (plain TeX, LaTeX, AMSLaTeX, or user defined formats) can be assigned a set of macros. For instance, the LaTeX macros menu has entries for the standard commands for formats (book, article, etc.), sectioning (chapter, section, etc.), environments (lists, math mode, quotations, centering, footnotes, marginal notes, etc.), and common symbols (the Greek alphabet and some math symbols). One can add keystroke equivalents to the macros but the number of keys is currently limited to only 12 keys (though Blue Sky "expects this to improve in future versions" (Textures Users Guide)).
The macro menu is nice (and is user-customizable) but it has two shortcomings. First, the macros for LaTeX are out of date and incomplete. For instance, LaTeX now has a Slides format which is not included in the Textures format menu. It would be straightforward for me to add it but I would prefer not having to. Second, the macro language is very simple. It has nine commands, three define the menu appearance, one is for comments, the remainder affect the macro itself. For example, the "bold" command inserts "{\bf }" (the LaTeX bold environment) and places the cursor just before the right bracket. A more complicated language, such as that used by the Alpha editor, would make the bold command interactive by defining a key for moving the cursor past the right bracket when the bold text was entered. In this example that hardly seems necessary, but in more complicated environments (like the letter environment which has "fields" for return address, addressee, opening, closing, signature, enclosures and cc) it would be useful. On the other hand, Alpha's macro language is more than I want to learn.
FONTS
A word about TeX fonts. Since TeX is a sophisticated system, it needs detailed font information. TeX separates metric information (width, height, depth, kerning, ligatures, etc.) from face information (the font's appearance). Early (and many current) TeX implementations had TeX Font Metric (TFM) files and Packed Pixel (PK) font files (bitmapped fonts). One needed different TFM and PK fonts for different display sizes (10 pt vs. 12 pt) and different qualities (300 dpi vs. 600 dpi). The use of outline fonts (PostScript and TrueType) reduces the need for different size and quality files (outline fonts are scalable and print at arbitrary quality). Note however that even scalable fonts have a design size. A 10 pt designed font scaled to 30 pt looks different than a 30 pt designed font at 30 pt.
Textures includes PostScript fonts (a version of Knuth's Computer Modern font) and can use TrueType fonts as well. (Note that Textures includes PostScript fonts at many different design sizes. Compare this to the Macintosh Operating system and typical PostScript font collections which provide a single design size, usually around 18 pt.) The factor limiting Textures' use of a font is the availability of metric information. PostScript fonts contain sufficient metric information (usually left unused by Mac applications) which is extricable using the Textures EdMetrics tool (which is easy to use though not self-explanatory). TrueType fonts may or may not have sufficient metric information (also extricable by EdMetrics). The Mac's Helvetica does not. Times does. (The lack of metrics information is not fatal, it just reduces output quality.) Textures can use bitmapped fonts as well if necessary. (The American Mathematical Society distributes bitmapped special symbol fonts in various sizes. Visit http://e-math.ams.org/.)
Textures' ability to conveniently use any PostScript or TrueType font is an improvement over other TeX implementations but will seem ordinary to regular Mac users.
FLASH MODE
In ordinary TeX, one repeats a typeset-preview-correct cycle. The delay between making a correction and seeing the new output can be annoying. (Though it does discourage the habit of making numerous style changes to a document while you write it. How many of you haven't restyled a title page repeatedly instead of doing actual work?)
Textures avoids this delay through its flash mode. In flash mode, the typesetting engine retypesets the document as you type. Since this is still TeX, the typesetting involves a complete run through of the document, so changes do not appear instantaneously. TeX has to get from the document's beginning to the point of the change. For users of TeX on other systems, flash mode is fantastic. For WYSIWYG fans it is slow. For example, in plain TeX, it took about 10 seconds for a change at the end of a five page document to be reflected in the typeset window. In LaTeX the same change took about 22 seconds since LaTeX does more setup calculations. (The change took 14 seconds using the suggested "canned" format trick for LaTeX.) Textures starts over at each keystroke so to see the results one must pause in typing. Blue Sky quotes speed improvements of 3.7 times for the PowerPC native version. Note that speed depends critically on the document's format (Plain TeX, precompiled LaTeX, or LaTeX---from fastest to slowest). Screen redraw depends on the PowerPC nativeness (nativity?) of ATM.
When composing a document, I turn flashmode off. It slows down typing, results in continuous disk access, and doesn't show me anything useful. When composing I am interested in what I am writing, not in how it looks. But for correcting the final copy, flash mode is wonderful. The correct and retypeset cycle is still there but is automated. Textures starts the typesetting for me and I do not have to save the document and switch to a separate TeX program as one must do with other implementations. The delay in seeing the final product is not as annoying as you may imagine. My process is to read through the typeset output until I find an error, correct that error and then look at the input text while the typesetting continues. I can scan the input text for syntax or content errors for the ten seconds until the typeset view is available. Given the benefits of TeX, the delay is worth it, particularly since the delay only occurs during the final proofing stage.
GRAPHICS
Textures imports graphics through copy/paste, PICT, and Encapsulated PostScript Files (EPSF). Macros for including EPSF in TeX documents are included. Textures can export typeset pages via copy/paste and via the Adobe Illustrator/EPSF format. (Blue Sky suggests viewing TeX output as an input to further document (graphics) processing---a reversal of the usual TeX "paradigm.")
Textures is not a WYSIWYG system. There are no drawing tools (like those in WordPerfect for example) in Textures, though one can incorporate PostScript commands (yes, the real low level commands) directly into a TeX document. (This is clearly a capability for wizards only.) One can place a given graphic from another application (in EPSF or PICT format) directly into a TeX document. One specifies the size of the box, text flow and other things directly in TeX. One commonly uses the included TeX macros to do this---such as the BoxedEPS, EPSF, or PSFIG macros. Textures includes a pictures window (similar to the Mac Scrapbook) for display of PICT images. These images must still be included in the document by use of commands similar to those for EPSF files. The pictures window merely serves to collect PICT images in one place and provides information on the images' dimensions. When previewing the typeset document, the images appear properly in the typeset document. The typeset preview window is an accurate rendition of the final output (limited by the resolution of the screen and the scale of the bitmap preview). Note that for PostScript files to appear in the typeset preview window, they must include a PICT preview.
TECHNICAL SUPPORT
I had no technical questions for myself (a credit to the manual) but I interacted somewhat with Tech Support in preparing this review. Blue Sky is a small, dedicated company. Their people are friendly. As far as I can tell, Blue Sky's programmers work hard to produce a good program and take suggestions from users. (Can the same be said of Microsoft?)
One problem: Blue Sky cannot provide support for the myriad extensions to TeX and LaTeX. Blue Sky provides a referral service to professional TeX consultants. There are also numerous references on TeX and LaTeX---both in print and on-line.
According to Tech Support, most user questions are about printing, fonts, incorporation of graphics or creating new LaTeX formats. Blue Sky's web site has answers to such frequent questions.
MISCELLANEOUS COMPLAINTS
I had a few minor complaints. The close file dialog box lacks command key equivalents (e.g. for "No, I don't want to save this file"). Document file names cannot contain spaces (this is TeX's fault, not Textures). In the typeset window, one cannot conveniently scroll from the bottom of one page to the top of the next. Switching pages keeps you at the same part of the page, top or bottom. (WordPerfect 3.0 had this same problem.) But these are cosmetic problems. The core of Textures was fast, usable and stable.
SUMMARY
Textures is a TeX system. This is either a fundamental flaw or a fundamental perfection, depending on your perspective (I choose the latter). Readers of this article will likely be interested in TeX's output quality or programmability. Its mathematical typesetting ability may also be of interest. If you are interested in Textures but fear TeX I encourage you to check out TeX for the Beginner by Wynter Snow and A Guide to LaTeX by Helmut Kopka and Patrick Daly. Both are beginner books. The TeXbook (Knuth 1986.) is technical but a lot of fun. The standard LaTeX book is LaTeX: A Document Preparation System by Leslie Lamport.
Textures' failings are largely cosmetic. The editor is too minimalist, the Macro language needs strengthening, dialog boxes could be spruced up. Its core function, typesetting TeX files, it does fast and well. Its incorporation of standard Mac features (graphics, font selection, etc.) is well done. On the whole, I am impressed.
APPENDIX A: A SAMPLE LaTeX INPUT FILE
With a typical wordprocessor like WordPerfect, one formats the pieces of a document individually. For a letter, one sets the return address at the right margin, selects a font, skips a few lines, sets the addressee in another font, then begins the body text. Though one can attach styles to paragraphs, one cannot identify pieces of text with any greater precision.
In contrast, in LaTeX, one uses the predefined Letter environment. One can modify this environment within LaTeX or design something similar in Plain TeX. The environment is a "program". It defines fields for return address, addressee, opening, closing, signature, enclosures and cc. The placement, font and style of these fields is defined in the format. One creates a plain text file using LaTeX command syntax for the letter. For instance, the full letter environment might look like this:
\documentclass[12pt]{letter}
\address{%
Joe Macuser \\
Anywhere \\
Anytime \\
Any PowerMac \\ }
\date{August 8, 1995} % Omit \date to default to today's date (\today).
\signature{Joe}
\begin{document}
\begin{letter}{%
Bill Gates \\
Microsoft Corporation \\ }
\opening{Dear Napoleon,}
% BODY OF LETTER
\closing{Sincerely,}
\encl{A copy of MacUser}
\cc{John Dvorak}
\end{letter}
\end{document}
Note: \ marks commands,
% is a comment character (remainder of line is ignored),
\\ forces a new line,
{} enclose required arguments,
[] enclose optional arguments.
BIBLIOGRAPHY
(Goossens 1994) Michael Goossens. The LaTeX Companion. Addison, Reading, Mass. 1994.
(Knuth 1979) Donald E. Knuth. "Mathematical Typography." Bulletin of the American Mathematical Society, March 1979, Vol. 1, No. 2, pp. 337-372. The Josiah William Gibbs Lecture, January 4, 1978. Reprinted in TeX and METAFONT, New Directions in Typesetting. American Mathematical Society and Digital [Equipment Corp.] Press. 1979.
(Knuth 1986) Donald E. Knuth. The TeXbook, volume A of Computers and Typesetting. Addison-Wesley, Reading, 1986. (A more recent version may be available).
(Knuth 1990) Donald E. Knuth. The Future of TeX and METAFONT. TUGboat, 11(4):489, November, 1990.
(Kopka) Helmut Kopka and Patrick Daly. A Guide to LaTeX.
(Lamport) Leslie Lamport. LaTeX: A Document Preparation System.
(Snow) Wynter Snow. TeX for the Beginner.
(Textures Users Guide) Textures Users Guide. Blue Sky Research. Written by Mark Metzler.
TeX Frequently Asked Questions.
WWW Page.
The Comprehensive TeX Archive Network (CTAN) contains gigabytes of TeX related files including documentation and the many extensions available for TeX.
Blue Sky Contact Information
Blue Sky Research is located in Portland, Oregon.
Their Web page contains detailed information on product pricing, student discounts, specifications, technical support, documentation and upgrades.
Blue Sky Sales's email address is sales@bluesky.com
or you can call 800-622-8398/503-222-9571
or you can fax 503-222-1643
or you can send real mail to
Blue Sky Research
534 SW Third Ave.
Portland, OR 97204 USA
All materials on this page and related H-Mac pages are copyright the individual authors and may not be used without permission of the individual authors.
Last Update: 31 August 95
|
2013-06-19 08:58:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707857370376587, "perplexity": 5585.376489084768}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708546926/warc/CC-MAIN-20130516124906-00020-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/202988/is-the-chebyshev-distance-convex
|
# Is the Chebyshev distance convex?
Consider the Chebyshev distance in two dimensions: $$C[x,y] := \max\left(\text{abs}(x-x_0),\text{abs}(y-y_0)\right)$$
Is $C[x,y]$ a convex function of $(x,y)$? I think $\frac{\partial C[x,y]}{\partial x}$ is not smooth, so I don't think we can use the condition $f''>0$ to prove the convexity of the Chebyshev distance.
If $f$ and $g$ are two convex functions, so is $h:=\max\{f,g\}$. Indeed, if $u,v$ are in the domain of definition and $t\in [0,1]$ then $$h(tu+(1-t)v)=\max\{f(tu+(1-t)v),g(tu+(1-t)v)\}\\\leq \max\{tf(u)+(1-t)f(v),tg(u)+(1-t)g(v)\}.$$ Since $$tf(u)+(1-t)f(v)\leq t\max\{f(u),g(u)\}+(1-t)\max\{f(v),g(v)\},$$ and the same is true replacing $f$ by $g$ in the RHS, we get the result.
To conclude this problem, use the fact that the map $x\mapsto |x-x_0|$ is convex by the triangle inequality.
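A quick numerical spot-check of this convexity (a sketch added here, not part of the answer):

```python
import random

x0, y0 = 1.0, -2.0
C = lambda x, y: max(abs(x - x0), abs(y - y0))  # Chebyshev distance to (x0, y0)

for _ in range(10_000):
    u = (random.uniform(-9, 9), random.uniform(-9, 9))
    v = (random.uniform(-9, 9), random.uniform(-9, 9))
    t = random.random()
    w = (t * u[0] + (1 - t) * v[0], t * u[1] + (1 - t) * v[1])
    # Convexity: C(t*u + (1-t)*v) <= t*C(u) + (1-t)*C(v), up to rounding error.
    assert C(*w) <= t * C(*u) + (1 - t) * C(*v) + 1e-9
print("convexity inequality held on all samples")
```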
|
2016-02-10 05:11:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98977130651474, "perplexity": 67.12237008334485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00001-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/11784/fun-applications-of-representations-of-finite-groups/
|
# Fun applications of representations of finite groups
Are there some fun applications of the theory of representations of finite groups? I would like to have some examples that could be explained to a student who knows what a finite group is but does not know much about what a representation is (say, knows the definition). The standard application that is usually mentioned is Burnside's theorem http://en.wikipedia.org/wiki/Burnside_theorem. The application may be of any kind, not necessarily in math. But math applications are of course very welcome too! It will also be very helpful if you describe the application a bit.
• This should be big-list and comunity-wikified, I guess... – Mariano Suárez-Álvarez Jan 14 '10 at 22:17
• Big-list is a classification of a "type of question", not the number of responses it receives. – Harry Gindi Jan 14 '10 at 22:54
• The Erlangen program might be relevant. – Anweshi Jan 15 '10 at 0:17
• For a bit more see the cousin of this thread in Math.SE. – Jyrki Lahtonen Jul 18 '18 at 20:06
• In chemistry, you can use representation theory to decompose vibrations of a fairly symmetric molecule into modes. An example: consider a two-atom molecule in one dimension, whose atoms are vibrating. The motion can be decomposed into same-direction and opposite-direction vibrations. When you work it out, this corresponds to the projection of a representation $V$ of $\mathbf{Z}/2$ (the tensor product of $\mathbf{C}^2$ coming from the action on the atoms, and $\mathbf{C}$ coming from the action on the ambient space) into its irreducible components. – Meow Jun 12 '19 at 12:16
An example from Kirillov's book on representation theory: write numbers 1,2,3,4,5,6 on the faces of a cube, and keep replacing (simultaneously) each number by the average of its neighbours. Describe (approximately) the numbers on the faces after many iterations.
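A small Python simulation of this iteration (a sketch of mine, not from the answer; which faces are opposite is an arbitrary choice here, and with the standard die pairing, where opposite faces sum to 7, every face hits the mean 3.5 after a single step):

```python
# Faces indexed 0..5; assume the opposite pairs are (0,1), (2,3), (4,5).
faces = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
opposite = [1, 0, 3, 2, 5, 4]

for step in range(8):
    total = sum(faces)
    # Each face's 4 neighbours are all faces except itself and its opposite.
    faces = [(total - faces[i] - faces[opposite[i]]) / 4 for i in range(6)]
    print(step + 1, [round(f, 3) for f in faces])
# The deviation from the mean 3.5 halves and flips sign at each step:
# the relevant eigenvalue of the averaging operator is -1/2.
```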
Another example I like to use in the beginning of a group reps course: write down the multiplication table in a finite group, and think of it as of a square matrix whose entries are formal variables corresponding to elements of the group. Then the determinant of this matrix is a polynomial in these variables. Describe its decomposition into irreducibles. This question, which Frobenius was asked by Dedekind, lead him to invention of group characters.
A function in two variables can be uniquely decomposed as a sum of a symmetric and an antisymmetric (skew-symmetric) function. What happens for three and more variables - what types of symmetries exist there?
• Can you provide reference information for Kirillov's book? – alex Jan 16 '10 at 1:32
• @alex: Elements of the theory of representations, Springer, 1976 – Vladimir Dotsenko Jan 18 '10 at 23:42
As Anweshi noted a moment ago, a classic answer is the use of character tables by chemists (as explained in this book for instance). The symmetry group of a molecule controls its vibrational spectrum, as observed by IR spectrosocopy. When Kroto et al. discovered $C_{60}$, they used this method to demonstrate its icosahedral symmetry.
• Also of interest is the use of representation theory for finite groups in understanding electronic structure: en.wikipedia.org/wiki/Term_symbol – Steve Huntsman Jan 14 '10 at 22:58
• Ah thanks a lot for the reference and explanations! You have a soft corner for Chemistry? – Anweshi Jan 14 '10 at 23:45
• Chemists really have courses on group representation theory, I had a friend who was a chemist who taught one. That said, if you have to know too much chemistry it might not be so easy to incorporate in a mathematics class! – Kevin McGerty Jan 15 '10 at 1:16
• The first part of Serre's book on representations of finite groups is written explicitly for chemists. When I saw a copy the other day, I was ashamed to see that a chemist who followed his course knew more about representations of finite groups than I do! – Joel Fine Feb 2 '10 at 12:38
I'm surprised no one's mentioned this: the fact that a Frobenius kernel is a normal subgroup. It's not clear a priori that the set of elements not fixing any points should be a subgroup at all. I'm told there is no completely group theoretic proof (with no character theory) of this fact yet.
I love the proof of the theorem of Hurwitz that a normed division algebra has to have dimension 1, 2 or 4 using the representation theory of elementary 2-groups.
Later: the original reference for the argument is [Eckmann, Beno. Gruppentheoretischer Beweis des Satzes von Hurwitz-Radon über die Komposition quadratischer Formen. (German) Comment. Math. Helv. 15, (1943). 358--366. MR0009936 (5,225e)] I don't know of a more recent exposition, except some notes of a short course by Esther Galina a few years ago (which should be on her webpage---in Spanish, though)
Serre mentions in his book that the first part and examples are all very relevant to quantum chemists. If you can dig it up, it might turn out to be very exciting. Perhaps it is for understanding crystal structure, etc.
• Anweshi, thanks! I have the book of Serre "Linear Representations of Finite Groups" in my hands. Graduate Texts in Math. Serre mentions "quatnum Chemistry" in one phrase, but that is it... Is this the book? – Dmitri Panov Jan 14 '10 at 23:05
• That is the book. The first part is written for Serre's wife, Josiane Serre, for teaching her students in quantum chemistry. Somehow it is all very relevant there, though I have no idea. Maybe someone else can explain it better, with references to Chemistry books. – Anweshi Jan 14 '10 at 23:42
• I never understood Serre's phrase "quantum chemistry", let alone "quantum chemists". What other kinds are there - "deluded chemists"? – Tim Perutz Jan 14 '10 at 23:44
• @Tim- Just because a mathematician does representation theory doesn't mean they think set theory is untrue, or that it isn't an underlying basis for their work. It's just that most of the set theory they see is boring, and other interesting problems dominate. I would guess the chemistry thing is analogous. – Ben Webster Jan 15 '10 at 16:37
• Quantum chemistry is the application of quantum mechanics to chemical problems, i.e. solving the Schrodinger equations for collections of atoms and molecules. It can be considered a subset of physical chemistry, which aims to reconcile what physics tells us about the universe with what chemists see molecules do. The boundary dividing it from chemical physics and molecular physics is not always meaningful. The examples of Chapter 5 are what us chemists call point groups. They are important to understand the spectra of molecules with symmetry, such as methane (Td), C60 (Ih), and benzene (D6h). – Jiahao Chen Aug 19 '10 at 5:13
There's lots of stuff relating the representation theory of the symmetric group to sorting and shuffling. Persi Diaconis has worked on the latter to great effect.
Maybe this is not an "application", but I certainly found it fun when learning about representations of the symmetric group. Combinatorially, there is a clear correspondence between transpose-invariant partitions (i.e. partition diagrams that are symmetric along the main diagonal) and partitions involving distinct odd integers (e.g. (3,2,1) corresponds to (5,1)).
Here is a sketch of the representation theory behind the result: the Specht modules from transpose-invariant partitions are precisely the irreducible representations of S_n that decompose into 2 irreducible representations when restricted to A_n. We view irreducible characters and conjugacy-class-indicator-functions as two bases on the vector space of A_n class functions, and deduce that the number of these reducible-on-restriction representations is equal to the number of S_n conjugacy classes which split as two A_n conjugacy classes. An S_n conjugacy class splits into two A_n conjugacy classes precisely when its elements commute with no odd permutation, which is to say all factors of the cycle decomposition have distinct odd length.
I've seen other instances where representation theory "explains" a combinatorial coincidence (eg q-dimension formula of various Lie algebras), so I think of this example as "typical" of the connection between representation theory and combinatorics.
EDIT: The background comes from pages 18-25 of Anton Evseev's lecture notes here, and this particular statement is exercise 1 of sheet 3. I have what I believe is a complete solution to the exercise, but I'm having a little trouble pdf-ing it, hopefully it should be on my homepage (under "writings") by Tuesday (along with my own exposition of the background, which may expand on those official notes a little).
• I once told a friend of mine who was a logician about the tensor product in the representations of the symmetric group. She exclaimed "You mean I can multiply partitions! This is amazing!" – Ben Webster Jan 16 '10 at 15:57
• Interest is present ;) – darij grinberg Jan 21 '10 at 23:01
This is maybe stretching it a little bit, but Tim Gowers' work on quasi-random groups describes and references some extremal combinatorial properties of graphs constructed from the groups $PSL_2(\mathbb{F}_q)$ which ultimately rely on the fact that they have no non-trivial low-dimensional irreducible representations.
Let $G$ be a finite group with $n$ elements and $k$ conjugacy classes. Denote by $m=|G:[G,G]|$ the index of the commutator subgroup. Then $n+3m\geq 4k$.
It is less impressive than many other answers, but I find this inequality particularly nice, especially having in mind that there are some nontrivial examples of equality, all of which are explicitly listed. I do not know a proof without using representations.
• Can you give a reference? – Tom De Medts Mar 25 '15 at 12:35
• First, a proof: there are $k$ irreducible representations of $G$; $m$ of them have dimension 1, and the others have dimension at least 2. The sum of the squares of the dimensions equals $n$, hence $n\geq m+4(k-m)$, as desired. Second, the classification of the equality case, i.e. groups with at most 2-dimensional irreducible representations, is given here: Amitsur, S.A. Groups with representations of bounded degree. II. (English) [J] Ill. J. Math. 5, 198-205 (1961). [ISSN 0019-2082] emis.de/cgi-bin/Zarchive?an=0100.25704 – Fedor Petrov Mar 25 '15 at 12:56
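A brute-force check of the inequality for $S_4$ (a sketch of mine, not from the comment; elements are stored as tuples of images):

```python
from itertools import permutations

N = 4
G = list(permutations(range(N)))                     # S_4, so n = 24
mul = lambda a, b: tuple(a[b[i]] for i in range(N))  # (a*b)(i) = a(b(i))
inv = lambda a: tuple(a.index(i) for i in range(N))

# k = number of conjugacy classes.
k = len({frozenset(mul(g, mul(x, inv(g))) for g in G) for x in G})

# [G, G]: close the set of commutators under multiplication.
H = {mul(a, mul(b, mul(inv(a), inv(b)))) for a in G for b in G}
while True:
    new = {mul(a, b) for a in H for b in H} - H
    if not new:
        break
    H |= new

n, m = len(G), len(G) // len(H)
print(n, k, m, n + 3 * m >= 4 * k)  # 24 5 2 True
```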
One very basic and fun application of representations of finite groups (or really, actions of finite groups) would be the study of various puzzles, like the Rubik Cube. David Singmaster has a nice little book titled "Handbook of Cubik Math" which could potentially be used for material in an undergraduate course.
Here's a blog post I wrote, based on Georgi's book. The example is solving for the normal modes of oscillation of a system of identical masses and identical springs. More generally, you can use the automorphism group of the graph they form to do it for more complicated configurations.
Check out this book: Group theory and physics
There are some fun problems in the beginning of these notes by Vera Serganova.
At the Joint Meetings, I heard a fun and very interesting talk by Michael Orrison on applications of representation theory in voting theory. It was really neat! You should be able to find out more here.
The decomposition of the curvature tensor of a (pseudo) riemmanian manifold into scalar+ traceless Ricci + Weyl (the latter into SD+ASD in dim=4) is an application of the representation theory of the orthogonal group. There are many more examples in differential geometry (eg the decomposition of the intrinsic torsion tensor of an almost hermitian manifold into 4 irreducibles etc).
Now you may object because the orthogonal group (say over R) is not a finite group, but Weyl showed that the theory of the tensor representations of the classical groups is intimately related to the representation theory of the symmetric group.
• Not to mention that any reasonable person knows that finite should just mean compact. – Ben Webster Jan 16 '10 at 15:56
Bosons and fermions. Quantum mechanics texts, such as Dirac's classic, explain that in a system of indistinguishable particles in space, exchange of particles is modelled by a change in phase of the state vector. These phases form a 1-dimensional representation of the symmetric group. Since all transpositions are conjugate, there are just two possibilities: bosons (trival rep) and fermions (sign rep), and no other(on)s.
• I think this only holds in R^n for n>2, since you're representing the fundamental group of the configuration space of points in R^n. For n=2, you can get anyons. – S. Carnahan Jan 15 '10 at 8:16
• Little gets past MO's collective eyes!:) Yes, my phrase "in space" was a little more pointed than it appeared... – Tim Perutz Jan 15 '10 at 14:41
I would like to add the McKay correspondence. The symmetry groups of regular solids are easy groups to introduce. Then you want the double covers. It amazes me you can construct the irreducibles and characters so simply from the two-dimensional.
Another result I like is Molien's theorem. The action on the polynomial ring seems complicated at first sight. However this is a straightforward way to calculate the dimensions of the spaces of invariant polynomials.
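For reference, and as an addition of mine rather than part of the answer: if a finite group $G$ acts on a complex vector space $V$ through a representation $\rho$, Molien's series is $$M(t)=\frac{1}{|G|}\sum_{g\in G}\frac{1}{\det(1-t\,\rho(g))},$$ and the coefficient of $t^n$ in $M(t)$ is the dimension of the space of degree-$n$ $G$-invariant polynomials on $V$.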
Ising gauge theory on a finite lattice is basically determined by a coupling constant and a gauge-induced unitary representation from $\mathbb{Z}^M_2$ to $U(\mathcal{H}_2^{\otimes L})$.
Here $\mathcal{H}_2$ is the Hilbert space of a single spin variable, $L$ is the number of links in the lattice, and $M$ is the number of them that comprise a maximal tree (for a periodic $d$-dimensional lattice with period $N$ in each dimension, so that there are $N^d$ sites, $M = N^d - 1$ and $L = dN^d$, so in the infinite-volume limit, $L \sim d \cdot M$).
• Steve, thanks a lot! Though it is a bit dense for me to understand :)... Could you please expand it just a bit, or give a refference? For example, are you considering a periodic lattice in R^2? – Dmitri Panov Jan 14 '10 at 22:43
• I have some notes on this from four or five years ago that elaborate on work from the seventies (e.g. Fradkin, E. and Susskind, L. “Order and disorder in gauge systems and magnets”. Phys. Rev. D 17, 2637 (1978)). I will email you a PDF of these notes if you want. – Steve Huntsman Jan 14 '10 at 22:48
• Steve, please email me, this will be nice! dpanov@imperial.ac.uk – Dmitri Panov Jan 14 '10 at 22:57
• Done. Hope you find them worthwhile. Best/SH – Steve Huntsman Jan 14 '10 at 23:03
Wallpaper groups and the crystallographic restriction theorem for the plane are a wonderful application/example of finite group theory and group actions.
This is a really good relevant clip: http://www.youtube.com/watch?v=7zLi47yYlcc#t=7m43s (queued up at the relevant point, whence came the #t=7m34s).
Also, Bronowski was a mathematician.
You might want to look at Section 3.1 of "Group Theory and Physics" by Shlomo Sternberg, Cambridge Univ Press, 1994. This explains, through a simple example, how (in Sternberg's words) "molecular spectroscopy is an application of Schur's lemma". The argument is elementary in nature. The last chapter of the book by James & Liebeck (Representations and Characters of Groups 2e, Cambridge Univ Press, 2001) is a longer exposition of the same idea. I notice another post here about work by Diaconis - Diaconis has a book called "Group Representations in Probability and Statistics" which is available for free download. See the link at
http://www.math.columbia.edu/~khovanov/resources/
This page has links to dozens of useful articles. Also, there is the book "Unitary Group Representations in Physics, Probability and Number theory" by George W Mackey (Benjamin/Cummings Publ Co, 1978). This is more advanced than the others though. For applications to quantum chemistry there is (amongst many) "Chemical Applications of Group Theory" by F Albert Cotton, published by John Wiley. If you want to see how Section 2.7 of Serre's book is actually used in practice by chemists, see Chapter 6 in Cotton's book.
• Apologies for the intrusive question. Just to satisfy curiosity -- Are you the author of "“An arithmetic-geometric method in the study of the subgroups of the modular group”? – Anweshi Jan 24 '10 at 11:50
• No, I am not. It is a fairly common name... :-) – Ravi Kulkarni Jan 24 '10 at 14:56
• @Anweshi Ravi Kulkarni taught at Queens College of the City University of New York, as well as the Graduate Center of CUNY, for nearly 40 years after getting his PhD at Harvard under Shlomo Sternberg. He's an expert in differential and classical geometry, as well as complex analysis. I had the privilege of being his student on more than one occasion. Sadly, my role as his student did not match his as a teacher the final time. I'm happy to report for future mathematics majors that he's back there after several years in his native India. I'm hoping he'll remain there until his official retirement. – The Mathemagician Oct 5 '10 at 22:08
• @Ravi First, welcome back, I didn't get a chance to get caught up with you at length yet. Second, to your excellent answer and recommendations I'll add only the following: "Elements of Molecular Symmetry" by Yngve Ohm. This is a graduate-level presentation of group representation theory for chemists that's not only much more readable than Cotton, but much more mathematical: it develops a great deal of formal group theory along the way. Both Sternberg and Serre should be in every mathematician's, physicist's, and chemist's library, in my opinion. – The Mathemagician Oct 5 '10 at 22:14
https://www.atqed.com/matplotlib-graphs
# Python Matplotlib multiple plots - How to draw graphs in the same plane
You can draw two graphs in the same plot at once with Matplotlib in Python. Before plotting, import NumPy and Matplotlib.
import numpy
from matplotlib import pyplot

# sample x from -5 to 5 in steps of 0.1
x = numpy.arange(-5, 5, 0.1)
y = x * x  # quadratic curve
z = x + 2  # straight line

# a single plot call draws both (x, y) and (x, z)
pyplot.plot(x, y, x, z)
pyplot.savefig('plot.png')
x ranges from -5 to 5 and the step is 0.1. y and z represent a quadratic curve and a line, respectively.

The pyplot module in Matplotlib has a plot method that plots both (x, y) and (x, z). To draw the 2 graphs, the arguments of plot are x, y, x, z, not x, y, z.
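Equivalently (a small variation, using only standard pyplot calls), you can issue one plot call per curve and add a legend to tell the graphs apart:

# one plot call per curve; labels feed the legend
pyplot.plot(x, y, label='y = x * x')
pyplot.plot(x, z, label='z = x + 2')
pyplot.legend()
pyplot.savefig('plot_labeled.png')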
https://www.groundai.com/project/optimal-and-approximate-q-value-functions-for-decentralized-pomdps/
# Optimal and Approximate Q-value Functions for Decentralized POMDPs
Frans A. Oliehoek (f.a.oliehoek@uva.nl), Intelligent Systems Lab Amsterdam, University of Amsterdam, Amsterdam, The Netherlands

Matthijs T.J. Spaan (mtjspaan@isr.ist.utl.pt), Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal

Nikos Vlassis (vlassis@dpem.tuc.gr), Department of Production Engineering and Management, Technical University of Crete, Chania, Greece
###### Abstract
Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent frameworks like MDPs and POMDPs, planning can be carried out by resorting to Q-value functions: an optimal Q-value function Q* is computed in a recursive manner by dynamic programming, and then an optimal policy is extracted from Q*. In this paper we study whether similar Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs), and how policies can be extracted from such value functions. We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. This computation, however, is infeasible for all but the smallest problems. Therefore, we analyze various approximate Q-value functions that allow for efficient computation. We describe how they relate, and we prove that they all provide an upper bound to the optimal Q-value function Q*. Finally, unifying some previous approaches for solving Dec-POMDPs, we describe a family of algorithms for extracting policies from such Q-value functions, and perform an experimental evaluation on existing test problems, including a new firefighting benchmark problem.
Journal of Artificial Intelligence Research 32 (2008) 289–353. Submitted 09/07; published 05/08.
## 1 Introduction
One of the main goals in artificial intelligence (AI) is the development of intelligent agents, which perceive their environment through sensors and influence the environment through their actuators. In this setting, an essential problem is how an agent should decide which action to perform in a certain situation. In this work, we focus on planning: constructing a plan that specifies which action to take in each situation the agent might encounter over time. In particular, we will focus on planning in a cooperative multiagent system (MAS): an environment in which multiple agents coexist and interact in order to perform a joint task. We will adopt a decision-theoretic approach, which allows us to tackle uncertainty in sensing and acting in a principled way.
Decision-theoretic planning has roots in control theory and in operations research. In control theory, one or more controllers control a stochastic system with a specific output as goal. Operations research considers tasks related to scheduling, logistics and work flow and tries to optimize the systems concerned. Decision-theoretic planning problems can be formalized as Markov decision processes (MDPs), which have been frequently employed in both control theory and operations research, and which have also been adopted by AI for planning in stochastic environments. In all these fields the goal is to find a (conditional) plan, or policy, that is optimal with respect to the desired behavior. Traditionally, the main focus has been on systems with only one agent or controller, but in the last decade interest in systems with multiple agents or decentralized control has grown.
A different, but also related field is that of game theory. Game theory considers agents, called players, interacting in a dynamic, potentially stochastic process, the game. The goal here is to find optimal strategies for the agents, that specify how they should play and therefore correspond to policies. In contrast to decision-theoretic planning, game theory has always considered multiple agents, and as a consequence several ideas and concepts from game theory are now being applied in decentralized decision-theoretic planning. In this work we apply game-theoretic models to decision-theoretic planning for multiple agents.
### 1.1 Decision-Theoretic Planning
In the last decades, the Markov decision process (MDP) framework has gained in popularity in the AI community as a model for planning under uncertainty (Boutilier, Dean, & Hanks, 1999; Guestrin, Koller, Parr, & Venkataraman, 2003). MDPs can be used to formalize a discrete time planning task of a single agent in a stochastically changing environment, on the condition that the agent can observe the state of the environment. Every time step the state changes stochastically, but the agent chooses an action that selects a particular transition function. Taking an action a from a particular state s at time step t induces a probability distribution over states at time step t+1.
The agent’s objective can be formulated in several ways. The first type of objective of an agent is reaching a specific goal state, for example in a maze in which the agent’s goal is to reach the exit. A different formulation is given by associating a certain cost with the execution of a particular action in a particular state, in which case the goal will be to minimize the expected total cost. Alternatively, one can associate rewards with actions performed in a certain state, the goal being to maximize the total reward.
When the agent knows the probabilities of the state transitions, i.e., when it knows the model, it can contemplate the expected transitions over time and construct a plan that is most likely to reach a specific goal state, minimizes the expected costs or maximizes the expected reward. This stands in some contrast to reinforcement learning (RL) (Sutton & Barto, 1998), where the agent does not have a model of the environment, but has to learn good behavior by repeatedly interacting with the environment. Reinforcement learning can be seen as the combined task of learning the model of the environment and planning, although in practice it is often not necessary to explicitly recover the environment model. In this article we focus only on planning, but consider two factors that complicate computing successful plans: the inability of the agent to observe the state of the environment as well as the presence of multiple agents.
In the real world an agent might not be able to determine what the state of the environment exactly is, because the agent's sensors are noisy and/or limited. When sensors are noisy, an agent can receive faulty or inaccurate observations with some probability. When sensors are limited the agent is unable to observe the differences between states that cannot be detected by the sensor, e.g., the presence or absence of an object outside a laser range-finder's field of view. When the same sensor reading might require different action choices, this phenomenon is referred to as perceptual aliasing. In order to deal with the introduced sensor uncertainty, a partially observable Markov decision process (POMDP) extends the MDP model by incorporating observations and their probability of occurrence conditional on the state of the environment (Kaelbling, Littman, & Cassandra, 1998).
The other complicating factor we consider is the presence of multiple agents. Instead of planning for a single agent we now plan for a team of cooperative agents. We assume that communication within the team is not possible. (As it turns out, the framework we consider can also model communication with a particular cost that is subject to minimization, see Pynadath & Tambe, 2002, and Goldman & Zilberstein, 2004; the non-communicative setting can be interpreted as the special case with infinite cost.) A major problem in this setting is how the agents will have to coordinate their actions. Especially, as the agents are not assumed to observe the state (each agent only knows its own observations received and actions taken), there is no common signal they can condition their actions on. Note that this problem is in addition to the problem of partial observability, and not a substitution of it; even if the agents could freely and instantaneously communicate their individual observations, the joint observations would not disambiguate the true state.
One option is to consider each agent separately, and have each such agent maintain an explicit model of the other agents. This is the approach chosen in the interactive POMDP (I-POMDP) framework (Gmytrasiewicz & Doshi, 2005). A problem in this approach, however, is that the other agents also model the considered agent, leading to an infinite recursion of beliefs regarding the behavior of agents. We will adopt the decentralized partially observable Markov decision process (Dec-POMDP) model for this class of problems (Bernstein, Givan, Immerman, & Zilberstein, 2002). A Dec-POMDP is a generalization to multiple agents of a POMDP and can be used to model a team of cooperative agents that are situated in a stochastic, partially observable environment.
The single-agent MDP setting has received much attention, and many results are known. In particular it is known that an optimal plan, or policy, can be extracted from the optimal action-value, or Q-value, function Q*, and that the latter can be calculated efficiently. For POMDPs, similar results are available, although finding an optimal solution is harder (PSPACE-complete for finite-horizon problems, Papadimitriou & Tsitsiklis, 1987).

On the other hand, for Dec-POMDPs relatively little is known except that they are provably intractable (NEXP-complete, Bernstein et al., 2002). In particular, an outstanding issue is whether Q-value functions can be defined for Dec-POMDPs just as in (PO)MDPs, and whether policies can be extracted from such Q-value functions. Currently most algorithms for planning in Dec-POMDPs are based on some version of policy search (Nair, Tambe, Yokoo, Pynadath, & Marsella, 2003b; Hansen, Bernstein, & Zilberstein, 2004; Szer, Charpillet, & Zilberstein, 2005; Varakantham, Marecki, Yabu, Tambe, & Yokoo, 2007), and a proper theory of Q-value functions for Dec-POMDPs is still lacking. Given the wide range of applications of value functions in single-agent decision-theoretic planning, we expect that such a theory for Dec-POMDPs can have great benefits, both in terms of providing insight as well as guiding the design of solution algorithms.
### 1.2 Contributions
In this paper we develop theory for Q-value functions in Dec-POMDPs, showing that an optimal Q-value function Q* can be defined for a Dec-POMDP. We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. We also show that given Q*, an optimal policy can be computed by forward-sweep policy computation, solving a sequence of Bayesian games forward through time (i.e., from the first to the last time step), thereby extending the solution technique of Emery-Montemerlo et al. (2004) to the exact setting.

Computation of Q* is infeasible for all but the smallest problems. Therefore, we analyze three different approximate Q-value functions, Q_MDP, Q_POMDP and Q_BG, that can be more efficiently computed and which constitute upper bounds to Q*. We also describe a generalized form that includes Q_MDP, Q_POMDP and Q_BG as special cases. This is used to prove a hierarchy of upper bounds: Q* ≤ Q_BG ≤ Q_POMDP ≤ Q_MDP.

Next, we show how these approximate Q-value functions can be used to compute optimal or sub-optimal policies. We describe a generic policy search algorithm, which we dub Generalized MAA* (GMAA*) as it is a generalization of the MAA* algorithm by Szer et al. (2005), that can be used for extracting a policy from an approximate Q-value function. By varying the implementation of a sub-routine, this algorithm unifies MAA* and forward-sweep policy computation, and thus the approach of Emery-Montemerlo et al. (2004).

Finally, in an experimental evaluation we examine the differences between Q_MDP, Q_POMDP and Q_BG for several problems. We also experimentally verify the potential benefit of tighter heuristics, by testing different settings of GMAA* on some well known test problems and on a new benchmark problem involving firefighting agents.

This article is based on previous work by Oliehoek and Vlassis (2007), abbreviated OV here, containing several new contributions: (1) Contrary to the OV work, the current work includes a section on the sequentially rational description of Q* and suggests a way to compute it in practice (OV only provided a normative description of Q*). (2) The current work provides a formal proof of the hierarchy of upper bounds to Q* (which was only qualitatively argued in the OV paper). (3) The current article additionally contains a proof that the solutions for the Bayesian games with identical payoffs given by equation (4.2) constitute Pareto optimal Nash equilibria of the game (which was not proven in the OV paper). (4) This article contains a more extensive experimental evaluation of the derived bounds of Q*, and introduces a new benchmark problem (firefighting). (5) Finally, the current article provides a more complete introduction to Dec-POMDPs and existing solution methods, as well as Bayesian games, hence it can serve as a self-contained introduction to Dec-POMDPs.
### 1.3 Applications
Although the field of multiagent systems in stochastic, partially observable environments seems quite specialized and thus narrow, the application area is actually very broad. The real world is practically always partially observable due to sensor noise and perceptual aliasing. Also, in most of these domains communication is not free, but consumes resources and thus has a particular cost. Therefore models such as Dec-POMDPs, which do consider partially observable environments, are relevant for essentially all teams of embodied agents.
Example applications of this type are given by Emery-Montemerlo (2005), who considered multi-robot navigation in which a team of agents with noisy sensors has to act to find/capture a goal. Becker et al. (2004b) use a multi-robot space exploration example. Here, the agents are Mars rovers and have to decide on how to proceed with their mission: whether to collect particular samples at specific sites or not. The rewards of particular samples can be sub- or super-additive, making this task non-trivial. An overview of application areas in cooperative robotics is presented by Arai et al. (2002), among which is robotic soccer, as applied in RoboCup (Kitano et al., 1997). Another application that is investigated within this project is crisis management: RoboCup Rescue (Kitano et al., 1999) models a situation where rescue teams have to perform a search and rescue task in a crisis situation. This task has also been modeled as a partially observable system (Nair, Tambe, & Marsella, 2002, 2003, 2003a; Oliehoek & Visser, 2006; Paquet, Tobin, & Chaib-draa, 2005).

There are also many other types of applications. Nair et al. (2005) and Lesser et al. (2003) give applications for distributed sensor networks (typically used for surveillance). An example of load balancing among queues is presented by Cogill et al. (2004). Here agents represent queues and can only observe queue sizes of themselves and immediate neighbors. They have to decide whether to accept new jobs or pass them to another queue. Another frequently considered application domain is communication networks. Peshkin (2001) treated a packet routing application in which agents are routers and have to minimize the average transfer time of packets. They are connected to immediate neighbors and have to decide at each time step to which neighbor to send each packet. Other approaches to communication networks using decentralized, stochastic, partially observable systems are given by Ooi and Wornell (1996), Tao et al. (2001) and Altman (2002).
### 1.4 Overview of Article
The rest of this article is organized as follows. In Section 2 we will first formally introduce the Dec-POMDP model and provide background on its components. Some existing solution methods are treated in Section 3. Then, in Section 4 we show how a Dec-POMDP can be modeled as a series of Bayesian games and how this constitutes a theory of Q-value functions for BGs. We also treat two forms of the optimal Q-value function, Q*, here. Approximate Q-value functions are described in Section 5 and one of their applications is discussed in Section 6. Section 7 presents the results of the experimental evaluation. Finally, Section 8 concludes.
## 2 Decentralized POMDPs
In this section we define the Dec-POMDP model and discuss some of its properties. Intuitively, a Dec-POMDP models a number of agents that inhabit a particular environment, which is considered at discrete time steps, also referred to as stages (Boutilier et al., 1999) or (decision) epochs (Puterman, 1994). The number of time steps the agents will interact with their environment is called the horizon of the decision problem, and will be denoted by h. In this paper the horizon is assumed to be finite. At each stage every agent takes an action and the combination of these actions influences the environment, causing a state transition. At the next time step, each agent first receives an observation of the environment, after which it has to take an action again. The probabilities of state transitions and observations are specified by the Dec-POMDP model, as are rewards received for particular actions in particular states. The transition and observation probabilities specify the dynamics of the environment, while the rewards specify what behavior is desirable. Hence, the reward model defines the agents' goal or task: the agents have to come up with a plan that maximizes the expected long term reward signal. In this work we assume that planning takes place off-line, after which the computed plans are distributed to the agents, who then merely execute the plans on-line. That is, computation of the plan is centralized, while execution is decentralized. In the centralized planning phase, the entire model as detailed below is available. During execution each agent only knows the joint policy as found by the planning phase and its individual history of actions and observations.
### 2.1 Formal Model
In this section we more formally treat the basic components of a Dec-POMDP. We start by giving a mathematical definition of these components.
###### Definition 2.1.
A decentralized partially observable Markov decision process (Dec-POMDP) is defined as a tuple ⟨n, S, A, T, R, O, O, h, b^0⟩, where:

• n is the number of agents.
• S is a finite set of states.
• A is the set of joint actions.
• T is the transition function.
• R is the immediate reward function.
• O is the set of joint observations.
• O is the observation function.
• h is the horizon of the problem.
• b^0 ∈ P(S) is the initial state distribution at time t = 0 (here P(S) denotes the set of probability distributions over S).
The Dec-POMDP model extends single-agent (PO)MDP models by considering joint actions and observations. In particular, we define A = A_1 × ⋯ × A_n as the set of joint actions. Here, A_i is the set of actions available to agent i. Every time step, one joint action a = ⟨a_1, …, a_n⟩ is taken. In a Dec-POMDP, agents only know their own individual action; they do not observe each other's actions. We will assume that any action a_i ∈ A_i can be selected at any time. So the set A_i does not depend on the stage or state of the environment. In general, we will denote the stage using superscripts, so a^t denotes the joint action taken at stage t, and a_i^t is the individual action of agent i taken at stage t. Also, we write $a_{\neq i}$ for a profile of actions for all agents but i.

Similarly to the set of joint actions, O = O_1 × ⋯ × O_n is the set of joint observations, where O_i is a set of observations available to agent i. Every time step the environment emits one joint observation o^t = ⟨o_1^t, …, o_n^t⟩, from which each agent i only observes its own component o_i^t, as illustrated by Figure 1. Notation with respect to time and indices for observations is analogous to the notation for actions. In this paper, we will assume that the action and observation sets are finite. Infinite action and observation sets are very difficult to deal with even in the single-agent case, and to the authors' knowledge no research has been performed on this topic in the partially observable, multiagent case.
Actions and observations are the interface between the agents and their environment. The Dec-POMDP framework describes this environment by its states and transitions. This means that rather than considering a complex, typically domain-dependent model of the environment that explains how this environment works, a descriptive stance is taken: A Dec-POMDP specifies an environment model simply as the set of states the environment can be in, together with the probabilities of state transitions that are dependent on executed joint actions. In particular, the transition from some state to a next state depends stochastically on the past states and actions. This probabilistic dependence models outcome uncertainty: the fact that the outcome of an action cannot be predicted with full certainty.
An important characteristic of Dec-POMDPs is that the states possess the Markov property. That is, the probability of a particular next state depends on the current state and joint action, but not on the whole history:

$$P(s^{t+1} \mid s^t, a^t, s^{t-1}, a^{t-1}, \dots, s^0, a^0) = P(s^{t+1} \mid s^t, a^t). \quad (2.1)$$

Also, we will assume that the transition probabilities are stationary, meaning that they are independent of the stage t.
In a way similar to how the transition model describes the stochastic influence of actions on the environment, the observation model describes how the state of the environment is perceived by the agents. Formally, O is the observation function, a mapping from joint actions and successor states to probability distributions over joint observations, O : A × S → P(O). I.e., it specifies

$$P(o^t \mid a^{t-1}, s^t). \quad (2.2)$$

This implies that the observation model also satisfies the Markov property (there is no dependence on the history). Also the observation model is assumed stationary: there is no dependence on the stage t.
The literature has identified different categories of observability (Pynadath & Tambe, 2002; Goldman & Zilberstein, 2004). When the observation function is such that the individual observation for all the agents will always uniquely identify the true state, the problem is considered fully or individually observable. In such a case, a Dec-POMDP effectively reduces to a multiagent MDP (MMDP) as described by Boutilier (1996). The other extreme is when the problem is non-observable, meaning that none of the agents observes any useful information. This is modeled by the fact that agents always receive a fixed null-observation. Under non-observability agents can only employ an open-loop plan. Between these two extremes there are partially observable problems. One more special case has been identified, namely the case where not the individual, but the joint observation identifies the true state. This case is referred to as jointly or collectively observable. A jointly observable Dec-POMDP is referred to as a Dec-MDP.
The reward function is used to specify the goal of the agents and is a function of states and joint actions. In particular, a desirable sequence of joint actions should correspond to a high ‘long-term’ reward, formalized as the return.
###### Definition 2.2.
Let the return or cumulative reward of a Dec-POMDP be defined as the total of the rewards received during an execution:

$$r(0) + r(1) + \dots + r(h-1), \quad (2.3)$$

where r(t) is the reward received at time step t.

When, at stage t, the state is s^t and the taken joint action is a^t, we have that r(t) = R(s^t, a^t). Therefore, given the sequence of states and taken joint actions, it is straightforward to determine the return by substituting r(t) by R(s^t, a^t) in (2.3).
In this paper we consider as optimality criterion the expected cumulative reward, where the expectation refers to the expectation over sequences of states and executed joint actions. The planning problem is to find a conditional plan, or policy, for each agent to maximize the optimality criterion. In the Dec-POMDP case this amounts to finding a tuple of policies, called a joint policy that maximizes the expected cumulative reward.
Note that, in a Dec-POMDP, the agents are assumed not to observe the immediate rewards: observing the immediate rewards could convey information regarding the true state which is not present in the received observations, which is undesirable as all information available to the agents should be modeled in the observations. When planning for Dec-POMDPs the only thing that matters is the expectation of the cumulative future reward which is available in the off-line planning phase, not the actual reward obtained. Indeed, it is not even assumed that the actual reward can be observed at the end of the episode.
Summarizing, in this work we consider Dec-POMDPs with finite actions and observation sets and a finite planning horizon. Furthermore, we consider the general Dec-POMDP setting, without any simplifying assumptions on the observation, transition, or reward models.
### 2.2 Example: Decentralized Tiger Problem
Here we will describe the decentralized tiger problem introduced by Nair et al. (2003b). This test problem has been frequently used (Nair, Tambe, Yokoo, Pynadath, & Marsella, 2003b; Emery-Montemerlo, Gordon, Schneider, & Thrun, 2004, 2005; Szer, Charpillet, & Zilberstein, 2005) and is a modification of the (single-agent) tiger problem (Kaelbling, Littman, & Cassandra, 1998). It concerns two agents that are standing in a hallway with two doors. Behind one of the doors is a tiger, behind the other a treasure. Therefore there are two states: the tiger is behind the left door (s_l) or behind the right door (s_r). Both agents have 3 actions at their disposal: open the left door (a_OL), open the right door (a_OR) and listen (a_Li). But they cannot observe each other's actions. In fact, they can only receive 2 observations. Either they hear a sound left (o_HL) or right (o_HR).

At t = 0 the state is s_l or s_r with probability 0.5. As long as no agent opens a door the state doesn't change; when a door is opened, the state resets to s_l or s_r with probability 0.5. The full transition, observation and reward model are listed by Nair et al. (2003b). The observation probabilities are independent, and identical for both agents. For instance, when the state is s_l and both perform action a_Li, each agent has an 85% chance of observing o_HL, and the probability of both hearing the tiger left is 0.85 × 0.85 = 0.7225.
When the agents open the door for the treasure they receive a positive reward, while they receive a penalty for opening the wrong door. When opening the wrong door jointly, the penalty is less severe. Opening the correct door jointly leads to a higher reward.
Note that, when the wrong door is opened by one or both agents, they are attacked by the tiger and receive a penalty. However, neither of the agents observes this attack or the penalty, and the episode continues. Arguably, a more natural representation would be to have the episode end after a door is opened, or to let the agents observe whether they encountered the tiger or the treasure; however, this is not considered in this test problem.
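To make the observation model concrete, the following small sketch (our own illustration with hypothetical variable names; the numbers follow the standard Dec-Tiger specification quoted above) computes joint observation probabilities when both agents listen:

# Dec-Tiger observation sketch: when both agents listen, each independently
# hears the tiger's true side with probability 0.85.
p_correct = 0.85

def joint_obs_prob(state, o1, o2):
    # probability that an agent receives observation o in the given state
    def p(o):
        return p_correct if o == state else 1.0 - p_correct
    return p(o1) * p(o2)  # individual observations are independent

print(joint_obs_prob('left', 'left', 'left'))   # 0.7225
print(joint_obs_prob('left', 'left', 'right'))  # 0.1275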
### 2.3 Histories
As mentioned, the goal of planning in a Dec-POMDP is to find a (near-) optimal tuple of policies, and these policies specify for each agent how to act in a specific situation. Therefore, before we define a policy, we first need to define exactly what these specific situations are. In essence such situations are those parts of the history of the process that the agents can observe.
Let us first consider what the history of the process is. A Dec-POMDP with horizon h specifies h time steps, or stages, t = 0, …, h−1. At each of these stages, there is a state s^t, joint observation o^t and joint action a^t. Therefore, when the agents have to select their k-th actions (at t = k−1), the history of the process is a sequence of states, joint observations and joint actions, which has the following form:

$$(s^0, o^0, a^0, s^1, o^1, a^1, \dots, s^{k-1}, o^{k-1}).$$

Here s^0 is the initial state, drawn according to the initial state distribution b^0. The initial joint observation o^0 is assumed to be the empty joint observation.
From this history of the process, the states remain unobserved and agent i can only observe its own actions and observations. Therefore an agent will have to base its decision regarding which action to select on the sequence of actions and observations observed up to that point.
###### Definition 2.3.
We define the action-observation history for agent i, $\vec{\theta}_i^{\,t}$, as the sequence of actions taken by and observations received by agent i. At a specific time step t, this is:

$$\vec{\theta}_i^{\,t} = (o_i^0, a_i^0, o_i^1, \dots, a_i^{t-1}, o_i^t).$$

The joint action-observation history, $\vec{\theta}^{\,t}$, is the action-observation history for all agents:

$$\vec{\theta}^{\,t} = \langle \vec{\theta}_1^{\,t}, \dots, \vec{\theta}_n^{\,t} \rangle.$$

Agent i's set of possible action-observation histories at time t is $\vec{\Theta}_i^t$. The set of all possible action-observation histories for agent i is $\vec{\Theta}_i$. (Note that in a particular Dec-POMDP, it may be the case that not all of these histories can actually be realized, because of the probabilities specified by the transition and observation model.) Finally the set of all possible joint action-observation histories is given by $\vec{\Theta}^t = \vec{\Theta}_1^t \times \dots \times \vec{\Theta}_n^t$. At t = 0, the action-observation history is empty, denoted by $\vec{\theta}_\emptyset$.
We will also use a notion of history that uses only the observations of an agent.
###### Definition 2.4.
Formally, we define the observation history for agent i, $\vec{o}_i^{\,t}$, as the sequence of observations an agent has received. At a specific time step t, this is:

$$\vec{o}_i^{\,t} = (o_i^0, o_i^1, \dots, o_i^t).$$

The joint observation history, $\vec{o}^{\,t}$, is the observation history for all agents:

$$\vec{o}^{\,t} = \langle \vec{o}_1^{\,t}, \dots, \vec{o}_n^{\,t} \rangle.$$

The set of observation histories for agent i at time t is denoted $\vec{O}_i^t$. Similar to the notation for action-observation histories, we also use $\vec{O}_i$ and $\vec{O}$, and the empty observation history is denoted $\vec{o}_\emptyset$.
Similarly we can define the action history as follows.
###### Definition 2.5.
The action history for agent i, $\vec{a}_i^{\,t}$, is the sequence of actions an agent has performed. At a specific time step t, we write:

$$\vec{a}_i^{\,t} = (a_i^0, a_i^1, \dots, a_i^{t-1}).$$

Notation for joint action histories and sets is analogous to that for observation histories. We also write $\vec{o}_{\neq i}^{\,t}$, $\vec{\theta}_{\neq i}^{\,t}$, etc. to denote a tuple of observation histories, action-observation histories, etc. for all agents except i. Finally we note that, clearly, a (joint) action-observation history consists of a (joint) action history and a (joint) observation history: $\vec{\theta}^{\,t} = \langle \vec{o}^{\,t}, \vec{a}^{\,t} \rangle$.
### 2.4 Policies
As discussed in the previous section, the action-observation history of an agent specifies all the information the agent has when it has to decide upon an action. For the moment we assume that an individual policy $\pi_i$ for agent i is a deterministic mapping from action-observation sequences to actions.

The number of possible action-observation histories is usually very large, as this set grows exponentially with the horizon of the problem. At time step t, there are $(|A_i| \cdot |O_i|)^t$ action-observation histories for agent i. As a consequence there are a total of

$$\frac{(|A_i| \cdot |O_i|)^h - 1}{(|A_i| \cdot |O_i|) - 1}$$

of such sequences for agent i. Therefore the number of policies for agent i becomes:

$$|A_i|^{\frac{(|A_i| \cdot |O_i|)^h - 1}{(|A_i| \cdot |O_i|) - 1}}, \quad (2.4)$$

which is doubly exponential in the horizon h.
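A small sketch (our own illustration) that evaluates eq. (2.4) makes this growth tangible; for Dec-Tiger, |A_i| = 3 and |O_i| = 2:

# Number of deterministic mappings from action-observation histories to
# actions, per eq. (2.4).
def num_policies(num_actions, num_obs, horizon):
    ao = num_actions * num_obs
    num_histories = (ao ** horizon - 1) // (ao - 1)
    return num_actions ** num_histories

for h in (1, 2, 3):
    print(h, num_policies(3, 2, h))  # 3, 3**7 = 2187, 3**43 ≈ 3.3e20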
#### 2.4.1 Pure and Stochastic Policies
It is possible to reduce the number of policies under consideration by realizing that many policies specify the same behavior. This is illustrated by the left side of Figure 2, which clearly shows that under a deterministic policy only a subset of possible action-observation histories can be reached. Policies that only differ with respect to an action-observation history that is not reached in the first place manifest the same behavior. The consequence is that in order to specify a deterministic policy, the observation history suffices: when an agent takes its actions deterministically, it will be able to infer which actions it took from only the observation history, as illustrated by the right side of Figure 2.
###### Definition 2.6.
A pure or deterministic policy, $\pi_i$, for agent i in a Dec-POMDP is a mapping from observation histories to actions, $\pi_i : \vec{O}_i \to A_i$. The set of pure policies of agent i is denoted $\Pi_i$.

Note that also for pure policies we sometimes write $\pi_i(\vec{\theta}_i^{\,t})$. In this case we mean the action that $\pi_i$ specifies for the observation history contained in $\vec{\theta}_i^{\,t}$. For instance, let $\vec{\theta}_i^{\,t} = \langle \vec{o}_i^{\,t}, \vec{a}_i^{\,t} \rangle$, then $\pi_i(\vec{\theta}_i^{\,t}) \equiv \pi_i(\vec{o}_i^{\,t})$. We use $\pi = \langle \pi_1, \dots, \pi_n \rangle$ to denote a joint policy, a profile specifying a policy for each agent. We say that a pure joint policy is an induced or implicit mapping from joint observation histories to joint actions, $\pi : \vec{O} \to A$. That is, the mapping is induced by the individual policies that make up the joint policy. Also we use $\pi_{\neq i}$ to denote a profile of policies for all agents but i.
Apart from pure policies, it is also possible to have the agents execute randomized policies, i.e., policies that do not always specify the same action for the same situation, but in which there is an element of chance that decides which action is performed. There are two types of randomized policies: mixed policies and stochastic policies.
###### Definition 2.7.
A mixed policy for an agent i is a set of pure policies along with a probability distribution over this set. Thus a mixed policy is an element of $P(\Pi_i)$, the set of probability distributions over $\Pi_i$.
###### Definition 2.8.
A stochastic or behavioral policy for agent i is a mapping from action-observation histories to probability distributions over actions, $\vec{\Theta}_i \to P(A_i)$.

When considering stochastic policies, keeping track of only the observations is insufficient, as in general all action-observation histories can be realized. That is why stochastic policies are defined as a mapping from the full space of action-observation histories to probability distributions over actions. Note that we use $\pi_i$ and $\Pi_i$ to denote a policy (space) in general, and we only use the more specific notions of pure, mixed and stochastic policies when there is a need to discriminate between the different types.

A common way to represent the temporal structure in a policy is to split it into decision rules $\delta_i$ that specify the policy for each stage. An individual policy is then represented as a sequence of decision rules $\pi_i = (\delta_i^0, \dots, \delta_i^{h-1})$. In case of a deterministic policy, the form of the decision rule for stage t is a mapping from length-t observation histories to actions, $\delta_i^t : \vec{O}_i^t \to A_i$.
#### 2.4.2 Special Cases with Simpler Policies.
There are some special cases of Dec-POMDPs in which the policy can be specified in a simpler way. Here we will treat three such cases: the case in which the state is observable, the single-agent case, and the case that combines the previous two: a single agent in an environment whose state it can observe.
The last case, a single agent in a fully observable environment, corresponds to the regular MDP setting. Because the agent can observe the state, which is Markovian, the agent does not need to remember any history, but can simply specify the decision rules of its policy as mappings from states to actions: $\delta^t : S \to A$. The complexity of the policy representation reduces even further in the infinite-horizon case, where an optimal policy is known to be stationary. As such, there is only one decision rule $\delta$ that is used for all stages.

The same is true for multiple agents that can observe the state, i.e., a fully observable Dec-POMDP as defined in Section 2.1. This is essentially the same setting as the multiagent Markov decision process (MMDP) introduced by Boutilier (1996). In this case, the decision rules for agent i's policy are mappings from states to actions, $\delta_i^t : S \to A_i$, although in this case some care needs to be taken to make sure no coordination errors occur when searching for these individual policies.
In a POMDP, a Dec-POMDP with a single agent, the agent cannot observe the state, so it is not possible to specify a policy as a mapping from states to actions. However, it turns out that maintaining a probability distribution over states, called belief, $b^t$, yields a Markovian signal:

$$P(s^{t+1} \mid a^t, o^t, a^{t-1}, o^{t-1}, \dots, a^0, o^0) = P(s^{t+1} \mid b^t, a^t),$$

where the belief is defined as

$$\forall_{s^t} \quad b^t(s^t) \equiv P(s^t \mid o^t, a^{t-1}, o^{t-1}, \dots, a^0, o^0) = P(s^t \mid b^{t-1}, a^{t-1}, o^t).$$

As a result, a single agent in a partially observable environment can specify its policy as a series of mappings from the set of beliefs to actions, $\delta^t : P(S) \to A$.

Unfortunately, in the general case we consider, no such space-saving simplifications of the policy are possible. Even though the transition and observation model can be used to compute a joint belief, this computation requires knowledge of the joint actions and observations. During execution, the agents simply have no access to this information and thus cannot compute a joint belief.
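For contrast with the Dec-POMDP case, here is a minimal sketch of the single-agent belief update just described (our own illustration; T[s][a][s2] and Obs[a][s2][o] are hypothetical dictionary encodings of the transition and observation models):

def belief_update(b, a, o, T, Obs):
    # b'(s') is proportional to P(o | a, s') * sum_s P(s' | s, a) * b(s)
    new_b = {}
    for s in b:
        for s2, p_trans in T[s][a].items():
            new_b[s2] = new_b.get(s2, 0.0) + Obs[a][s2].get(o, 0.0) * p_trans * b[s]
    norm = sum(new_b.values())  # equals P(o | b, a)
    return {s2: p / norm for s2, p in new_b.items() if p > 0}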
#### 2.4.3 The Quality of Joint Policies
Clearly, policies differ in how much reward they can expect to accumulate, which will serve as a criterion of a joint policy’s quality. Formally, we consider the expected cumulative reward of a joint policy, also referred to as its value.
###### Definition 2.9.
The value of a joint policy $\pi$ is defined as

$$V(\pi) \equiv E\!\left[\sum_{t=0}^{h-1} R(s^t, a^t) \,\Big|\, \pi, b^0\right], \quad (2.5)$$

where the expectation is over states, observations and, in the case of a randomized $\pi$, actions.
In particular we can calculate this expectation as

$$V(\pi) = \sum_{t=0}^{h-1} \sum_{\vec{\theta}^t \in \vec{\Theta}^t} \sum_{s^t \in S} P(s^t, \vec{\theta}^t \mid \pi, b^0) \sum_{a^t \in A} R(s^t, a^t) \, P_\pi(a^t \mid \vec{\theta}^t), \quad (2.6)$$

where $P_\pi(a^t \mid \vec{\theta}^t)$ is the probability of $a^t$ as specified by $\pi$, and where $P(s^t, \vec{\theta}^t \mid \pi, b^0)$ is recursively defined as

$$P(s^t, \vec{\theta}^t \mid \pi, b^0) = \sum_{s^{t-1} \in S} P(s^t, \vec{\theta}^t \mid s^{t-1}, \vec{\theta}^{t-1}, \pi) \, P(s^{t-1}, \vec{\theta}^{t-1} \mid \pi, b^0), \quad (2.7)$$

with

$$P(s^t, \vec{\theta}^t \mid s^{t-1}, \vec{\theta}^{t-1}, \pi) = P(o^t \mid a^{t-1}, s^t) \, P(s^t \mid s^{t-1}, a^{t-1}) \, P_\pi(a^{t-1} \mid \vec{\theta}^{t-1}), \quad (2.8)$$

a term that is completely specified by the transition and observation model and the joint policy. For stage t = 0 we have that $P(s^0, \vec{\theta}_\emptyset \mid \pi, b^0) = b^0(s^0)$.

Because of the recursive nature of $P(s^t, \vec{\theta}^t \mid \pi, b^0)$ it is more intuitive to specify the value recursively:

$$V_\pi(s^t, \vec{\theta}^t) = \sum_{a^t \in A} P_\pi(a^t \mid \vec{\theta}^t) \left[ R(s^t, a^t) + \sum_{s^{t+1} \in S} \sum_{o^{t+1} \in O} P(s^{t+1}, o^{t+1} \mid s^t, a^t) \, V_\pi(s^{t+1}, \vec{\theta}^{t+1}) \right], \quad (2.9)$$

with $\vec{\theta}^{t+1} = (\vec{\theta}^t, a^t, o^{t+1})$ and $V_\pi(s^h, \vec{\theta}^h) \equiv 0$. The value of joint policy $\pi$ is then given by

$$V(\pi) = \sum_{s^0 \in S} V_\pi(s^0, \vec{\theta}_\emptyset) \, b^0(s^0). \quad (2.10)$$
For the special case of evaluating a pure joint policy $\pi$, eq. (2.6) can be written as:

$$V(\pi) = \sum_{t=0}^{h-1} \sum_{\vec{\theta}^t \in \vec{\Theta}^t} P(\vec{\theta}^t \mid \pi, b^0) \, R(\vec{\theta}^t, \pi(\vec{\theta}^t)), \quad (2.11)$$

where

$$R(\vec{\theta}^t, a^t) = \sum_{s^t \in S} R(s^t, a^t) \, P(s^t \mid \vec{\theta}^t, b^0) \quad (2.12)$$

denotes the expected immediate reward. In this case, the recursive formulation (2.9) reduces to

$$V_\pi^t(s^t, \vec{o}^t) = R(s^t, \pi(\vec{o}^t)) + \sum_{s^{t+1} \in S} \sum_{o^{t+1} \in O} P(s^{t+1}, o^{t+1} \mid s^t, \pi(\vec{o}^t)) \, V_\pi^{t+1}(s^{t+1}, \vec{o}^{t+1}). \quad (2.13)$$

Note that, when performing the computation of the value for a joint policy recursively, intermediate results should be cached. A particular $(s^t, \vec{o}^t)$-pair (or $(s^t, \vec{\theta}^t)$-pair for a stochastic joint policy) can be reached from multiple states of the previous stage. Its value is the same in each case, however, and should be computed only once.
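The caching remark can be made concrete with a short sketch of (2.13) for a pure joint policy (our own illustration; policy maps joint observation-history tuples to joint actions, and T, Obs, R are hypothetical model dictionaries as before):

from functools import lru_cache

def evaluate(policy, T, Obs, R, b0, horizon):
    @lru_cache(maxsize=None)          # caches each (t, s, obs_hist) once
    def V(t, s, obs_hist):
        if t == horizon:
            return 0.0
        a = policy[obs_hist]          # joint action for this joint history
        value = R[s][a]
        for s2, p_s in T[s][a].items():
            for o, p_o in Obs[a][s2].items():
                value += p_s * p_o * V(t + 1, s2, obs_hist + (o,))
        return value
    return sum(p * V(0, s, ()) for s, p in b0.items())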
#### 2.4.4 Existence of an Optimal Pure Joint Policy
Although randomized policies may be useful, we can restrict our attention to pure policies without sacrificing optimality, as shown by the following.
###### Proposition 2.1.
A Dec-POMDP has at least one optimal pure joint policy.
###### Proof.
See appendix A.1.∎
## 3 Overview of Dec-POMDP Solution Methods
In order to provide some background on solving Dec-POMDPs, this section gives an overview of some recently proposed methods. We will limit this review to a number of finite-horizon methods for general Dec-POMDPs that are related to our own approach.
We will not review the work performed on infinite-horizon Dec-POMDPs, such as that by Peshkin et al. (2000), Bernstein (2005), Szer and Charpillet (2005), and Amato et al. (2006, 2007). In this setting policies are usually represented by finite state controllers (FSCs). Since optimally solving an infinite-horizon Dec-POMDP is undecidable (Bernstein, Givan, Immerman, & Zilberstein, 2002), this line of work focuses on finding ε-approximate solutions (Bernstein, 2005) or (near-) optimal policies for a given controller size.
There is also a substantial amount of work on methods exploiting particular independence assumptions. In particular, transition and observation independent Dec-MDPs (Becker, Zilberstein, Lesser, & Goldman, 2004b; Wu & Durfee, 2006) and Dec-POMDPs (Kim, Nair, Varakantham, Tambe, & Yokoo, 2006; Varakantham, Marecki, Yabu, Tambe, & Yokoo, 2007) have received quite some attention. These models assume that each agent has an individual state space and that the actions of one agent do not influence the transitions between the local states of another agent. Although such models are easier to solve, the independence assumptions severely restrict their applicability. Other special cases that have been considered are, for instance, goal oriented Dec-POMDPs (Goldman & Zilberstein, 2004), event-driven Dec-MDPs (Becker, Zilberstein, & Lesser, 2004a), Dec-MDPs with time and resource constraints (Beynier & Mouaddib, 2005, 2006; Marecki & Tambe, 2007), Dec-MDPs with local interactions (Spaan & Melo, 2008) and factored Dec-POMDPs with additive rewards (Oliehoek, Spaan, Whiteson, & Vlassis, 2008).
A final body of related work, which is beyond the scope of this article, concerns models and techniques for explicit communication in Dec-POMDP settings (Ooi & Wornell, 1996; Pynadath & Tambe, 2002; Goldman & Zilberstein, 2003; Nair, Roth, & Yokoo, 2004; Becker, Lesser, & Zilberstein, 2005; Roth, Simmons, & Veloso, 2005, 2007; Oliehoek, Spaan, & Vlassis, 2007b; Goldman, Allen, & Zilberstein, 2007). The Dec-POMDP model itself can model communication actions as regular actions, in which case the semantics of the communication actions becomes part of the optimization problem (Xuan, Lesser, & Zilberstein, 2001; Goldman & Zilberstein, 2003; Spaan, Gordon, & Vlassis, 2006). In contrast, most approaches mentioned typically assume that communication happens outside the Dec-POMDP model and with pre-defined semantics. A typical assumption is that at every time step the agents communicate their individual observations before selecting an action. Pynadath and Tambe (2002) showed that, under assumptions of instantaneous and cost-free communication, sharing individual observations in such a way is optimal.
### 3.1 Brute Force Policy Evaluation
Because there exists an optimal pure joint policy for a finite-horizon Dec-POMDP, it is in theory possible to enumerate all different pure joint policies, evaluate them using equations (2.10) and (2.13) and choose the best one. The number of pure joint policies to be evaluated is:

$$O\!\left(|A_*|^{\,n\frac{|O_*|^h - 1}{|O_*| - 1}}\right), \quad (3.1)$$

where $|A_*|$ and $|O_*|$ denote the sizes of the largest individual action and observation sets. The cost of evaluating each joint policy is $O(|S| \cdot |O_*|^{nh})$. The resulting total cost of brute-force policy evaluation is

$$O\!\left(|A_*|^{\,n\frac{|O_*|^h - 1}{|O_*| - 1}} \times |S| \times |O_*|^{nh}\right), \quad (3.2)$$

which is doubly exponential in the horizon h.
### 3.2 Alternating Maximization
Nair et al. (2003b) introduced Joint Equilibrium based Search for Policies (JESP). This method is guaranteed to find a locally optimal joint policy, more specifically, a Nash equilibrium: a tuple of policies such that for each agent i its policy $\pi_i$ is a best response to the policies employed by the other agents $\pi_{\neq i}$. It relies on a process we refer to as alternating maximization. This is a procedure that computes a policy for an agent that maximizes the joint reward, while keeping the policies of the other agents fixed. Next, another agent is chosen to maximize the joint reward by finding its best response to the fixed policies of the other agents. This process is repeated until the joint policy converges to a Nash equilibrium, which is a local optimum. The main idea of fixing some agents and having others improve their policy was presented before by Chades et al. (2002), but they used a heuristic approach for memory-less agents. The process of alternating maximization is also referred to as hill-climbing or coordinate ascent.

Nair et al. (2003b) describe two variants of JESP, the first of which, Exhaustive-JESP, implements the above idea in a very straightforward fashion: starting from a random joint policy, the first agent is chosen. This agent then selects its best-response policy by evaluating the joint reward obtained for all of its individual policies when the other agents follow their fixed policy.

The second variant, DP-JESP, uses a dynamic programming approach to compute the best-response policy for a selected agent i. In essence, fixing the policies of all other agents allows for a reformulation of the problem as an augmented POMDP. In this augmented POMDP a state consists of a nominal state s and the observation histories of the other agents $\vec{o}_{\neq i}$. Given the fixed deterministic policies $\pi_{\neq i}$ of the other agents, such an augmented state is a Markovian state, and all transition and observation probabilities can easily be derived from $\pi_{\neq i}$ and the original Dec-POMDP model.

Like most methods proposed for Dec-POMDPs, JESP exploits the knowledge of the initial belief $b^0$ by only considering reachable beliefs in the solution of the POMDP. However, in some cases the initial belief might not be available. As demonstrated by Varakantham et al. (2006), JESP can be extended to plan for the entire space of initial beliefs, overcoming this problem.
### 3.3 MAA∗
Szer et al. (2005) introduced a heuristically guided policy search method called multiagent A* (MAA*). It performs a guided A*-like search over partially specified joint policies, pruning joint policies that are guaranteed to be worse than the best (fully specified) joint policy found so far, as judged by an admissible heuristic.

In particular, MAA* considers joint policies that are partially specified with respect to time: a partial joint policy $\varphi^t = (\delta^0, \delta^1, \dots, \delta^{t-1})$ specifies the joint decision rules for the first t stages. For such a partial joint policy a heuristic value $\widehat{V}(\varphi^t)$ is calculated by taking $V^{0 \dots t-1}(\varphi^t)$, the actual expected reward $\varphi^t$ achieves over the first t stages, and adding $\widehat{V}^{t \dots h-1}$, a heuristic value for the remaining h−t stages. Clearly, when $\widehat{V}^{t \dots h-1}$ is an admissible heuristic (a guaranteed overestimation), so is $\widehat{V}(\varphi^t)$.

MAA* starts by placing the completely unspecified joint policy $\varphi^0$ in an open list. Then, it proceeds by selecting partial joint policies $\varphi^t$ from the list and 'expanding' them: generating all $\varphi^{t+1}$ by appending all possible joint decision rules $\delta^t$ for the next time step. The left side of Figure 3 illustrates the expansion process. After expansion, all created children are heuristically evaluated and placed in the open list; any partial joint policies with heuristic value less than the expected value of some earlier found (fully specified) joint policy $\pi$ can be pruned. The search ends when the list becomes empty, at which point we have found an optimal fully specified joint policy.
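The open-list search just described can be summarized in a short schematic sketch (our own illustration; expand, heuristic, is_complete and value are hypothetical problem-specific helpers, and heuristic is assumed admissible):

import heapq

def maa_star(root, expand, heuristic, is_complete, value):
    best, best_value = None, float('-inf')
    open_list = [(-heuristic(root), 0, root)]
    counter = 1  # tie-breaker so heapq never compares policies directly
    while open_list:
        neg_h, _, partial = heapq.heappop(open_list)
        if -neg_h <= best_value:
            continue  # admissibility: this node cannot beat the incumbent
        for child in expand(partial):
            if is_complete(child):
                v = value(child)
                if v > best_value:
                    best, best_value = child, v
            elif heuristic(child) > best_value:  # prune dominated partial policies
                heapq.heappush(open_list, (-heuristic(child), counter, child))
                counter += 1
    return best, best_value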
### 3.4 Dynamic Programming for Dec-POMDPs
MAA* incrementally builds policies from the first stage t = 0 to the last t = h−1. Prior to this work, Hansen et al. (2004) introduced dynamic programming (DP) for Dec-POMDPs, which constructs policies the other way around: starting with a set of '1-step policies' (actions) that can be executed at the last stage, they construct a set of 2-step policies to be executed at t = h−2, etc.

It should be stressed that the policies maintained are quite different from those used by MAA*. In particular, a partial policy in MAA* has the form $\varphi^t = (\delta^0, \delta^1, \dots, \delta^{t-1})$. The policies maintained by DP do not have such a correspondence to decision rules. We define the time-to-go $\tau$ at stage t as

$$\tau = h - t. \quad (3.3)$$

Now $q_i^{\tau=k}$ denotes a k-steps-to-go sub-tree policy for agent i. That is, $q_i^{\tau=k}$ is a policy tree that has the same form as a full policy for the horizon-k problem. Within the original horizon-h problem, $q_i^{\tau=k}$ is a candidate for execution starting at stage t = h−k. The set of k-steps-to-go sub-tree policies maintained for agent i is denoted $Q_i^{\tau=k}$. Dynamic programming for Dec-POMDPs is based on backup operations: constructing a set $Q_i^{\tau=k+1}$ of sub-tree policies from a set $Q_i^{\tau=k}$. For instance, the right side of Figure 3 shows how $q_i^{\tau=3}$, a 3-steps-to-go sub-tree policy, is constructed from two $q_i^{\tau=2}$. Also illustrated is the difference between this process and expansion (on the left side).

Dynamic programming consecutively constructs $Q_i^{\tau=1}, Q_i^{\tau=2}, \dots, Q_i^{\tau=h}$ for all agents i. However, the size of the set $Q_i^{\tau=k+1}$ is given by

$$|Q_i^{\tau=k+1}| = |A_i| \cdot |Q_i^{\tau=k}|^{|O_i|},$$

and as a result the sizes of the maintained sets grow doubly exponentially with k. To counter this source of intractability, Hansen et al. (2004) propose to eliminate dominated sub-tree policies. The expected reward of a particular sub-tree policy $q_i^{\tau=k}$ depends on the probability over states when $q_i^{\tau=k}$ is started (at stage t = h−k) as well as the probability with which the other agents select their sub-tree policies. If we let $q_{\neq i}^{\tau=k}$ denote a sub-tree profile for all agents but i, and $Q_{\neq i}^{\tau=k}$ the set of such profiles, we can say that $q_i^{\tau=k}$ is dominated if it is not maximizing at any point in the multiagent belief space: the simplex over $S \times Q_{\neq i}^{\tau=k}$. Hansen et al. test for dominance over the entire multiagent belief space by linear programming. Removal of a dominated sub-tree policy of an agent may cause a sub-tree policy of another agent to become dominated. Therefore Hansen et al. propose to iterate over agents until no further pruning is possible, a procedure known as iterated elimination of dominated policies (Osborne & Rubinstein, 1994).

Finally, when the last backup step is completed, the optimal policy can be found by evaluating all induced joint policies for the initial belief $b^0$.
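A tiny loop (our own illustration) shows how quickly the un-pruned sets grow for Dec-Tiger's sizes (|A_i| = 3, |O_i| = 2):

# |Q^{tau=k+1}| = |A_i| * |Q^{tau=k}| ** |O_i|, starting from single actions
num_trees = 3
for k in range(1, 5):
    print(k, num_trees)  # 3, 27, 2187, 14348907
    num_trees = 3 * num_trees ** 2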
### 3.5 Extensions on DP for Dec-POMDPs
In the last few years several extensions to the dynamic programming algorithm for Dec-POMDPs have been proposed. The first of these extensions is due to Szer and Charpillet (2006). Rather than testing for dominance over the entire multiagent belief space, they propose to perform point-based dynamic programming (PBDP). In order to prune the set of sub-tree policies $Q_i^{\tau=k}$, the set B of all the belief points that can possibly be reached by deterministic joint policies is generated. Only the sub-tree policies that maximize the value at some b ∈ B are kept. The proposed algorithm is optimal, but intractable because it needs to generate all the multiagent belief points that are reachable through all joint policies. To overcome this bottleneck, Szer and Charpillet propose to randomly sample one or more joint policies and use those to generate B.
Seuken and Zilberstein (2007) also proposed a point-based extension of the DP algorithm, called memory-bounded dynamic programming (MBDP). Rather than using a randomly selected policy to generate the belief points, they propose to use heuristic policies. A more important difference, however, lies in the pruning step. Rather than pruning only dominated sub-tree policies, MBDP prunes all sub-tree policies except a few in each iteration. More specifically, for each agent maxTrees sub-tree policies are retained, where maxTrees is a parameter of the planning method. As a result, MBDP has only linear space and time complexity with respect to the horizon. The MBDP algorithm still depends on the exhaustive generation of the sets $Q_i^{\tau=k+1}$, which now contain $|A_i| \cdot \text{maxTrees}^{|O_i|}$ sub-tree policies. Moreover, in each iteration all joint sub-tree policies have to be evaluated for each of the sampled belief points. To counter this growth, Seuken and Zilberstein (2007) proposed an extension (IMBDP) that limits the considered observations during the backup step to the maxObs most likely observations.
Finally, a further extension of the DP for Dec-POMDPs algorithm is given by Amato et al. (2007). Their approach, bounded DP (BDP), establishes a bound not on the used memory, but on the quality of approximation. In particular, BDP uses ε-pruning in each iteration. That is, a sub-tree policy $q_i^{\tau=k}$ that is maximizing in some region of the multiagent belief space, but improves the value in this region by at most ε, is also pruned. Because iterated elimination using ε-pruning can still lead to an unbounded reduction in value, Amato et al. propose to perform one iteration of ε-pruning, followed by iterated elimination using normal pruning.
### 3.6 Other Approaches for Finite-Horizon Dec-POMDPs
There are a few other approaches for finite-horizon Dec-POMDPs, which we will only briefly describe here. Aras, Dutech, and Charpillet (2007) proposed a mixed integer linear programming formulation for the optimal solution of finite-horizon Dec-POMDPs. Their approach is based on representing the set of possible policies for each agent in sequence form. In sequence form, a single policy for an agent is represented as a subset of the set of ‘sequences’ (roughly corresponding to action-observation histories) for the agent. As such the problem can be interpreted as a combinatorial optimization problem, which Aras et al. propose to solve with a mixed integer linear program.
Oliehoek, Kooij, and Vlassis (2007) also recognize that finding a solution for Dec-POMDPs is in essence a combinatorial optimization problem and propose to apply the Cross-Entropy method (de Boer, Kroese, Mannor, & Rubinstein, 2005), a method for combinatorial optimization that has recently become popular because of its ability to find near-optimal solutions in large optimization problems. The resulting algorithm performs a sampling-based policy search for approximately solving Dec-POMDPs. It operates by sampling pure policies from an appropriately parameterized stochastic policy, and then evaluating these policies either exactly or approximately in order to define the next stochastic policy to sample from, and so on until convergence.
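To illustrate the flavor of the Cross-Entropy method on a generic discrete assignment problem, here is a minimal sketch (illustrative Python, not the authors' algorithm; the Dec-POMDP policy evaluation is replaced by a toy scoring function, and all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_search(score, n_vars, n_values, iters=50,
                         n_samples=200, elite_frac=0.1, lr=0.7):
    """Generic cross-entropy method for maximizing score over assignments.

    score: maps an assignment (array of ints in [0, n_values)) to a value;
    it stands in for (approximate) joint-policy evaluation."""
    p = np.full((n_vars, n_values), 1.0 / n_values)   # sampling distribution
    n_elite = int(n_samples * elite_frac)
    for _ in range(iters):
        samples = np.stack([
            np.array([rng.choice(n_values, p=p[v]) for v in range(n_vars)])
            for _ in range(n_samples)])
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]   # best samples
        # Refit the sampling distribution to the elite set, with smoothing.
        counts = np.stack([np.bincount(elite[:, v], minlength=n_values)
                           for v in range(n_vars)]) / n_elite
        p = (1 - lr) * p + lr * counts
    return p.argmax(axis=1)

# Toy objective: recover the hidden assignment [2, 0, 1, 2].
target = np.array([2, 0, 1, 2])
print(cross_entropy_search(lambda s: -np.sum(s != target), 4, 3))
```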
Finally, Emery-Montemerlo, Gordon, Schneider, and Thrun (2004, 2005) proposed to approximate Dec-POMDPs through series of Bayesian games. Since our work in this article is based on the same representation, we defer a detailed explanation to the next section. We do mention here that while Emery-Montemerlo et al. assume that the algorithm is run on-line (interleaving planning and execution), no such assumption is necessary. Rather, we will apply the same framework during an off-line planning phase, just like the other algorithms covered in this overview.
## 4 Optimal Q-value Functions
In this section we will show how a Dec-POMDP can be modeled as a series of Bayesian games (BGs). A BG is a game-theoretic model that can deal with uncertainty (Osborne & Rubinstein, 1994). Bayesian games are similar to the more well-known normal form, or matrix, games, but allow modeling agents that have some private information. This section introduces Bayesian games and shows how a Dec-POMDP can be modeled as a series of them. The idea of using a series of BGs to find policies for a Dec-POMDP was proposed in an approximate setting by Emery-Montemerlo et al. (2004). In particular, they showed that using series of BGs and an approximate payoff function, they were able to obtain approximate solutions on the Dec-Tiger problem, comparable to results for JESP (see Section 3.2).
The main result of this section is that an optimal Dec-POMDP policy can be computed from the solution of a sequence of Bayesian games, if the payoff function of those games coincides with the Q-value function of an optimal policy $\pi^*$, i.e., with the optimal Q-value function $Q^*$. Thus, we extend the results of Emery-Montemerlo et al. (2004) to include the optimal setting. Also, we conjecture that this form of $Q^*$ cannot be computed without already knowing an optimal policy $\pi^*$. By transferring the game-theoretic concept of sequential rationality to Dec-POMDPs, we find a description of $Q^*$ that is computable without knowing $\pi^*$ up front.
### 4.1 Game-Theoretic Background
Before we can explain how Dec-POMDPs can be modeled using Bayesian games, we will first introduce them together with some other necessary game theoretic background.
#### 4.1.1 Strategic Form Games and Nash Equilibria
At the basis of the concept of a Bayesian game lies a simpler form of game: the strategic- or normal form game. A strategic game consists of a set of agents or players, each of which has a set of actions (or strategies). The combination of selected actions specifies a particular outcome. When a strategic game consists of two agents, it can be visualized as a matrix as shown in Figure 4. The first game shown is called ‘Chicken’ and involves two teenagers who are driving head on. Both have the option to drive on or chicken out. Each teenager’s payoff is maximal when he drives on and his opponent chickens out. However, if both drive on, a collision follows, giving both their lowest payoff. The second game is the meeting location problem. Both agents want to meet in location A or B. They have no preference over which location, as long as both pick the same location. This game is fully cooperative, which is modeled by the fact that the agents receive identical payoffs.
###### Definition 4.1.
Formally, a strategic game is a tuple $\langle n, \mathcal{A}, u_1, \dots, u_n \rangle$, where $n$ is the number of agents, $\mathcal{A} = \mathcal{A}_1 \times \dots \times \mathcal{A}_n$ is the set of joint actions, and $u_i$ with $u_i : \mathcal{A} \to \mathbb{R}$ is the payoff function of agent $i$.
Game theory tries to specify for each agent how to play. That is, a game-theoretic solution should suggest a policy for each agent. In a strategic game we write $\alpha_i$ to denote a policy for agent $i$ and $\alpha = \langle \alpha_1, \dots, \alpha_n \rangle$ for a joint policy. A policy for agent $i$ is simply one of its actions (i.e., a pure policy), or a probability distribution over its actions (i.e., a mixed policy). Also, the policy suggested to each agent should be rational given the policies suggested to the other agents; it would be undesirable to suggest a particular policy to an agent if it can get a better payoff by switching to another policy. Rather, the suggested policies should form an equilibrium, meaning that it is not profitable for an agent to unilaterally deviate from its suggested policy. This notion is formalized by the concept of Nash equilibrium.
###### Definition 4.2.
A pure policy profile $\alpha = \langle \alpha_1, \dots, \alpha_n \rangle$ specifying a pure policy for each agent is a Nash Equilibrium (NE) if and only if
$u_i(\langle \alpha_1, \dots, \alpha_i, \dots, \alpha_n \rangle) \ge u_i(\langle \alpha_1, \dots, \alpha'_i, \dots, \alpha_n \rangle), \quad \forall i : 1 \le i \le n, \ \forall \alpha'_i \in \mathcal{A}_i.$ (4.1)
This definition can easily be extended to incorporate mixed policies by defining $u_i$ as the expected payoff of agent $i$ under the probability distribution over joint actions induced by the mixed policies.
Nash (1950) proved that when allowing mixed policies, every (finite) strategic game contains at least one NE, making it a proper solution for a game. However, it is unclear how such a NE should be found. In particular, there may be multiple NEs in a game, making it unclear which one to select. In order to make some discrimination between Nash equilibria, we can consider NEs such that there is no other NE that is better for everyone.
###### Definition 4.3.
A Nash Equilibrium $\alpha$ is referred to as Pareto Optimal (PO) when there is no other NE $\alpha'$ that specifies at least the same payoff for all agents and a higher payoff for at least one agent:
$\nexists \alpha' \; \big( \forall i \;\, u_i(\alpha') \ge u_i(\alpha) \;\wedge\; \exists i \;\, u_i(\alpha') > u_i(\alpha) \big).$
In the case when multiple Pareto optimal Nash equilibria exist, the agents can agree beforehand on a particular ordering, to ensure the same NE is chosen.
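As an aside, for small strategic games a pure-policy Nash equilibrium can be found by direct enumeration. The following is an illustrative Python sketch (not from the original text), applied to the meeting-location game described above:

```python
from itertools import product

def pure_nash_equilibria(u, shape):
    """u: dict mapping a joint action (a1, a2) -> (payoff1, payoff2).
    shape: (number of actions of agent 1, number of actions of agent 2)."""
    equilibria = []
    for a1, a2 in product(range(shape[0]), range(shape[1])):
        # A joint action is a NE iff each action is a best response.
        best1 = all(u[(a1, a2)][0] >= u[(b, a2)][0] for b in range(shape[0]))
        best2 = all(u[(a1, a2)][1] >= u[(a1, b)][1] for b in range(shape[1]))
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

# Meeting location problem: identical payoffs, 1 if both pick the same spot.
meet = {(a1, a2): (int(a1 == a2),) * 2 for a1 in range(2) for a2 in range(2)}
print(pure_nash_equilibria(meet, (2, 2)))   # [(0, 0), (1, 1)]
```

Both pure equilibria of the meeting-location game are returned; the agreed-upon ordering mentioned above would then pick one of them.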
#### 4.1.2 Bayesian Games
A Bayesian game (Osborne & Rubinstein, 1994) is an augmented normal form game in which the players hold some private information. This private information defines the type of the agent, i.e., a particular type of an agent corresponds to that agent knowing some particular information. The payoff the agents receive now no longer only depends on their actions, but also on their private information. Formally, a BG is defined as follows:
###### Definition 4.4.
A Bayesian game (BG) is a tuple $\langle n, \mathcal{A}, \Theta, P(\Theta), u_1, \dots, u_n \rangle$, where $n$ is the number of agents, $\mathcal{A}$ is the set of joint actions, $\Theta = \Theta_1 \times \dots \times \Theta_n$ is the set of joint types over which a probability function $P(\Theta)$ is specified, and $u_i : \Theta \times \mathcal{A} \to \mathbb{R}$ is the payoff function of agent $i$.
In a normal form game the agents simply select an action. In a BG, by contrast, the agents can condition their action on their private information. This means that in BGs the agents use a different type of policy. For a BG, we denote a joint policy $\beta = \langle \beta_1, \dots, \beta_n \rangle$, where the individual policies are mappings from types to actions: $\beta_i : \Theta_i \to \mathcal{A}_i$. In the case of identical payoffs for the agents, the solution of a BG is given by the following theorem:
###### Theorem 4.1.
For a BG with identical payoffs, i.e., $\forall i,j \;\, u_i = u_j = u$, the solution is given by:
$\beta^* = \operatorname*{argmax}_{\beta} \sum_{\theta \in \Theta} P(\theta)\, u(\theta, \beta(\theta)),$ (4.2)
where $\beta(\theta) = \langle \beta_1(\theta_1), \dots, \beta_n(\theta_n) \rangle$ is the joint action specified by $\beta$ for joint type $\theta$. This solution constitutes a Pareto optimal Nash equilibrium.
###### Proof.
The proof consists of two parts: the first shows that $\beta^*$ is a Nash equilibrium, the second shows it is Pareto optimal.
##### Nash equilibrium proof.
It is clear that a $\beta^*$ satisfying (4.2) is a Nash equilibrium by rewriting it from the perspective of an arbitrary agent $i$ as follows:
$\begin{aligned}
\beta_i^* &= \operatorname*{argmax}_{\beta_i} \Big[ \max_{\beta_{\neq i}} \sum_{\theta \in \Theta} P(\theta)\, u(\theta, \beta(\theta)) \Big] \\
&= \operatorname*{argmax}_{\beta_i} \Big[ \max_{\beta_{\neq i}} \sum_{\theta_i} P(\theta_i) \sum_{\theta_{\neq i}} P(\theta_{\neq i} \mid \theta_i)\, u(\theta, \beta(\theta)) \Big] \\
&= \operatorname*{argmax}_{\beta_i} \sum_{\theta_i} P(\theta_i) \sum_{\theta_{\neq i}} P(\theta_{\neq i} \mid \theta_i)\, u(\langle \theta_i, \theta_{\neq i} \rangle, \langle \beta_i(\theta_i), \beta^*_{\neq i}(\theta_{\neq i}) \rangle),
\end{aligned}$
which means that $\beta_i^*$ is a best response against $\beta_{\neq i}^*$. Since no special assumptions were made on $i$, it follows that $\beta^*$ is a Nash equilibrium.
##### Pareto optimality proof.
Let us write $V_{\theta_i}(a_i, \beta_{\neq i})$ for the payoff agent $i$ expects for type $\theta_i$ when performing action $a_i$ while the other agents use policy profile $\beta_{\neq i}$. We have that
$V_{\theta_i}(a_i, \beta_{\neq i}) = \sum_{\theta_{\neq i}} P(\theta_{\neq i} \mid \theta_i)\, u(\langle \theta_i, \theta_{\neq i} \rangle, \langle a_i, \beta_{\neq i}(\theta_{\neq i}) \rangle).$
Now, a joint policy $\beta^*$ satisfying (4.2) is not Pareto optimal if and only if there is another Nash equilibrium $\beta'$ that attains at least the same payoff for all agents and for all types, and strictly more for at least one agent and type. Formally, $\beta^*$ is not Pareto optimal when there is a $\beta'$ such that:
$\forall i\, \forall \theta_i \;\; V_{\theta_i}(\beta_i^*(\theta_i), \beta_{\neq i}^*) \le V_{\theta_i}(\beta_i'(\theta_i), \beta_{\neq i}') \;\;\wedge\;\; \exists i\, \exists \theta_i \;\; V_{\theta_i}(\beta_i^*(\theta_i), \beta_{\neq i}^*) < V_{\theta_i}(\beta_i'(\theta_i), \beta_{\neq i}').$ (4.3)
We prove that no such $\beta'$ can exist, by contradiction. Suppose that $\beta'$ is a NE such that (4.3) holds (and thus $\beta^*$ is not Pareto optimal). Because $\beta^*$ satisfies (4.2) we know that:
$\sum_{\theta \in \Theta} P(\theta)\, u(\theta, \beta^*(\theta)) \ge \sum_{\theta \in \Theta} P(\theta)\, u(\theta, \beta'(\theta)),$ (4.4)
and therefore, for all agents $i$,
$P(\theta_{i,1})\, V_{\theta_{i,1}}(\beta_i^*(\theta_{i,1}), \beta_{\neq i}^*) + \dots + P(\theta_{i,|\Theta_i|})\, V_{\theta_{i,|\Theta_i|}}(\beta_i^*(\theta_{i,|\Theta_i|}), \beta_{\neq i}^*) \;\ge\; P(\theta_{i,1})\, V_{\theta_{i,1}}(\beta_i'(\theta_{i,1}), \beta_{\neq i}') + \dots + P(\theta_{i,|\Theta_i|})\, V_{\theta_{i,|\Theta_i|}}(\beta_i'(\theta_{i,|\Theta_i|}), \beta_{\neq i}')$
holds. However, by the assumption that $\beta'$ satisfies (4.3), for at least one agent $i$ we get that
$\exists j \;\; V_{\theta_{i,j}}(\beta_i^*(\theta_{i,j}), \beta_{\neq i}^*) < V_{\theta_{i,j}}(\beta_i'(\theta_{i,j}), \beta_{\neq i}').$
Therefore it must be that
$\sum_{k \neq j} P(\theta_{i,k})\, V_{\theta_{i,k}}(\beta_i^*(\theta_{i,k}), \beta_{\neq i}^*) > \sum_{k \neq j} P(\theta_{i,k})\, V_{\theta_{i,k}}(\beta_i'(\theta_{i,k}), \beta_{\neq i}'),$
and thus that
$\exists k \neq j \;\; V_{\theta_{i,k}}(\beta_i^*(\theta_{i,k}), \beta_{\neq i}^*) > V_{\theta_{i,k}}(\beta_i'(\theta_{i,k}), \beta_{\neq i}'),$
which contradicts the first conjunct of (4.3). Hence no such $\beta'$ exists, and $\beta^*$ is Pareto optimal. ∎
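To make Theorem 4.1 concrete, the following is a minimal brute-force sketch (illustrative Python, not part of the original text) that evaluates equation (4.2) directly for every joint BG policy; the enumeration is exponential in the number of types and only feasible for tiny games:

```python
from itertools import product

def solve_identical_payoff_bg(actions, types, P, u):
    """Brute-force solution of an identical-payoff Bayesian game, eq. (4.2).

    actions: list of per-agent action lists
    types:   list of per-agent type lists
    P:       dict mapping joint type (tuple) -> probability
    u:       function (joint_type, joint_action) -> common payoff
    Returns the maximizing joint policy (tuple of per-agent dicts
    mapping type -> action) and its expected payoff."""
    joint_types = list(product(*types))
    best_policy, best_value = None, float('-inf')

    # Enumerate every deterministic policy beta_i: Theta_i -> A_i per agent.
    per_agent_policies = [
        [dict(zip(types[i], choice))
         for choice in product(actions[i], repeat=len(types[i]))]
        for i in range(len(actions))
    ]
    for beta in product(*per_agent_policies):
        # Expected payoff: sum_theta P(theta) * u(theta, beta(theta)).
        value = sum(P[theta] * u(theta, tuple(beta[i][theta[i]]
                                              for i in range(len(beta))))
                    for theta in joint_types)
        if value > best_value:
            best_policy, best_value = beta, value
    return best_policy, best_value

# Tiny example: two agents, two equally likely joint types.
acts = [['a0', 'a1'], ['b0', 'b1']]
typs = [['t0', 't1'], ['s0']]
P = {('t0', 's0'): 0.5, ('t1', 's0'): 0.5}
u = lambda th, a: 1.0 if (a[0] == 'a1') == (th[0] == 't1') and a[1] == 'b0' else 0.0
print(solve_identical_payoff_bg(acts, typs, P, u))
```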
|
2020-08-04 19:31:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002719283103943, "perplexity": 737.991314729633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735882.86/warc/CC-MAIN-20200804191142-20200804221142-00457.warc.gz"}
|
https://chemistry.stackexchange.com/questions/101860/how-to-sloppy-freeze-a-dihedral-angle-within-a-range-in-a-gaussian-modredundant
|
# How to sloppy freeze a dihedral angle within a range in a Gaussian modredundant optimization?
I would like to perform an optimization in Gaussian 09 of my molecule with two dihedral angles frozen. I would like the values of these dihedral angles not to be fixed, but to be allowed to relax within a range of ±5°. In Gaussian 03 it seems to be possible using the Opt=ModRedundant keyword and the following syntax:
[Type] N1 [N2 [N3 [N4]]] [[+=]value] [A | F] [[min] max]]
I have tried using the same syntax in Gaussian 09, but it doesn't work. Is it possible to perform such a calculation in Gaussian 09 or 16 without using the scan approach?
The calculation fails with the following error:
The following ModRedundant input section has been read:
Invalid extra data found.
• Do you know it is possible in G03 or do you just think so? Sep 21 '18 at 17:07
TL;DR: It is impossible (or at least not documented) with Gaussian 09 Rev. E.01 and older; it might be possible with Gaussian 16, but in any case not via opt(modredundant).
I am unaware of any sloppy-freezing possibility in versions before Gaussian 16, and for the latter one has to dig much deeper into the manual and construct it oneself; but at least it works in theory.
## Gaussian 03
Unfortunately, you have misinterpreted the commands you found in the manual. You can find the full description for the opt keyword via The Internet Archive, but I'll share the important parts here for convenience.
The construct
[Type] N1 [N2 [N3 [N4]]] [[+=]value] [A | F] [[min] max]]
indeed exists for an opt(modredundant) input section, but it behaves not as a sloppy freeze, but as a range criterion for applying the action code, which you can find a little lower in the manual:
[...]
An asterisk (*) in the place of an atom number indicates a wildcard. Min and max then define a range (or maximum value if min is not given) for coordinate specifications containing wildcards. The action specified by the action code is taken only if the value of the coordinate is in the range.
[...]
For example, the input
D * * * * F 175.0
freezes all defined dihedral angles that are smaller than 175.0°.
The input
D * 4 5 * F 160.0 175.0
freezes all defined dihedral angles around the bond between atom 4 and 5 in the interval 160.0 - 175.0°.
The input
D 3 4 5 6 F 160.0 175.0
is a syntax error because it does not contain a wildcard.
## Gaussian 09
The option modredundant to the opt keyword still exists, but has a slightly modified input. Specifically the [[+=]value]] part has been removed. Again you can look up the full page on The Internet Archive.
The examples above should, however, still behave the same.
[...]
Lines in a ModRedundant input section use the following syntax:
[Type] N1 [N2 [N3 [N4]]] [A | F] [[min] max]]
[...]
An asterisk (*) in the place of an atom number indicates a wildcard. Min and max only apply to coordinate specifications containing wildcards. The action then specified by the action code is taken only if the value of the coordinate is in the minmax range (or below maximum value if min is not given).
[...]
Therefore you also still get the error "Invalid extra data found".
## Gaussian 16
In this version, while opt(modredundant) is still maintained, a new feature is rolled out. As far as I can see, there are no changes to Gaussian 09 regarding the modredundant input section. You can find it live on the Gaussian homepage or a copy on The Internet Archive.
According to this, the above examples should still produce the same results.
The new feature is Generalized Internal Coordinates or GIC. See a description on the Gaussian homepage or on The Internet Archive. You can see the GIC in action in my answer on Constrain bond angle In Gaussian molecular structure optimization and on Gaussian: Relaxed scan with modredundant optimization and dummy atoms.
In theory you should be able to freeze a GIC if it meets certain criteria for example from the manual:
The following example removes an angle coordinate generated by default if ≥179.9°, substituting a linear bend:
A(1,2,3) Remove Min=179.9 ! Remove angle coordinate if too large.
L(1,2,3,0,-2) Add IfNot A(1,2,3) !+ the angle coordinate not active.
I have prepared a small example, but I am really not sure whether it worked as intended. For cis-butadiene the dihedral distortions should be small, so there isn't much visible. There are no syntax errors, but I am not really sure whether the section actually did what it should. Try it yourself and investigate further. (! introduces comments)
#p PM6 ! Semiempirical method for fast testing
opt ! Optimise
scf(xqc,MaxCycle=250) ! Switch to quadratic conv. if necessary, more cycles
symmetry(none) ! Do not use symmetry
optimisation run
[Free title card can have multiple lines]
0 1
C 0.000000 0.000000 0.000000
H 0.000000 0.000000 1.089000
H 1.026720 0.000000 -0.362996
H -0.513358 -0.889166 -0.363001
C -0.754249 1.306394 -0.533333
H -0.738680 2.222526 0.055206
C -1.531373 1.266770 -1.931370
H -2.044734 2.155933 -2.294370
C -1.615181 -0.098364 -2.761662
H -2.540288 -0.117779 -3.335874
H -0.765588 -0.163525 -3.439797
H -1.596709 -0.942933 -2.074433
sloppyfix=D(1,5,7,9) ! Define an alias for the dihedral
sloppyfix freeze min=5.0 max=355.0 ! freeze in interval
! Empty line above is important.
That the setup somewhat works can be seen with the following example of dihydrogen. First the atoms are separated by 400 pm (= 4.00 Å); the value should be frozen at 250 pm (= 2.50 Å); the PM6-optimised value is 76 pm (= 0.76 Å).
#p PM6 opt(AddGIC,MaxStep=5) scf(xqc,MaxCycle=250) symmetry(none)
Free title card
0 1
H -2.000000 0.000000 0.000000
H 2.000000 0.000000 0.000000
theHHbond=R(1,2)
theHHbond freeze max=2.5
! Empty line above is important.
Note that it will be frozen after the condition triggers, so it will generally not be frozen exactly at the value that triggers the condition. Here are the values and messages you'll find in the optimisation run:
----------------------------
! Initial Parameters !
! (Angstroms and Degrees) !
-------------------------- --------------------------
! Name Definition Value Derivative Info. !
--------------------------------------------------------------------------------
! theHHbond R(1,2) 4.0 estimate D2E/DX2 !
--------------------------------------------------------------------------------
[...]
Variable Old X -DE/DX Delta X Delta X Delta X New X
theHHbon 4.97283 -0.03796 0.00000 -1.00000 -1.00000 3.97283
Item Value Threshold Converged?
Maximum Force 0.037962 0.000450 NO
RMS Force 0.037962 0.000300 NO
Maximum Displacement 0.500000 0.001800 NO
RMS Displacement 0.707107 0.001200 NO
Predicted change in Energy=-4.653046D-02
Lowest energy point so far. Saving SCF results.
Internal coordinate with a value of 3.9728 a.u. has been conditionally frozen
because it is in Min/Max range.
The coordinate's label:
theHHbond
[...]
Variable Old X -DE/DX Delta X Delta X Delta X New X
theHHbon 3.97283 -0.07253 0.00000 0.00000 0.00000 3.97283
Item Value Threshold Converged?
Maximum Force 0.000000 0.000450 YES
RMS Force 0.000000 0.000300 YES
Maximum Displacement 0.000000 0.001800 YES
RMS Displacement 0.000000 0.001200 YES
Predicted change in Energy=-0.000000D+00
Optimization completed.
-- Stationary point found.
----------------------------
! Optimized Parameters !
! (Angstroms and Degrees) !
-------------------------- --------------------------
! Name Definition Value Derivative Info. !
--------------------------------------------------------------------------------
! theHHbond R(1,2) 2.1023 -DE/DX = -0.0725 !
--------------------------------------------------------------------------------
Lowest energy point so far. Saving SCF results.
Largest change from initial coordinates is atom 1 0.949 Angstoms.
Note that in this example there is only one coordinate, ergo the optimisation finishes when the freeze condition is triggered. In this case there is quite a significant deviation between the condition max=2.5 and the actual value at which it is frozen, 2.1023. I would expect it to behave better with larger systems.
|
2021-10-27 17:37:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4738178849220276, "perplexity": 3417.603132691387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588216.48/warc/CC-MAIN-20211027150823-20211027180823-00150.warc.gz"}
|
http://libros.duhnnae.com/2017/sep/150527565162-Z-2-spin-liquid-and-chiral-antiferromagnetic-phase-in-Hubbard-model-on-the-honeycomb-lattice-duality-between-Schwinger-fermion-and-Schwinger-boson-.php
|
# $Z_2$ spin liquid and chiral antiferromagnetic phase in Hubbard model on the honeycomb lattice: duality between Schwinger-fermion and Schwinger-boson representations - Condensed Matter > Strongly Correlated Electrons
Abstract: In our previous work, we identified the Sublattice-Pairing State (SPS) in the Schwinger-fermion representation as the spin liquid phase discovered in a recent numerical study on the honeycomb lattice. In this paper, we show that SPS is identical to the zero-flux $Z_2$ spin liquid in the Schwinger-boson representation found by Wang [Wang2010], by an explicit duality transformation. SPS is connected to an *unusual* antiferromagnetic ordered phase, which we term the chiral-antiferromagnetic (CAF) phase, by an O(4) critical point. The CAF phase breaks the SU(2) spin rotation symmetry completely and has three Goldstone modes. Our results indicate that there is likely a hidden phase transition between the CAF phase and the simple AF phase at large $U/t$. We propose numerical measurements to reveal the CAF phase and the hidden phase transition.
Author: Yuan-Ming Lu, Ying Ran
Source: https://arxiv.org/
|
2019-04-20 16:24:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.492061585187912, "perplexity": 5284.904556887393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420182858-00256.warc.gz"}
|
https://solvedlib.com/n/how-many-grams-of-the-excess-reactant-will-remain-after,20624788
|
# How many grams of the excess reactant will remain after the reaction is complete? mass of excess reactant:
###### Question:
How many grams of the excess reactant will remain after the reaction is complete? mass of excess reactant:
2023-01-29 18:29:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6282268166542053, "perplexity": 8893.924509305596}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499758.83/warc/CC-MAIN-20230129180008-20230129210008-00765.warc.gz"}
|
http://tanergungor.blogspot.com/2015/06/robot-navigation-q-learning-algorithm.html
|
## Thursday, June 11, 2015
### Robot Navigation - Q learning algorithm
Objective
The aim of this lab is to understand the reinforcement learning subject of the autonomous robots course and implement a reinforcement learning algorithm to learn a policy that moves a robot to a goal position. The algorithm is the Q-learning algorithm and it will be implemented in Matlab.
1 - Introduction
The reinforcement learning algorithm does not force the robot to plan a path using any path-planning algorithm; rather, the algorithm learns the optimal solution by randomly moving inside the map many times. It approximates a natural learning process, where an unknown problem is solved just by trial and error. The following sections briefly discuss the implementation and the results obtained by the algorithm.
Environment: The environment used for this lab experiment is shown below.
Figure-1: Environment used for the implementation.
States and Actions: The size of the given environment is 20$\times$14 = 280 states. The robot can only perform 4 different actions: ←, ↑, →, ↓. Thus, the size of the Q matrix is 280$\times$4 = 1120 cells.
Dynamics: The dynamics move the robot according to the selected action. The robot moves one cell per iteration in the direction of the chosen action, unless there is an obstacle or a wall in front of it, in which case it stays in the same position.
Reinforcement function: The reinforcement function assigns a reward to each cell: +1 for the goal cell and -1 otherwise.
2 - The Algorithm
Q-Learning is an Off-Policy algorithm for Temporal Difference learning. It learns the optimal policy even when actions are selected according to a more exploratory or even random policy. The pseudo-code we used for the implementation is shown below:
Figure-2: The pseudo code of Q-learning algorithm.
• $\alpha$ - the learning rate, set between 0 and 1. Setting it to 0 means that the Q-values are never updated, hence nothing is learned. Setting a high value such as 0.9 means that learning can occur quickly.
• $\gamma$ - discount factor, also set between 0 and 1. This models the fact that future rewards are worth less than immediate rewards. Mathematically, the discount factor needs to be set less than 1 for the algorithm to converge.
• $\max_{a}$ - the maximum Q-value attainable in the state following the current one, i.e. the estimated value of taking the optimal action thereafter.
This procedural approach can be translated into plain English steps as follows:
• Initialize the Q-values table, Q(s, a).
• Observe the current state, s.
• Choose an action, a, for that state based on one of the action selection policies explained in the next chapter ($\varepsilon$-soft, $\varepsilon$-greedy or softmax).
• Take the action, and observe the reward, r, as well as the new state, s'.
• Update the Q-value for the state using the observed reward and the maximum reward possible for the next state. The updating is done according to the formula and parameters described above.
• Set the state to the new state, and repeat the process until a terminal state is reached.
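As an illustration of these steps, here is a minimal, self-contained sketch in Python (not the original MATLAB code; the toy corridor environment, reward values, and parameter settings are placeholders chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 20x14 grid: a 5-cell corridor with the goal at cell 4.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.9, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Move one cell unless blocked by a wall; +1 at the goal, -1 otherwise."""
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = (s_next == n_states - 1)
    return s_next, (1.0 if done else -1.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Choose an action (epsilon-greedy).
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        # Take the action, observe the reward and the new state.
        s_next, r, done = step(s, a)
        # Update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # greedy policy; interior states learn 1 (right)
```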
2.1 - Action Selection Policies
As mentioned above, there are three common policies used for action selection. The aim of these policies is to balance the trade-off between exploitation and exploration, by not always exploiting what has been learnt so far.
• $\varepsilon$-greedy - most of the time the action with the highest estimated reward is chosen, called the greediest action. Every once in a while, say with a small probability $\varepsilon$, an action is selected at random. The action is selected uniformly, independent of the action-value estimates. This method ensures that if enough trials are done, each action will be tried an infinite number of times, thus ensuring optimal actions are discovered.
• $\varepsilon$-soft - very similar to $\varepsilon$-greedy. The best action is selected with probability $1 - \varepsilon$ and the rest of the time a random action is chosen uniformly.
• softmax - one drawback of $\varepsilon$-greedy and $\varepsilon$-soft is that they select random actions uniformly. The worst possible action is just as likely to be selected as the second best. Softmax remedies this by assigning a rank or weight to each of the actions, according to their action-value estimate. A random action is selected with regard to the weight associated with each action, meaning the worst actions are unlikely to be chosen. This is a good approach to take where the worst actions are very unfavourable.
It is not clear which of these policies produces the best results overall. The nature of the task will have some bearing on how well each policy influences learning. If the problem we are trying to solve is of a game-playing nature, against a human opponent, human factors may also be influential.
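For reference, the two main selection rules can be sketched as small functions over one row of Q-values (illustrative Python; the epsilon and temperature values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def epsilon_greedy(q_row, eps=0.1):
    """With probability eps pick uniformly at random, else the greedy action."""
    if rng.random() < eps:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

def softmax(q_row, tau=1.0):
    """Weight actions by exp(Q/tau); poor actions become unlikely, not impossible."""
    z = np.exp((q_row - np.max(q_row)) / tau)   # subtract max for stability
    p = z / z.sum()
    return int(rng.choice(len(q_row), p=p))

q = np.array([0.2, 0.5, -1.0, 0.4])
print(epsilon_greedy(q), softmax(q))
```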
3 - The Implementation
The algorithm has been implemented in MATLAB. The problem consists of finding the goal in a finite 2D environment that is closed and contains some obstacles, as shown in Figure 1. The map given with the lab manual has been used to implement this algorithm. The goal position is given as (18,3).
After implementing the algorithm, we got the following Q matrices (one per action, $Q_1$ through $Q_4$) for the given map.
$\tiny Q_1 = \begin{smallmatrix} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & -9.2943 & -9.2237 & -9.1390 & -9.0439 & -8.9383 & 0 & 0 & -8.5404 & -8.3806 & -8.2018 & -8.0029 & -7.7821 & -7.5369 & 0 & 0 & -0.3124 & 0.7610 & -0.3110 & 0 \\ & 0 & -9.2239 & -9.1390 & -9.0441 & -8.9384 & -8.8208 & 0 & 0 & -8.3815 & -8.2028 & -8.0037 & -7.7825 & -7.5369 & -7.2640 & 0 & 0 & -0.1000 & -0.2679 & -0.1000 & 0 \\ & 0 & -9.1390 & -9.0441 & -8.9383 & -8.8207 & -8.6900 & -8.5445 & -8.3830 & -8.2035 & -8.0042 & -7.7830 & -7.5370 & -7.2638 & -6.9605 & 0 & 0 & -0.4111 & -0.1000 & -0.3182 & 0 \\ & 0 & -9.0443 & -8.9386 & -8.8210 & -8.6903 & -8.5449 & -8.3835 & -8.2041 & -8.0048 & -7.7833 & -7.5372 & -7.2639 & -6.9603 & -6.6232 & 0 & 0 & -1.4003 & -0.4277 & -1.4244 & 0 \\ & 0 & -8.9396 & -8.8258 & -8.7018 & -8.5480 & -8.3945 & -8.2068 & -8.0173 & -7.7855 & -7.5375 & -7.2641 & -6.9603 & -6.6230 & -6.2482 & 0 & 0 & -2.2785 & -1.3779 & -2.2364 & 0 \\ & 0 & -9.0512 & -8.9513 & -8.8418 & 0 & 0 & 0 & 0 & 0 & -7.2641 & -6.9604 & -6.6231 & -6.2482 & -5.8315 & 0 & 0 & -2.9556 & -2.2357 & -3.0598 & 0 \\ & 0 & -9.1429 & -9.0484 & -8.9420 & 0 & 0 & 0 & 0 & 0 & -7.0100 & -6.6289 & -6.2864 & -5.8579 & -5.3980 & -4.9220 & -4.3757 & -3.6911 & -3.0204 & -3.6700 & 0 \\ & 0 & -9.2117 & -9.1366 & -9.0429 & -8.9376 & -8.8202 & -8.6898 & 0 & 0 & -7.3026 & -6.9989 & -6.6591 & -6.2971 & -5.9022 & -5.4483 & -4.8861 & -4.3715 & -3.6570 & -4.2932 & 0 \\ & 0 & -9.1386 & -9.0436 & -8.9381 & -8.8207 & -8.6899 & -8.5446 & 0 & 0 & -7.5762 & -7.2695 & -6.9658 & -6.6430 & -6.2820 & -5.8705 & -5.3831 & -4.9359 & -4.2940 & -4.9371 & 0 \\ & 0 & -9.0524 & -8.9505 & -8.8226 & -8.7125 & -8.5453 & -8.4023 & -8.2374 & -8.0203 & -7.8171 & -7.5767 & -7.3031 & -6.9796 & -6.6225 & 0 & -5.8293 & -5.4190 & -4.9325 & -5.4252 & 0 \\ & 0 & -9.1498 & -9.0542 & -8.9570 & -8.8325 & -8.6999 & -8.5490 & -8.4054 & -8.2301 & -8.0035 & -7.8082 & -7.5710 & -7.2651 & 0 & 0 & -6.2765 & -5.9030 & -5.3661 & -5.8961 & 0 \\ & 0 & -9.2214 & -9.1390 & -9.0480 & -8.9367 & -8.8243 & -8.6869 & -8.5480 & -8.3945 & -8.2068 & -8.0173 & -7.7855 & 0 & 0 & 0 & -6.6225 & -6.2654 & -5.8288 & -6.2654 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix}$
$\tiny Q_2 = \begin{smallmatrix} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & -9.3078 & -9.2723 & -9.2089 & -9.1246 & -9.0379 & 0 & 0 & -8.6219 & -8.5167 & -8.3643 & -8.1977 & -7.9979 & -7.7741 & 0 & 0 & -1.2748 & -0.3031 & -1.2584 & 0 \\ & 0 & -9.3148 & -9.2640 & -9.2199 & -9.1365 & -9.0427 & 0 & 0 & -8.6620 & -8.5348 & -8.3782 & -8.2008 & -8.0022 & -7.7811 & 0 & 0 & -1.2781 & -0.2737 & -1.2751 & 0 \\ & 0 & -9.2691 & -9.2023 & -9.1372 & -9.0436 & -8.9380 & -8.6892 & -8.5436 & -8.5425 & -8.3813 & -8.2017 & -8.0030 & -7.7816 & -7.5366 & 0 & 0 & -0.3165 & 0.7594 & -0.3164 & 0 \\ & 0 & -9.2139 & -9.1368 & -9.0433 & -8.9373 & -8.8205 & -8.6892 & -8.5439 & -8.3823 & -8.2027 & -8.0040 & -7.7822 & -7.5358 & -7.2636 & 0 & 0 & -1.2849 & -0.3165 & -1.2845 & 0 \\ & 0 & -9.1371 & -9.0435 & -8.9381 & -8.8206 & -8.6900 & -8.5444 & -8.3829 & -8.2035 & -8.0043 & -7.7827 & -7.5365 & -7.2631 & -6.9603 & 0 & 0 & -2.1564 & -1.2849 & -2.1555 & 0 \\ & 0 & -9.0439 & -8.9387 & -8.8213 & 0 & 0 & 0 & 0 & 0 & -7.7818 & -7.5368 & -7.2635 & -6.9599 & -6.6227 & 0 & 0 & -2.9407 & -2.1564 & -2.9394 & 0 \\ & 0 & -9.1382 & -9.0441 & -8.9387 & 0 & 0 & 0 & 0 & 0 & -7.5357 & -7.2631 & -6.9600 & -6.6228 & -6.2481 & -5.3675 & -4.8535 & -3.6466 & -2.9407 & -3.6447 & 0 \\ & 0 & -9.2127 & -9.1368 & -9.0431 & -9.0333 & -8.9321 & -8.8181 & 0 & 0 & -7.2632 & -6.9597 & -6.6223 & -6.2472 & -5.8305 & -5.3675 & -4.8531 & -4.2817 & -3.6466 & -4.2790 & 0 \\ & 0 & -9.2173 & -9.2076 & -9.1328 & -9.0412 & -8.9365 & -8.8201 & 0 & 0 & -7.5363 & -7.2630 & -6.9593 & -6.6217 & -6.2466 & -5.8299 & -5.3668 & -4.8524 & -4.2817 & -4.8496 & 0 \\ & 0 & -9.2125 & -9.1378 & -9.0435 & -8.9376 & -8.8203 & -8.6896 & -8.3827 & -8.2032 & -7.7824 & -7.5363 & -7.2628 & -6.9590 & -6.6215 & 0 & -5.8256 & -5.3639 & -4.8527 & -5.3608 & 0 \\ & 0 & -9.1382 & -9.0433 & -8.9376 & -8.8196 & -8.6889 & -8.5436 & -8.3820 & -8.2026 & -8.0033 & -7.7816 & -7.5357 & -7.2625 & 0 & 0 & -6.2373 & -5.8250 & -5.3657 & -5.8206 & 0 \\ & 0 & -9.2149 & -9.1346 & -9.0409 & -8.9353 & -8.8179 & -8.6868 & -8.5417 & -8.3801 & -8.2008 & -8.0021 & -7.7813 & 0 & 0 & 0 & -6.6048 & -6.2378 & -5.8254 & -6.2298 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix}$
$\tiny Q_3 = \begin{smallmatrix} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & -9.3002 & -9.3124 & -9.2598 & -9.2151 & -9.1209 & 0 & 0 & -8.6505 & -8.6573 & -8.5303 & -8.3772 & -8.1943 & -7.9950 & 0 & 0 & -1.2693 & -1.2596 & -0.3107 & 0 \\ & 0 & -9.2481 & -9.2853 & -9.2197 & -9.1374 & -9.0432 & 0 & 0 & -8.5306 & -8.5373 & -8.3796 & -8.2016 & -8.0011 & -7.7814 & 0 & 0 & -0.3150 & -0.2689 & 0.7594 & 0 \\ & 0 & -9.2160 & -9.2179 & -9.1371 & -9.0433 & -8.9380 & -8.8203 & -8.6893 & -8.5437 & -8.3821 & -8.2020 & -8.0034 & -7.7824 & -7.5360 & 0 & 0 & -1.2842 & -1.2848 & -0.3164 & 0 \\ & 0 & -9.1343 & -9.1378 & -9.0436 & -8.9379 & -8.8203 & -8.6896 & -8.5438 & -8.3828 & -8.2035 & -8.0039 & -7.7824 & -7.5360 & -7.2632 & 0 & 0 & -2.1557 & -2.1558 & -1.2844 & 0 \\ & 0 & -9.0424 & -9.0443 & -8.9386 & -8.8209 & -8.6902 & -8.5447 & -8.3835 & -8.2040 & -8.0048 & -7.7829 & -7.5367 & -7.2634 & -6.9596 & 0 & 0 & -2.9399 & -2.9403 & -2.1557 & 0 \\ & 0 & -9.1230 & -9.1254 & -9.0429 & 0 & 0 & 0 & 0 & 0 & -7.5360 & -7.5367 & -7.2637 & -6.9598 & -6.6225 & 0 & 0 & -3.6461 & -3.6457 & -2.9394 & 0 \\ & 0 & -9.2024 & -9.1912 & -9.1148 & 0 & 0 & 0 & 0 & 0 & -7.2627 & -7.2635 & -6.9602 & -6.6227 & -6.2481 & -5.8312 & -5.3682 & -4.8536 & -4.2807 & -3.6447 & 0 \\ & 0 & -9.2354 & -9.2138 & -9.1890 & -9.1226 & -9.0390 & -8.9321 & 0 & 0 & -7.5343 & -7.5350 & -7.2617 & -6.9586 & -6.6214 & -6.2461 & -5.8289 & -5.3654 & -4.8515 & -4.2788 & 0 \\ & 0 & -9.1934 & -9.2098 & -9.1368 & -9.0430 & -8.9372 & -8.8200 & 0 & 0 & -7.7816 & -7.7810 & -7.5351 & -7.2622 & -6.9580 & -6.6195 & -6.2434 & -5.8279 & -5.3634 & -4.8496 & 0 \\ & 0 & -9.1304 & -9.1343 & -9.0433 & -8.9378 & -8.8207 & -8.6897 & -8.5443 & -8.3827 & -8.2031 & -8.0030 & -7.7807 & -7.5350 & -7.2598 & 0 & -6.1654 & -6.2230 & -5.8213 & -5.3612 & 0 \\ & 0 & -9.1971 & -9.1956 & -9.1300 & -9.0410 & -8.9351 & -8.8179 & -8.6866 & -8.5415 & -8.3807 & -8.1995 & -8.0015 & -7.7783 & 0 & 0 & -6.5028 & -6.4498 & -6.2222 & -5.8212 & 0 \\ & 0 & -9.2355 & -9.2192 & -9.1863 & -9.0734 & -9.0293 & -8.9198 & -8.8052 & -8.6773 & -8.5275 & -8.3586 & -8.1660 & 0 & 0 & 0 & -6.7050 & -6.7333 & -6.5172 & -6.2306 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix}$
$\tiny Q_4 = \begin{smallmatrix} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & -9.2953 & -9.2238 & -9.1390 & -9.0438 & -9.0383 & 0 & 0 & -8.5405 & -8.3804 & -8.2017 & -8.0028 & -7.7820 & -7.7789 & 0 & 0 & -0.3121 & -1.2707 & -1.2546 & 0 \\ & 0 & -9.2240 & -9.1391 & -9.0441 & -8.9384 & -8.9371 & 0 & 0 & -8.3815 & -8.2028 & -8.0037 & -7.7824 & -7.5369 & -7.5359 & 0 & 0 & 0.7594 & -0.2673 & -0.3122 & 0 \\ & 0 & -9.1390 & -9.0441 & -8.9383 & -8.8208 & -8.6900 & -8.5445 & -8.3830 & -8.2035 & -8.0043 & -7.7830 & -7.5371 & -7.2639 & -7.2629 & 0 & 0 & -0.3165 & -1.2843 & -1.2825 & 0 \\ & 0 & -9.0443 & -8.9385 & -8.8210 & -8.6902 & -8.5449 & -8.3834 & -8.2041 & -8.0048 & -7.7833 & -7.5373 & -7.2639 & -6.9603 & -6.9599 & 0 & 0 & -1.2849 & -2.1552 & -2.1530 & 0 \\ & 0 & -8.9390 & -8.8214 & -8.6907 & -8.5454 & -8.3840 & -8.2046 & -8.0052 & -7.7836 & -7.5375 & -7.2641 & -6.9603 & -6.6230 & -6.6224 & 0 & 0 & -2.1564 & -2.9391 & -2.9376 & 0 \\ & 0 & -9.0439 & -8.9388 & -8.9360 & 0 & 0 & 0 & 0 & 0 & -7.2641 & -6.9605 & -6.6230 & -6.2481 & -6.2473 & 0 & 0 & -2.9407 & -3.6442 & -3.6420 & 0 \\ & 0 & -9.1381 & -9.0441 & -9.0322 & 0 & 0 & 0 & 0 & 0 & -6.9606 & -6.6233 & -6.2483 & -5.8315 & -5.3684 & -4.8538 & -4.2820 & -3.6466 & -4.2794 & -4.2763 & 0 \\ & 0 & -9.2124 & -9.1367 & -9.0429 & -8.9375 & -8.8203 & -8.8191 & 0 & 0 & -7.2631 & -6.9595 & -6.6221 & -6.2471 & -5.8304 & -5.3675 & -4.8531 & -4.2815 & -4.8481 & -4.8453 & 0 \\ & 0 & -9.1385 & -9.0436 & -8.9381 & -8.8207 & -8.6899 & -8.6892 & 0 & 0 & -7.5363 & -7.2630 & -6.9593 & -6.6217 & -6.2466 & -5.8297 & -5.3667 & -4.8524 & -5.3621 & -5.3403 & 0 \\ & 0 & -9.0443 & -8.9387 & -8.8210 & -8.6901 & -8.5447 & -8.3831 & -8.2036 & -8.0041 & -7.7824 & -7.5362 & -7.2628 & -6.9590 & -6.9557 & 0 & -5.8252 & -5.3645 & -5.8162 & -5.8043 & 0 \\ & 0 & -9.1381 & -9.0431 & -8.9374 & -8.8196 & -8.6888 & -8.5434 & -8.3820 & -8.2025 & -8.0031 & -7.7817 & -7.5356 & -7.5316 & 0 & 0 & -6.2374 & -5.8254 & -6.2076 & -6.1673 & 0 \\ & 0 & -9.2148 & -9.1344 & -9.0405 & -8.9353 & -8.8175 & -8.6866 & -8.5413 & -8.3800 & -8.2009 & -8.0020 & -7.9625 & 0 & 0 & 0 & -6.6043 & -6.2370 & -6.4064 & -6.4853 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix}$
The resultant optimal policy is shown below:
$Policy = \begin{matrix} & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o \\ & o & \bigtriangledown & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & o & o & \bigtriangledown & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangledown & o & o & \triangleright & \bigtriangledown & \triangleleft & o \\ & o & \bigtriangledown & \bigtriangledown & \triangleright & \bigtriangledown & \bigtriangledown & o & o & \bigtriangledown & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & \bigtriangledown & o & o & \triangleright & G & \triangleleft & o \\ & o & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & \triangleright & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & \bigtriangledown & \bigtriangledown & \bigtriangledown & \bigtriangledown & o & o & \triangleright & \bigtriangleup & \triangleleft & o \\ & o & \bigtriangledown & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangledown & \triangleright & \bigtriangledown & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & o & o & \bigtriangleup & \bigtriangleup & \triangleleft & o \\ & o & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & \bigtriangledown & o & o & \triangleright & \bigtriangleup & \bigtriangleup & o \\ & o & \bigtriangleup & \bigtriangleup & \bigtriangleup & o & o & o & o & o & \triangleright & \bigtriangledown & \triangleright & \triangleright & \bigtriangledown & o & o & \bigtriangleup & \bigtriangleup & \bigtriangleup & o \\ & o & \triangleright & \triangleright & \bigtriangleup & o & o & o & o & o & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & \bigtriangleup & \triangleleft & o \\ & o & \bigtriangledown & \bigtriangledown & \bigtriangledown & \triangleright & \bigtriangledown & \bigtriangledown & o & o & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & \triangleleft & o \\ & o & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangledown & \bigtriangledown & o & o & \bigtriangleup & \bigtriangleup & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & \bigtriangleup & \bigtriangleup & o \\ & o & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & o & \triangleright & \bigtriangleup & \bigtriangleup & \bigtriangleup & o \\ & o & \triangleright & \triangleright & \triangleright & \bigtriangleup & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & \triangleright & \bigtriangleup & o & o & \bigtriangleup & \bigtriangleup & \bigtriangleup & \bigtriangleup & o \\ & o & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \triangleright & \bigtriangleup & \triangleright & \bigtriangleup & o & o & o & \triangleright & \triangleright & \bigtriangleup & \bigtriangleup & o \\ & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o & o \\ \end{matrix}$
A graphical representation of the State Value Function, V, is shown below:
Figure-3: Graphical representation of the State Value Function, V.
$\tiny V = \begin{smallmatrix} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & -9.2943 & -9.2237 & -9.1390 & -9.0438 & -8.9383 & 0 & 0 & -8.5404 & -8.3804 & -8.2017 & -8.0028 & -7.7820 & -7.5369 & 0 & 0 & -0.3121 & 0.7610 & -0.3107 & 0 \\ & 0 & -9.2239 & -9.1390 & -9.0441 & -8.9384 & -8.8208 & 0 & 0 & -8.3815 & -8.2028 & -8.0037 & -7.7824 & -7.5369 & -7.2640 & 0 & 0 & 0.7594 & -0.2673 & 0.7594 & 0 \\ & 0 & -9.1390 & -9.0441 & -8.9383 & -8.8207 & -8.6900 & -8.5445 & -8.3830 & -8.2035 & -8.0042 & -7.7830 & -7.5370 & -7.2638 & -6.9605 & 0 & 0 & -0.3165 & 0.7594 & -0.3164 & 0 \\ & 0 & -9.0443 & -8.9385 & -8.8210 & -8.6902 & -8.5449 & -8.3834 & -8.2041 & -8.0048 & -7.7833 & -7.5372 & -7.2639 & -6.9603 & -6.6232 & 0 & 0 & -1.2849 & -0.3165 & -1.2844 & 0 \\ & 0 & -8.9390 & -8.8214 & -8.6907 & -8.5454 & -8.3840 & -8.2046 & -8.0052 & -7.7836 & -7.5375 & -7.2641 & -6.9603 & -6.6230 & -6.2482 & 0 & 0 & -2.1564 & -1.2849 & -2.1555 & 0 \\ & 0 & -9.0439 & -8.9387 & -8.8213 & 0 & 0 & 0 & 0 & 0 & -7.2641 & -6.9604 & -6.6230 & -6.2481 & -5.8315 & 0 & 0 & -2.9407 & -2.1564 & -2.9394 & 0 \\ & 0 & -9.1381 & -9.0441 & -8.9387 & 0 & 0 & 0 & 0 & 0 & -6.9606 & -6.6233 & -6.2483 & -5.8315 & -5.3684 & -4.8538 & -4.2820 & -3.6466 & -2.9407 & -3.6447 & 0 \\ & 0 & -9.2117 & -9.1366 & -9.0429 & -8.9375 & -8.8202 & -8.6898 & 0 & 0 & -7.2631 & -6.9595 & -6.6221 & -6.2471 & -5.8304 & -5.3675 & -4.8531 & -4.2815 & -3.6466 & -4.2788 & 0 \\ & 0 & -9.1385 & -9.0436 & -8.9381 & -8.8207 & -8.6899 & -8.5446 & 0 & 0 & -7.5363 & -7.2630 & -6.9593 & -6.6217 & -6.2466 & -5.8297 & -5.3667 & -4.8524 & -4.2817 & -4.8496 & 0 \\ & 0 & -9.0443 & -8.9387 & -8.8210 & -8.6901 & -8.5447 & -8.3831 & -8.2036 & -8.0041 & -7.7824 & -7.5362 & -7.2628 & -6.9590 & -6.6215 & 0 & -5.8252 & -5.3639 & -4.8527 & -5.3608 & 0 \\ & 0 & -9.1381 & -9.0431 & -8.9374 & -8.8196 & -8.6888 & -8.5434 & -8.3820 & -8.2025 & -8.0031 & -7.7816 & -7.5356 & -7.2625 & 0 & 0 & -6.2373 & -5.8250 & -5.3657 & -5.8206 & 0 \\ & 0 & -9.2148 & -9.1344 & -9.0405 & -8.9353 & -8.8175 & -8.6866 & -8.5413 & -8.3800 & -8.2008 & -8.0020 & -7.7813 & 0 & 0 & 0 & -6.6043 & -6.2370 & -5.8254 & -6.2298 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{smallmatrix}$
The evolution of effectiveness has been computed every 250 episodes of the main loop. The rewards are then averaged over 100 episodes. The graph shows randomness locally, but globally the number of required iterations decreases as the number of episodes increases. Eventually it saturates at some point, which indicates the minimum number of iterations needed for this particular map. A graphical representation of the evolution of the effectiveness is shown below:
Figure-4: Graphical representation of the evolution of the effectiveness.
4 - The results & conclusion
This laboratory work was very helpful for understanding the theoretical background of the Q-learning algorithm. It helped me explore more information about it and how to apply it to robot learning.
|
2017-02-27 04:22:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5715955495834351, "perplexity": 623.2301389659292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00164-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/speed-of-a-soliton-and-its-energy.304182/
|
# Speed of a soliton and its energy
1. Apr 1, 2009
### somasimple
Hi All,
A one-dimensional soliton will have the mathematical form $f(x - ct)$, where c is the speed of propagation and f represents some envelope function.
Suppose that c1 is one speed and c2 another, where c2 > c1.
If E1 is the energy needed to maintain the speed and shape of the soliton.
What happens when the speed changes from c1 to c2?
1/ if the shape is constant?
2/ if the shape is reduced?
3/ Is the energy needed increased?
|
2017-09-22 23:10:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5277805328369141, "perplexity": 1392.7528553599902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689373.65/warc/CC-MAIN-20170922220838-20170923000838-00367.warc.gz"}
|
https://math.stackexchange.com/questions/3666427/maximum-number-of-subrectangles-that-lie-completely-within-a-rectangle
|
# Maximum number of subrectangles that lie completely within a rectangle
Suppose I have $$n$$ rectangles in the 2D plane, as shown on the left. I am interested in partitioning the region inside these rectangles into disjoint sub-rectangles and counting the number of resulting subrectangles. Toward that end, I create grid lines along the edges of the rectangles (i.e., $$2n$$ vertical lines and $$2n$$ horizontal lines), as shown on the right.
In this specific example, there are $$n=5$$ rectangles and 81 sub-rectangles in the grid, 58 of which lie in the union of the original $$n$$ rectangles, shown in yellow below.
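A quick way to reproduce such a count (my own illustrative Python, not part of the original question) is to build the coordinate grid and test each grid cell's midpoint against the rectangles:

```python
from itertools import product

def count_covered_cells(rects):
    """rects: list of (x1, y1, x2, y2) with x1 < x2, y1 < y2.
    Returns the number of grid sub-rectangles inside the union."""
    xs = sorted({x for x1, _, x2, _ in rects for x in (x1, x2)})
    ys = sorted({y for _, y1, _, y2 in rects for y in (y1, y2)})
    count = 0
    for i, j in product(range(len(xs) - 1), range(len(ys) - 1)):
        # A grid cell lies in the union iff its midpoint does.
        mx, my = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
        if any(x1 <= mx <= x2 and y1 <= my <= y2 for x1, y1, x2, y2 in rects):
            count += 1
    return count

# One big rectangle containing two smaller ones: every grid cell is covered.
print(count_covered_cells([(0, 0, 10, 10), (1, 1, 4, 4), (5, 5, 8, 8)]))  # 25
```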
In the general case:
1. What is the maximum number of subrectangles that can lie fully inside an original rectangle? A loose upper bound follows from observing that there are exactly $(2n-1) \times (2n-1)$ subrectangles in the grid, all of which are disjoint, and each grid subrectangle is either fully contained in or fully disjoint from each original rectangle. Is there a tighter bound?
2. What is the naive bound in $d$ dimensions? Is there a tighter bound? Specifically, we are now counting the number of disjoint $d$-dimensional subcubes in the grid that partition the union of $n$ $d$-dimensional cubes.
The maximum possible is in fact $$(2n-1)^2$$, i.e. the trivial bound is tight. This can be achieved by:
• Having a big rectangle that encompass all others,
• Giving all rectangles different $$x$$- and $$y$$- coordinates for all their edges.
In short, the grid of $(2n-1) \times (2n-1)$ sub-rectangles lies entirely inside the big one.
The solution clearly generalizes to $$d$$ dimensions, achieving the trivial bound of $$(2n-1)^d$$.
The more interesting question is what numbers are possible. E.g. in the $n=5$ case, the above showed $(2n-1)^2 = 9^2 = 81$ is possible. I can also imagine $79$ in my head. But I haven't found a way to get to exactly $80$ (and I think it may be impossible)...
• oh whoops, your OP had $2(n-1)$ (probably a typo) and i blindly copied it! the relevant term should be $(2n -1)$, not $2(n-1)$. fixed now. – antkam May 11 at 4:10
|
2020-07-08 08:34:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9014084935188293, "perplexity": 193.17744890824346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896905.46/warc/CC-MAIN-20200708062424-20200708092424-00007.warc.gz"}
|
http://oeis.org/wiki/Prime_powers
|
# Prime powers
Prime powers are prime numbers raised to powers. For example, 729 is a prime power, being $3^6$. Technically, the primes themselves are prime powers too, with exponent 1, but generally an exponent of 2 or greater is meant. (Along similar lines, the number 1 is technically a prime power as well, with the exponent being 0.)
## Number of factorizations of n into prime powers greater than 1
$a(n) = \prod_{i=1}^{\omega(n)} p(\alpha_i), \,$
where $\scriptstyle \omega(n) \,$ is the number of distinct prime factors of $\scriptstyle n \,$, $\scriptstyle p(n) \,$ is the number of partitions of $\scriptstyle n \,$ and the $\scriptstyle \alpha_i \,$ are the exponents of the distinct prime factors $\scriptstyle p_i \,$ of $\scriptstyle n \,$
$n = \prod_{i=1}^{\omega(n)} p_i^{\alpha_i} \,$
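This formula can be evaluated directly (an illustrative Python sketch, not part of the wiki page): factor n by trial division, then multiply the partition numbers of the exponents.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of partitions of n into parts of size <= max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if max_part == 0:
        return 0
    if max_part > n:
        max_part = n
    # Either use a part of size max_part, or use only smaller parts.
    return partitions(n - max_part, max_part) + partitions(n, max_part - 1)

def prime_exponents(n):
    """Exponents in the prime factorization of n (trial division)."""
    exps, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return exps

def a000688(n):
    """A000688: product of p(alpha_i) over the prime exponents of n."""
    result = 1
    for alpha in prime_exponents(n):
        result *= partitions(alpha)
    return result

print([a000688(n) for n in range(1, 17)])
# -> [1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5]
```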
## Sequences
### Sequences for p^k (p prime, k >= 0)
Prime powers p^k (p prime, k >= 0). (A000961)
{1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, ...}
a(1) = 1; for n > 1, a(n) = prime root of n-th prime power (A025473)
{1, 2, 3, 2, 5, 7, 2, 3, 11, 13, 2, 17, 19, 23, 5, 3, 29, 31, 2, 37, 41, 43, 47, 7, 53, 59, 61, 2, 67, 71, 73, 79, 3, 83, 89, 97, 101, 103, 107, 109, 113, 11, 5, 127, 2, 131, 137, 139, 149, 151, 157, 163, 167, 13, ...}
Exponent of n-th prime power (A000961). (A025474)
{0, 1, 1, 2, 1, 1, 3, 2, 1, 1, 4, 1, 1, 1, 2, 3, 1, 1, 5, 1, 1, 1, 1, 2, 1, 1, 1, 6, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 7, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5, 1, 8, 1, 1, 1, 1, ...}
If n = k-th prime power then k else 0, n >= 1. (A095874)
{1, 2, 3, 4, 5, 0, 6, 7, 8, 0, 9, 0, 10, 0, 0, 11, 12, 0, 13, 0, 0, 0, 14, 0, 15, 0, 16, 0, 17, 0, 18, 19, 0, 0, 0, 0, 20, 0, 0, 0, 21, 0, 22, 0, 0, 0, 23, 0, 24, 0, 0, 0, 25, 0, 0, 0, 0, 0, 26, 0, 27, 0, 0, 28, 0, 0, 29, ...}
1 if n, n >= 1, is a prime power p^k (k >= 0), otherwise 0. (A010055)
{1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, ...}
Number of prime powers <= n, n >= 1. (A065515)
{1, 2, 3, 4, 5, 5, 6, 7, 8, 8, 9, 9, 10, 10, 10, 11, 12, 12, 13, 13, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 19, 19, 19, 19, 19, 20, 20, 20, 20, 21, 21, 22, 22, 22, 22, 23, 23, 24, 24, 24, 24, 25, 25, 25, ...}
### Sequences for p^k (p prime, k >= 1)
If n, n >= 1, is a prime power p^k, k >= 1, then n, otherwise 1. (A100994)
{1, 2, 3, 4, 5, 1, 7, 8, 9, 1, 11, 1, 13, 1, 1, 16, 17, 1, 19, 1, 1, 1, 23, 1, 25, 1, 27, 1, 29, 1, 31, 32, 1, 1, 1, 1, 37, 1, 1, 1, 41, 1, 43, 1, 1, 1, 47, 1, 49, 1, 1, 1, 53, 1, 1, 1, 1, 1, 59, 1, 61, 1, 1, 64, 1, 1, 67, ...}
a(n) = 1 unless n, n >= 1, is a prime or prime power when a(n) = the prime in question (exponential of Mangoldt function M(n), which is log(p) if n=p^k otherwise 0). (A014963)
{1, 2, 3, 2, 5, 1, 7, 2, 3, 1, 11, 1, 13, 1, 1, 2, 17, 1, 19, 1, 1, 1, 23, 1, 5, 1, 3, 1, 29, 1, 31, 2, 1, 1, 1, 1, 37, 1, 1, 1, 41, 1, 43, 1, 1, 1, 47, 1, 7, 1, 1, 1, 53, 1, 1, 1, 1, 1, 59, 1, 61, 1, 1, 2, 1, 1, 67, ...}
If n, n >= 1, is a prime power p^k, k >= 1, then k, otherwise 0. (A100995)
{0, 1, 1, 2, 1, 0, 1, 3, 2, 0, 1, 0, 1, 0, 0, 4, 1, 0, 1, 0, 0, 0, 1, 0, 2, 0, 3, 0, 1, 0, 1, 5, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 6, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, ...}
Number of factorizations of n into prime powers greater than 1; number of Abelian groups of order n. (A000688)
{1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5, 1, 2, 1, 2, 1, 1, 1, 3, 2, 1, 3, 2, 1, 1, 1, 7, 1, 1, 1, 4, 1, 1, 1, 3, 1, 1, 1, 2, 2, 1, 1, 5, 2, 2, 1, 2, 1, 3, 1, 3, 1, 1, 1, 2, 1, 1, 2, 11, 1, 1, 1, 2, 1, 1, 1, 6, 1, 1, 2, 2, ...}
Number of prime powers <= n, n >= 1, with exponents > 0. (A025528)
{0, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 8, 9, 9, 9, 10, 11, 11, 12, 12, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 18, 18, 18, 18, 18, 19, 19, 19, 19, 20, 20, 21, 21, 21, 21, 22, 22, 23, 23, 23, 23, 24, 24, 24, 24, ...}
### Sequences for p^k (p prime, k = 0 or k >= 2)
Prime powers p^k, k = 0 or k >= 2, thus excluding the primes, n >= 1. (A025475)
{1, 4, 8, 9, 16, 25, 27, 32, 49, 64, 81, 121, 125, 128, 169, 243, 256, 289, 343, 361, 512, 529, 625, 729, 841, 961, 1024, 1331, 1369, 1681, 1849, 2048, 2187, 2197, 2209, 2401, 2809, 3125, 3481, ...}
|
2013-05-18 13:23:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18872180581092834, "perplexity": 16.273074184205772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382398/warc/CC-MAIN-20130516092622-00001-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://mathematica.stackexchange.com/questions/160008/solve-discretized-partial-differential-equation?noredirect=1
|
# Solve discretized partial differential equation
I want to numerically solve the following exemplary partial differential equation in discretized form. My question is more about the technique how one would implement this in Mathematica (not about a specific solution to the equation). The discretized equation reads as follows:
f[n+1,j]=f[n,j]+dt*(f[n,j]+j*(f[n,j+1]-f[n,j]))
where n indexes the time step (time n*dt) and j the spatial step (position j*dx).
Note that the initial function f[0,j] shall be given for all j (one can restrict it e.g. to the interval j=[-100,100]).
I tried to implement it with FoldList[] but failed to match the dimensions of my array. Is FoldList[] the wrong idea here, or how could one do it?
Help is very much appreciated. Thank you!
In principle, I could also implement it with loops. However, I wonder whether this is the most performant way.
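For concreteness, this is the kind of time-stepping loop I have in mind, sketched here in Python/NumPy rather than Mathematica (the grid, dt, initial condition, and right-edge boundary treatment are all assumptions):
import numpy as np

dt, steps = 1e-3, 1000
j = np.arange(-100, 101)          # spatial indices (assumed grid)
f = np.exp(-(j / 10.0) ** 2)      # assumed initial condition f[0, j]

for _ in range(steps):
    df = np.zeros_like(f)
    df[:-1] = f[1:] - f[:-1]      # forward difference f[n, j+1] - f[n, j]
    # df[-1] stays 0: assumed boundary condition at the right edge
    f = f + dt * (f + j * df)     # f[n+1, j] = f[n, j] + dt*(f[n, j] + j*(...))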
• Why reinvent the wheel when NDSolve exists and can handle PDEs? – Chris K Nov 15 '17 at 19:41
• You are right, but as far as I know NDSolve won't apply to the equation I need to solve eventually (integro-differential equation). – Display Name Nov 15 '17 at 19:43
• You are probably right. Downloaded the .nb file, but where is the code? – Display Name Nov 15 '17 at 20:00
• @DisplayName Depending on where the integral is, you might still be able to use the method of lines to convert your integro-differential equation into a large set of ODEs, which NDSolve will handle better than your implementation of Euler's method. – Chris K Nov 15 '17 at 20:28
• Yeah, I tried this already to transform it into a system of ODEs, but wasn't successful either. Maybe I need to think a little bit more about it. – Display Name Nov 15 '17 at 20:32
|
2019-09-18 12:39:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.341985285282135, "perplexity": 621.5412528037054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00146.warc.gz"}
|
https://quant.stackexchange.com/questions/29660/using-a-constant-as-a-numeraire/29666
|
# Using a Constant as a Numeraire
Please provide steps to justify the below.
1) Can we use a constant as a numeraire?
Related Question: Scaling Stock Price and Strike etc. by a Constant
The rest of standard Geometric Brownian Motion and Black Scholes assumptions apply.
• If you prefer thinking in millions of dollars rather than in dollars, you can. Aug 16 '16 at 6:13
• Suppose we can do it. Please continue. – user16651 Aug 16 '16 at 6:50
• I am not wholly sure what the question is. Is it "can we do martingale pricing with $S_t/N_t$ a martingale when $N_t$ is a constant?" The answer to that is no! Aug 17 '16 at 6:16
• @MarkJoshi Thanks for your comment. I think I need to make this two questions. 1) Can a constant be a numeraire? 2) What is the impact on option prices when the underlying price and strike are scaled by a constant? Aug 17 '16 at 11:09
• @MarkJoshi What's the difference between choosing a constant as your numeraire, and choosing a hypothetical instrument with zero drift and zero volatility? – will Aug 17 '16 at 13:13
Either $r=0$, in which case $B_t$ is constant and is a valid numeraire (as is any multiple of it),
or $r \neq 0$, in which case an asset of constant value would give an arbitrage, since we could take the position $$B_t - N_t$$ with $B_0 = N_0$ and collect a riskless profit (or the opposite position if $r<0$), and so it would be a very flawed model.
A Numeraire must be a tradeable asset. If you can find a constant tradeable asset, then yes, a constant can be used as a numeraire.
• Can we assume that there is a traded asset with constant price? Surely we can buy and sell something at the same value (though it defeats the purpose of a trade, which is to buy and sell for a profit). One instance where I can think of such a constant-price asset being used is as a filler within a bigger trade to make up part of the value. Please let me know if this would be a valid assumption. Aug 17 '16 at 4:22
• This is the right answer. Furthermore such a tradeable asset will not exist, unless the interest rate is zero. The accepted answer does not make any sense to me. Aug 17 '16 at 6:26
• @user249613 Even a dollar tomorrow is not worth the same as a dollar today unless interest rates are exactly zero. As Kiwiakos points out the asset will not exist. Another way to think about it is that option values will be the same regardless of the numeraire (redo the BS derivation using the stock as a numeraire to demonstrate this). The only way for the price to coincide when using a constant numeraire is if the constant asset itself is the bond; that is, a bond with zero interest rates. Aug 17 '16 at 11:33
• @user903 Thanks for the very helpful answer. I see the issue with my question and perhaps the need for two subquestions .. I have edited the question now to reflect that. Aug 17 '16 at 11:34
Actually, all investments, retirement accounts, mutual fund accounts, utility bills, and supermarket price listings are reported or stated in the Constant Numeraire, which may also be called the Dollar-kept-under-the-mattress Numeraire.
It is the most widely (indeed the only) Numeraire used in real life.
How nice it would be if my retirement account or mutual fund account reported my accumulated wealth in the Bank Account Numeraire. Or at least in the Inflation Numeraire. Even better, in the Nominal GDP Numeraire.
However, all reporting is necessarily required to be made in the Constant "Dollar-under-the-mattress" Numeraire
For ease of derivatives pricing, we change the numeraire from the Constant Numeraire to the Bank Account Numeraire, or the T-forward measure, or whatever numeraire is convenient, but we always convert the computed price back to the Constant "Dollar-under-the-mattress" Numeraire, because that is the value that mutual funds, retirement funds, and investments need to report.
Use a constant to scale the numeraire, and S and K scale by the same factor; the volatility (of relative returns), S/K, ln(S/K), the time to expiry, and the interest rate all remain unchanged. Now look at the Black formula and you see that the call price scales like S and K, as you would expect.
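A quick numerical check of this scaling (a sketch using the standard Black-Scholes call formula; c is an arbitrary constant):
from math import log, sqrt, exp, erf

def N(x):                          # standard normal CDF via erf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

c = 100.0
print(c * bs_call(50, 55, 0.02, 0.3, 1.0))      # price scaled afterwards
print(bs_call(c * 50, c * 55, 0.02, 0.3, 1.0))  # S and K scaled: same number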
• Thanks for your answer. So can we conclude that, all option prices remain unchanged by a change of constant numeraire and by changing the strike accordingly (That is scaling the price and strike by a constant). Are there any examples of options where this might not hold? Aug 16 '16 at 14:06
• No, that would, if nothing else, violate the future = call - put arbitrage. Aug 16 '16 at 14:08
• Sorry meant to add earlier, not just vanilla options, but including exotics, American, Bermudean etc.. Aug 16 '16 at 14:41
• In the practical cases the option price should scale with the units of underlying it is written on. Otherwise you would have to factor in and specify how to handle events like stock-splits. But for the general case I guess you could have derivatives where the price would vary non-trivially with the size of the underlying. Aug 16 '16 at 14:56
• But then it isn't constant... Aug 16 '16 at 16:04
|
2021-09-16 15:06:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6739413738250732, "perplexity": 932.2913521935077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00455.warc.gz"}
|
http://moodle.unishivaji.ac.in/course/info.php?id=1652
|
Course Outcomes: The course is designed to familiarize the students with the fundamental topics, principles and methods of functional analysis. After studying this course, students will have a demonstrable knowledge of normed spaces, Banach spaces, Hilbert spaces, continuous linear transformations between such spaces, bounded linear functionals, and the finite-dimensional spectral theorem.
|
2020-10-27 09:03:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8627073764801025, "perplexity": 1528.7514624114299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893845.76/warc/CC-MAIN-20201027082056-20201027112056-00000.warc.gz"}
|
https://www.transtutors.com/questions/multiple-products-most-businesses-sell-several-products-at-varying-prices-the-produc-1225854.htm
|
# Multiple Products
1. Most businesses sell several products at varying prices. The products often have different unit variable costs. Thus, the total profit and the breakeven point depend on the proportions in which the products are sold. Sales mix is the relative contribution of sales among various products sold by a firm. Assume that the sales of Jordan, Inc., are the following for a typical year:
Product   Units Sold   Sales Mix
A         18,000       80%
B         4,500        20%
Total     22,500       100%
Assume the following unit selling prices and unit variable costs:
Product   Selling Price   Variable Cost per Unit   Unit Contribution Margin
A         $80             $65                      $15
B         $140            $100                     $40
Fixed costs are $400,000 per year, of which $60,000 are batch-related and $340,000 are facilities-related. Assume the sales mix is constant in units.
Required
1. Determine the breakeven point in units.
2. Determine the number of units required for a before-tax net profit of $40,000.
## 1 Approved Answer
1. Breakeven point in units = Fixed costs / Weighted-average contribution margin per unit
Weighted-average contribution margin = ($15 * 80%) + ($40 * 20%) = $12 + $8 = $20
Breakeven point = $400,000 / $20 = 20,000 units
Breakeven sales in units by product:
Product A=...
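For reference, the same computation as a short sketch (part 2 uses the standard relation units = (fixed costs + target profit) / weighted contribution margin):
mix = {"A": 0.80, "B": 0.20}      # sales mix from the problem
cm = {"A": 15, "B": 40}           # unit contribution margins
fixed, target = 400_000, 40_000

wacm = sum(mix[p] * cm[p] for p in mix)   # $20 weighted CM per unit
print(fixed / wacm)                        # breakeven: 20,000 units
print((fixed + target) / wacm)             # target profit: 22,000 units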
|
2018-09-21 16:03:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2229137420654297, "perplexity": 6813.1690907665525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157216.50/warc/CC-MAIN-20180921151328-20180921171728-00484.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-2-equations-inequalities-and-problem-solving-2-5-problem-solving-2-5-exercise-set-page-126/83
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$$7x-18$$
Simplifying the expression using the rules of algebra, it follows: $$x-3\left(2x-4\left(x-1\right)+2\right) \\ =x-3\left(-2x+6\right) \\ =x+6x-18 \\ =7x-18$$
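A one-line check with a computer algebra system (a sketch; sympy assumed available):
from sympy import symbols, expand

x = symbols('x')
print(expand(x - 3*(2*x - 4*(x - 1) + 2)))  # 7*x - 18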
|
2020-02-22 13:23:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235792517662048, "perplexity": 2681.6726438672895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145676.44/warc/CC-MAIN-20200222115524-20200222145524-00287.warc.gz"}
|
http://en.wikipedia.org/wiki/Multivariable_calculus
|
Multivariable calculus
Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus in more than one variable: the differentiation and integration of functions involving multiple variables, rather than just one.
Typical operations
Limits and continuity
A study of limits and continuity in multivariable calculus yields many counter-intuitive results not demonstrated by single-variable functions. For example, there are scalar functions of two variables with points in their domain which give one limit when approached along any line, yet a different limit when approached along a parabola. Consider the function
$f(x,y) = \frac{x^2y}{x^4+y^2}$
approaches zero along any line through the origin. However, when the origin is approached along the parabola $y=x^2$, it has a limit of 0.5. Since taking different paths toward the same point yields different values for the limit, the limit does not exist.
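This path dependence is easy to probe numerically (an illustrative sketch; the line $y=2x$ is an arbitrary choice):
def f(x, y):
    return x**2 * y / (x**4 + y**2)

for t in (1e-1, 1e-3, 1e-6):
    # along the line y = 2x the values tend to 0; along y = x^2 they stay at 0.5
    print(f(t, 2 * t), f(t, t**2))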
Continuity in each argument does not imply multivariate continuity: For instance, in the case of a real-valued function with two real-valued parameters, $f(x,y)$, continuity of $f$ in $x$ for fixed $y$ and continuity of $f$ in $y$ for fixed $x$ does not imply continuity of $f$. As an example, consider
$f(x,y)= \begin{cases} \frac{y}{x}-y & \text{if } 1 \geq x > y \geq 0 \\ \frac{x}{y}-x & \text{if } 1 \geq y > x \geq 0 \\ 1-x & \text{if } x=y>0 \\ 0 & \text{else}. \end{cases}$
It is easy to check that all real-valued functions (with one real-valued argument) that are given by $f_y(x):= f(x,y)$ are continuous in $x$ (for any fixed $y$). Similarly, all $f_x$ are continuous, as $f$ is symmetric with respect to $x$ and $y$. However, $f$ itself is not continuous, as can be seen by considering the sequence $f(\frac{1}{n},\frac{1}{n})$ (for natural $n$), which would converge to $f(0,0)=0$ if $f$ were continuous. However, $\lim_{n \to \infty} f(\frac{1}{n},\frac{1}{n}) = 1.$
Partial differentiation
The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.
Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ($\nabla$) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function.
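As a concrete illustration (a sketch, not part of the original article), each partial derivative can be approximated by a central difference with the other variable held fixed:
def f(x, y):
    return x**2 * y

def grad(f, x, y, h=1e-6):
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # y held constant
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # x held constant
    return dfdx, dfdy

print(grad(f, 2.0, 3.0))  # approximately (12, 4), i.e. (2xy, x**2)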
Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.
Multiple integration
The multiple integral expands the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral.
The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves.
Fundamental theorem of calculus in multiple dimensions
In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the famous integral theorems of vector calculus: the gradient theorem (the fundamental theorem of calculus for line integrals), Green's theorem, Stokes' theorem, and the divergence theorem.
In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds.
Applications and uses
Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular,
Domain/Codomain Applicable techniques
Curves $f: \mathbb{R} \to \mathbb{R}^n$ Lengths of curves, line integrals, and curvature.
Surfaces $f: \mathbb{R}^{2} \to \mathbb{R}^n$ Areas of surfaces, surface integrals, flux through surfaces, and curvature.
Scalar fields $f: \mathbb{R}^n \to \mathbb{R}$ Maxima and minima, Lagrange multipliers, directional derivatives.
Vector fields $f: \mathbb{R}^m \to \mathbb{R}^n$ Any of the operations of vector calculus including gradient, divergence, and curl.
Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics.
Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. Non-deterministic, or stochastic systems can be studied using a different kind of mathematics, such as stochastic calculus. Quantitative analysts in finance also often use multivariate calculus to predict future trends in the stock market.
|
2013-12-10 02:52:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120152592658997, "perplexity": 195.13995798947516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164005827/warc/CC-MAIN-20131204133325-00077-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/553236/how-do-i-format-words-as-variable-names
|
# How do I format words as variable names?
I am trying to find a way to make the equations that appear in this text a bit better looking... here is my suggestion:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$$Risk_{i,t}=c+\alpha Risk_{i,t-1}+\beta_{1} \text{CEO\_PC}_{i,t}+\beta_{2} BR\_ind_{i,t}+\beta_{k} Z_{i,t}+\epsilon_{i,t}$$
\end{document}
Here is a screenshot of the equation that I'm looking for:
Some options you have for typesetting words as variables include:
### As Operator Names
The \operatorname command from amsmath typesets its argument like the operators sin or log. In particular, you get a bit of extra spacing between α and Risk: \alpha \Risk_i is typeset exactly like \alpha \log_i.
One downside is that the spacing between \Risk and \cdot will be wrong, so you would need to write something like \Risk\! \cdot \alpha, or {\Risk} \cdot \alpha.
\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
%Formatting for a MWE on TeX.SX:
\usepackage[paperwidth=10cm]{geometry}
\pagestyle{empty}
\newcommand{\Risk}{\operatorname{Risk}}
\newcommand{\CEOPC}{\operatorname{CEO\_PC}}
\newcommand{\BRind}{\operatorname{BR\_ind}}
\begin{document}
$$\begin{split} \Risk_{i,t} = c + &\alpha \Risk_{i,t-1} + \beta_{1}\CEOPC_{i,t} + \\ &\beta_{2}\BRind_{i,t} + \beta_{k}Z_{i,t}+\epsilon_{i,t} \end{split}$$
\end{document}
### As Formatted Text
You can use the command \textnormal in math mode to set short phrases of text, and use any text-mode formatting you want. In this example, I typeset them as slanted, not italic, text.
\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
%Formatting for a MWE on TeX.SX:
\usepackage[paperwidth=10cm]{geometry}
\pagestyle{empty}
\newcommand\variablename[1]{\mathop{\textnormal{\slshape #1}}\nolimits}
\newcommand{\Risk}{\variablename{Risk}}
\newcommand{\CEOPC}{\variablename{CEO\_PC}}
\newcommand{\BRind}{\variablename{BR\_ind}}
\begin{document}
$$\begin{split} \Risk_{i,t} = c + &\alpha \Risk_{i,t-1} + \beta_{1}\CEOPC_{i,t} + \\ &\beta_{2}\BRind_{i,t} + \beta_{k}Z_{i,t}+\epsilon_{i,t} \end{split}$$
\end{document}
If you used \text, as in your example, the formatting of the text preceding the equation would bleed through. In some situations, though, you might want that: if you’re using \Risk in a heading that’s typeset as bold sans-serif, you might want your math symbols in bold sans-serif too.
Wrapping it in \mathop gives you spacing like the operator \lim, but then, in display style, subscripts would be set beneath it, as in \lim_{\epsilon \to 0}. So, inhibit this with \nolimits.
At Mico’s suggestion, I moved the formatting into a new command \variablename and used it to define the other macros. This also lets you change the formatting of all full-word variables in one place, and write \variablename{Return} without having to declare a macro.
### As Math Text Alphabets
This is what the alphabets \mathrm, \mathit, \mathbf, etc. are for.
\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
%Formatting for a MWE on TeX.SX:
\usepackage[paperwidth=10cm]{geometry}
\pagestyle{empty}
\newcommand\variablename[1]{\mathop{\mathit{#1}}\nolimits}
\newcommand{\Risk}{\variablename{Risk}}
\newcommand{\CEOPC}{\variablename{CEO\_PC}}
\newcommand{\BRind}{\variablename{BR\_ind}}
\begin{document}
$$\begin{split} \Risk_{i,t} = c + &\alpha \Risk_{i,t-1} + \beta_{1}\CEOPC_{i,t} + \\ &\beta_{2}\BRind_{i,t} + \beta_{k}Z_{i,t}+\epsilon_{i,t} \end{split}$$
\end{document}
By default and in most font packages, the shapes of the letters in \mathit are very similar to the math symbols in \mathnormal, but with \mathit, you get ligatures, kerning, and italic correction. You would definitely notice the difference between \mathit{fl} and \mathnormal{fl}.
### Declaring a New Math Font
You can declare new math alphabets like \mathrm and \mathit as well. In unicode-math, you would use \setmathfontface, and in legacy NFSS, you would use \DeclareMathAlphabet.
\documentclass{article}
\usepackage{amsmath}
\usepackage{iftex}
\iftutex
\usepackage{unicode-math}
\setmathfontface{\mathvar}{lmsans10-oblique.otf}[Ligatures={Common,Rare}]
\else
\usepackage[T1]{fontenc}
\DeclareMathAlphabet{\mathvar}{T1}{lmss}{m}{sl}
\fi
%Formatting for a MWE on TeX.SX:
\usepackage[paperwidth=10cm]{geometry}
\pagestyle{empty}
\newcommand\variablename[1]{\mathop{\mathvar{#1}}\nolimits}
\newcommand{\Risk}{\variablename{Risk}}
\newcommand{\CEOPC}{\variablename{CEO\_PC}}
\newcommand{\BRind}{\variablename{BR\_ind}}
\begin{document}
$$\begin{split} \Risk_{i,t} = c + &\alpha \Risk_{i,t-1} + \beta_{1}\CEOPC_{i,t} + \\ &\beta_{2}\BRind_{i,t} + \beta_{k}Z_{i,t}+\epsilon_{i,t} \end{split}$$
\end{document}
This example, which uses Latin Modern Sans Oblique, is somewhat contrived, because either unicode-math or isomath already defines the alphabet \mathsfit.
### Update
Henri Menke in the comments had another good suggestion, saying he uses
\newcommand*\diff{\mathop{}\!\mathrm{d}}
to get operator-like spacing on the left and ordinary spacing on the right in expressions like dx.
If you write the equations the way you did, these variable names should be typeset as operators. Not everyone thinks this is correct. If you do not, you should be careful to always write, for example, \alpha \cdot \Risk instead of \alpha \Risk: you do not want to typeset a·mass as amass.
• Always complete and nice your answer. +1. – Sebastiano Jul 12 at 19:16
• These are not operators, so the usage of \mathop is semantically wrong in my opinion. – Henri Menke Jul 13 at 4:39
• @HenriMenke Is there a better way to tell LaTeX to give them operator-like spacing? – Davislor Jul 13 at 4:46
• @Davislor Why should you? They are not operators. However, what I do for the differential operator d is \newcommand*\diff{\mathop{}\!\mathrm{d}}. This gives op spacing to the left but ord spacing to the right. But again, the differential operator is an operator, whereas these constructs here are not. – Henri Menke Jul 13 at 4:58
• +1. A minor suggestion: Instead of taking a direct approach to variable naming, which requires you to be willing to define lots and lots of macros (e.g., \Risk, \CEOPC, etc), consider taking an indirect approach. First, define a macro called, say, \vn (short for "variable name", I suppose) via, say, \newcommand{\vn}[1]{\textsc{#1}}; second, encase all variable names in \vn directives. That way, if you ever decide to change the appearance of variable names from small-caps to sans-serif, all you'll have to do is change the definition of \vn. – Mico Jul 13 at 11:04
Not entirely sure what you mean by a better-looking equation, but if you place all of your longer variable names in a text box, it would look better to my eyes!
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$$\text{Risk}_{i,t}=c+\alpha \text{Risk}_{i,t-1}+\beta_{1} \text{CEO\_PC}_{i,t} +\beta_{2} \text{BR\_ind}_{i,t}+\beta_{k} \text{Z}_{i,t}+\epsilon_{i,t}$$
\end{document}
Which would give:
• With this approach, you would want to use \mathrm instead of \text. That way, if your equation is inside something italicized (like an ams theorem), you'll still have the variables looking the same. With \text, they would inherit the italicization. – Teepeemm Jul 12 at 18:09
• True for many reasons. \mathrm is definitely preferred over \text. – Mark Verschell Jul 19 at 16:41
I add my proposal, to have another very nice view as output.
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
$$\mathsf{Risk}_{i,t}=c+\alpha \mathsf{Risk}_{i,t-1}+\beta_{1} \mathsf{CEO\_PC}_{i,t}+\beta_{2} \mathsf{BR}\_\mathsf{ind}_{i,t}+\beta_{k} \mathsf{Z}_{i,t}+\epsilon_{i,t}$$
\end{document}
Using a macro \newcommand{\vn}[1]{\mathsf{#1}} as suggested by @Mico in the comment to have not many \mathsf:
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\newcommand{\vn}[1]{\mathsf{#1}}
\begin{document}
$$\vn{Risk}_{i,t}=c+\alpha \vn{Risk}_{i,t-1}+\beta_{1} \vn{CEO\_PC}_{i,t}+\beta_{2} \vn{BR}\_\vn{ind}_{i,t}+\beta_{k} \vn{Z}_{i,t}+\epsilon_{i,t}$$
\end{document}
• You don't need sansmath for \mathsf. That package has a very different purpose. Also the image in the OP suggests \mathsf{CEO\_PC}_{i,t} instead of \mathsf{CEO}_{\mathsf{PC}_{i,t}}. – Henri Menke Jul 13 at 5:10
• @HenriMenke In all sincerity, I don't remember why I put sansmath there. I have edited my answer with your suggestions, and I thank you once more. Best regards. – Sebastiano Jul 13 at 8:44
• +1. A minor suggestion: Instead of hardcoding the appearance of variable names (via \mathsf, say, take a two-step or indirect approach: In step 1, define a macro called \vn (short for "variable name", I suppose) via \newcommand{\vn}[1]{\mathsf{#1}}; in step 2, encase each variable name in a \vn directive. E.g., \vn{Risk}_{i,t}=c+\alpha \vn{Risk}_{i,t-1}+.... That way, if you decide to use upright-serif rather than upright-sanserif for variable names, all you'd have to do is change the definition of \vn. – Mico Jul 13 at 10:58
|
2020-08-14 17:46:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 8, "x-ck12": 0, "texerror": 0, "math_score": 0.9596880674362183, "perplexity": 2691.8068562474864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00518.warc.gz"}
|
https://mathspace.co/textbooks/syllabuses/Syllabus-858/topics/Topic-19659/subtopics/Subtopic-261755/?textbookIntroActiveTab=guide
|
# 7.07 Volume of spheres
Lesson
Now that we have seen how to find the volume of a cone, let's see if we can relate that to the volume of a sphere.
#### Exploration
Imagine a cone whose height equals its radius, and a sphere with that same radius. How many times greater is the volume of the sphere than the volume of the cone?
Test your conjecture with the applet below. Click the button to pour the water from the cone to the sphere. Then, refill the water in the cone and repeat until the sphere is full.
Then, consider the following:
1. How many cones of water did it take to fill the sphere?
2. Recall that the volume of a cone can be found using the formula $V=\frac{1}{3}\pi r^2h$. Using the formula for the volume of a cone and your answer to the question above, derive the equation for the volume of a sphere.
Notice that if a cone and a sphere have equal radius, and if the cone has a height equal to its radius, then the cone will fill the sphere exactly four times. This means that the volume of the sphere is $4$ times the volume of the cone.
$\text{Volume of sphere} = 4\times\text{Volume of cone}$
$= 4\times\frac{1}{3}\pi r^2h$ (substitute the formula for the volume of a cone)
$= 4\times\frac{1}{3}\pi r^2\cdot r$ (in this special case, $h=r$ for the cone)
$= \frac{4}{3}\pi r^3$ (simplify)
Volume of a sphere
The volume, $V$, of a sphere can be calculated using the formula
$V=\frac{4}{3}\pi r^3$
where $r$ is the radius of the sphere.
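The formula and its inversion, as a small sketch (illustrative values only):
from math import pi

def sphere_volume(r):
    return 4 / 3 * pi * r**3               # V = (4/3) * pi * r^3

def sphere_radius(V):
    return (3 * V / (4 * pi)) ** (1 / 3)   # r = (3V / (4*pi))^(1/3)

print(sphere_volume(3))                    # about 113.097 cubic units
print(sphere_radius(sphere_volume(3)))     # recovers r = 3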
#### Practice questions
##### Question 1
Find the volume of the sphere shown.
##### Question 2
The planet Mars has a radius of $3400$ km. What is the volume of Mars?
##### Question 3
A ball has a volume of $904.779$ cubic units; what is its radius?
|
2021-12-08 10:14:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6914982199668884, "perplexity": 484.13645447193704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363465.47/warc/CC-MAIN-20211208083545-20211208113545-00500.warc.gz"}
|