column    type           min     max
url       stringlengths  14      2.42k
text      stringlengths  100     1.02M
date      stringlengths  19      19
metadata  stringlengths  1.06k   1.1k
https://math.stackexchange.com/tags/inverse-function-theorem/new
Tag Info

For the given function one has $S=T$: Rewriting $f$ as $f(x,y)=\bigl(P_1(x+y)+P_2(x-y),\ (x+y)\bigr)$ with polynomials $P_1(t)=\frac 12(t^3-27t)$ and $P_2(t)=\frac 12(t^3-3t)$ one sees that for $(x_0,y_0)\in T$ it follows from $x_0-y_0=\pm1$ that $P_2$ has a local extremum at $x_0-y_0$, so there are points $t_1$ and $t_2$ arbitrarily close to $x_0-y_0$ such ...

Proof of the Lemma: We assume $f$ is strongly differentiable at $a.$ Let $T=f'(a).$ Because $T$ is an isomorphism, there exist constants $0<c<C$ such that $c|x|\le |Tx|\le C|x|$ for all $x\in \mathbb R^m.$ So $$|f(x)-f(y)| \ge |T(x-y)|-|r_a(x,y)||x-y| \ge (c/2)|x-y|$$ for $(x,y)$ near $(a,a).$ We want to show $$\tag 1 f^{-1}(u)-f^{-1}(v) - T^{-1}(u-...

I guess the desired claim follows from the following result: Let $d_i\in\mathbb N$, $k_i\in\{1,\ldots,d_i\}$, let $M_i$ be a $k_i$-dimensional embedded $C^1$-submanifold of $\mathbb R^{d_i}$ and let $f:M_1\to M_2$ be $C^1$-differentiable at $x_1\in M_1$. Assuming that $T_{x_1}(f):T_{x_1}M_1\to T_{f(x_1)}M_2$ is injective and $k:=k_1=k_2$ (if I'm not missing ...

This function applied to the coordinates of the complex number $x+iy$ gives us the coordinates of $(x+iy)^2$. This function from $\mathbb C$ to $\mathbb C$ is surjective as a consequence of the fundamental theorem of algebra, or by noticing that the function squares the modulus and doubles the argument (which is clearly surjective).

This is not so much a calculus question as a trick question (maybe not a trick, but not really part of the usual theory) in my opinion. We want to find all points of the form $(x^2-y^2,2xy)$. We do some variable bamboozlement; the right coordinate seems easier, so let's say $2xy=a$. When $a$ is not $0$, we get $y=a/2x$. Now we want to find the possible ...

Yes, your approach is correct. In this case, I would use $$x=r\cos\theta,\qquad y=r\sin\theta.$$ Then $$f(r,\theta)=r^2(\cos2\theta,\sin2\theta).$$ It's easy to see that $$r^4=h^2+k^2$$ and $$\tan2\theta=\frac kh.$$ These have solutions for any $(h,k)\in\mathbb R^2$, so the range is $\mathbb R^2$.
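To make the surjectivity in the last answer completely explicit (this inversion step is my own addition, not part of the quoted answers): given any $(h,k)\neq(0,0)$ one can take $$r=(h^2+k^2)^{1/4},\qquad \theta=\tfrac12\operatorname{atan2}(k,h),$$ so that $r^2=\sqrt{h^2+k^2}$ and $(\cos2\theta,\sin2\theta)=(h,k)/\sqrt{h^2+k^2}$, whence $f(r,\theta)=(h,k)$; the origin itself is hit by taking $r=0$.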
2020-10-21 11:07:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961535632610321, "perplexity": 68.45844405736004}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00580.warc.gz"}
http://www.solutioninn.com/coin-a-is-loaded-in-such-a-way-that-pheads
Question Coin A is loaded in such a way that P(heads) is 0.6. Coin B is a balanced coin. Both coins are tossed. Find: a. The sample space that represents this experiment; assign a probability measure to each outcome b. P(both show heads) c. P(exactly one head shows) d. P(neither coin shows a head) e. P(both show heads | coin A shows a head) f. P(both show heads | coin B shows a head) g. P(heads on coin A | exactly one head shows)
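A sketch of the first few parts (my own working; the original listing shows no solution). Assuming the two tosses are independent, with P(A = H) = 0.6 and P(B = H) = 0.5:
$$P(\text{both heads})=0.6\times0.5=0.3,\qquad P(\text{exactly one head})=0.6\times0.5+0.4\times0.5=0.5,\qquad P(\text{no heads})=0.4\times0.5=0.2$$
For part e, $P(\text{both heads}\mid A\text{ shows a head})=0.3/0.6=0.5$, which equals $P(B=H)$, exactly as independence predicts.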
2016-10-24 20:15:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8449800610542297, "perplexity": 7587.398847943023}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00474-ip-10-171-6-4.ec2.internal.warc.gz"}
https://chat.stackexchange.com/transcript/36?m=41564199
8:01 PM Amazing And yeah nevermind I've reconvinced myself of your argument So wait Balarka you want to go through the proof of the five lemma? Sure! The worst lemma ??? It's so useful @EricSilva :( Yeah but the proof is so boring 8:10 PM Aight so we've got this ah. well i only did it once in my life and forgot about it the proof of snake lemma is better We had to work it out for an alg top pset and it's just so dull (I can barely TeX a one-line exact sequence so this is gonna take 3 years if I don't just screenshot it) @Daminark can I suggest we just assume l, m, p, q are all iso's, and try to prove n is an iso? 8:12 PM @Eric perhaps but it's apparently really useful, and everyone's gotta work it through at least once It's definitely useful The proof is like the basic exercise in diagram chasing @AkivaWeinberger It should be $1$, that is if we let $u = t + \pi$, then $\frac{du}{dt} = 1$, which implies that $du = dt$. Oh yeah we can do that for what I need it for. And I'm willing to take it on faith that it extends if necessary @EricSilva It can be. Is that an Ericism or is "diagram chasing" a thing? 8:13 PM its a thing Diagram chasing is a thing @Daminark It's a thing. It's a thing lol fuck fuck fuck 8:13 PM sniped :( maximally sniped :( Oh Narcissus got more sniped than me get rekt I was watching USA network's television show "damnation" and they used the phrase "it's a thing", which was definitely not around in the 30s There was a filtration on grammar though. 8:13 PM @Narcissus even then you got sniped hard @Daminark I... I have a full stop... So... :thonk: Insecurity intensifies I also had to move my mouse to click the reply button! Diagrams kind of make my head spin 8:14 PM In a sense... I had to chase an arrow... I'm trying to prove a function is identicaly zero in a region and following the hint I proved that the product $f(z)f(iz)f(-z) f(-iz)$ is identically zero. whta next? @Eric they do for me as well, but I can understand them marginally better than I can pictures, and I need to have at least one thing I'm competent at so... yeah, I'm focusing there for now @Daminark But yeah, if you want to see diagram chasing. Prove the three lemma. also useful 8:17 PM (for the record I don't dislike diagrams I just think diagram chasing is really boring/not very insightful) Ah yeah, the proof on wikipedia does look like it's just bashing things out. Maybe I'll take it as given for now and afterwards I'll return to the proof You don't even need snake lemma to find the boundary map in the homology LES for the various topological versions of homology It can be defined completely topologically @Daminark It's a good exercise. You can also look in Hatcher, chapter 2 section 2? Actually section 1 observation: the usual topological proof that complex polynomials have roots uses that the degree of $f/|f|$ on some large radius circle is one (by explicit calculation). the real version of this is the intermediate value theorem; it says that if the degree of $f/|f|: S^0 \to S^0$ is one, $f$ has a zero Cute this also clearly extends to a proof of a quaternionic fta for S^3 which is nice 8:20 PM Does it mean anything meaningful in the other two division algebras? oh come on hahaha Niiice I guess it probably does it for the octonions too but I am a little paranoid Very cool though 8:21 PM @MikeMiller $n$, you mean Wait there's a quaternionic FTA? 
this is what "polynomial" should be taken to mean here Fuck me @AkivaWeinberger ah yeah I was thinking mod 2 because of the real case @MikeMiller This I don't believe 8:22 PM and also forgot all the even things @AkivaWeinberger where polynomial takes the specific form in the ams article I linked :D So Eilenberg wasn't the bad guy after all That's pretty cool @MikeMiller I'm reminded of the book "The Phantom Toolbooth." It includes the Island of Conclusions; you get there by jumping. 2 So just to be completely sure, the other LES that we use in order to apply the 5-lemma just comes from the homotopy fibration sequence $F \to Fp \to B$, right? @Daminark Yeah 8:25 PM And one of the arrows is an iso because $F_p \cong E$, I'm happy then Yeppers @MikeMiller So only one term of maximum degree I see We have that $$\int_{\Sigma}2\sqrt{1-x^2-y^2}dA=\iint_D2dxdy$$ where $\Sigma (x,y)=(x,y,-\sqrt{1-x^2-y^2})$. What will $D$ be? Will it be the set of all $(x,y)$ such that the square root is defined? It must be $1-x^2-y^2\geq 0 \Rightarrow y^2\leq 1-x^2 \Rightarrow -\sqrt{1-x^2}\leq y\leq \sqrt{1-x^2}$. So that the square root $\sqrt{1-x^2}$ is defined, it must be $1-x^2\geq 0 \Rightarrow x^2\leq 1\Rightarrow -1\leq x\leq 1$. So, we get that $D=\{(x,y)\mid -1\leq x\leq 1, -\sqrt{1-x^2}\leq y\leq \sqrt{1-x^2}\}$. Otherwise you get counterexamples (Things like $xix+jxkx$ are not of that form for example) Yeah, I used to know this subtlety but I forgot ($ix + xi = 0$) Thanks for the pointer 8:27 PM Quaternions are fuckin dope mannnnn it seems like this also applies to octonions because they're a normed algebra, yes? the same argument applies; all the parentheses one is paranoid about disappear after norming what are the polynomials there though they're not associative right? so you need paranthesis and now i understand your comment ${}=1$ lol yes thanks for catching my errors this morning! i would have missed all those sigh 8:29 PM octonions are weird, but sedonions (or w/e the spelling is) are just goofy @Balarka yo check this out: tikzcd.yichuanshen.de Oh nice @MikeMiller Not ${}=1$, sorry looks like a good spectral sequence drawer ${}=j$ would work 'Cause otherwise $1/(2i)$ would do it 8:30 PM @AkivaWeinberger I was just trying to write down a polynomial which had an unexpected number of solutions I suspect this is one of those places where knowing K-theory would make me appreciate it all better but eh, I can't be arsed @Mike do you know a good reference for rep theory of compact groups Once one has that, I think it's easy to believe that you can get one with none Brocker tom Dieck What do you want to know @MikeMiller Oh. $x^2=-1$ has an unexpected number of solutions (continuum infinitely many) @AkivaWeinberger :D 8:33 PM looks like it has what i need, thanks Mike @AkivaWeinberger Hey, dude! So, what did you want to tell me with that last message? @AkivaWeinberger I have a dope problem for you Everyone here is talking to this poor guy. lol cause he da man 8:35 PM @nbro Right so what's $dv$ then @BalarkaSen do you remember a simple proof that hypersurfaces in $\Bbb R^n$ are orientable? 
My students want to do it using Jordan separation, which is quite nice, but I feel like there should be something simpler I guess maybe I'm secretly just going to reproduce a proof of Jordan-Brouwer Oh duh, you check that the normal bundle is trivial otherwise you would have nontrivial intersection with some loop @MikeMiller What I have in mind is you assume it's not and then get a path on the hypersurface with a nontrivial section above it Yes That doesn't hit the surface and take the bit of the normal that hits it those are htpic Yup, I think that's my proof 8:37 PM but different mod 2 intersection Ya cool. @AkivaWeinberger You defined $v = \frac{N}{2\pi} u$, so $\frac{dv}{du} = \frac{N}{2\pi}$ There's also a concrete proof somewhere by cutting the hypersurface off by a function on R^n I don't remember how to do it Yes but how does one get a defining function It's Georges Elencewajg's favorite proof. IIRC it's a cocycle idea; locally cut it, then patch those up. Hm 8:39 PM Oh, I see too much work True You want something which simplifies that $2\pi$ It comes out in the change of variables @BalarkaSen What is it @AkivaWeinberger Prove the Riemann hypothesis 8:41 PM Can't, it's false GRH would be better @AkivaWeinberger Well, but note that we would have $2 \pi$ at the denominator, which is then multiplied by the already existing $2\pi$ at the denominator, using your change of variables... Apparently GRH is a very convenient assumption to make for various algorithmic problems I'm kidding, it's that given a partition of R^2 by a bunch of straightlines, 2-color it (color it by two colors so that no two adjacent regions have same color) @MikeMiller I agree (removed) 8:42 PM @nbro You sure? Make sure you don't have $dv$ and $dt$ confused in the integrals @BalarkaSen Isn't that obvious though No sure, also because I've not slept enough since months... Finitely many, lines, right? @AkivaWeinberger It took some thonking on my end to get it, but I can believe it's obvious to you yea Choose a side for each line (a half-plane) For each point, count how many of those half-planes it's in Odds get black, evens get white Hah. Quite nice. My approach was different perturb the whole thing so that every node (where lines intersect) is a double point now make each crossing "X" into ") (" 8:45 PM there are finitely many lines? then you get a bunch of disconnected balls on the plane color those black and bring the whole thing back and unperturb to make the regions formed by making n-order points into double points vanish @MikeMiller Yeah tfw you try to use tex commands in your humanities essay \begin{proof} 2 Anyone has experience in Test Statistics, involving testing claims about population mean? @AkivaWeinberger I had an inductive proof which is similar to yours. Assume by induction true for n lines. For n+1 lines, color the thing formed by n of them. Then reverse the colors on one side of the line 8:51 PM That seems like his proof written recursively It constructs the same coloring, right? Yup I guess it's an interesting question to ask how many ways there are to color it? si x hi, if A,B is 3x3 matrix such that A is invertible, then is A + nB invertible for some n?I assumed the answer to be no, then used multilinear property of determinant and analysed the value for n-1 , n n+1 and arrive at a contradiction. 
But this doesn't seem to me to be an elegant solution (also I haven't verified if my method is right), so can someone answer this lol Math Facts is the best Twitter 8:52 PM no Vsauce no ketchup, just Vsauce But what is Ting? cue Vsauce theme music the guy from vsauce must have great thighs @AkivaWeinberger You're right (as I guess most of the times)! @Daminark "Category theory is the study of translating statements that are easy to prove into statements that are impossible to understand." Right? It's amazing My brain is working at turtle speeds Sleep!! :D 8:56 PM "Number theory is a subject that seeks to come up with as many proofs as possible for the fact that there are infinitely many primes" "A K3 surface is a certain surface named after mathematicians Kummer, Kähler and Kodaira, which was thankfully not called a KKK surface." ahahaha @ManishKumarSingh well, it's certainly invertible when n=0 for the non-trivial case 8:58 PM not enough information I presume if you compute det(A-nB), you should get a quadratic polynomial in n then A-nB is invertible iff the polynomial does not evaluate to 0 well, the polynomial can turn out to be linear or zero or a constant depending on A and B I suppose the fact that p(0) is not equal to 0 tells us that the polynomial cannot be the zero polynomial so the polynomial can have at most two roots so there are at most two values of n which make A+nB not invertible are you considering the fact that it's a 3x3 matrix yes this is what makes the polynomial quadratic give me a minute wait, it can be cubic so my conclusion becomes there are at most three values of n which make A+nB not invertible actually we should get cubic ignore my comment 9:03 PM for the right value of B we can get quadratic or linear or constant @BalarkaSen Yeah my thing is that thing unspooled I guess you still didn't provide enough information Your other one is more visually appealing than them though Danke 9:04 PM It's like letting the ink run once you put it on a region and it flows along the crossings kinda reminded me of making Seifert surfaces @LeakyNun I think it matches up with my solution, that is to consider n-1, n, n+1 as solutions and arrive at a contradiction, could you confirm it see my solution as my first comment @ManishKumarSingh could you clarify your question So I posted the question and arrived at an answer in a non-elegant way (1st comment), so I wanted to cross-check my answer and also find a simpler answer, so could you recheck if the logic behind your and my answers is the same? I think it is if I understood your solution no, not clarify what you want properly 9:07 PM 15 mins ago, by ManishKumar Singh hi, if A,B are 3x3 matrices such that A is invertible, then is A + nB invertible for some n? I assumed the answer to be no, then used the multilinear property of the determinant and analysed the value for n-1, n, n+1 and arrived at a contradiction. But this doesn't seem to me to be an elegant solution (also I haven't verified if my method is right), so can someone answer this there is one variable that you did not define this makes the question unclear which variable are you talking of? B? B is any 3x3 matrix, need not be invertible if you read through my answer, you would have understood that A + nB is indeed invertible for all but finitely many n, so the answer is "yes", which nullifies your entire solution the assumption of the answer being 'no' was only made to arrive at a contradiction (which was det(A) = 0). I think the n-1, n, n+1 gave a contradiction because the equation was quadratic?
9:11 PM as I said, it is cubic assuming B is not invertible (if invertible then it's easy) why is it easy if B is invertible? i can choose a basis such that Ax != -nBx for the basis vectors x, and make this happen by manipulating n I don't think "basis" is the right word could you clarify what you mean? @Alessandro I keep listening to the opening track of Departure Songs because it's so good. 9:15 PM @BalarkaSen I relistened to the whole album while studying this weekend, it's great using the fact that matrices and linear transformations are essentially the same, I choose a certain basis and say the kernel is non-zero @AlessandroCodenotti I loved the third or fourth track where they played over a passage narrated by Burroughs I agree that the album is great in its entirety @ManishKumarSingh neither the kernel nor the value of Ax depends on your choice of basis, so I still do not understand what you are trying to say Is the hyperboloid arbitrarily close to a cone at infinity? do you agree A + nB is a linear transformation under a suitable basis for the vector space F^3 9:18 PM @ManishKumarSingh it is a linear transformation regardless of basis. @quallenjäger In a meaningful topology on the "space of hypersurfaces in R^3", it should be, yeah yes but to write it in the form of a matrix, we need to specify the basis, @quallenjäger yes, according to wikipedia: "Both of these surfaces are asymptotic to the cone" @ManishKumarSingh but you do not need the matrix to find its determinant the determinant is still invariant under change of basis How is the curvature of the cone? Is it negative? No, it's 0. 9:20 PM Under the Minkowski metric Oh, I don't know. You mean under the standard Euclidean metric the curvature should be 0? Yes. It's locally isometric to the plane. @quallenjäger i.e. the Gaussian curvature 9:22 PM @LeakyNun yes, but if we assume a basis, all we have to show is that Ax != -nBx for that basis, and this, as you said, will be true for all bases @ManishKumarSingh then you are not choosing the basis; you are choosing three linearly independent vectors x1, x2, x3, so that Axi != -nBxi for each i=1,2,3, in order for {x1,x2,x3} to be a basis much confusion I see between "basis" and its constituent vectors Zero curvature means if I parallel transport a vector along a curve, the direction won't change? yes i am doing that how do you know such three vectors exist? @quallenjäger Well, no, it's locally isometric to the plane, not globally. 9:24 PM because if it doesn't, I change n; at most i need to do this 3 times how do you know? Indeed, if you parallel transport along a path going around the cone you'll end up with a different direction 3 times? And along the ruling? @ManishKumarSingh yes 9:25 PM Nothing happens along the ruling. I am just a bit confused, around the cone I have some kind of curvature How can the curvature be 0?
assume the worst, say Ax_1 = -nBx_1, so increase n to n+1, now they are different, and repeat this procedure 3 times @ManishKumarSingh fair enough @quallenjäger the Gaussian curvature is defined as the product of the two principal curvatures, each of which has a meaning I'm not sure if the product itself has any meaning other than the fact that if the curvature is 0 then it is locally isometric to a plane, positive then to a sphere, negative then to a hyperboloid I see, thanks @ManishKumarSingh I think you can use your argument even when B is not invertible to make it more rigorous: let e1=(1,0,0), e2=(0,1,0), e3=(0,0,1) 9:33 PM i shall try since A is invertible, Ae1, Ae2, and Ae3 are all non-zero as a result, Aei = -nBei can only have at most one solution wait even if Ae1+nBe1, Ae2+nBe2, Ae3+nBe3 are all non-zero, how does it follow that A+nB is invertible? injective map so kernel goes to 0 how do you know it is injective? 9:36 PM Ae1+nBe1, Ae2+nBe2, Ae3+nBe3 are non-zero any linear combo will be non-zero and that is your typical element it does not follow wait a minute yes, there is a problem, we need special basis vectors, such that the first few go to the kernel, the rest all go to range - 0 aahh it would be complicated no actually we can simply assume the basis to satisfy such a property how can you? because such a basis exists how do you know that? 9:40 PM from the rank-nullity theorem how do you know that the nullity of A+nB is 0? i don't, so I assume say e1 goes to the kernel, the rest go to range - 0 what if e1+e2 goes to the kernel instead? not possible because of the choice of basis but there is still another problem such a special basis keeps changing as we change n exactly 9:42 PM so it won't work ya maybe the quadratic was the best way to solve anyways thanks The bounty is not taken yet, there is still a great opportunity to get some nice points in your pocket. What tools, ways would you propose for getting the closed form of this integral? $$\int_0^{\pi/4}\frac{\log(1-x) \tan^2(x)}{1-x\tan^2(x)} \ dx$$ EDIT: It took a while since I made this post. I'll give a little bounty for the solver of the problem, 500 points bounty. Supplementary question: Ca... @LeakyNun what program are you doing at Imperial? @quallenjäger pure mathematics, 3 years BSc You are finishing this year? I am at Imperial too World is so small. 9:56 PM no, just started Lol u know really a lot for a first year BSc ok..? Hey @LeakyNun Did you see my question? @Evinda ? Consider distinct integer polynomials of distinct degree. Also they are all univariate polynomials of the variable $x$. Let $*$ denote composition and $*$ is the operation for the monoid. Consider when such polynomials $a,b,c,d$ form an abelian monoid. Many questions arise naturally. For insta... @LeakyNun Let $P(n)$ denote the largest prime factor of $n$. Can we deduce something from the relations: $\frac{|\{ p \leq x \mid \text{ p is prime and } P(p-1)\geq x^{\frac{2}{3}}\}|}{\frac{x}{\ln{x}}}\geq c_0$ and $\frac{|\{ p \leq x \mid \text{p is prime}\}|}{\frac{x}{\ln{x}}}> (1-\epsilon)$ about how many prime numbers $p$ up to $x$ will have the property that $p-1$ has a prime factor $q$ that exceeds $x^{\frac{2}{3}}$ ? Ok no problem :) @LeakyNun Evinda I think your problem is too hard
Let $(q_n)$ be a decreasing rational sequence converging to $c$. Is $[c, \infty) = \bigcup_{n=1}^\infty [q_n , \infty)$ or $[c, \infty) = \bigcap_{n=1}^\infty [q_n , \infty)$? This is giving me so much damn trouble for some reason... 10:34 PM @user193319 neither. $\displaystyle \bigcap_{n=1}^\infty [q_n , \infty) = [q_1, \infty)$ and $\displaystyle \bigcup_{n=1}^\infty [q_n , \infty) = (c, \infty)$ $e^{x+y} = e^x e^y$ and $e^{-x+y} = e^{(-x)+y} = e^{-x}e^y = \frac{e^y}{e^{x}}$ kkk I want to use the formula of Stokes at an example where the normal vector of surface direct to the z-axis. When we calculate the curve integral at the boundary do we consider a clockwise or an anticlockwise parametrization? Hello @LeakyNun !! Do you have an idea? no idea, sorry tell me why! :D 10:50 PM @nbro why what? eh, just joking a bit :D btw, it is the title of a song check that out lol btw, I need to finish a g.d. proof I started two days and apparently is so simple, but given that I've not slept decently for months, my brain is f***** up It is like basic calculus stuff lol So, here's the exercise I had to solve. And this is what I have so far Any help on how to finish this proof? I am so tired... I still need to derive $d_n$. But it looks like I am close. @Antonios-AlexandrosRobotis Hi
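One standard way to finish the measurability question above (a completion added here; it is not part of the transcript): choose rationals $q_n \le c$ increasing to $c$. Then $\bigcap_{n=1}^\infty [q_n,\infty)=[c,\infty)$, so
$$f^{-1}([c,\infty))=\bigcap_{n=1}^{\infty}f^{-1}\bigl([q_n,\infty)\bigr)$$
is a countable intersection of measurable sets, hence measurable. A decreasing sequence cannot work because $\bigcup_n [q_n,\infty)=(c,\infty)$ misses the endpoint $c$, exactly as noted in the chat.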
2019-06-16 13:30:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8220548033714294, "perplexity": 1572.6049487977753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998238.28/warc/CC-MAIN-20190616122738-20190616144738-00325.warc.gz"}
https://testbook.com/objective-questions/mcq-on-electromagnetic-waves--5eea6a1339140f30f369efe6
# Colour vision in human eyes is the function of photoreceptor cells named
1. Rods
2. Cones
3. Blind spot
4. Fovea
Option 2 : Cones
## Electromagnetic Waves MCQ Question 1 Detailed Solution
• Cones are the photoreceptor cells that respond to colour vision, i.e. light of different colours.
• They are the photoreceptor cells in the retina of vertebrates.
• Rod cells work best in dim light.
• The blind spot is a small portion of the visual field of each eye that corresponds to the position of the optic disk (also known as the optic nerve head) within the retina.
• The fovea is a tiny pit located in the macula of the retina that provides the clearest vision of all.

# Which of the following electromagnetic waves has the maximum frequency?
2. X-rays
3. Gamma rays
4. Visible rays
Option 3 : Gamma rays
## Electromagnetic Waves MCQ Question 2 Detailed Solution
CONCEPT:
• Electromagnetic spectrum: The complete range of light that exists in nature is called the electromagnetic spectrum.
• The waves that are created as a result of vibrations between an electric field and a magnetic field are called electromagnetic waves.
• The waves of the electromagnetic field, propagating through space and carrying electromagnetic radiant energy, are called electromagnetic radiation.
• It consists of radio waves, microwaves, infrared, light, ultraviolet, X-rays, and gamma rays.
EXPLANATION:
• From the electromagnetic spectrum (figure not reproduced here), it is clear that gamma rays have the highest frequency. Therefore option 3 is correct.

# In the visible spectrum of light, which of the following colours has the longest wavelength?
1. Violet
2. Yellow
3. Orange
4. Black
Option 3 : Orange
## Electromagnetic Waves MCQ Question 3 Detailed Solution
CONCEPT:
• Electromagnetic waves: A wave which is generated due to vibration between an electric field and a magnetic field, and which does not need any medium to travel, is called an electromagnetic wave. It can travel through a vacuum.
• Light is a form of energy which is an example of an electromagnetic wave.
• Wavelength: The distance between two successive crests or troughs is called the wavelength of the wave.
• White light consists of seven constituent colours: Violet (V), Indigo (I), Blue (B), Green (G), Yellow (Y), Orange (O), Red (R).
• The wavelengths of these colours in order from lowest to highest are given by the acronym VIBGYOR (V < I < B < G < Y < O < R).
• The wavelength range of visible light is 400 to 700 nm.
EXPLANATION:
• Among the given options, orange has the longest wavelength. So option 3 is correct.
• In the visible spectrum of light, red has the longest wavelength.
• The descending order of the colours in the visible spectrum according to their wavelength is: Red > Orange > Yellow > Green > Blue > Indigo > Violet.

# Refractive index of a material is greatest for-
1. Red light
2. Green light
3. Violet light
4. Same for all colours
Option 3 : Violet light
## Electromagnetic Waves MCQ Question 4 Detailed Solution
Concept:
• Light is a form of energy which is an example of an electromagnetic wave.
• Wavelength: The distance between two successive crests or troughs is called the wavelength of the wave.
• Refractive index: The ratio of the speed of light in air to the speed of light in the medium is called the refractive index of that medium.
$$Refractive\; index = \frac{{speed\;of\;light\;in\;air}}{{speed\;of\;light\;in\;medium}}$$
• White light consists of seven constituent colours: Violet (V), Indigo (I), Blue (B), Green (G), Yellow (Y), Orange (O), Red (R).
• The wavelengths of these colours in order from lowest to highest are given by the acronym VIBGYOR (V < I < B < G < Y < O < R).
• The wavelength range of visible light is 400 to 700 nm.
Explanation:
• The frequency of light remains the same on refraction because the frequency is independent of the medium.
$$V_{air} = \lambda_{air}\, f,\qquad V_{medium} = \lambda_{medium}\, f$$
$$\mu = \frac{{{\lambda _{air}}}}{{{\lambda _{medium}}}}$$
$$\mu \propto \frac{1}{\lambda }$$
• Since the refractive index of the medium is inversely proportional to the wavelength of the light, and λ is minimum for violet light, the refractive index is maximum for violet.
Hence the correct option is Violet.

# When a red colour paper is seen in yellow light, it will appear as
1. red
2. yellow
3. White
4. black
Option 4 : black
## Electromagnetic Waves MCQ Question 5 Detailed Solution
CONCEPT:
• Reflection: The phenomenon in which a light ray is sent back into the same medium from which it came, on interaction with a boundary, is called reflection.
• Laws of reflection:
• The angle of incidence (θi) = the angle of reflection (θr).
• The incident ray, the reflected ray, and the normal to the surface of incidence always lie in the same plane.
EXPLANATION:
• Colour is a property of light that depends on its wavelength.
• When light falls on an object, some of it is absorbed and some is reflected.
• A red or yellow object looks red or yellow because it reflects only the red or yellow colour and absorbs all other colours present in white light.
• Hence, when a red object is seen in yellow light, it absorbs the yellow light falling on it and appears dark. Therefore option 4 is correct.
NOTE:
• The colour of a transparent object is determined by the wavelength of the light transmitted by it.
• An opaque object that reflects all wavelengths appears white; one that absorbs all wavelengths appears black.

# Which of the following rays has the maximum frequency?
1. UV rays
2. Microwave
3. Infrared rays
4. X-rays
Option 4 : X-rays
## Electromagnetic Waves MCQ Question 6 Detailed Solution
Concept:
• Electromagnetic spectrum: The complete range of light that exists in nature is called the electromagnetic spectrum.
• Light is an electromagnetic wave, and the distance between two successive crests or two successive troughs of the light wave is called the wavelength of the light wave.
• The wavelength of light is denoted by λ.
• As shown in the spectrum (figure not reproduced here), the wavelength increases from left to right. From the spectrum, we can see that X-rays have the maximum frequency and the least wavelength as compared to UV, microwave and infrared rays.

# Which of the following statements is NOT correct about myopia?
1. The vision may be corrected with the help of a concave lens.
2. It is also known as near-sightedness
3. In the affected eye, the image of a distant object is formed beyond the retina
4. The person affected by it cannot see beyond a few metres.
Option 3 : In the affected eye, the image of a distant object is formed beyond the retina
## Electromagnetic Waves MCQ Question 7 Detailed Solution
The correct answer is: In the affected eye, the image of a distant object is formed beyond the retina.
• Myopia is a defect of the eye in which it cannot see distant objects clearly.
• A person with myopia can see nearby objects clearly.
• Myopia is caused by the high converging power of the lens and by the eyeball being too long.
• Due to the high converging power of the lens, the image is formed in front of the retina and the person cannot see distant objects clearly.
• Myopia or short-sightedness can be corrected by wearing spectacles containing a concave lens.
• There are three common defects of vision:
• Myopia or short-sightedness
• Hypermetropia or long-sightedness
• Presbyopia.

# Which of the following coloured lights has the lowest frequency?
1. Green
2. Blue
3. Red
4. Violet
Option 3 : Red
## Electromagnetic Waves MCQ Question 8 Detailed Solution
CONCEPT:
• Electromagnetic waves: A wave which is generated due to vibration between an electric field and a magnetic field, and which does not need any medium to travel, is called an electromagnetic wave. It can travel through a vacuum.
• Light is a form of energy which is an example of an electromagnetic wave.
• Wavelength: The distance between two successive crests or troughs is called the wavelength of the wave.
• Frequency: The number of vibrations per unit time is called frequency.
Speed of wave (v) = frequency (f) × wavelength (λ)
• White light consists of seven constituent colours: Violet (V), Indigo (I), Blue (B), Green (G), Yellow (Y), Orange (O), Red (R).
• The frequencies of these colours in order from highest to lowest are given by the acronym VIBGYOR (V > I > B > G > Y > O > R).
EXPLANATION:
• From the above discussion, violet has the highest frequency and the shortest wavelength of the visible colours of light.
• Red light has the lowest frequency and the longest wavelength of the visible colours of light. So option 3 is correct.

# Ultrasonic pulse echo technique, a non-destructive ultrasonic testing, is employed for any possible flaw detection in a metallic bar of thickness 30 cm. If the arrival times of the ultrasonic pulses are 45 μs and 90 μs respectively, the distance of the flaw from the end of the steel bar at which the ultrasonic pulse initially enters, will be:
1. 14 cm
2. 15 cm
3. 18 cm
4. 16 cm
Option 2 : 15 cm
## Electromagnetic Waves MCQ Question 9 Detailed Solution
The given situation can be understood with the help of a diagram (not reproduced here): the ultrasonic pulse enters the metallic bar, and the flaw may be anywhere inside it. The pulse is reflected both by the flaw and by the back side of the bar, which is why two echo pulses arrive at different times. Note that both arrival times are round-trip times, so the factor of 2 for the echo path is applied to both.
With w = 30 cm, the pulse reflected from the far end travels 2 × 30 cm in 90 μs:
$$2 \times 30 = v \times 90\times10^{-6} \;\Rightarrow\; v = \frac{60}{90\times 10^{-6}} = \frac{2}{3}\times 10^{6}\ \text{cm/s}$$
If the flaw is at distance x from the entry end, its echo covers the round trip 2x in 45 μs:
$$2x = v \times 45 \times 10^{-6} = 30 \;\Rightarrow\; x = 15\ \text{cm}$$
Alternate Method
In the ultrasonic pulse-echo technique, the sound path s is given by
$$s = \frac {ct}{2}$$
where s is the sound path (in mm), c the sound velocity (in km/s), and t the time of flight (in μs).
Calculation: Given the pulse arrival times are 45 μs and 90 μs; the pulse arriving at 90 μs has travelled through the full bar, and the pulse arriving at 45 μs has travelled up to the flaw and echoed back.
From the pulse with arrival time 90 μs, with s = 30 cm = 300 mm:
$$300 = \frac {c \times 90}{2} \Rightarrow c = \frac{20}{3}\ \text{km/s}$$
Now, for the pulse with arrival time 45 μs:
$$s = \frac {ct}{2} = \frac {(20/3)\times 45}{2} = 150\ \text{mm} = 15\ \text{cm}$$

# In which part of the body are the cornea and the retina found?
1. Palm
2. Ear
3. Nose
4. Eye
Option 4 : Eye
## Electromagnetic Waves MCQ Question 10 Detailed Solution
Option 4 is the correct answer: the cornea and retina are found in the eye.
Cornea:
• The frontmost, transparent covering of the eye is called the cornea.
• Light enters the human eye through the cornea.
• It is further divided into five layers: the epithelium, Bowman's layer, the stroma, Descemet's membrane and the endothelium.
Retina:
• It is the layer of tissue located in the back of the eye near the optic nerve.
• It acts as the screen of the human eye where the image of the objects we see is formed.
• It consists of photoreceptor cells: the cones (perceive colours) and the rods (low-light vision).
Other terms:
• Near point: The minimum distance at which an object must be placed for the human eye to form a sharp image of it. The near point of a normal human eye is 25 cm.
• Far point: The farthest distance up to which the human eye can see. It is infinity.
• The sclerotic, iris, lens, ciliary muscles, pupil, aqueous humour, vitreous humour, retina and optic nerve are the other parts of the human eye.

# A sugar solution in a tube of length 2.0 dm produces an optical rotation of 12°. The solution is then diluted to one half of its initial concentration. If the diluted solution is contained in another tube of length 3.0 dm, the optical rotation produced by it will be:
1. 9°
2. 7°
3. 10°
4. 11°
Option 1 : 9°
## Electromagnetic Waves MCQ Question 11 Detailed Solution
Concept:
• A polarizer is a device through which only light waves oscillating in a single plane may pass.
• A polarimeter is an instrument used to determine the angle through which plane-polarized light has been rotated by a given sample.
• The optical rotation is the angle through which the plane of polarization is rotated when polarized light passes through a layer of a liquid.
• Substances are described as dextrorotatory or levorotatory according to whether the plane of polarization is rotated clockwise or counterclockwise, respectively, as determined by viewing towards the light source. Dextrorotation is designated (+) and levorotation is designated (-).
• The specific rotation is a characteristic property of a substance and is the standard measure of its optical rotation. The optical rotation and specific rotation are related by
$$[ α ] = \frac {α_{observed}}{c \times l}$$
where [α] is the specific rotation, α_observed the optical rotation, c the concentration (g/dl) and l the path length (dm).
Calculation:
Given α_observed = 12° when l = 2 dm and the concentration is c:
$$[ α ] = \frac {12}{c \times 2} = \frac {6}{c}$$
Now the concentration is c/2 and l = 3 dm:
$$\frac {6}{c}=\frac {α_{observed}}{{\frac {c}{2}}\times 3}$$
α_observed = 9°

# Which of the following are examples of electromagnetic waves?
a. Television waves
b. Ultraviolet rays
c. X-rays
d. Sun rays
1. a, b and c
2. a, c and d
3. a, b and d
4. a, b, c and d
Option 4 : a, b, c and d
## Electromagnetic Waves MCQ Question 12 Detailed Solution
The correct answer is a, b, c and d.
Key Points
• Television waves, ultraviolet rays, X-rays and sun rays are all examples of electromagnetic waves.
• Radio waves, infrared rays, visible light, ultraviolet rays, X-rays, and gamma rays are all types of electromagnetic radiation.
• Radio waves have the longest wavelength, and gamma rays have the shortest wavelength.
• Electromagnetic waves, also known as EM waves, are produced when an electric field comes in contact with a magnetic field.
• It can also be said that electromagnetic waves are the composition of oscillating electric and magnetic fields.
• The electromagnetic field is produced by an accelerating charged particle.
• Electromagnetic waves are nothing but electric and magnetic fields travelling through free space with the speed of light c.
• An accelerating charged particle is one that oscillates about an equilibrium position.
• If the frequency of oscillation of the charged particle is f, then it produces an electromagnetic wave with frequency f.
• The wavelength λ of this wave is given by λ = c/f.
• Electromagnetic waves transfer energy through space.
• Applications of electromagnetic waves:
• Electromagnetic waves can transmit energy through a vacuum.
• Electromagnetic waves play an important role in communication technology.
• Electromagnetic waves are used in RADAR.
• UV rays are used to detect forged banknotes. Real banknotes don't turn fluorescent under UV light.
• Infrared radiation is used for night vision and in security cameras.

# ________ is a kind of vision in which an animal/human being having two eyes is able to perceive a single three-dimensional image of the surroundings.
1. Concave Vision
2. Convex Vision
3. Dim Vision
4. Binocular Vision
Option 4 : Binocular Vision
## Electromagnetic Waves MCQ Question 13 Detailed Solution
CONCEPT:
• Binocular vision is a kind of vision in which an animal/human being having two eyes is able to perceive a single three-dimensional image of the surroundings.
• It helps with performance skills like grasping, catching and locomotion.
• Dim vision is vision as if someone had turned down the lights and your vision had become dim.
• In farsightedness you can see distant objects clearly, but everything up close is blurry.
• The eyeball becomes too short, or else the cornea is too flat.
• This causes light to focus behind the retina, making near images blur.
• It can be cured by a convex lens, hence the option "Convex Vision".
• In nearsightedness you can see clearly up close but distant objects are blurred.
• The eyeball becomes too long, or else the cornea is too curved; that additional curvature or length causes light to focus in front of the retina instead of on it, which makes the resulting images look blurry.
• It can be cured by a concave lens, hence the option "Concave Vision".
EXPLANATION:
• Clearly from the definition we can say that binocular vision is a kind of vision in which an animal/human being having two eyes is able to perceive a single three-dimensional image of the surroundings. So the correct answer is option 4, i.e. Binocular vision.
• Myopia is the technical term for nearsightedness. Corrected by a concave lens.
• Hyperopia is the technical term for farsightedness. Corrected by a convex lens.

# The absolute refractive index of diamond is _______.
1. 2.23
2. 2.32
3. 2.42
4. 2.24
Option 3 : 2.42
## Electromagnetic Waves MCQ Question 14 Detailed Solution
CONCEPT:
• Refractive index/absolute refractive index: The ratio of the speed of light in air/vacuum to the speed of light in the medium is called the refractive index of that medium.
• It is generally denoted by μ or n. It is a dimensionless quantity.
$$Refractive\; index = \frac{{speed\;of\;light\;in\;air}}{{speed\;of\;light\;in\;medium}}$$
• The absolute refractive index can never be less than 1.
EXPLANATION:
• The absolute refractive index of diamond is 2.42; this is experimental data. So option 3 is correct.
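As a quick numerical cross-check of this value (my own arithmetic, not part of the original solution), the relation μ = c/v used in MCQ Question 16 below gives the speed of light inside diamond:
$$v=\frac{c}{\mu}=\frac{3\times10^{8}\ \text{m/s}}{2.42}\approx 1.24\times10^{8}\ \text{m/s}$$
which is comfortably below c, consistent with the rule that the absolute refractive index can never be less than 1.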
# Which of the following electromagnetic waves has the longest wavelength?
1. Microwaves
2. Radio waves
3. Infra-red light
4. Gamma rays
Option 2 : Radio waves
## Electromagnetic Waves MCQ Question 15 Detailed Solution
CONCEPT:
• Electromagnetic spectrum: It is a collection of a range of different waves in sequential order, from radio to gamma electromagnetic waves.
Frequency (ν) = speed of light (c)/wavelength (λ)
• Radio waves: The lowest-frequency portion of the spectrum; radio waves have wavelengths ranging between 1 mm and 100 km, or frequencies between 300 GHz and 3 kHz.
• There are several subcategories in between these waves, like AM and FM radio.
• Microwaves: The part of the electromagnetic spectrum having a frequency more than that of radio waves and less than that of infrared waves is called microwaves.
• Infra-red light: It has a wavelength less than microwaves and greater than ultraviolet waves.
• Gamma rays: They have a wavelength less than X-rays.
EXPLANATION:
• Radio waves have the longest wavelength among all given options. So option 2 is correct.

# What is the velocity of light in a diamond if the refractive index of diamond with respect to vacuum is 2.5?
1. 1.2 × 10^8 m/s
2. 5 × 10^8 m/s
3. 1.2 × 10^10 m/s
4. 2.5 × 10^8 m/s
Option 1 : 1.2 × 10^8 m/s
## Electromagnetic Waves MCQ Question 16 Detailed Solution
CONCEPT:
• Refractive index (μ): The ratio of the velocity of light in vacuum to the velocity of light in the medium is called the refractive index of that medium.
$$\text{The refractive index of a substance/medium}=\frac{\text{Velocity of light in vacuum}}{\text{Velocity of light in the medium}}$$
So μ = c/v, where c is the speed of light in vacuum and v is the speed of light in the medium.
CALCULATION:
Given: refractive index of the diamond (μd) = 2.5; the velocity of light in vacuum is c = 3 × 10^8 m/s. To find: the velocity of light in diamond (v).
$$\mu_d=\frac{c}{v} \;\Rightarrow\; 2.5= \frac{3 \times 10^8}{v} \;\Rightarrow\; v=\frac{3 \times 10^8}{2.5}=1.2\times 10^8 \ \text{m/s}$$
Hence option 1 is correct.

# Two ideal polaroid sheets are arranged so that the angle between the pass-axes of the two sheets is θ. An unpolarised beam of light of intensity I0 is incident on one of the polaroids. If the intensity of light emerging from the other polaroid is I0 / 4, the angle θ is:
1. 15°
2. 30°
3. 45°
4. 60°
Option 3 : 45°
## Electromagnetic Waves MCQ Question 17 Detailed Solution
Concept:
• Polarization: Transverse light waves vibrate in more than one plane; polarization confines these vibrations to a single plane.
• Polaroid: A polaroid converts unpolarized light into polarized light. It acts as a filter that passes only one plane of vibration.
• An ideal polaroid transmits half the intensity of the unpolarized light incident on it.
• Malus's law: When a second polarizer is tilted at angle θ with respect to the first, the transmitted intensity is
$$I = I_0 \cos^2 θ$$
Calculation:
The given intensity of the unpolarised light is I0. Light emerging from the first polarizer has its intensity reduced to half.
So, the intensity obtained from the first polarizer is
$$I' = \frac{I_0}{2} \quad\text{--(1)}$$
The intensity obtained from the second polarizer, which is tilted at angle θ, is (by Malus's law)
$$I'' = I' \cos^2 θ$$
But the intensity of the finally obtained light is I0 / 4, so we can say
$$\frac{I_0}{4}= I'\cos^2 θ \quad\text{--(2)}$$
Using (1) in (2):
$$\frac{I_0}{4}= \frac{I_0}{2} \cos^2 θ \;\Rightarrow\; \cos^2 θ = \frac{1}{2} \;\Rightarrow\; \cos θ = \frac{1}{\sqrt{2}}$$
θ = 45°. So, 45° is the correct answer.

# The refractive indices of quartz crystal for right-handed and left-handed circularly polarized light of wavelength 762.9 nm are 1.5391 and 1.5392 respectively. The angle of rotation produced by a crystal plate of thickness 0.5 mm is:
1. 25.5°
2. 11.8°
3. 13.8°
4. 18.1°
Option 2 : 11.8°
## Electromagnetic Waves MCQ Question 18 Detailed Solution
Concept:
Angle of rotation: If μR and μL are the refractive indices of the quartz crystal for right-handed and left-handed vibrations respectively (μL > μR), then the optical path difference on passing through a quartz crystal slab of thickness t is given as
Path difference = (μL - μR) t
If λ is the wavelength of light used, then the phase difference will be
$$\delta = \frac {2\pi}{\lambda} ({\mu_L}-{\mu_R})\, t$$
The angle of rotation will be
$$\frac{\delta}{2} = \frac {\pi}{\lambda} ({\mu_L}-{\mu_R})\, t$$
Calculation:
Given μL = 1.5392; μR = 1.5391; t = 0.5 mm = 0.5 × 10^-3 m; λ = 762.9 nm = 762.9 × 10^-9 m.
• The angle of rotation will be
$$\frac{\delta}{2} = \frac {\pi}{762.9 \times 10^{-9}} ({1.5392}-{1.5391}) \times 0.5\times 10^{-3} = 0.2058 \ \text{radians}$$
Angle of rotation = 0.2058 × (180/π) = 11.79°

# Which of the following are the highest-frequency electromagnetic waves?
1. Microwaves
2. Radio Waves
3. Ultraviolet Rays
4. Gamma Rays
Option 4 : Gamma Rays
## Electromagnetic Waves MCQ Question 19 Detailed Solution
CONCEPT:
Electromagnetic spectrum:
• It is a collection of a range of different waves in sequential order, from radio to gamma electromagnetic waves.
• Frequency (ν) = speed of light (c)/wavelength (λ)
Microwaves:
• They come in the range of 1 mm to 1 metre, with frequencies between 300 MHz and 300 GHz.
• Microwaves are generally used in radars, transmission towers, microwave ovens, etc.
• These waves are "micro" not in terms of wavelength but relative to radio waves, having greater frequency.
Ultraviolet:
• UV light is electromagnetic radiation with a wavelength shorter than that of visible light, in the range of 10 nm to 400 nm.
• UV can have effects on biological matter, causing cancers and more.
• Most UV rays are absorbed by the ozone (O3) layer in the upper atmosphere (15-30 km up).
Gamma rays:
• Gamma rays have the shortest wavelengths and therefore the most energy of any wave within the spectrum.
$$\text{v}=\frac{\text{c}}{\text{n}}$$
where v = speed of light in a medium, c = speed of light and n = refractive index of the medium.
Radio waves:
• The lowest-frequency portion of the spectrum; radio waves have wavelengths ranging between 1 mm and 100 km, or frequencies between 300 GHz and 3 kHz.
• There are several subcategories in between these waves, like AM and FM radio.
EXPLANATION:
• From the electromagnetic spectrum (figure not reproduced here), it is clear that gamma rays are the highest-frequency electromagnetic waves. Therefore option 4 is correct.
• Gamma rays are produced in the disintegration of radioactive atomic nuclei and in the decay of certain subatomic particles.
• Gamma-ray photons, like their X-ray counterparts, are a form of ionizing radiation; when they pass through matter, they usually deposit their energy by liberating electrons from atoms and molecules.
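A numerical illustration of this frequency ordering (the wavelengths are my own illustrative picks, not from the original solution): using ν = c/λ, a radio wave with λ = 1 km has
$$\nu = \frac{3\times10^{8}}{10^{3}} = 3\times10^{5}\ \text{Hz},$$
while a gamma ray with λ = 1 pm has
$$\nu = \frac{3\times10^{8}}{10^{-12}} = 3\times10^{20}\ \text{Hz},$$
about fifteen orders of magnitude higher.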
# Electric intensity at any point in an electric field is equal to the ___________ at that point.
1. electric flux
2. magnetic flux density
$$\sum_i E_i = E_1 + E_2 + E_3 + \ldots$$
2021-10-21 15:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5560909509658813, "perplexity": 1622.7952556697435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585424.97/warc/CC-MAIN-20211021133500-20211021163500-00623.warc.gz"}
https://metacpan.org/pod/Term::ANSIMenu
J.A. de Vries

VERSION

This documentation describes version 0.01 of Term::ANSIMenu as released on Thursday, April 17, 2003.

SYNOPSIS

  use Term::ANSIMenu;
  my $menu = Term::ANSIMenu->new(
        width  => 40,
        help   => [['', \&standard_help],
                   ['hint 1', \&help_item],
                   [undef, \&standard_help],
                   ['hint 3', undef]],
        title  => 'title',
        items  => [['1', 'First menu item', \&exec_item],
                   ['2', 'This string is just too long \
                          to fit in a normal terminal \
                          and thus it will be clipped.'],
                   ['3', '', sub { system "man man" }]],
        status => 'status',
        prompt => 'prompt: ');
  $menu->print_menu();
  while (my $key = $menu->read_key()) {
    last unless defined $menu->do_key($key);
    $menu->update_status('')             if $key eq 'S';
    $menu->update_status('New status')   if $key eq 's';
    $menu->update_prompt('')             if $key eq 'P';
    $menu->update_prompt('New prompt: ') if $key eq 'p';
  }
  $menu->pos($menu->line_after_menu() + 1, 1);

DESCRIPTION

I wrote this mainly to make life easy on those staff members to whom I delegate tasks. Most of them prefer to use a menu instead of having to type complicated commands. To them it's a faster and safer way of working (we all know about typos, don't we...). By using this module you can create menus with only a few lines of code and still have a shipload of features. Need context-sensitive help or a status bar? Like to use hotkeys? Want flashy colors and styles? It's all there. Just fill in the attributes and you're good to go.

Overview

A menu can be made up of a title, a number of selectable items, a status line and a prompt. Each of those elements can be given a fore- and background color and a style to give it the appearance wanted. The same goes for the optional frames around these elements. It is also possible to align each element independently (but all items together are considered as one element). Every item in the menu can be selected using definable hotkeys or navigation keys. To assist users of the menu, hints that will be displayed in the status line can be associated with items. For situations where a simple hint isn't good enough, context-sensitive help is available through definable keys (like the well-known <F1> and '?'). Finally, to get out of the menu without having to explicitly create an entry for that, one or more keys can be assigned that will cause an immediate return from the menu to the calling program. The exit status reflects the conditions under which that happened.

On to the gory details... A Term::ANSIMenu object is created with the usual call to new(), like this:

  $menu = Term::ANSIMenu->new();

This will create an object with reasonable defaults. However, some attributes still have to be explicitly given before the resulting object makes a sensible menu. Everything is optional, except for the selectable items that make up the menu. You can do this either directly in the call to the constructor or by using the corresponding mutator. Attributes can be set through new() by specifying their name as a hash key and giving it an appropriate value. For example:

  $menu = Term::ANSIMenu->new(items => [['1', 'First menu item', \&exec_item],
                                        ['2', 'This string is just too long \
                                               to fit in a normal terminal \
                                               and thus it will be clipped.'],
                                        ['3', '', sub { system "man man" }]]);

See the next section for a list of all mutators and the conditions they impose on their values. The call to new() will also initialize the terminal by setting it to VT100 mode. After that it will clear the screen and position the cursor in the upper-left corner.
Upon destroying the object the destructor will restore the normal settings of the terminal by setting the readmode back to 0 and by explicitly removing any ANSI attributes and turning the cursor on. The screen is not cleared unless the menu was explicitly instructed to do so. Attributes Attributes can be accessed by using a method that will function as both a accessor and a mutator. The name of that method is exactly the same as the name of the corresponding attribute. In other words the value of an attribute can be read using $menu->attrib() Its value can be changed like this: $menu->attrib($value) Both calls return the current value (after setting it). If the return value is a list then it will be given as a list or as a reference to that list, depending on the context. For example: $return_ref = $menu->attrib([<list>]); @return_list =$menu->attrib([<list>]); The attributes listed below are publicly available through such methods. width() Type: integer Constraints: must be > 0 and <= than the current terminal width Default: <term_width> height() The height of the menu. This is ignored at the moment, but might be used in a future version. Type: integer Constraints: must be > 0 and <= than the current terminal height Default: <term_height> space_after_title() Print an empty line as a spacer after the title. Type: boolean Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T Default: 1 space_after_items() Print an empty line as a spacer after the selectable items. Type: boolean Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T Default: 1 space_after_status() Print an empty line as a spacer after the status line. Type: boolean Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T Default: 0 spacious_items() Print frame lines between the selectable items. Type: boolean Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T Default: 0 cursor() Make the cursor visible when a prompt is printed. Type: boolean Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T Default: 1 cursor_char() A single character to print at the cursor position in the prompt if the cursor is visible. Type: char Constraints: must be a single printable character or a space Default: '?' up_keys() A list of key names that will move the current selection to the previous item. Type: array Constraints: elements must be a single character or a special key name Default: ['UP', 'PGUP', 'LEFT'] down_keys() A list of key names that will move the current selection to the next item. Type: array Constraints: elements must be a single character or a special key name Default: ['DOWN', 'PGDN', 'RIGHT'] exit_keys() A list of key names that will exit the menu. Type: array Constraints: elements must be a single character or a special key name Default: ['q', 'Q', 'CTRL-c'] help_keys() A list of key names that will invoke context-sensitive help. Type: array Constraints: elements must be a single character or a special key name Default: ['F1', '?'] help() A list of hints and references to routines that provide additional help to the user. The first array element is used when no item is selected, the order of the other elements corresponds with the order of the selectable items. The hint must be a string of printable characters (including spaces). The code reference should point to a routine that will provide help. It is called with the number of the currently selected item as an argument. 
If a hint is undefined or an empty string, no information will be provided through the status line. If no code reference is defined, help will not be available for that particular item.
Type: array of arrays
Constraints: [[<hint>, <code_ref>], ...]
Default: []

selection()
The number of the currently selected item. If no item is selected this will be 0.
Type: integer
Constraints: must be >= 0 and <= the number of selectable items
Default: 0

selection_keys()
A list of key names that will execute the current selection.
Type: array
Constraints: elements must be a single character or a special key name
Default: ['SPACE', 'ENTER']

selection_wrap()
Wrap around to the other end of the list when trying to move beyond the first or last entry.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

selection_style()
Apply these character attributes to the selected item.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD, UNDERLINE and CLEAR
Default: ['REVERSE']

selection_fgcolor()
Apply this foreground color to the selected item.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

selection_bgcolor()
Apply this background color to the selected item.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

leader()
Print a line resembling the top of a frame before the list of items. This is used only when no frame is drawn around the list of items.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 0

leader_delimiter()
Print this character in the leader at the position where the delimiter between hotkey and description is printed in the list of items.
Type: char
Constraints: must be a single character which may be a line drawing character
Default: ''

trailer()
Print a line resembling the bottom of a frame after the list of items. This is used only when no frame is drawn around the list of items.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 0

trailer_delimiter()
Print this character in the trailer at the position where the delimiter between hotkey and description is printed in the list of items.
Type: char
Constraints: must be a single character which may be a line drawing character
Default: ''

shortcut_prefix()
A string to print immediately before the hotkey of an item.
Type: string
Constraints: must be a string of printable characters (including spaces) or a line drawing character optionally surrounded by one or more spaces on one or both sides
Default: ''

shortcut_postfix()
A string to print immediately after the hotkey of an item.
Type: string
Constraints: must be a string of printable characters (including spaces) or a line drawing character optionally surrounded by one or more spaces on one or both sides
Default: ''

delimiter()
Print this character between the hotkey and the description in the list of items.
Type: char
Constraints: must be a single character which may be a line drawing character
Default: ''

label_prefix()
A string to print immediately before the description of an item.
Type: string
Constraints: must be a string of printable characters (including spaces) or a line drawing character optionally surrounded by one or more spaces on one or both sides
Default: ''

label_postfix()
A string to print immediately after the description of an item.
Type: string
Constraints: must be a string of printable characters (including spaces) or a line drawing character optionally surrounded by one or more spaces on one or both sides
Default: ''

title()
The text to use as the title of the menu.
Type: string
Constraints: a string of printable characters (including spaces)
Default: ''

title_style()
Apply these character attributes to the title.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD, UNDERLINE and CLEAR
Default: ['BOLD']

title_fgcolor()
Apply this foreground color to the title.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

title_bgcolor()
Apply this background color to the title.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

title_align()
Align the text of the title according to this setting. Unless title_fill is set, alignment will be ignored.
Type: string
Constraints: must be one of LEFT, RIGHT or CENTER
Default: 'CENTER'

title_fill()
Pad the title with whitespace to fill up the full width of the menu.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

title_frame()
Put a frame around the title.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

title_frame_style()
Apply these character attributes to the frame around the title.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD and CLEAR
Default: ['REVERSE']

title_frame_fgcolor()
Apply this foreground color to the frame around the title.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

title_frame_bgcolor()
Apply this background color to the frame around the title.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

items()
The list of selectable items. They will be presented in the order given here. Each item consists of a hotkey (given as a single character or a key name), a description (given as a string of printable characters, including spaces) and a reference to a piece of code associated with this item. The description may be an empty string (why would someone want that?) and the code reference may be undefined.
Type: array of arrays
Constraints: [[<keyname>, <string>, <code_ref>], ...]
Default: []

item_style()
Apply these character attributes to each item.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD, UNDERLINE and CLEAR
Default: ['CLEAR']

item_fgcolor()
Apply this foreground color to each item.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

item_bgcolor()
Apply this background color to each item.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

item_align()
Align the description of each item according to this setting. Unless item_fill is set, alignment will be ignored.
Type: string
Constraints: must be one of LEFT, RIGHT or CENTER
Default: 'LEFT'

item_fill()
Pad each item with whitespace to fill up the full width of the menu.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

item_frame()
Put a frame around the list of selectable items.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

item_frame_style()
Apply these character attributes to the frame around the items.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD and CLEAR
Default: ['CLEAR']

item_frame_fgcolor()
Apply this foreground color to the frame around the items.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

item_frame_bgcolor()
Apply this background color to the frame around the items.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

status()
The text to use in the status line.
Type: string
Constraints: a string of printable characters (including spaces)
Default: ''

status_style()
Apply these character attributes to the status line.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD, UNDERLINE and CLEAR
Default: ['CLEAR']

status_fgcolor()
Apply this foreground color to the status line.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

status_bgcolor()
Apply this background color to the status line.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

status_align()
Align the text of the status line according to this setting. Unless status_fill is set, alignment will be ignored.
Type: string
Constraints: must be one of LEFT, RIGHT or CENTER
Default: 'LEFT'

status_fill()
Pad the status line with whitespace to fill up the full width of the menu.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

status_frame()
Put a frame around the status line.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 0

status_frame_style()
Apply these character attributes to the frame around the status line.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD and CLEAR
Default: ['CLEAR']

status_frame_fgcolor()
Apply this foreground color to the frame around the status line.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

status_frame_bgcolor()
Apply this background color to the frame around the status line.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

prompt()
The text to use in the prompt.
Type: string
Constraints: a string of printable characters (including spaces)
Default: ''

prompt_style()
Apply these character attributes to the prompt.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD, UNDERLINE and CLEAR
Default: ['BOLD']

prompt_fgcolor()
Apply this foreground color to the prompt.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

prompt_bgcolor()
Apply this background color to the prompt.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

prompt_align()
Align the text of the prompt according to this setting. Unless prompt_fill is set, alignment will be ignored.
Type: string
Constraints: must be one of LEFT, RIGHT or CENTER
Default: 'LEFT'

prompt_fill()
Pad the prompt with whitespace to fill up the full width of the menu.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 1

prompt_frame()
Put a frame around the prompt.
Type: boolean
Constraints: must be one of -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
Default: 0

prompt_frame_style()
Apply these character attributes to the frame around the prompt.
Type: array
Constraints: must be one or more of BLINK, REVERSE, BOLD and CLEAR
Default: ['BOLD']

prompt_frame_fgcolor()
Apply this foreground color to the frame around the prompt.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

prompt_frame_bgcolor()
Apply this background color to the frame around the prompt.
Type: string
Constraints: must be one of BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
Default: ''

Methods

To manipulate the menu a small set of methods is provided.

read_key()
Read a single key from STDIN and return its name. This is done in raw mode.

up($n)
Move the cursor $n lines up. Any surplus that would move it beyond the first line is ignored.

down($n)
Move the cursor $n lines down. Any surplus that would move it beyond the last line is ignored.

right($n)
Move the cursor $n columns to the right. Any surplus that would move it beyond the last column is ignored.

left($n)
Move the cursor $n columns to the left. Any surplus that would move it beyond the first column is ignored.

pos($l,$c)
Move the cursor to the given line ($l) and column ($c). If no valid arguments are given the cursor will be moved to the upper left corner (1,1).

print_title()
Print the title of the menu.

print_items()
Print the list of selectable items.

print_status()
Print the status line.

print_prompt()
Print the prompt.

print_cursor()
Position the cursor after the prompt and print cursor_char if the cursor is visible.

cursor_off()
Turn off the cursor (hide it).

cursor_on()
Turn on the cursor (make it visible again).

clearscreen()
Wipe the entire screen and put the cursor in the upper left corner.

is_up_key($keyname)
Return 1 if the given key is mentioned in up_keys and 0 if it is not.

is_down_key($keyname)
Return 1 if the given key is mentioned in down_keys and 0 if it is not.

is_help_key($keyname)
Return 1 if the given key is mentioned in help_keys and 0 if it is not.

is_exit_key($keyname)
Return 1 if the given key is mentioned in exit_keys and 0 if it is not.

is_selection_key($keyname)
Return 1 if the given key is mentioned in selection_keys and 0 if it is not.

is_shortcut($keyname)
Return the number of the corresponding item if the given key is a shortcut. If the key does not relate to an item, 0 is returned.

shortcuts()
List all shortcuts associated with a selectable item.

item_count()
Return the number of selectable items.

move_selection($n)
Move the selection $n entries up (negative value) or down (positive value). If selection_wrap is not enabled this movement will stop at the first or last item.

do_key($keyname)
Perform the action associated with this key. This could be a selection movement, a help invocation or the execution of an item. After this, 0 will be returned if nothing was done, 1 for success and undef for exit.

do_item($n)
Execute the code associated with item $n (if it is defined). The number of the current selection is passed to the called routine.

do_help($n)
Invoke help for the given item. The number of the current selection is passed to the called routine.

update_status($status)
Change the value of status and reprint the status line.

update_prompt($prompt)
Change the value of prompt and reprint the prompt.

line_after_menu()
Return the number of the first line after the menu.

Exports

This is a fully object-oriented module.
No exports are needed as all publicly available attributes and methods are accessible through the object itself.

NOTES

Hotkeys can be specified by using their name. This includes most of the so-called special keys. Their names and corresponding keycodes as used in this module are listed below:

  "HOME"   => "\e[1~"  #Linux console
  "INSERT" => "\e[2~"  #VT100
  "DEL"    => "\e[3~"  #VT100
  "END"    => "\e[4~"  #Linux console
  "PGUP"   => "\e[5~"  #VT100
  "PGDN"   => "\e[6~"  #VT100
  "F1"     => "\e[11~" #VT100
  "F2"     => "\e[12~" #VT100
  "F3"     => "\e[13~" #VT100
  "F4"     => "\e[14~" #VT100
  "F5"     => "\e[15~" #VT100
  "F6"     => "\e[17~" #VT100
  "F7"     => "\e[18~" #VT100
  "F8"     => "\e[19~" #VT100
  "F9"     => "\e[20~" #VT100
  "F10"    => "\e[21~" #VT100
  "F11"    => "\e[23~" #VT100
  "F12"    => "\e[24~" #VT100
  "F1"     => "\e[[A"  #Linux console
  "F2"     => "\e[[B"  #Linux console
  "F3"     => "\e[[C"  #Linux console
  "F4"     => "\e[[D"  #Linux console
  "F5"     => "\e[[E"  #Linux console
  "UP"     => "\e[A"   #VT100
  "DOWN"   => "\e[B"   #VT100
  "RIGHT"  => "\e[C"   #VT100
  "LEFT"   => "\e[D"   #VT100
  "END"    => "\e[F"   #VT100
  "HOME"   => "\e[H"   #VT100
  "UP"     => "\eOA"   #XTerm
  "DOWN"   => "\eOB"   #XTerm
  "RIGHT"  => "\eOC"   #XTerm
  "LEFT"   => "\eOD"   #XTerm
  "END"    => "\eOF"   #XTerm
  "HOME"   => "\eOH"   #XTerm
  "F1"     => "\eOP"   #XTerm
  "F2"     => "\eOQ"   #XTerm
  "F3"     => "\eOR"   #XTerm
  "F4"     => "\eOS"   #XTerm
  "META-a" => "\ea"
  "META-b" => "\eb"
  "META-c" => "\ec"
  "META-d" => "\ed"
  "META-e" => "\ee"
  "META-f" => "\ef"
  "META-g" => "\eg"
  "META-h" => "\eh"
  "META-i" => "\ei"
  "META-j" => "\ej"
  "META-k" => "\ek"
  "META-l" => "\el"
  "META-m" => "\em"
  "META-n" => "\en"
  "META-o" => "\eo"
  "META-p" => "\ep"
  "META-q" => "\eq"
  "META-r" => "\er"
  "META-s" => "\es"
  "META-t" => "\et"
  "META-u" => "\eu"
  "META-v" => "\ev"
  "META-w" => "\ew"
  "META-x" => "\ex"
  "META-y" => "\ey"
  "META-z" => "\ez"
  "CTRL-a" => "\x01"
  "CTRL-b" => "\x02"
  "CTRL-c" => "\x03"
  "CTRL-d" => "\x04"
  "CTRL-e" => "\x05"
  "CTRL-f" => "\x06"
  "CTRL-g" => "\x07"
  "CTRL-h" => "\x08"
  "TAB"    => "\x09"
  "ENTER"  => "\x0A"
  "CTRL-k" => "\x0B"
  "CTRL-l" => "\x0C"
  "CTRL-m" => "\x0D"
  "CTRL-n" => "\x0E"
  "CTRL-o" => "\x0F"
  "CTRL-p" => "\x10"
  "CTRL-q" => "\x11"
  "CTRL-r" => "\x12"
  "CTRL-s" => "\x13"
  "CTRL-t" => "\x14"
  "CTRL-u" => "\x15"
  "CTRL-v" => "\x16"
  "CTRL-w" => "\x17"
  "CTRL-x" => "\x18"
  "CTRL-y" => "\x19"
  "CTRL-z" => "\x1A"
  "SPACE"  => "\x20"
  "BS"     => "\x7F"

Colors can be specified by using their common ANSI names:

  BLACK RED GREEN YELLOW BLUE MAGENTA CYAN WHITE

Character attributes can be specified by using these ANSI names:

  CLEAR BOLD UNDERLINE REVERSE

Some attributes can be assigned line drawing characters. The names of these characters are:

  HOR (Horizontal line)
  VER (Vertical line)
  ULC (Upper Left Corner)
  URC (Upper Right Corner)
  LRC (Lower Right Corner)
  LLC (Lower Left Corner)
  LTE (Left T)
  RTE (Right T)
  TTE (Top T)
  BTE (Bottom T)
  CTE (Crossing Ts)

DIAGNOSTICS

All errors are reported through the Carp module. These are mainly encountered when using an illegal value for an attribute or method. When that happens a carp warning is generated and the given value is just ignored. A croak message is generated when calling non-existent attributes or methods. Following is a list of all diagnostic messages generated by Term::ANSIMenu. They should be self-explanatory.
• No such attribute: <attrib>
• Invalid value for <attrib>: <value>
• width must be larger than 0 and smaller than the terminal width
• height must be larger than 0 and smaller than the terminal height
• space_after_title must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• space_after_items must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• space_after_status must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• spacious_items must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• cursor must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• cursor_char must be a printable character
• up_keys must be one or more keynames
• up_keys must be given as a reference to an array
• down_keys must be one or more keynames
• down_keys must be given as a reference to an array
• help must be an array of arrays containing strings and code references
• help must be given as a reference to an array
• help_keys must be one or more keynames
• help_keys must be given as a reference to an array
• exit_keys must be one or more keynames
• exit_keys must be given as a reference to an array
• selection must be larger than or equal to 0 and smaller than or equal to the number of items
• selection_wrap must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• selection_keys must be one or more keynames
• selection_keys must be given as a reference to an array
• selection_style must be BLINK, REVERSE, BOLD, UNDERLINE and/or CLEAR
• selection_style must be given as a reference to an array
• selection_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• selection_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• leader must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• trailer must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• shortcut_prefix must be a string of printable characters or a line-drawing character
• shortcut_postfix must be a string of printable characters or a line-drawing character
• delimiter must be a string of printable characters or a line-drawing character
• leader_delimiter must be a string of printable characters or a line-drawing character
• trailer_delimiter must be a string of printable characters or a line-drawing character
• label_prefix must be a string of printable characters or a line-drawing character
• label_postfix must be a string of printable characters or a line-drawing character
• title must be a string of printable characters
• title_style must be BLINK, REVERSE, BOLD, UNDERLINE and/or CLEAR
• title_style must be given as a reference to an array
• title_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• title_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• title_align must be LEFT, RIGHT or CENTER
• title_fill must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• title_frame must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• title_frame_style must be BLINK, REVERSE, BOLD and/or CLEAR
• title_frame_style must be given as a reference to an array
• title_frame_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• title_frame_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• items must be an array of arrays containing keynames, descriptions and code references
• items must be given as a reference to an array
• item_style must be BLINK, REVERSE, BOLD, UNDERLINE and/or CLEAR
• item_style must be given as a reference to an array
• item_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• item_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• item_align must be LEFT, RIGHT or CENTER
• item_fill must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• item_frame must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• item_frame_style must be BLINK, REVERSE, BOLD and/or CLEAR
• item_frame_style must be given as a reference to an array
• item_frame_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• item_frame_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• status must be a string of printable characters
• status_style must be BLINK, REVERSE, BOLD, UNDERLINE and/or CLEAR
• status_style must be given as a reference to an array
• status_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• status_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• status_align must be LEFT, RIGHT or CENTER
• status_fill must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• status_frame must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• status_frame_style must be BLINK, REVERSE, BOLD and/or CLEAR
• status_frame_style must be given as a reference to an array
• status_frame_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• status_frame_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• prompt must be a string of printable characters
• prompt_style must be BLINK, REVERSE, BOLD, UNDERLINE and/or CLEAR
• prompt_style must be given as a reference to an array
• prompt_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• prompt_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• prompt_align must be LEFT, RIGHT or CENTER
• prompt_fill must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• prompt_frame must be -, +, 0, 1, NO, N, YES, Y, FALSE, F, TRUE or T
• prompt_frame_style must be BLINK, REVERSE, BOLD and/or CLEAR
• prompt_frame_style must be given as a reference to an array
• prompt_frame_fgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE
• prompt_frame_bgcolor must be BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN or WHITE

BUGS

Well, this is version 0.01, so there must be some. But I haven't seen them yet. If you do find a bug or just like to see a feature added I'd appreciate it if you'd let me know.

FILES

This module depends on the standard Carp module to blame your script if something goes wrong `;-) A terminal capable of interpreting ANSI sequences might help too...

Carp

AUTHOR

J.A. de Vries <j.a.devries@dto.tudelft.nl>
2019-06-18 04:47:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4337904453277588, "perplexity": 6154.48057543761}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00210.warc.gz"}
https://developer-archive.leapmotion.com/documentation/objc/api/gen-objc/interface_leap_vector.html
# LeapVector

LeapVector Class Reference

The LeapVector class represents a three-component mathematical vector or point such as a direction or position in three-dimensional space.

Inherits NSObject.

## Instance Methods

(float) - angleTo: The angle between this vector and the specified vector in radians.
(LeapVector *) - cross: The cross product of this vector and the specified vector.
(float) - distanceTo: The distance between the point represented by this LeapVector object and a point represented by the specified LeapVector object.
(LeapVector *) - divide: Divide this vector by a number.
(float) - dot: The dot product of this vector with another vector.
(BOOL) - equals: Checks LeapVector equality.
(id) - initWithVector: Copies the specified LeapVector.
(id) - initWithX:y:z: Creates a new LeapVector with the specified component values.
(LeapVector *) - minus: Subtract a vector from this vector.
(LeapVector *) - negate Negate this vector.
(LeapVector *) - plus:
(LeapVector *) - times: Multiply this vector by a number.

## Class Methods

(LeapVector *) + backward The unit vector pointing backward along the positive z-axis: (0, 0, 1).
(LeapVector *) + down The unit vector pointing down along the negative y-axis: (0, -1, 0).
(LeapVector *) + forward The unit vector pointing forward along the negative z-axis: (0, 0, -1).
(LeapVector *) + left The unit vector pointing left along the negative x-axis: (-1, 0, 0).
(LeapVector *) + right The unit vector pointing right along the positive x-axis: (1, 0, 0).
(LeapVector *) + up The unit vector pointing up along the positive y-axis: (0, 1, 0).
(LeapVector *) + xAxis The x-axis unit vector: (1, 0, 0).
(LeapVector *) + yAxis The y-axis unit vector: (0, 1, 0).
(LeapVector *) + zAxis The z-axis unit vector: (0, 0, 1).
(LeapVector *) + zero The zero vector: (0, 0, 0)

## Properties

float magnitude The magnitude, or length, of this vector.
float magnitudeSquared The square of the magnitude, or length, of this vector.
LeapVector * normalized A normalized copy of this vector.
float pitch The pitch angle in radians.
float roll The roll angle in radians.
NSMutableData * toFloatPointer Returns an NSMutableData object containing the vector components as consecutive floating point values.
NSArray * toNSArray Returns an NSArray object containing the vector components in the order: x, y, z.
float x The horizontal component.
float y The vertical component.
float yaw The yaw angle in radians.
float z The depth component.

## Detailed Description

The LeapVector class represents a three-component mathematical vector or point such as a direction or position in three-dimensional space.

The Leap software employs a right-handed Cartesian coordinate system. Values given are in units of real-world millimeters. The origin is centered at the center of the Leap device. The x- and z-axes lie in the horizontal plane, with the x-axis running parallel to the long edge of the device. The y-axis is vertical, with positive values increasing upwards (in contrast to the downward orientation of most computer graphics coordinate systems). The z-axis has positive values increasing away from the computer screen.
Since 1.0

## Method Documentation

- (float) angleTo: (const LeapVector *) vector

The angle between this vector and the specified vector in radians.

  float angleInRadians = [[LeapVector xAxis] angleTo:[LeapVector yAxis]];
  // angleInRadians = PI/2 (90 degrees)

The angle is measured in the plane formed by the two vectors. The angle returned is always the smaller of the two conjugate angles. Thus [A angleTo:B] == [B angleTo:A] and is always a positive value less than or equal to pi radians (180 degrees). If either vector has zero length, then this function returns zero.

Parameters
  vector  A LeapVector object.
Returns
  The angle between this vector and the specified vector in radians.
Since 1.0

+ (LeapVector *) backward

The unit vector pointing backward along the positive z-axis: (0, 0, 1).

  LeapVector *backwardVector = [LeapVector backward];

Since 1.0

- (LeapVector *) cross: (const LeapVector *) vector

The cross product of this vector and the specified vector.

  LeapVector *crossProduct = [thisVector cross:thatVector];

The cross product is a vector orthogonal to both original vectors. It has a magnitude equal to the area of a parallelogram having the two vectors as sides. The direction of the returned vector is determined by the right-hand rule. Thus [A cross:B] == [[B negate] cross:A].

Parameters
  vector  A LeapVector object.
Returns
  The cross product of this vector and the specified vector.
Since 1.0

- (float) distanceTo: (const LeapVector *) vector

The distance between the point represented by this LeapVector object and a point represented by the specified LeapVector object.

  LeapVector *aPoint = [[LeapVector alloc] initWithX:10 y:0 z:0];
  float distance = [origin distanceTo:aPoint]; // distance = 10

Parameters
  vector  A LeapVector object.
Returns
  The distance from this point to the specified point.
Since 1.0

- (LeapVector *) divide: (float) scalar

Divide this vector by a number.

  LeapVector *quotient = [thisVector divide:2.5];

Parameters
  scalar  The scalar divisor.
Returns
  The quotient of this LeapVector divided by a scalar.
Since 1.0

- (float) dot: (const LeapVector *) vector

The dot product of this vector with another vector.

  float dotProduct = [thisVector dot:thatVector];

The dot product is the magnitude of the projection of this vector onto the specified vector.

Parameters
  vector  A LeapVector object.
Returns
  The dot product of this vector and the specified vector.
Since 1.0

+ (LeapVector *) down

The unit vector pointing down along the negative y-axis: (0, -1, 0).

  LeapVector *downVector = [LeapVector down];

Since 1.0

- (BOOL) equals: (const LeapVector *) vector

Checks LeapVector equality.

  bool vectorsAreEqual = [thisVector equals:thatVector];

Vectors are equal if each corresponding component is equal.

Parameters
  vector  The LeapVector to compare.
Returns
  YES, if the LeapVectors are equal.
Since 1.0

+ (LeapVector *) forward

The unit vector pointing forward along the negative z-axis: (0, 0, -1).

  LeapVector *forwardVector = [LeapVector forward];

Since 1.0

- (id) initWithVector: (const LeapVector *) vector

Copies the specified LeapVector.

  LeapVector *copiedVector = [[LeapVector alloc] initWithVector:otherVector];

Parameters
  vector  The LeapVector to copy.
Since 1.0

- (id) initWithX: (float) x y: (float) y z: (float) z

Creates a new LeapVector with the specified component values.

  LeapVector *newVector = [[LeapVector alloc] initWithX:0.5 y:200.3 z:67];

Parameters
  x  The horizontal component.
  y  The vertical component.
  z  The depth component.
Since 1.0

+ (LeapVector *) left

The unit vector pointing left along the negative x-axis: (-1, 0, 0).

  LeapVector *leftVector = [LeapVector left];

Since 1.0

- (LeapVector *) minus: (const LeapVector *) vector

Subtract a vector from this vector.

  LeapVector *difference = [thisVector minus:thatVector];

Parameters
  vector  The LeapVector subtrahend.
Returns
  The difference between the two LeapVectors.
Since 1.0

- (LeapVector *) negate

Negate this vector.

  LeapVector *negation = thisVector.negate;

Returns
  The negation of this LeapVector.
Since 1.0

- (LeapVector *) plus: (const LeapVector *) vector

Add a vector to this vector.

  LeapVector *sum = [thisVector plus:thatVector];

Parameters
  vector  The LeapVector addend.
Returns
  The sum of the two LeapVectors.
Since 1.0

+ (LeapVector *) right

The unit vector pointing right along the positive x-axis: (1, 0, 0).

  LeapVector *rightVector = [LeapVector right];

Since 1.0

- (LeapVector *) times: (float) scalar

Multiply this vector by a number.

  LeapVector *product = [thisVector times:5.0];

Parameters
  scalar  The scalar factor.
Returns
  The product of this LeapVector and a scalar.
Since 1.0

+ (LeapVector *) up

The unit vector pointing up along the positive y-axis: (0, 1, 0).

  LeapVector *upVector = [LeapVector up];

Since 1.0

+ (LeapVector *) xAxis

The x-axis unit vector: (1, 0, 0).

  LeapVector *xAxisVector = [LeapVector xAxis];

Since 1.0

+ (LeapVector *) yAxis

The y-axis unit vector: (0, 1, 0).

  LeapVector *yAxisVector = [LeapVector yAxis];

Since 1.0

+ (LeapVector *) zAxis

The z-axis unit vector: (0, 0, 1).

  LeapVector *zAxisVector = [LeapVector zAxis];

Since 1.0

+ (LeapVector *) zero

The zero vector: (0, 0, 0)

  LeapVector *zeroVector = [LeapVector zero];

Since 1.0

## Property Documentation

- (float) magnitude

The magnitude, or length, of this vector.

  float length = thisVector.magnitude;

The magnitude is the L2 norm, or Euclidean distance between the origin and the point represented by the (x, y, z) components of this LeapVector object.

Returns
  The length of this vector.
Since 1.0

- (float) magnitudeSquared

The square of the magnitude, or length, of this vector.

  float lengthSquared = thisVector.magnitudeSquared;

Returns
  The square of the length of this vector.
Since 1.0

- (LeapVector*) normalized

A normalized copy of this vector.

  LeapVector *normalizedVector = otherVector.normalized;

A normalized vector has the same direction as the original vector, but with a length of one.

Returns
  A LeapVector object with a length of one, pointing in the same direction as this Vector object.
Since 1.0

- (float) pitch

Pitch is the angle between the negative z-axis and the projection of the vector onto the y-z plane. In other words, pitch represents rotation around the x-axis. If the vector points upward, the returned angle is between 0 and pi radians (180 degrees); if it points downward, the angle is between 0 and -pi radians.

Returns
  The angle of this vector above or below the horizon (x-z plane).
Since 1.0

- (float) roll

Roll is the angle between the y-axis and the projection of the vector onto the x-y plane. In other words, roll represents rotation around the z-axis. If the vector points to the left of the y-axis, then the returned angle is between 0 and pi radians (180 degrees); if it points to the right, the angle is between 0 and -pi radians.

Use this function to get the roll angle of the plane to which this vector is a normal. For example, if this vector represents the normal to the palm, then this function returns the tilt or roll of the palm plane compared to the horizontal (x-z) plane.
Returns
  The angle of this vector to the right or left of the y-axis.
Since 1.0

- (NSMutableData*) toFloatPointer

Returns an NSMutableData object containing the vector components as consecutive floating point values.

  NSData *vectorData = thisVector.toFloatPointer;
  float x, y, z;
  [vectorData getBytes:&x range:NSMakeRange(0, sizeof(float))];
  [vectorData getBytes:&y range:NSMakeRange(sizeof(float), sizeof(float))];
  [vectorData getBytes:&z range:NSMakeRange(2 * sizeof(float), sizeof(float))];
  //Or access as an array of float:
  float array[3];
  [vectorData getBytes:&array length:sizeof(float) * 3];
  x = array[0];
  y = array[1];
  z = array[2];

Since 1.0

- (NSArray*) toNSArray

Returns an NSArray object containing the vector components in the order: x, y, z.

  NSArray *vectorArray = thisVector.toNSArray;

Since 1.0

- (float) x

The horizontal component.

Since 1.0

- (float) y

The vertical component.

Since 1.0

- (float) yaw

Yaw is the angle between the negative z-axis and the projection of the vector onto the x-z plane. In other words, yaw represents rotation around the y-axis.
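For readers without the SDK at hand, the vector operations documented above (angleTo:, dot:, cross:) are standard three-component vector math. The following Python sketch is not part of the Leap API; it just mirrors the documented behavior, including angleTo: returning the smaller conjugate angle and zero for zero-length inputs:

  import math

  def dot(a, b):
      return sum(ai * bi for ai, bi in zip(a, b))

  def cross(a, b):
      ax, ay, az = a
      bx, by, bz = b
      return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

  def magnitude(a):
      return math.sqrt(dot(a, a))

  def angle_to(a, b):
      # Smaller of the two conjugate angles, in [0, pi]; zero if either
      # vector has zero length, matching the angleTo: description above.
      denom = magnitude(a) * magnitude(b)
      if denom == 0.0:
          return 0.0
      # Clamp against floating-point drift outside [-1, 1].
      return math.acos(max(-1.0, min(1.0, dot(a, b) / denom)))

  x_axis, y_axis = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
  print(angle_to(x_axis, y_axis))  # ~1.5708, i.e. pi/2 (90 degrees)
  print(cross(x_axis, y_axis))    # (0.0, 0.0, 1.0), the z-axis unit vector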
2020-02-27 07:21:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6524786353111267, "perplexity": 7042.304798370696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00343.warc.gz"}
https://wiki.geogebra.org/en/Normal_Command
# Normal Command

Normal( <Mean>, <Standard Deviation>, x )
Creates the probability density function (pdf) of the normal distribution.

Normal( <Mean>, <Standard Deviation>, x, <Boolean Cumulative> )
If Cumulative is true, creates the cumulative distribution function of the normal distribution with mean μ and standard deviation σ; otherwise creates the pdf of the normal distribution.

Normal( <Mean μ>, <Standard Deviation σ>, <Variable Value v> )
Calculates the function $\Phi\left(\frac{x-\mu}{\sigma}\right)$ at $v$, where $\Phi$ is the cumulative distribution function for N(0,1); this is the cdf of the normal distribution with mean μ and standard deviation σ.

Note: Returns the probability for a given x-coordinate's value (i.e., the area under the normal distribution curve to the left of the given x-coordinate).

Example: Normal(2, 0.5, 1) yields 0.02 in the Algebra View and $\frac{\operatorname{erf}(-\sqrt{2})+1}{2}$ in the CAS View.
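The example value can be checked outside GeoGebra. This is a small Python sketch (not GeoGebra syntax) using the standard identity between the normal cdf and the error function:

  import math

  def normal_cdf(x, mu, sigma):
      # Phi((x - mu) / sigma) expressed via the error function.
      return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

  print(normal_cdf(1, 2, 0.5))                 # ~0.0228, rounded to 0.02 in the Algebra View
  print((math.erf(-math.sqrt(2)) + 1) / 2)     # same value, matching the CAS View expression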
2018-12-11 20:40:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499784708023071, "perplexity": 8459.957164331883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823702.46/warc/CC-MAIN-20181211194359-20181211215859-00048.warc.gz"}
http://www.acmerblog.com/POJ-1230-Pass-Muraille-blog-325.html
2013 11-09

# Pass-Muraille

In modern day magic shows, passing through walls is very popular: a magician performer passes through several walls in a predesigned stage show. The wall-passer (Pass-Muraille) has a limited wall-passing energy to pass through at most k walls in each wall-passing show. The walls are placed on a grid-like area. An example is shown in Figure 1, where the land is viewed from above. All the walls have unit widths, but different lengths. You may assume that no grid cell belongs to two or more walls.

A spectator chooses a column of the grid. Our wall-passer starts from the upper side of the grid and walks along the entire column, passing through every wall in his way to get to the lower side of the grid. If he faces more than k walls when he tries to walk along a column, he would fail to present a good show. For example, in the wall configuration shown in Figure 1, a wall-passer with k = 3 can pass from the upper side to the lower side choosing any column except column 6.

Given a wall-passer with a given energy and a show stage, we want to remove the minimum number of walls from the stage so that our performer can pass through all the walls at any column chosen by spectators.

Input: The first line of the input file contains a single integer t (1 <= t <= 10), the number of test cases, followed by the input data for each test case. The first line of each test case contains two integers n (1 <= n <= 100), the number of walls, and k (0 <= k <= 100), the maximum number of walls that the wall-passer can pass through, respectively. After the first line, there are n lines each containing two (x, y) pairs representing coordinates of the two endpoints of a wall. Coordinates are non-negative integers less than or equal to 100. The upper-left of the grid is assumed to have coordinates (0, 0). The second sample test case below corresponds to the land given in Figure 1.

Output: There should be one line per test case containing an integer number which is the minimum number of walls to be removed such that the wall-passer can pass through walls starting from any column on the upper side.

Sample Input:
2
3 1
2 0 4 0
0 1 1 1
1 2 2 2
7 3
0 0 3 0
6 1 8 1
2 3 6 3
4 4 6 4
0 5 1 5
5 6 7 6
1 7 3 7

Sample Output:
1
1

Hint: Walls are parallel to X.
//* @author: ccQ.SuperSupper
import java.io.*;
import java.util.*;

interface Pass{
  int N = 100+10;
  void SetInit(int n,int k);
  void AddWall(int left,int right);  // declaration reconstructed: the listing was truncated here
  void InitData();
  int GetAns();
}

class Interval implements Comparable{
  int left,right;
  void set(int left,int right){
    this.left = left;
    this.right = right;
  }
  public int compareTo(Object obj){
    // Sort walls by their right endpoint, ascending.
    Interval temp = (Interval)obj;
    if(this.right>temp.right) return 1;
    return -1;
  }
}

class Muraille implements Pass{
  int n,k,cnt;
  int Graph[] = new int[N];          // Graph[i] = number of live walls covering column i
  int select[] = new int[N];         // unused in this solution
  Interval wall[] = new Interval[N];

  Muraille(){
    for(int i=0;i<N;++i) wall[i] = new Interval();
  }
  public void SetInit(int n,int k){
    cnt = 0;
    this.n = n;
    this.k = k;
    Arrays.fill(Graph, 0);
  }
  public void AddWall(int left,int right){  // method name reconstructed; body as in the original
    wall[cnt].set(left, right);
    for(int i=left;i<=right;++i){
      ++Graph[i];
    }
    ++cnt;
  }
  public void InitData(){
    Arrays.sort(wall,0,n);
  }
  void delete(int who){
    // Remove wall 'who' from the column counts and mark it deleted.
    for(int i=wall[who].left;i<=wall[who].right;++i){
      --Graph[i];
    }
    wall[who].right = -1;
  }
  void delete(int left,int num){
    // Greedily delete 'num' walls starting at or before this column,
    // preferring those that extend furthest to the right.
    for(int i=n-1;i>=0 && num>0;--i)
      if(wall[i].left<=left && wall[i].right!=-1){
        --num;
        delete(i);
      }
  }
  public int GetAns(){
    int ans=0;
    for(int i=0;i<N;++i){
      if(Graph[i]>k){
        ans+=Graph[i]-k;
        delete(i,Graph[i]-k);
      }
    }
    return ans;
  }
}

public class Main {
  // Reader setup and GetNum helper reconstructed; the original listing broke off mid-main.
  static StreamTokenizer cin = new StreamTokenizer(
      new BufferedReader(new InputStreamReader(System.in)));
  static int GetNum(StreamTokenizer in) throws Exception {
    in.nextToken();
    return (int)in.nval;
  }
  public static void main(String[]args)throws Exception{
    int Test,n,k;
    Muraille muraille = new Muraille();
    Test = GetNum(cin);
    while(Test--!=0){
      n = GetNum(cin);
      k = GetNum(cin);
      muraille.SetInit(n,k);
      for(int i=0;i<n;++i){
        // Walls are horizontal, so only the x-coordinates matter; skip the y's.
        int left = GetNum(cin);
        GetNum(cin);
        int right = GetNum(cin);
        GetNum(cin);
        muraille.AddWall(Math.min(left,right), Math.max(left,right));  // call reconstructed
      }
      muraille.InitData();
      System.out.println(muraille.GetAns());  // output step reconstructed
    }
  }
}
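The same greedy idea, restated compactly in Python as an independent sketch (not a translation of the Java above, and unoptimized): sweep the columns left to right, and whenever more than k walls cover a column, remove walls covering it that extend furthest to the right.

  def min_walls_to_remove(walls, k, width=100):
      # walls: list of (left, right) column ranges, inclusive
      cover = [0] * (width + 1)
      for l, r in walls:
          for c in range(l, r + 1):
              cover[c] += 1
      walls = sorted(walls, key=lambda w: w[1])   # by right endpoint, ascending
      alive = [True] * len(walls)
      removed = 0
      for col in range(width + 1):
          while cover[col] > k:
              # Largest index = largest right endpoint among live walls covering col.
              i = max(j for j in range(len(walls))
                      if alive[j] and walls[j][0] <= col <= walls[j][1])
              alive[i] = False
              removed += 1
              for c in range(walls[i][0], walls[i][1] + 1):
                  cover[c] -= 1
      return removed

  # Second sample case: seven walls, k = 3; column 6 is covered four times.
  sample = [(0, 3), (6, 8), (2, 6), (4, 6), (0, 1), (5, 7), (1, 3)]
  print(min_walls_to_remove(sample, 3))   # 1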
2017-04-26 19:39:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32066893577575684, "perplexity": 1827.5754208656663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121644.94/warc/CC-MAIN-20170423031201-00061-ip-10-145-167-34.ec2.internal.warc.gz"}
https://proxies-free.com/co-combinatorics-a-ratio-of-two-probabilities/
# co.combinatorics – A ratio of two probabilities

Let $t:=\eta$. Then
$$f(t)=\frac{P(G\ge K)}{P(B\ge K)},$$
where $G$ is a random variable with the binomial distribution with parameters $N,q_G t$ and $B$ is a random variable with the binomial distribution with parameters $N,q_B t$; here we must assume that $q_B>0$ and $t\in(0,1/q_G)$, so that $q_G t$ and $q_B t$ are in the interval $(0,1)$.

The random variables $G$ and $B$ have a monotone likelihood ratio (MLR): for each $x\in\{0,\dots,N\}$,
$$\frac{P(G=x)}{P(B=x)}=C\,\Big(\frac{1-q_G t}{1-q_B t}\Big)^{N-x},$$
which is decreasing in $t\in(0,1/q_G)$; here, $C$ is a positive real number which does not depend on $t$.

It is well known that the MLR implies the MTR, the monotone tail ratio. Thus, the desired result follows.
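The monotonicity of the tail ratio can be sanity-checked numerically. A small Python sketch, under the assumption (implicit in the argument) that $q_G > q_B$, with illustrative values of $N$, $K$ and the rates:

  import math

  def binom_tail(n, p, k):
      # P(X >= k) for X ~ Binomial(n, p), summed directly from the pmf.
      return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

  N, K = 20, 8
  q_G, q_B = 0.6, 0.3   # assumed rates with q_G > q_B

  ratios = []
  for i in range(1, 16):
      t = i / (16 * q_G)   # grid inside (0, 1/q_G)
      ratios.append(binom_tail(N, q_G * t, K) / binom_tail(N, q_B * t, K))

  # f(t) = P(G >= K) / P(B >= K) should be monotone (here: decreasing) in t.
  print(all(a >= b for a, b in zip(ratios, ratios[1:])))   # True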
2021-04-19 21:52:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9854974746704102, "perplexity": 66.64573403298698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00339.warc.gz"}
https://pos.sissa.it/256/026/
Volume 256 - 34th annual International Symposium on Lattice Field Theory (LATTICE2016) - Nonzero Temperature and Density

Complex Langevin for Lattice QCD at $T=0$ and $\mu \ge 0$.

D.K. Sinclair,* J.B. Kogut
*corresponding author

Full text: pdf
Pre-published on: January 30, 2017
Published on: March 24, 2017

Abstract

QCD at finite quark-/baryon-number density, which describes nuclear matter, has a sign problem which prevents direct application of standard simulation methods based on importance sampling. When such finite density is implemented by the introduction of a quark-number chemical potential $\mu$, this manifests itself as a complex fermion determinant. We apply simulations using the Complex Langevin Equation (CLE), which can be applied in such cases. However, this is not guaranteed to give correct results, so extensive tests are required. In addition, gauge cooling is required to prevent runaway behaviour. We test these methods on 2-flavour lattice QCD at zero temperature on a small ($12^4$) lattice at an intermediate coupling $\beta=6/g^2=5.6$ and relatively small quark mass $m=0.025$, over a range of $\mu$ values from $0$ to saturation. While this appears to show the correct phase structure, with a phase transition at $\mu \approx m_N/3$ and a saturation density of $3$ at large $\mu$, the observables show departures from known values at small $\mu$. We are now running on a larger lattice ($16^4$) at weaker coupling $\beta=5.7$. At $\mu=0$ this significantly improves agreement between measured observables and known values, and there is some indication that this continues to small $\mu$. This leads one to hope that the CLE might produce correct results in the weak-coupling -- continuum -- limit.

DOI: https://doi.org/10.22323/1.256.0026
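For readers unfamiliar with the method named in the abstract, here is a standard one-variable toy illustration of complex Langevin, entirely separate from the lattice QCD code used in the paper (no gauge fields, so no gauge cooling is needed): for a Gaussian action S(z) = σz²/2 with complex σ, the complexified Langevin flow reproduces the analytic expectation ⟨z²⟩ = 1/σ.

  import random, cmath

  # Toy complex Langevin for S(z) = 0.5 * sigma * z^2 with complex sigma.
  # Drift is -dS/dz = -sigma * z; real Gaussian noise drives the update.
  sigma = 1.0 + 0.5j
  eps = 0.01          # Langevin step size (must be small for stability)
  steps = 500_000

  z = 0.1 + 0.0j
  acc, count = 0.0 + 0.0j, 0
  rng = random.Random(1)
  for n in range(steps):
      eta = rng.gauss(0.0, 1.0)
      z = z - eps * sigma * z + cmath.sqrt(2 * eps) * eta
      if n > steps // 10:          # discard thermalization
          acc += z * z
          count += 1

  print(acc / count)   # statistically close to 1/sigma = 0.8 - 0.4j
  print(1 / sigma)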
2020-11-25 09:02:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6160228848457336, "perplexity": 2476.419541095599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00430.warc.gz"}
https://anhngq.wordpress.com/2009/05/08/a-relation-between-pointwise-convergence-of-functions-and-convergence-of-functionals/
# Ngô Quốc Anh

## May 8, 2009

### A Relation Between Pointwise Convergence Of Functions And Convergence Of Functionals

Filed under: Small Exercises, Analysis 6 (MA5205), Scientific Research — Ngô Quốc Anh @ 16:40

Let $(\Omega, \Sigma, \mu)$ be a measure space and let $\{f_n\}_{n=1}^\infty$ be a sequence of complex valued measurable functions which are uniformly bounded in $L^p= L^p(\Omega, \Sigma, \mu)$ for some $0 < p<\infty$. Suppose that $f_n \to f$ pointwise almost everywhere (a.e.). What can be said about $\| f\|_p$? The simplest tool for estimating $\| f\|_p$ is Fatou's lemma, which yields

$\displaystyle\| f\|_p \leq \liminf_{n \to \infty} \|f_n\|_p.$

The purpose of this note is to point out that much more can be said, namely

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_p^p - \| f_n - f\|_p^p \right) = \|f\|_p^p.$

More generally, if $j : \mathbb C \to \mathbb C$ is a continuous function such that $j(0) = 0$, then, when $f_n \to f$ a.e. and

$\displaystyle\int |j(f_n(x))| d\mu(x) \leq C < \infty$

it follows that

$\displaystyle\lim\limits_{n \to \infty} \int \left[ j(f_n) - j(f_n - f)\right] = \int j(f)$

under suitable conditions on $j$ and/or $\{f_n\}$.

Statement. The $L^p$ case ($0 < p < \infty$): Suppose $f_n \to f$ a.e. and $\|f_n\|_p \leq C<\infty$ for all $n$ and for some $0 < p < \infty$. Then the following limit exists and the equality holds

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_p^p - \| f_n - f\|_p^p \right) = \|f\|_p^p.$

Proof. (i) By Fatou's lemma, $f \in L^p$.

(ii) In case $0 < p \le 1$, and if we assume that $f \in L^p$, then we do not need the hypothesis that $\|f_n\|_p$ is uniformly bounded. [This follows from the inequality $\displaystyle|f_n|^p - |f_n - f|^p \leq |f|^p$ and the dominated convergence theorem.] However, when $1 < p < \infty$, the hypothesis that $\|f_n\|_p$ is uniformly bounded is really necessary (even if we assume that $f \in L^p$), as a simple counterexample shows.

(iii) When $1 < p < \infty$, the hypotheses imply that $f_n \rightharpoonup f$ weakly in $L^p$. [By the Banach-Alaoglu theorem, for some subsequence, $f_n$ converges weakly to some $g$; but $g = f$ since $f_n \to f$ a.e.] However, weak convergence in $L^p$ is insufficient to conclude that

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_p^p - \| f_n - f\|_p^p \right) = \|f\|_p^p$

holds, except in the case $p=2$. When $p\ne 2$ it is easy to construct counterexamples, that is,

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_p^p - \| f_n - f\|_p^p \right) \ne \|f\|_p^p$

under the assumption only of weak convergence. When $p=2$ the proof of

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_2^2 - \| f_n - f\|_2^2 \right) = \|f\|_2^2$

is trivial under the assumption of weak convergence. Indeed,

$\|f_n\|_2^2 - \| f_n - f\|_2^2 = \displaystyle\int \big[ f_n^2 - (f_n-f)^2 \big]= \int \big[ 2f_nf -f^2\big].$

Since $f_n \rightharpoonup f$ in $L^2$, then $\int f_n f \to \int f^2$ (note that the dual space of $L^2$ is itself). Thus

$\displaystyle\lim_{n \to \infty} \left( \|f_n\|_2^2 - \| f_n - f\|_2^2 \right) = \|f\|_2^2.$
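A quick numerical illustration of the main identity (not part of the original post): take f_n = f + h_n, where h_n is a bump sliding off to infinity, so f_n → f pointwise while ‖f_n‖_p stays bounded away from ‖f‖_p. A Python sketch approximating the integrals by Riemann sums on a truncated half-line:

  import numpy as np

  p = 1.5
  x = np.linspace(0.0, 60.0, 60001)
  dx = x[1] - x[0]

  f = np.exp(-x**2)                      # fixed profile

  def lp_p(g):                           # ||g||_p^p via a Riemann sum
      return np.sum(np.abs(g)**p) * dx

  for n in [5, 10, 20, 40]:
      h_n = np.exp(-(x - n)**2)          # bump centered at n, escaping to infinity
      f_n = f + h_n
      lhs = lp_p(f_n) - lp_p(f_n - f)    # ||f_n||_p^p - ||f_n - f||_p^p
      print(n, lhs, lp_p(f))             # lhs approaches ||f||_p^p

As n grows, the supports of f and h_n effectively separate, and the printed left-hand side converges to ‖f‖_p^p, while ‖f_n‖_p^p alone does not.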
2018-04-20 20:20:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 51, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999642372131348, "perplexity": 2418.814954758928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944682.35/warc/CC-MAIN-20180420194306-20180420214306-00089.warc.gz"}
https://nforum.ncatlab.org/discussion/2172/oochernsimons-and-extended-cobordism/
Comment 1 • domenico_fiorenza • Nov 24th 2010 • (edited Nov 25th 2010)

Hi all,

sorry for having been absent lately, I've been fully absorbed by the preprint with Jim and Urs. Now that's over and my mind seems able again to follow nForum discussions :)

I've been thinking of oo-Chern-Simons theory. At present we are presenting it in the nLab as a morphism $\mathbf{H}(\Sigma,A_{conn})\to \mathbf{B}^{n-dim \Sigma}U(1)$. This is fine but does not make explicit an important point: the relation to extended cobordism. Let me sketch it (in a simplified situation where I will not consider differential refinements).

We have a cocycle $c:A\to \mathbf{B}^n U(1)$. If we now consider an $n$-representation of $\mathbf{B}^n U(1)$, e.g., the fundamental one, then we can see $c$ as the datum of an $n$-vector bundle over $A$. Now, it is likely that $n$Vect$(A)$ is a symmetric monoidal $(\infty,n)$-category, and one can hope that the $n$-vector bundle corresponding to $c$ is a fully dualizable object. So by the cobordism hypothesis we get a representation of $Bord_n$ with values in $n$Vect$(A)$. In particular, to a closed connected $n$-manifold $\Sigma$ there will correspond a $0$-vector bundle over $A$, i.e. a complex-valued function on $A$ constant on the isomorphism classes of objects. Integrating this over $A$ produces the invariant associated with $\Sigma$.

We can associate a cobordism invariant to $\Sigma$ also in another way: first we push the given $n$-vector bundle forward to the point, i.e. we take the $n$-vector space of its sections. This is hopefully fully dualizable, so we have a representation $Bord_n\to n$Vect, and so an invariant associated with $\Sigma$. It is reasonable to expect that these two invariants are the same.

There is one more point of view on this: namely, we can consider bordism with values in $A$. If a representation $Bord_n(A)\to n$Vect is given, then to a morphism $c:\Sigma\to A$ will correspond a $(n-dim\Sigma)$-vector space $V_c$. This gives a $(n-dim\Sigma)$-vector bundle over $\mathbf{H}(\Sigma,A)$. If the $n$-vector space $V_c$ is the fundamental representation of $\mathbf{B}^{n-dim\Sigma} U(1)$, then the $(n-dim\Sigma)$-vector bundle over $\mathbf{H}(\Sigma,A)$ is induced by a principal $\mathbf{B}^{n-dim\Sigma} U(1)$-bundle, i.e., it corresponds to a morphism $\mathbf{H}(\Sigma,A)\to \mathbf{B}^{n-dim\Sigma} U(1)$, which is where we started from.

Comment 2 • Urs • Nov 26th 2010 • (edited Nov 26th 2010)

Hi Domenico, thanks for starting/getting back to this discussion. With our Friday seminar out of the way, I have now again some resources for $\infty$-Chern-Simons theory. Let me think a bit about what you just said (and catch my bus to catch my train), then I get back to you.

Comment 3 • Urs • Nov 26th 2010 • (edited Nov 26th 2010)

Domenico, here are some general thoughts

1. It is remarkable how the (infinity,n)-category of cobordisms is built by first building it simply as an $n$-fold simplicial set and then applying a completion operation. This makes its $n$-categorical nature rather tractable: $n$-cells are simply little $n$-cubes with an embedded manifold sitting inside, with boundary components sitting on the boundary of the cube.
Composition is just the evident attaching of cubes. This is (intentionally) a slightly oversimplified description, but the point is that it is not very much oversimplified, in fact. All the $\infty$-categorical subtlety is in completing the $n$-fold simplicial space defined this way to an $n$-fold complete Segal space. So I am thinking it might be useful to mimic this 2-step approach for defining our extended QFT: we should be able to get away with describing just how to propagate fields in one direction along an $n$-cube with inscribed $n$-manifold. Then we should get a morphism of $n$-fold simplicial sets from that and just send it through the completion operation. (This is just a hunch for a strategy, not a detailed plan. I am just trying to see to what extent we can proceed by divide and conquer.)

2. We should see that we stick to general abstract mechanisms as much as possible. Experience shows that that's a good thing. This makes me have the following attitude towards $n$-vector spaces etc.: there ought to be a nice general abstract formulation of $\infty$-Chern-Simons theory along the lines of linear algebra as described at integral transforms on sheaves.

3. The previous two points combined bring me back to a construction that we may have talked about before: looking at an $n$-morphism in $(\infty,n)Cob$ just along one direction makes it look like a cospan $\Sigma_{in} \to \Sigma \leftarrow \Sigma_{out}$ (making all $n$ directions explicit would show that this is an $n$-cube of cospans!). Homming this into our target space object $A_{conn}$ produces a span

$[\Sigma_{in}, A_{conn}] \leftarrow [\Sigma, A_{conn}] \to [\Sigma_{out}, A_{conn}]$

of spaces of field configurations of our theory. Given our $\infty$-Chern-Simons action functional $[\Sigma,A_{conn}] \to \mathbf{B}^{n-dim \Sigma } U(1)$, regarding it as a cocycle and passing to the $\infty$-bundles that it classifies gives

$E(\Sigma_{in}) \stackrel{i}{\to} E(\Sigma) \stackrel{o}{\to} E(\Sigma_{out})$

of bundles over the above span of spaces of field configurations. Now, these are principal bundles, not $n$-vector bundles yet. But there is a way that allows us to think of an object over these bundles as presenting a section of an associated vector bundle. (I think we discussed this groupoid-cardinality approach before. Let me know if it is not clear what I am thinking of.)

So let $\mathbf{H}$ be the ambient $\infty$-topos (if we think of the expressions $[\Sigma,A]$ as internal homs, then this is still the original $\infty$-topos that we started in. If they are instead taken to be external homs, then this is now $\infty Grpd$. I am not sure yet what the right way to go is. But anyway.) Then the over-(infinity,1)-toposes $\mathbf{H}/[E(\Sigma_{in})]$ etc. would play the role of the $n$-vector spaces of states over $\Sigma_{in}$, etc. The quantum propagation along our $n$-morphism in the given direction should then be the integral transform

$o_! i^* : \mathbf{H}/[E(\Sigma_{in})] \to \mathbf{H}/[E(\Sigma_{out})]$

That would give a fairly immediate description of our extended QFT along one of the $n$ directions. My hope would be that just doing this same process in an $n$-fold iterated way gives a morphism of $n$-fold simplicial sets, which under some completion then gives the desired $(\infty,n)$-functor.

Comment 4 • domenico_fiorenza

Hi Urs,

I very much agree with your over-toposes point of view. Yet I think it should not be push-pull, but push-tensor-pull.
Concretely, consider a cospan $\Sigma_{in} \to \Sigma \leftarrow \Sigma_{out}$ and the associated span $[\Sigma_{in}, A_{conn}] \leftarrow [\Sigma, A_{conn}] \to [\Sigma_{out}, A_{conn}]$. Then the oo-Chern-Simons action gives us a cocycle $[\Sigma_{in}, A_{conn}]\to \mathbf{B}^{n-dim\Sigma_{in}}U(1)$, which we can pull back to a cocycle $[\Sigma, A_{conn}]\to \mathbf{B}^{n-dim\Sigma_{in}}U(1)$. This is not the oo-Chern-Simons cocycle on $[\Sigma, A_{conn}]$ (just look at the degree of delooping on the right hand side). Rather, the oo-Chern-Simons cocycle $[\Sigma, A_{conn}]\to \mathbf{B}^{n-dim\Sigma}U(1)$ acts on $\mathbf{H}([\Sigma, A_{conn}],\mathbf{B}^{n-dim\Sigma_{in}}U(1))$, since $\mathbf{B}^k U(1)$ is the $k$-groupoid of morphisms of $\mathbf{B}^{k+1}U(1)$. So, after having pulled back our oo-Chern-Simons cocycle from $[\Sigma_{in}, A_{conn}]$ to $[\Sigma, A_{conn}]$, we act on it with the oo-Chern-Simons cocycle on $[\Sigma, A_{conn}]$, and then, finally, we push it forward to $[\Sigma_{out}, A_{conn}]$. • CommentRowNumber5. • CommentAuthorUrs • CommentTimeNov 27th 2010 • (edited Nov 27th 2010) Good point, Domenico. But this ought to be related to what I said: I was pull-pushing along the total spaces of the bundles of these cocycles. That mimics a pull-tensor-push. I need to think about this, because the setup we are talking about right now is a tad more involved than the bare-bones setup described at integral transforms on sheaves. But there it is shown how in the bare-bones setup every pull-tensor-push is equivalent to a pull-push. • CommentRowNumber6. • CommentAuthorUrs • CommentTimeNov 27th 2010 • (edited Nov 27th 2010) Here is another observation, coming from the discussion on higher order Hochschild homology and its relation to QFT in the other thread: Let me decompose the $\infty$-Chern-Simons action functional $\mathbf{H}(\Sigma, A_{conn}) \to \mathcal{B}^{n- dim\Sigma} U(1)$ again into its steps, where it reads $\mathbf{H}(\Sigma, A_{conn}) \to \mathbf{H}(\Sigma, \mathbf{B}^n U(1)_{conn}) \simeq \mathbf{H}(\mathbf{\Pi}(\Sigma), \mathbf{B}^n U(1)) \simeq \infty Grpd(\Pi(\Sigma), \mathcal{B}^n U(1)) \stackrel{\tau_{\leq n - dim\Sigma}}{\to} \mathcal{B}^{n - dim \Sigma} U(1) \,.$ Let me disregard the very last step for the moment, the one that decategorifies at level $n - dim \Sigma$ to get the actual action. I want to look here at the intermediate step before, where we have $\mathbf{H}(\mathbf{\Pi}(\Sigma), \mathbf{B}^n U(1))$. Let’s see what we get if we replace the external hom here with the internal one. Recalling the notation $\mathbf{\Pi} = LConst \Pi$, this is $[\mathbf{\Pi}(\Sigma), \mathbf{B}^n U(1)] = [LConst \Pi(\Sigma), \mathbf{B}^n U(1)] \,.$ This is curious, because comparing with the discussion at Hochschild cohomology, we see that under taking functions $\mathcal{O}$, this is the higher order Hochschild homology of $\mathcal{O} \mathbf{B}^n U(1)$ over $\Pi(\Sigma)$ (notably if $\Sigma = S^1$, it is the ordinary Hochschild homology of that $\infty$-algebra). Notably, if we let $\Sigma$ vary here over subsets of a larger $\hat \Sigma$, then the assignment $\Sigma \mapsto \mathcal{O} [LConst \Pi(\Sigma), \mathbf{B}^n U(1)]$ is what Ginot et al., in the article linked to at the entry on Hochschild cohomology, show to be a locally constant factorization algebra on $\hat \Sigma$. I haven’t thought this fully through. But I am beginning to think now that we should be able to unify the AQFT and the FQFT perspective on $\infty$-Chern-Simons theory along such lines.
But I don’t understand yet the following step in this would-be story: in the external hom we have $\mathbf{H}(\Sigma, \mathbf{B}^n U(1)_{conn}) \simeq \mathbf{H}(\mathbf{\Pi}(\Sigma), \mathbf{B}^n U(1))$ for dimensional reasons, because $dim \Sigma \leq n$. But this argument then fails in the internal hom, which is given by $[\Sigma, \mathbf{B}^n U(1)_{conn}] : U \mapsto \mathbf{H}(\Sigma \times U, \mathbf{B}^n U(1)_{conn}) \,.$ Not sure yet what that is telling us. But I thought I’d mention my thoughts anyway. 2. Hi Urs, that’s a very good point! Concerning the last truncation step, I must say that from the very beginning I had mixed feelings about it: on one side it reproduced neatly the classical constructions, but on the other it was a truncation, so this suggested the “real thing” had to be the object before truncation, which is much more canonical. And now it seems we are beginning to see why. Sorry to be so short, I’m in a hurry. Won’t be back before tomorrow evening :( • CommentRowNumber8. • CommentAuthorUrs • CommentTimeNov 29th 2010 The truncation step is of course the integration step of the local Lagrangian over the surface to an actual local action functional. It is very nice how this comes out, but possibly, as you said, we want to be careful with applying this too early on. • CommentRowNumber9. • CommentAuthordomenico_fiorenza • CommentTimeDec 2nd 2010 • (edited Dec 2nd 2010) Hi Urs, I was thinking about Thom’s work on cobordism. There, the module of $n$-dimensional oriented cobordisms is realized as the $n$-th homotopy group of the Thom spectrum, i.e. as $\pi_n(MSO):=\lim \pi_{n+i}(MSO_i)$. The cobordism ring is then (saying this in a very rough way) the collection of all these homotopy groups. But then this suggests that a natural point of view on the cobordism ring is in terms of the oo-Poincaré groupoid of the Thom spectrum, $\Pi(MSO)$. It is not completely clear to me what kind of object the oo-Poincaré groupoid of a spectrum should be, but I’m confident $\Pi(MSO)$ is the kind of object whose representations we are interested in when we consider a TQFT. • CommentRowNumber10. • CommentAuthorUrs • CommentTimeDec 2nd 2010 • (edited Dec 2nd 2010) Hi Domenico, “It is not completely clear to me what kind of object the oo-Poincaré groupoid of a spectrum should be.” Maybe the discussion in the other thread Homology from the nPOV is relevant: there the idea is that the generalized homology of a space $X$ with coefficients in a spectrum $E$ is $\Pi LConst E$ computed for the stabilized $\infty$-topos over $X$. I am not sure if that really helps with your question, because over the point this amounts to saying that $\Pi(E) = E$! :-) But I mention it just in case it makes you see more. I am, unfortunately, once again absorbed with preparing our Friday seminar… • CommentRowNumber11. • CommentAuthorUrs • CommentTimeDec 7th 2010 We need to sort out what kind of quantum structure we can naturally obtain from the $\infty$-Chern-Simons Lagrangian $\mathbf{B}G_{conn} \to \mathbf{B}^n U(1)_{conn} \,.$ There ought to be a factorization algebra which to $\Sigma$ assigns the collection of sections of the bundle over the space of fields $\mathbf{H}(\Sigma, \mathbf{B}G_{conn})$ that is classified by the action functional $\mathbf{H}(\Sigma, \mathbf{B}G_{conn}) \to \mathbf{H}(\Sigma, \mathbf{B}^n U(1)_{diff}) \simeq \mathbf{H}(\mathbf{\Pi}(\Sigma), \mathbf{B}^n U(1)) \simeq \infty Grpd(\Pi(\Sigma), \mathcal{B}^n U(1))$. Or maybe of the $(n-dim \Sigma)$-truncation of this, not sure.
2022-01-27 20:47:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 131, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9102824926376343, "perplexity": 628.6576988901785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00068.warc.gz"}
https://math.stackexchange.com/questions/2733604/positive-semi-definiteness-of-a-matrix-implying-certain-structure
# Positive semi-definiteness of a matrix implying certain structure I have the following (real) matrix which I need to be positive semi-definite, $P = \begin{bmatrix} P_1 & -\frac{1}{2}(P_1+P_2)\\-\frac{1}{2}(P_1+P_2) & P_2\end{bmatrix} \succeq 0$. I think this is only the case when $P_1 = P_2 \succeq 0$, but I couldn't find a way to prove this. I was therefore wondering if this is even the case and, if so, how to prove it. The characteristic polynomial of $P$ is $$\lambda^2-(P_1+P_2)\lambda-\frac{1}{4}(P_1-P_2)^2,$$ so that the eigenvalues are: $$\lambda_\pm=\frac{P_1+P_2\pm\sqrt{2(P_1^2+P_2^2)}}{2}.$$ To ensure that both eigenvalues are non-negative it should hold: $$P_1+P_2\ge0;\quad (P_1+P_2)^2\ge 2(P_1^2+P_2^2)\Rightarrow 0\ge (P_1-P_2)^2.$$ The last inequality can however hold only if $P_1=P_2$; together with $P_1+P_2\ge 0$ this forces $P_1=P_2\ge 0$, confirming the conjecture.
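A quick numerical sanity check of this conclusion is easy to run. The sketch below (assuming scalar $P_1, P_2$; the test values are arbitrary) verifies the closed-form eigenvalues against numpy and confirms that only $P_1 = P_2 \ge 0$ yields a positive semi-definite matrix:

```python
import numpy as np

def eig_numeric(p1, p2):
    """Eigenvalues of P = [[p1, -(p1+p2)/2], [-(p1+p2)/2, p2]] (ascending)."""
    off = -(p1 + p2) / 2
    P = np.array([[p1, off], [off, p2]])
    return np.linalg.eigvalsh(P)  # symmetric matrix -> real eigenvalues

def eig_closed_form(p1, p2):
    """The formula derived above: ((p1+p2) +/- sqrt(2(p1^2+p2^2))) / 2."""
    s = np.sqrt(2 * (p1**2 + p2**2))
    return np.sort([(p1 + p2 - s) / 2, (p1 + p2 + s) / 2])

for p1, p2 in [(1.0, 1.0), (2.0, 3.0), (-1.0, -1.0)]:
    lam = eig_numeric(p1, p2)
    assert np.allclose(lam, eig_closed_form(p1, p2))
    print(f"P1={p1}, P2={p2}: eigenvalues={lam}, PSD={bool(np.all(lam >= -1e-12))}")
# Only P1 = P2 = 1 gives a PSD matrix; e.g. (2, 3) has a negative eigenvalue.
```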
2019-05-22 08:54:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706969857215881, "perplexity": 132.6126540643862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256778.29/warc/CC-MAIN-20190522083227-20190522105227-00294.warc.gz"}
https://direct.mit.edu/netn/article/3/2/384/2220/Computation-is-concentrated-in-rich-clubs-of-local
## Abstract To understand how neural circuits process information, it is essential to identify the relationship between computation and circuit organization. Rich clubs, highly interconnected sets of neurons, are known to propagate a disproportionate amount of information within cortical circuits. Here, we test the hypothesis that rich clubs also perform a disproportionate amount of computation. To do so, we recorded the spiking activity of on average ∼300 well-isolated individual neurons from organotypic cortical cultures. We then constructed weighted, directed networks reflecting the effective connectivity between the neurons. For each neuron, we quantified the amount of computation it performed based on its inputs. We found that rich-club neurons compute ∼160% more information than neurons outside of the rich club. The amount of computation performed in the rich club was proportional to the amount of information propagated by the same neurons. This suggests that in these circuits, information propagation drives computation. In total, our findings indicate that rich-club organization in effective cortical circuits supports not only information propagation but also neural computation. ## Author Summary Here we answer the question of whether rich-club organization in functional networks of cortical circuits supports neural computation. To do so, we combined network analysis with information theoretic tools to analyze the spiking activity of hundreds of neurons recorded from organotypic cultures of mouse somatosensory cortex. We found that neurons in rich clubs computed significantly more than neurons outside of rich clubs, suggesting that rich clubs do support computation in cortical circuits. Indeed, the amount of computation that we found in the rich clubs was proportional to the amount of information they propagate, suggesting that in these circuits, information propagation drives computation. ## INTRODUCTION The idea that neurons propagate information and that downstream neurons integrate this information via neural computation is foundational to our understanding of how the brain processes, and responds to, the world. Yet, the determinants of such computations remain largely unknown. Advances in data acquisition methods, offering increasingly comprehensive recordings of the activation dynamics that play out atop neural circuits, together with advances in data analytics, now make it possible to empirically study the determinants of neural computation. Using these tools, we addressed the fundamental question of where, relative to information flow in local cortical networks, the majority of neural computation takes place. Although there is no agreed-upon definition of “computation,” in its simplest form, computation refers to the process of integrating multiple sources of information to produce a new output. This is in contrast to information propagation, which is simply the passing of (unmodified) information from a source to a receiver. Neural computation is the systematic transformation of information received by a neuron (determined by analyzing its output with respect to its inputs) based on the input of multiple upstream neurons (Timme et al., 2016). This type of computation can be detected empirically when the activity of upstream neurons accounts for the activity of a downstream neuron better when considered jointly than when treated as independent sources of variance.
Because computation is the information gained beyond what was already accounted for by the upstream neurons when they are treated independently, it is not a given that strong sources of information propagation necessarily lead to a large amount of computation. Analytical tools adapted from Shannon’s information theory make it possible to track such computation as well as information propagation in networks of spiking neurons (Strong et al., 1998; Borst & Theunissen, 1999; Schreiber, 2000; Williams & Beer, 2010). The determinants of strong computation in neural circuits remain poorly understood. Previously, Timme et al. (2016) used these tools to show that computation does not vary systematically with the number of inputs received by a neuron, as might be intuited. Rather, computation correlates with the number of outputs of the upstream neurons. This counterintuitive finding suggested that the amount of computation a neuron performs may be better predicted by its position in the broader topographic structure of the circuit than by the local connectivity. The relationship between computation and the strength of inputs, however, remains unknown. To understand the determinants of maximal computation in neural circuits, it is important to determine how computation varies as a function of the topology of the functional networks along which information propagates and within which computation is performed. This raises the question of what topological conditions support computation. Local cortical networks, like many complex networks, contain “rich clubs,” that is, the most strongly connected neurons interconnect with a higher probability than would be expected by chance. The existence of a rich club in a functional network predicts that a select set of highly integrated nodes handles a disproportionately large amount of traffic. Indeed, in the local networks of cortical circuits, 20% of the neurons account for 70% of the information propagation (Nigam et al., 2016). As such, rich clubs represent a conspicuous topographic landmark in the flow of information across neural circuits. Thus, here we addressed the critical question: What is the role of the rich club with respect to neural computation? We tested among three possible hypotheses. First, computation is constant throughout the topology of a network: predicting that rich clubs do not perform more or less computation than would be expected by chance. Second, computation grows with increasing information availability: predicting that rich clubs are rich in computation given their high information density. Third, computation decreases with increasing information availability: predicting that rich clubs are computationally poor. To test these hypotheses, we recorded the spiking activity of hundreds of neurons from organotypic cultures of mouse somatosensory cortex and assessed the distribution of computation inside versus outside of rich clubs of information propagation (Figure 1). The results demonstrate that rich-club neurons perform 160% more computation than neurons outside of the rich club, accounting for the majority of network computation (∼88%). Throughout the networks, computation occurs proportionally to information propagation (i.e., the two are correlated) although at a slightly reduced rate (∼3% decrease) inside of rich clubs. Importantly, however, rich clubs contained more computation than would be expected given the correlation between propagation and computation. These results show that rich clubs are computationally dense. 
Thus, rich clubs support elevated amounts of both computation and propagation relative to the rest of the network. Figure 1. Experimental and data analysis procedure. (Top row, left to right) Brains were extracted from mouse pups and sliced using a vibratome. Slices containing somatosensory cortex were organotypically cultured for up to 2 to 4 weeks. Cultures were then placed on a recording array and recorded for 1 hr. (Middle row, right to left) Recordings yielded neuron-spiking dynamics at each electrode (waveforms at six example electrodes shown), which were sorted using principal component analysis in order to isolate individual cells based on their distinct waveforms. Once cells were isolated and localized (pink circles) within the recording area (white rectangle), their corresponding spike trains could be determined. (Bottom row, left to right) Spike trains were then used to compute transfer entropy (TE), at multiple timescales, between each neuron pair in a recording. This resulted in networks of effective connectivity. Computations occurring at neurons receiving two connections were then calculated using partial information decomposition. A rich-club analysis was used to detect collections of hub neurons that connect to each other. Finally, we examined the relationship between TE within a triad and two-input computations as well as between two-input computations and rich clubs. ## RESULTS All TE and synergy values were normalized by the entropy of the receiver neuron in order to cast them in terms of the proportion of the receiver neuron’s capacity that is accounted for by the transmitting neuron, or by computation, respectively. Because of this, all TE and synergy values are in terms of bits per bit. All results are reported as medians with 95% bootstrap confidence intervals (computed using 10,000 iterations) presented in brackets. The three timescales at which TE was computed follow the same pattern of results and are therefore combined for ease of presentation; separate results for each timescale are reported in the Supporting Information (Faber, Timme, Beggs, & Newman, 2019).
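The bootstrap intervals quoted throughout can be reproduced with a short percentile-bootstrap routine. The following is a minimal sketch (assuming numpy; the synthetic values and seed are illustrative, not the authors' data):

```python
import numpy as np

def bootstrap_median_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the median."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    # Resample with replacement, recording the median of each resample.
    medians = np.array([np.median(rng.choice(values, size=values.size))
                        for _ in range(n_boot)])
    lo, hi = np.percentile(medians, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(values), lo, hi

# Example on synthetic, lognormally distributed per-network values:
vals = np.random.default_rng(1).lognormal(mean=-2.0, sigma=1.0, size=75)
med, lo, hi = bootstrap_median_ci(vals)
print(f"median = {med:.3f} [{lo:.3f}, {hi:.3f}]")
```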
### Computation and Information Propagation Vary Widely in Cortical Microcircuits When building the networks, we found that 0.52% [0.38%, 1.1%] of all possible directed connections between neurons were significant at the α = 0.001 level (e.g., 480 of 81,510 possible connections, or 0.59%, were significant in a network of 286 neurons). To consider the amount of information that was used in two-input computations, we defined information propagation as the sum of the two inputs (TE values) converging on a neuron. Across neurons, the distribution of propagation values was approximately lognormal (shown in Figure 2), consistent with previously observed distributions of both structural (Song et al., 2005; Lefort et al., 2009; Ikegaya et al., 2012) and functional connectivity (for a review, see Buzsáki & Mizuseki, 2014). This lognormality indicates that there exists a long tail of large propagation values such that a few neurons propagate particularly large amounts of information. Concretely, we found that the strongest 8.5% [8%, 9.7%] of neurons propagated as much information as the rest of the neurons combined. Within a network, the difference between the neurons that received the most versus the least information commonly spanned 3.9 [3.6, 4.1] orders of magnitude. Figure 2. Distributions of neuron computation (synergy) and propagation (triad TE) are highly varied. Histograms of synergy and triad TE values for all receivers in all networks at all timescales. (A) Distributions of measured values. (B) Distributions of log-scaled values to emphasize variability. Solid and dashed lines depict the median across networks, and shaded regions depict 95% bootstrap confidence intervals around the median. Computation (measured as synergy), like propagation, varied in a lognormal fashion (Figure 2). The top 8.4% [8%, 9.3%] of neurons computed the same amount of information as the rest of the neurons combined. The synergy typically spanned 4.1 [3.8, 4.2] orders of magnitude over neurons in individual networks. Notably, computation was reliably smaller than propagation (propagation: 0.11 vs. computation: 0.025; Zs.r. = −7.5, n = 75 networks, p = 5.3 × 10−14), indicating that despite finding substantial computation, most information was accounted for by neuron-to-neuron propagation. The variability in computation motivated us to test the hypothesis that the dense interconnectivity of rich clubs serves as a hub, not only for propagation, but also for computation. ### Rich Clubs Perform a Majority of the Network-Wide Computation In every network, we found significant rich clubs at multiple richness parameter (r) levels (i.e., thresholds) as shown in Figure 3. The richness parameter was defined as the sum of incoming and outgoing TE edge weights of each neuron. Rich clubs were computed at every kth value of r by dividing the sum of all weights for nodes with r ≥ rk by the sum of the n largest weights in the network, where n is the number of edges between neurons with r ≥ rk. The resulting rich-club coefficients approach one when the strongest edges connect the neurons in the subsets defined by r ≥ rk.
These rich-club coefficients are then normalized by coefficients from null distributions in order to quantify how much rich clubs differ from those expected by chance (see Supporting Information Methods for more detail, Faber et al., 2019). The more the normalized rich-club coefficient diverges from one, the richer the rich club (Figure 3B). Figure 3. Networks reliably show rich clubs. (A) Adjacency matrix of a representative 310-neuron network with rich clubs. Rich club of top 30% of neurons depicted. Neurons sorted in order of increasing richness from left to right and bottom to top. TE values are log scaled. (B) Normalized, weighted rich-club coefficients for all networks. X-axis is log-scaled richness parameter level, where the richness parameter is the sum of the weighted connections for each neuron. Solid line represents median across all networks; shaded region is 95% bootstrap confidence interval around the median. In order for a rich club to be recruited into the synergy analysis, coefficients were required to be significant (p < 0.01) when compared with those from randomized networks. (C) The number of networks, out of the 75 analyzed, with significant rich clubs at each threshold. The majority of networks had significant rich clubs composed of the top 50% to 10% of the network. When comparing the observed rich-club coefficients with those from null distributions, we also calculated p values to establish significance of the rich clubs. We found significant rich clubs at multiple richness parameter levels, which typically consisted of the top 10%–50% of the neurons in each network (Figure 3C). The median number of thresholds that resulted in significant rich-club coefficients was four per network. To ensure that the detection of rich clubs was not biased by the spatial sampling of the recording apparatus, we compared the distances between rich-club neurons (defined as the top 30% of neurons in a network) to the distances between all neurons in the network. We found that there were no significant differences between the two distributions of distances (Kolmogorov–Smirnov tests revealed that 75 out of 75 networks had distributions that were not significantly different at the α = 0.01 level). We investigated the relationship between these rich clubs and computation (measured as synergy) by using multiple approaches.
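Before turning to those approaches, note that the weighted rich-club computation just described reduces to a few lines of array code. The sketch below is illustrative only: variable names are hypothetical, and the null model used here (permuting weights over existing edges) is a common choice that may differ in detail from the randomization specified in the authors' Supporting Information.

```python
import numpy as np

def weighted_rich_club(W, r_k):
    """Weighted rich-club coefficient at richness threshold r_k.

    W: (n, n) array; W[i, j] is the TE weight of edge i -> j
       (zero where no significant edge; zero diagonal assumed).
    """
    richness = W.sum(axis=0) + W.sum(axis=1)   # in-weight + out-weight
    rich = richness >= r_k
    sub = W[np.ix_(rich, rich)]                # edges among rich nodes
    n_edges = np.count_nonzero(sub)
    if n_edges == 0:
        return np.nan
    # Divide by the sum of the n_edges largest weights anywhere in W.
    top = np.sort(W[W > 0])[::-1][:n_edges].sum()
    return sub.sum() / top

def normalized_rich_club(W, r_k, n_null=1000, seed=0):
    """Observed coefficient divided by the mean over null networks."""
    rng = np.random.default_rng(seed)
    idx = np.nonzero(W)
    nulls = np.empty(n_null)
    for b in range(n_null):
        Wn = np.zeros_like(W)
        Wn[idx] = rng.permutation(W[idx])      # shuffle weights, keep topology
        nulls[b] = weighted_rich_club(Wn, r_k)
    return weighted_rich_club(W, r_k) / np.nanmean(nulls)
```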
In the first approach, we asked if the mean normalized synergy-per-triad (where the mean was taken over all triads in a network, for each network) was significantly greater for triads with receivers inside, versus outside, of a single, representative rich club from each network. The representative rich club for each network was selected at random from among the significant ones. Indeed, we found that mean synergy was 270% greater inside of the rich club (0.027 vs. 0.01, Zs.r. = 7.2, n = 75 networks, p = 5.2 × 10−13; Figure 4). We next asked what percentage of the network-wide computation was performed inside of the rich club and found that in these representative rich clubs, 87.7% [79.5%, 94.3%] of all synergy in a network was performed by rich-club neurons (Figure 4B). Figure 4. A concern that arises when considering whether synergy is stronger inside of rich clubs identified with information theoretic measures, such as TE, is that the rich clubs are defined as having high TE and thus may cause high concentrations of computation for trivial reasons. To test this, we used a spike-time shuffling analysis that preserved TE but disrupted the joint firing statistics between transmitters of a triad. This allowed us to compute the null distribution of synergy that would be expected given the TE that comprised each rich club (see Supporting Information for methods, Faber et al., 2019). Although this analysis demonstrated that high TE in the rich club can be sufficient to generate a null distribution of synergy that is greater inside versus outside of the rich club (significant in two of the three timescales, see Supporting Information for full details, Faber et al., 2019), the actual synergy levels observed in the rich club here were reliably even greater than the null distributions. When we Z-scored the observed synergy values by the values of the null distribution of synergy, the median Z-scored synergy value across networks was 13.31 (Zs.r. = 5.76, n = 75 networks, p = 8.6 × 10−9). The individual Z-scored synergy values were significant at the α = 0.05 level in 88% (66 of 75) of the networks. The results of these analyses demonstrate that the computation observed in the rich club is not a simple consequence of the magnitude of the TE values that comprise the rich clubs in these networks. Our second approach to quantify synergy with respect to the rich club examined how synergy varied as a function of the richness levels. The results of these analyses recapitulate the findings reported above. That is, across levels, richer neurons consistently had greater mean synergy-per-triad (Figure 4C). Similarly, at most thresholds, rich neurons accounted for a majority of the network-wide synergy (Figure 4D). However, because the richest neurons are likely to participate in a larger number of triads, rich clubs would be expected to perform a large percentage of the network-wide synergy even if the mean synergy-per-triad was not significantly greater than that found elsewhere in the network. Thus, we also asked how the percentage of network-wide synergy varied as a function of the percentage of triads that are included in the rich club. As shown in Figure 4E, the share of synergy performed by the richest neurons was consistently greater than the percentage of above-threshold triads. Notably, the percent difference drops off as the threshold decreases, reflecting that as the rich clubs become less rich, they contain relatively less synergy.
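Before the third approach, note that the Z-scoring against shuffle-based nulls used above amounts to the following (a sketch; generating the spike-time-shuffled null values themselves is described in the Supporting Information and is not reproduced here):

```python
import numpy as np

def zscore_vs_null(observed, null_values):
    """Z-score an observed statistic against its shuffle-based null."""
    null_values = np.asarray(null_values)
    return (observed - null_values.mean()) / null_values.std(ddof=1)

def empirical_p_onesided(observed, null_values):
    """One-sided empirical p value: P(null >= observed)."""
    null_values = np.asarray(null_values)
    return (np.sum(null_values >= observed) + 1) / (null_values.size + 1)
```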
Our third approach compared the relationship between the strength of the rich club for any given threshold, as indicated by the normalized rich-club coefficient, with the amount of synergy taking place in that rich club. Specifically, we correlated the normalized rich-club coefficient with the mean normalized synergy of triads inside of the rich club across thresholds for each network separately and asked if there was a consistent trend across networks (Figure 5). In most networks, synergy and normalized rich-club coefficient were positively correlated (64 of 75 networks) such that the median correlation coefficient of 0.75 [0.66, 0.84] was significantly greater than zero (Zs.r. = 6.6, n = 75 networks, p = 2.9 × 10−11; Figure 5B). These results indicate that rich-club strength was strongly predictive of mean synergy. Figure 5. Normalized rich-club coefficient correlates with synergy. (A) Normalized rich-club coefficients and mean normalized synergy at increasing richness levels for four representative networks. Negative correlations are observed in networks that have poor rich clubs, or in which the mean synergy decreases as we consider fewer, richer neurons. The second case is observed in networks whose top neurons participate in many triads with synergy values that are highly variable. (B) Distribution of correlation coefficients for correlations between rich-club coefficients and mean triad synergy at all richness levels. Most network rich-club coefficients are positively correlated with mean triad synergy. This shows that rich clubs are predictive of increased synergy levels. Figure 6. Greater computation (synergy) is performed by triads with greater numbers of neurons in the rich club. Distributions of mean synergy for each of all possible triad interactions with the rich club. Triads that have all members in the rich club have the greatest synergy. Triads with both transmitters in the rich club, and a single transmitter and the receiver in the rich club, have similar amounts of synergy. Triads with only the receiver in the rich club have more synergy than triads with a single transmitter in the rich club. All triads with any member in the rich club have more synergy than triads with no members in the rich club. Medians, denoted by “x,” and 95% bootstrap confidence intervals are shown. Table shows Bonferroni–Holm corrected p values (lower diagonal) and differences of medians (upper diagonal) of pairwise comparisons between the conditions, which are sorted by median mean synergy. Significant p values are boldface. Distributions shown have n = 75 data points.
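The Figure 6 comparison groups triads by which members fall inside the club; a minimal sketch of that bookkeeping follows (variable names are hypothetical, and the between-group statistical comparisons are omitted):

```python
from collections import defaultdict
import numpy as np

def median_synergy_by_membership(triads, synergy, rich):
    """Median synergy grouped by rich-club membership pattern.

    triads:  iterable of (t1, t2, r) neuron-index triples
             (two transmitters, one receiver).
    synergy: per-triad synergy values, aligned with `triads`.
    rich:    boolean array; rich[i] is True if neuron i is in the club.
    """
    groups = defaultdict(list)
    for (t1, t2, r), s in zip(triads, synergy):
        n_tx = int(rich[t1]) + int(rich[t2])   # transmitters in the club
        label = (n_tx, bool(rich[r]))          # e.g. (2, True) = all members in
        groups[label].append(s)
    return {k: float(np.median(v)) for k, v in groups.items()}
```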
### Stable Computation-to-Propagation Ratio Accounts for the High Density of Computation in Rich Clubs Our results show that rich-club neurons both propagate a majority of the information (Nigam et al., 2016) and perform a majority of the computation in the network. A possible explanation for the co-localization of information propagation and computation is that propagation drives computation. To investigate this, we asked how correlated information propagation and computation (measured as synergy) were across triads for each network. In every network, the amount of information being propagated in a triad was strongly correlated with synergy (ρ = 0.76 [0.74, 0.77], minimum ρ = 0.57, Zs.r. = 7.52, n = 75 networks, p = 5.3 × 10−14; Figure 7A and 7B). Figure 7. Propagation is highly predictive of computation. (A) Scatterplot of synergy (computation) versus triad TE (propagation) in a representative network with 3,448 triads. Colorbar depicts point density. Also shown is the correlation coefficient. (B) Distribution of network correlations between synergy and triad TE. This shows that computation was strongly, positively correlated with propagation across all networks. (C) Histogram of computation ratio values for all receivers in all networks. (D) Histogram of log-scaled computation ratio values for all receivers in all networks. Gray lines are replotted here from Figure 2 for ease of comparison. The blue line represents the distribution of computation ratios that results from shuffling the alignment of triad synergy to TE. Thus, the span of observed computation ratios is significantly smaller than what we might have observed by chance. For C and D, solid and dashed lines depict the medians across networks, and shaded regions depict 95% bootstrap confidence intervals around the medians.
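At the triad level, the quantities analyzed in this section reduce to a sum and a ratio. A minimal sketch (per-triad arrays are assumed; Spearman correlation is used here as a stand-in, since the exact correlation statistic is not specified in this excerpt):

```python
import numpy as np
from scipy.stats import spearmanr

def triad_propagation_stats(te1, te2, synergy):
    """Per-triad propagation, computation ratio, and their association.

    te1, te2: the two incoming TE values per triad; synergy: per triad.
    """
    te1, te2, synergy = map(np.asarray, (te1, te2, synergy))
    triad_te = te1 + te2              # "propagation" for the triad
    ratio = synergy / triad_te        # computation per unit propagation
    rho, p = spearmanr(triad_te, synergy)
    return triad_te, ratio, rho, p
```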
Seeing that computation and propagation were highly correlated across triads, we asked what range of computation ratios (i.e., computation/propagation) occurred across networks. As shown in Figure 7C and 7D, the computation ratio was highly stereotyped over networks with a median of 0.239 [0.237, 0.241]. In contrast to the 3.9 [3.6, 4.1] and 4.1 [3.8, 4.2] orders of magnitude over which triad TE and synergy varied, respectively, the computation ratio varied by 1.57 [1.46, 1.62] orders of magnitude. By way of comparison, randomizing the alignment of synergy to triad TE across triads results in computation ratios that span 5.8 [5.3, 6.3] orders of magnitude. The significant reduction in variance over what would be expected by chance (Zs.r. = −7.5, n = 75 networks, p = 5.3 × 10−14) suggests that the computation ratio is a relatively stable property of neural information processing in such networks. The implication of a stable computation ratio across triads is that information propagation will reliably be accompanied by computation, thereby accounting for the high density of computation in propagation-dense rich clubs. Because the computation ratio allows computation to be predicted from propagation, it is informative to ask if this ratio varies as a function of rich-club membership. We found that the computation ratio was not substantially different for triads inside, versus outside, of rich clubs (0.252 vs. 0.259 at the 0.05–3 ms timescale, Zs.r. = −1.58, n = 25 networks, p = 0.11; 0.238 vs. 0.245 at the 1.6–6.4 ms timescale, Zs.r. = −2, n = 25 networks, p = 0.045; and 0.215 vs. 0.224 at the 3.5–14 ms timescale, Zs.r. = 0.97, n = 25 networks, p = 0.33; Figure 8). The difference was only marginally significant for the 1.6–6.4 ms timescale (Figure 8B, center). We also tested whether the rich-club coefficient was correlated with the computation ratio across thresholds and found that it was significantly negatively correlated at the 0.05–3 ms timescale (ρ = −0.36 [−0.69, 0.16], Zs.r. = −2.38, n = 25 networks, p = 0.017; Figure 8C, left) and at the 1.6–6.4 ms timescale (ρ = −0.62 [−0.70, −0.18], Zs.r. = −3.2, n = 25 networks, p = 0.001; Figure 8C, center), but not at the 3.5–14 ms timescale (ρ = −0.26 [−0.69, 0.39], Zs.r. = −0.98, n = 25 networks, p = 0.33; Figure 8C, right). This negative correlation suggests that the computation ratio decreases at high levels of propagation at the shortest timescales. However, computation ratios were only slightly reduced (∼3%) compared with the substantially greater computation values found in the rich club (∼160%), thereby leaving rich clubs overall dense in computation (see Figure 10). Figure 8. Rich-club membership is not strongly predictive of the ratio of computation to propagation (computation ratio). (A) Mean computation ratio for triads with receivers inside versus outside the rich club at the 0.05–3 ms timescale (left), the 1.6–6.4 ms timescale (center), and the 3.5–14 ms timescale (right). (B) Computation ratio for triads with receivers inside versus outside the rich club at all significant rich club levels (indicated by the yellow shaded region) at the 0.05–3 ms timescale (left), the 1.6–6.4 ms timescale (center), and the 3.5–14 ms timescale (right).
(C) Coefficient distribution for correlations between mean computation ratio and normalized rich-club coefficient at all richness levels, for each network, at the 0.05–3 ms timescale (left), the 1.6–6.4 ms timescale (center), and the 3.5–14 ms timescale (right). Significance indicators: *p < 0.05. To demonstrate that the result of synergy in the rich club cannot be fully explained by a simple correlation between incoming weight of the receiver and synergy, we asked if there was greater synergy in the rich clubs after the correlation between connection strength and synergy had been regressed out. To do this, we performed a regression between summed incoming connection strength and synergy across triad receivers for a given network and then collected the residual synergy for each triad after accounting for the summed incoming connections. We then asked if the residual synergy values were still significantly greater in the rich club than outside and found that they were (Zs.r. = 6.29, n = 75 networks, p = 3.24 × 10−10). ### Operationalization of Computation Was Not Critical for Present Results To investigate whether our findings are robust to the method of quantifying computation, we implemented two alternate methods of identifying computation to test if the same results were obtained. In the first, we used an alternate implementation of partial information decomposition (PID). Unlike the standard PID approach, which effectively computes the upper bound on synergy (by assuming maximum redundancy between transmitters), this alternate implementation effectively computes the lower bound of synergy (by assuming no redundancy between transmitters). When using this approach, we find the same pattern of results. That is, mean synergy-per-triad is significantly greater inside versus outside of the rich clubs (0.011 vs. 0.006; Zs.r. = 5.14, n = 75 networks, p = 2.7 × 10−7; Supporting Information Figure S9, Faber et al., 2019), and computation is significantly, positively correlated with information propagation (ρ = 0.57 [0.42, 0.61]; Zs.r. = 7.4, n = 75 networks, p = 7.59 × 10−14). Our second alternate method of identifying computation estimated the input-output transfer functions of individual neurons as described by Chichilnisky (2001). In the prior analyses, using PID, we took synergy as evidence of computation because it quantifies how much more information is carried by multiple neurons when considered together than the sum of information carried by the same neurons when considered individually.
Similarly, in this analysis, we took nonlinear input-output functions as evidence of computation because they indicate that a neuron does not simply echo its inputs but, rather, responds based on patterns of upstream inputs. Accordingly, we performed a median split across neurons based on the linearity of their estimated input-output functions. Those with the most nonlinear functions were identified as neurons that likely perform more computation. Comparing this classification with the normalized synergy values obtained via PID, we found that neurons with nonlinear transfer functions were also found to have significantly greater amounts of synergy than those with linear transfer functions (0.109 vs. 0.034; Zs.r. = 6.22, n = 75 networks, p = 4.79 × 10−10). As such, use of this classification regime provides an independent, yet related, approach to identifying computation. Consistent with our main results, the concentration of neurons exhibiting nonlinear transfer functions was significantly greater inside versus outside of the rich clubs (70.1% vs. 42.1%; Zs.r. = 7.47, n = 75 networks, p = 7.73 × 10−14; Figure 9). Figure 9. Alternative measure of neural computation reveals results that correspond to those obtained using PID. Neurons with nonlinear transfer functions are represented more inside rich clubs than they are outside rich clubs. Significance indicators: *****p < 1 × 10−9. The information theoretic approaches used here to track computation and propagation by/to a receiving neuron are designed to control for the ability of the receiver to account for its own spiking before attributing variance to sending neurons by conditionalizing on the prior state of the receiver. Defining the prior state of the receiver is a parameter-dependent process (e.g., the duration and lag of a window in the past must be defined). Here, we used the same window of time to define the past of the receiver as we do the sender. A concern with this approach, however, is that we may be underestimating the information storage of the neuron with use of short windows. To assess the influence this may have on our results, we repeated our analysis with transmitter spike trains for which the timing of each spike was jittered by a random amount drawn from a uniform distribution with a mean of zero and a width of three times the duration of the past. This disrupts the short-term interactions but preserves the long-term structure, effectively providing a null distribution of values that would be expected given the long-term spiking dynamics alone. By subtracting these values from those obtained from nonjittered spike trains, we were able to assess what effects these values may have had on our findings. We found that after subtracting these residuals, all results held (Supporting Information Figure S11, Faber et al., 2019). ## DISCUSSION Our goal in this work was to test the hypothesis that neural computation in cortical circuits varies in a systematic fashion with respect to the functional organization of those circuits. Specifically, we asked if neurons in rich clubs perform more computation than those outside of the rich clubs.
To answer this question, we recorded the spiking activity of hundreds of neurons in organotypic cultures simultaneously and compared the information processing qualities of individual neurons to their relative position in the broader functional network of the circuit. We found that neurons in rich clubs computed ∼160% more than neurons outside of rich clubs. The amount of computation that we found in the rich club was proportional to the amount of information they propagate, suggesting that in these circuits, information propagation drives computation. These results are summarized in Figure 10. Figure 10. Summary of major findings. Synergy (computation) increases with propagation and node richness by an average of 160%, and the computation ratio (amount of computation performed relative to propagation) decreases by 3%. ### Finding Computation in Organotypic Cortical Networks What does it mean for organotypic cultures to process information? In a neural circuit that has no clear sensory inputs, it is easy to imagine that all spiking is either spontaneous or in response to upstream spontaneous spiking. Spontaneous spikes contribute to what, in an information theoretic framework, is considered entropy. When the spiking of upstream neurons allows us to predict future spiking, one colloquially says that those neurons carry information about the future state of the circuit. This is technically accurate because that prediction effectively reduces the uncertainty of the future state of the system. This is what TE, used in this work, formally measures. The cause of the upstream spiking, whether spontaneous or sensory driven, does not change this logic. As in systems with intact sensory inputs, the neurons in organotypic cultures process information by integrating received synaptic inputs. Although we argue that in this respect information processing by organotypic cultures is like the cortical circuits of intact animals, we recognize that important differences accompany the lack of true sensory input. For example, the spatiotemporal structure of the spontaneous spiking that drives activity in cultures is likely fundamentally different from that driven by sensory experience. Understanding how that structure influences information processing will be important to investigate as technologies enabling high temporal resolution recordings of hundreds of neurons in vivo mature. With regard to building an understanding of the drivers of neural computation, the present work substantially builds on previous work that showed that computation was positively correlated with the number of outgoing connections (i.e., out degree) of the upstream neurons (Timme et al., 2016). Although the rich-club membership of those neurons was not analyzed, it is reasonable to assume that neurons with high degree would be included in rich clubs. In that respect, our findings are consistent with the previous report. The work of Timme et al. (2016), however, looked only at degree and did not analyze edge weights. Here, we found a strong correlation between synergy and information propagation (i.e., summed edge weights contributing to computation), indicating that computation is strongly dependent on weight.
This substantially alters our understanding of computation as it shows that beyond the pattern of connections constituting a network, computation is sensitive to the quantity of the information relayed across individual edges. ### Rich Clubs as a Home for Computation Prior work on rich clubs convincingly argued that rich clubs play a significant role in the routing of network information (van den Heuvel & Sporns, 2011; Harriger et al., 2012; van den Heuvel et al., 2013; Nigam et al., 2016). For example, Nigam et al. (2016) showed, using the same data, that the top 20% richest neurons transmit ∼70% of network information. Here, we showed that information propagation is directly related to computation. Combining this discovery with the previous knowledge that rich clubs perform large amounts of information propagation accounts for the high densities of computation we observed in the cortical circuit rich clubs. Although the precise mechanism of how computation is derived from propagation is unknown, one possibility is that it is the result of what one might consider to be “information collisions.” This idea is based on the finding of Lizier et al. (2010) who demonstrated that the dominant form of information modification (i.e., computation) in cellular automata is the result of collisions between the emergent particles (see also Adamatzky & Durand-Lose, 2012; Bhattacharjee et al., 2016; Sarkar, 2000). In the context of our circuits, the idea is that computation arises when packets of information embedded in the outgoing spike trains of sending neurons collide onto the same receiving neuron in sufficient temporal proximity to alter the way the receiver responds to those inputs. The density of propagating information in rich clubs would proportionately increase the likelihood of such collisions, and thereby increase the amount (and number) of computation(s) performed by the rich club (Flecker et al., 2011). ### Operationalizing Information Computation and Propagation Our primary analyses used synergy as a proxy for computation among triads consisting of a pair of transmitting neurons and a single receiving neuron, following the methods of Timme et al. (2016). Synergy, as a measure of the information gained when the pair of transmitters is considered jointly over the combined information carried by the neurons individually, provides an intuitively appealing measure of computation (see Timme et al., 2016, for a comprehensive discussion of this relationship). Synergy is a product of partial information decomposition (PID) (Williams & Beer, 2010). However, PID is not the only information theoretic tool available for quantifying neural computation. Our use of PID was motivated by several factors: (a) it can detect linear and nonlinear interactions; (b) it is capable of measuring the amount of information a neuron computes based on simultaneous inputs from other neurons; and (c) it is currently the only method capable of quantifying how much computation occurs in an interaction in which three variables predict a fourth as done here (the future state of the receiver is predicted from the past state of the receiver and two other transmitters). Concerns have been raised about how PID calculates the redundancy term in that it results in an overestimation of redundancy and consequently, synergy (Bertschinger et al., 2014; Pica et al., 2017). 
Here, we addressed this concern by demonstrating that an alternate implementation of PID that minimizes redundancy (and thus synergy) nonetheless yields the same pattern of results. Going further, we also used a noninformation theoretic approach to identify neurons that likely perform substantial computation by finding those neurons with nonlinear input-output transfer functions, following the methods of Chichilnisky (2001). This, like our other analyses, showed that the concentration of computation was greater inside of the rich clubs. Our primary analyses used TE as a proxy for information propagation. A strength of TE is that it makes it possible to quantify the mutual information between a sending and receiving neuron after accounting for variance in the receiving neuron spiking that was predictable from its prior state. However, the window used to define the prior state is parameter dependent (e.g., duration, lag from the present). Here, we defined this window so as to match the window that was used to define the sender state (spanning 0.05–3 ms, 1.6–6.4 ms, or 3.5–14 ms for our three timescales, respectively). By doing so, we maintain clear bounds on the timescale at which the functional dynamics were analyzed (selected a priori to span timescales at which synaptic communication occurs). A risk of using this window size, rather than a longer size, is that it may underestimate the variance in the receiver spiking that can be accounted for by the prior state of the receiver (i.e., without considering the sending neuron) and, thus, result in larger propagation or synergy values. To assess the impact this may have had on our results, following the precedent set by others (Dragoi & Buzsáki, 2006; Nigam et al., 2016) we performed a control analysis in which we jittered the spiking at short time scales and quantified the residual propagation and synergy. By subtracting these residual values from our original (nonjittered) values, we were able to assess what effects these values may have had on our findings. We found that after subtracting these residuals, all results held. ### Organotypic Cultures as a Model System The goal of the present work was to better understand information processing in local cortical networks. To do this, we used a high-density 512-microelectrode array in combination with organotypic cortical cultures. This approach allowed us to record spiking activity at a temporal resolution (20 kHz; 50 microseconds) that matched typical synaptic delays in cortex (1–3 ms; Mason et al., 1991). The short interelectrode spacing of 60 microns was within the range of most monosynaptic connections in cortex (Song et al., 2005). This spacing means that the spiking of most cells is picked up by multiple sites, and there are few gaps where cells are too far from electrodes to be recorded. The large electrode count allowed us to simultaneously sample hundreds of neurons, revealing complex structures like the rich club. While the cortical layers in organotypic cultures can differ in some respects from those seen in vivo (Staal et al., 2011), organotypic cultures nevertheless exhibit very similar synaptic structure and electrophysiological activity to that found in vivo (Caeser et al., 1989; Bolz et al., 1990; Götz & Bolz, 1992; Plenz & Aertsen, 1996; Klostermann & Wahle, 1999; Ikegaya et al., 2004; Beggs & Plenz, 2004). 
### Organotypic Cultures as a Model System

The goal of the present work was to better understand information processing in local cortical networks. To do this, we used a high-density 512-microelectrode array in combination with organotypic cortical cultures. This approach allowed us to record spiking activity at a temporal resolution (20 kHz, i.e., a 50-microsecond sampling period) fine enough to resolve typical synaptic delays in cortex (1–3 ms; Mason et al., 1991). The short interelectrode spacing of 60 microns was within the range of most monosynaptic connections in cortex (Song et al., 2005). This spacing means that the spiking of most cells is picked up by multiple sites, and there are few gaps where cells are too far from electrodes to be recorded. The large electrode count allowed us to simultaneously sample hundreds of neurons, revealing complex structures like the rich club. While the cortical layers in organotypic cultures can differ in some respects from those seen in vivo (Staal et al., 2011), organotypic cultures nevertheless exhibit synaptic structure and electrophysiological activity very similar to that found in vivo (Caeser et al., 1989; Bolz et al., 1990; Götz & Bolz, 1992; Plenz & Aertsen, 1996; Klostermann & Wahle, 1999; Ikegaya et al., 2004; Beggs & Plenz, 2004). The distribution of firing rates in these cultures is lognormal, as seen in vivo (Nigam et al., 2016), and the strengths of functional connections are lognormally distributed, similar to the distribution of synaptic strengths observed in patch clamp recordings (Song et al., 2005; reviewed in Buzsáki & Mizuseki, 2014). These features suggest that organotypic cortical cultures serve as a reasonable model system for exploring local cortical networks, while offering an unprecedented combination of large neuron count, high temporal resolution, and dense recording sites that cannot currently be matched with in vivo preparations.

### Relevance to Cognitive Health

The increased computation in rich clubs described here may help to explain the functional importance of rich clubs and their role in healthy neural functioning. Prior work has shown that cognitively debilitating disorders, including Alzheimer's disease, epilepsy, and schizophrenia, are associated with diminished rich-club organization (van den Heuvel & Sporns, 2011; van den Heuvel et al., 2013; Braun et al., 2015). An implication of our present findings is that such diminished rich-club organization would lead to commensurately diminished neural computation. This could account for the impairments observed in such disorders, which include losses of memory, consciousness, and mental cohesiveness. Future studies should make use of in vivo methods to explore the relationship between computation and behavior.

### Conclusions

The present work demonstrates, for the first time, that synergy is significantly greater inside, versus outside, of rich clubs. Given this, we conclude that rich clubs not only propagate a large percentage of the information within cortical circuits but are also home to a majority of the circuit-wide computation. We also showed that computation was robustly correlated with information propagation, from which we infer that computation is driven by information availability. Finally, we found that the ratio of computation to propagation was slightly, although significantly, reduced in rich clubs, suggesting that cortical circuits, like human-engineered distributed-computing architectures, may face a communication-versus-computation trade-off. These results substantially increase what is known regarding computation by cortical circuits.

## MATERIALS AND METHODS

To answer the question of whether rich-club neurons perform more computation than non-rich-club neurons in cortical circuits, we combined network analysis with information theoretic tools to analyze the spiking activity of hundreds of neurons recorded from organotypic cultures of mouse somatosensory cortex. Because of space limitations, here we provide an overview of our methods and focus on those steps that are most relevant for interpreting our results. A comprehensive description of all our methods can be found in the Supporting Information (Faber et al., 2019). All procedures were performed in strict accordance with guidelines from the National Institutes of Health and approved by the Animal Care and Use Committees of Indiana University and the University of California, Santa Cruz.

### Electrophysiological Recordings

All results reported here were derived from the analysis of electrophysiological recordings of 25 organotypic cultures prepared from slices of mouse somatosensory cortex.
One-hour recordings were performed at 20 kHz using a 512-channel array of 5-μm-diameter electrodes arranged in a triangular lattice with an interelectrode distance of 60 μm (spanning approximately 0.9 mm by 1.9 mm). Once the data were collected, spikes were sorted using a principal component analysis approach (Ito et al., 2014; Litke et al., 2004; Timme et al., 2014) to form spike trains of between 98 and 594 (median = 310) well-isolated individual neurons, depending on the recording.

### Network Construction

Networks of effective connectivity, representing global activity in recordings, were constructed following Timme et al. (2014, 2016). Briefly, weighted effective connections between neurons were established using transfer entropy (TE; Schreiber, 2000). We computed TE at timescales spanning 0.05–14 ms to capture neuron interactions at timescales relevant to synaptic transmission. This range was discretized into three logarithmically spaced bins of 0.05–3 ms, 1.6–6.4 ms, and 3.5–14 ms, and separate effective networks were constructed for each timescale, resulting in three networks per recording (75 networks total). Only significant TE values, determined through comparison to the TE values obtained from jittered spike trains (α = 0.001; 5,000 jitters), were used in the construction of the networks. TE values were normalized by the total entropy of the receiving neuron so as to reflect the percentage of the receiving neuron's capacity that can be accounted for by the transmitting neuron.

### Quantifying Computation

Computation was operationalized as synergy, as calculated by the PID approach described by Williams & Beer (2010, 2011). PID compares the measured TE between neurons, TE(J → I) and TE(K → I), with the measured multivariate TE, TE({J, K} → I), to estimate terms that reflect the unique information carried by each neuron, the redundancy between neurons, and the synergy (i.e., the gain over the sum of the parts) between neurons. Redundancy was computed as per Supporting Information Equations S8–S10 (Faber et al., 2019). Synergy was then computed via the following:

$\mathrm{Synergy}(\{J,K\} \to I) = \mathrm{TE}(\{J,K\} \to I) - \mathrm{TE}(J \to I) - \mathrm{TE}(K \to I) + \mathrm{Redundancy}(\{J,K\} \to I)$ (1)

As with TE, synergy was normalized by the total entropy of the receiving neuron. Although there are other methods for calculating synergy (Bertschinger et al., 2014; Pica et al., 2017; Wibral et al., 2017; Lizier et al., 2018), we chose this measure because it is capable of detecting linear and nonlinear interactions, and it is currently the only measure that has detailed how one can quantify how much synergy occurs in an interaction in which three variables (here, the receiver's past and the pasts of the two transmitters) predict a fourth (the receiver's future). Note that we chose not to consider higher order synergy terms, for systems with more than two transmitting neurons, because of the increased computational burden this presented (the number of PID terms increases rapidly as the number of variables increases). However, based on bounds calculated for the highest order synergy term by Timme et al. (2016), the information gained by including an additional input beyond two either remained constant or decreased, from which it was inferred that lower order (two-input) computations dominated.
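For concreteness, here is a minimal numerical sketch of the bookkeeping in Equation 1, assuming the joint TE, the two single-transmitter TEs, and the redundancy term have already been estimated; the numbers are hypothetical stand-ins for values the paper derives from spike-train histories.

```python
def synergy(te_joint, te_j, te_k, redundancy):
    """Equation 1: synergy of transmitters J and K onto receiver I,
    given the joint TE, the two single-transmitter TEs, and the
    PID redundancy term."""
    return te_joint - te_j - te_k + redundancy

def normalized(value, receiver_entropy):
    """Normalize by the receiving neuron's total entropy, as done for TE."""
    return value / receiver_entropy

# Hypothetical, precomputed values in bits (for illustration only).
syn = synergy(te_joint=0.42, te_j=0.15, te_k=0.12, redundancy=0.05)
print(syn)                   # ~0.20 (up to float rounding)
print(normalized(syn, 2.5))  # ~0.08, i.e., ~8% of receiver entropy
```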
### Alternate Methods of Quantifying Computation

To establish that our results are not unique to our approach for quantifying computation, we implemented two alternate methods. The first also uses PID but sets redundancy to its smallest possible value. Effectively, in this approach synergy is computed as follows:

$\mathrm{Synergy}(\{J,K\} \to I) = \max\left[\mathrm{TE}(\{J,K\} \to I) - \mathrm{TE}(J \to I) - \mathrm{TE}(K \to I),\ 0\right]$ (2)

Consequently, synergy is reduced, or set to zero, whenever the sum of TE(J → I) and TE(K → I) is greater than TE({J, K} → I).

The second alternate method identifies the neurons that perform the most computation as those with nonlinear input-output transfer functions. The transfer functions were calculated following the methods of Chichilnisky (2001). Briefly, for each neuron, the pattern of inputs across neurons and time that drives the neuron to spike was estimated using a spike-triggered average (STA) of the state of all neurons over a 14-ms window prior to each spike. The strength of input to that neuron over time was then estimated by convolving the STA with the time-varying state of the network. Finally, the input-output transfer function was established by computing the probability that the neuron fired at each level of input. We then fit both a line and a sigmoid to the resulting transfer function and extracted the summed squared error (SSE) of each fit. We categorized the neurons with the lowest SSE from the sigmoid fit, relative to the SSE from the linear fit, as the neurons that perform the most computation.

### Rich-Club Analyses

Weighted rich clubs were identified using a modified version of the rich_club_wd.m function from the Matlab Brain Connectivity Toolbox (Rubinov & Sporns, 2010; van den Heuvel & Sporns, 2011), adapted according to Opsahl et al. (2008) to compute rich clubs across weighted richness parameter levels. To establish the significance of a rich club at a given threshold, we computed the ratio between the observed rich-club coefficient and the distribution of coefficients observed when the edges of the network were shuffled. Shuffling was performed according to the methods of Maslov and Sneppen (2002). To test whether synergy was greater for rich-club neurons, our first approach randomly selected, separately for each network, one of the thresholds at which the rich club was identified as significant, and considered all neurons above that threshold to be in the rich club. Our second approach swept across all possible thresholds, irrespective of the significance of the associated rich club, to assess the influence of varying the "richness" of the neurons treated as if in the rich club. The third approach asked whether the results of the second approach were correlated with the strength of the rich club, as quantified via the normalized rich-club coefficient, to assess whether the strength of the observed effects varied with how much stronger the rich club was than expected by chance.

### Statistics

All results are reported as medians followed by the 95% bootstrap confidence limits (computed using 10,000 iterations), given in square brackets. Similarly, in figures the median is indicated and the vertical bar reflects the 95% bootstrap confidence limits. Comparisons between conditions or against null models were performed using the nonparametric Wilcoxon signed-rank test, unless specified otherwise. The threshold for significance was set at 0.05, unless indicated otherwise in the text.
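The percentile bootstrap described above is simple to implement; the following is a minimal NumPy sketch of median confidence limits under those settings (10,000 resamples, 95% limits). It illustrates the general technique and is not code from the paper.

```python
import numpy as np

def bootstrap_median_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence limits for the median: resample
    with replacement, take each resample's median, and report the
    alpha/2 and 1 - alpha/2 percentiles of those medians."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    medians = np.median(
        rng.choice(values, size=(n_boot, values.size), replace=True), axis=1
    )
    lo, hi = np.percentile(medians, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.median(values)), (float(lo), float(hi))

# Example with synthetic lognormal data (mimicking skewed neural measures):
# med, (lo, hi) = bootstrap_median_ci(
#     np.random.default_rng(1).lognormal(size=200))
```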
## AUTHOR CONTRIBUTIONS

Samantha P. Faber: Conceptualization; Formal analysis; Investigation; Methodology; Project administration; Software; Validation; Visualization; Writing – original draft; Writing – review & editing. Nicholas M. Timme: Data curation; Formal analysis; Methodology; Resources; Software; Supervision; Writing – review & editing. John M. Beggs: Conceptualization; Data curation; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Writing – review & editing. Ehren L. Newman: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Supervision; Validation; Visualization; Writing – original draft; Writing – review & editing.

## FUNDING INFORMATION

Ehren L. Newman, Whitehall Foundation (http://dx.doi.org/10.13039/100001391), Award ID: 17-12-114. John M. Beggs, National Science Foundation (http://dx.doi.org/10.13039/100000001), Award ID: 1429500. John M. Beggs, National Science Foundation (http://dx.doi.org/10.13039/100000001), Award ID: 1513779. Samantha P. Faber, National Science Foundation (http://dx.doi.org/10.13039/100000001), Award ID: 1735095. Samantha P. Faber, Indiana Space Grant Consortium.

## ACKNOWLEDGMENTS

We thank Olaf Sporns, Randy Beer, and Benjamin Dann for helpful comments and discussion.

## TECHNICAL TERMS

• Information: The reduction in uncertainty, typically measured in bits.
• Computation: The process of integrating multiple sources of information to produce an output.
• Information propagation: The transfer of unmodified information from one spiking neuron to another.
• Rich club: A set of neurons with strong connections that connect to each other more than expected by chance.
• Organotypic culture: A cell culture, derived from tissue, that retains many of the structural and functional properties of the intact tissue.
• Timescale: The range of time in which delays between spiking neurons were considered.
• Transfer entropy: An information theoretic measure that quantifies the amount of directed information transfer between two spiking neurons.
• Synergy: A measure that quantifies the information gained by considering the spiking of two neurons jointly compared to independently.
• Effective connectivity: Time-directed statistical dependencies of one spiking neuron on another.

## REFERENCES

Adamatzky, A., & Durand-Lose, J. (2012). Collision-based computing. In G. Rozenberg, T. Bäck, & J. N. Kok (Eds.), Handbook of Natural Computing (pp. 1949–1978). Berlin: Springer.
Beggs, J. M., & Plenz, D. (2004). Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. Journal of Neuroscience, 24, 5216–5229.
Bertschinger, N., Rauh, J., Olbrich, E., Jost, J., & Ay, N. (2014). Quantifying unique information. Entropy, 16(4), 2161–2183.
Bhattacharjee, K., Naskar, N., Roy, S., & Das, S. (2016). A survey of cellular automata: Types, dynamics, non-uniformity and applications. arXiv:1607.02291.
Bolz, J., Novak, N., Götz, M., & Bonhoeffer, T. (1990). Formation of target-specific neuronal projections in organotypic slice cultures from rat visual cortex. Nature, 346, 359–362.
Borst, A., & Theunissen, F. (1999). Information theory and neural coding. Nature Neuroscience, 2, 947–957.
Braun, U., Muldoon, S. F., & Bassett, D. S. (2015). On human brain networks in health and disease. Chichester, UK: John Wiley & Sons.
Buzsáki, G., & Mizuseki, K. (2014). The log-dynamic brain: How skewed distributions affect network operations. Nature Reviews Neuroscience, 15, 264–278.
Caeser, M., Bonhoeffer, T., & Bolz, J. (1989). Cellular organization and development of slice cultures from rat visual cortex. Experimental Brain Research, 77, 234–244.
Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2), 199–213.
Dragoi, G., & Buzsáki, G. (2006). Temporal encoding of place sequences by hippocampal cell assemblies. Neuron, 50(1), 145–157.
Flecker, B., Alford, W., Beggs, J. M., Williams, P. L., & Beer, R. D. (2011). Partial information decomposition as a spatiotemporal filter. Chaos, 21(3), 037104.
Götz, M., & Bolz, J. (1992). Formation and preservation of cortical layers in slice cultures. Journal of Neurobiology, 23, 783–802.
Harriger, L., van den Heuvel, M. P., & Sporns, O. (2012). Rich club organization of macaque cerebral cortex and its role in network communication. PLoS One, 7(9), e46497.
Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, I., Ferster, D., & Yuste, R. (2004). Synfire chains and cortical songs: Temporal modules of cortical activity. Science, 304, 559–564.
Ikegaya, Y., Sasaki, T., Ishikawa, D., Honma, N., Tao, K., Takahashi, N., … Matsuki, N. (2012). Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cerebral Cortex, 23, 293–304.
Ito, S., Yeh, F. C., Hiolski, E., Rydygier, P., Gunning, D. E., Hottowy, P., … Beggs, J. M. (2014). Large-scale, high-resolution multielectrode-array recording depicts functional network differences of cortical and hippocampal cultures. PLoS One, 9, e105324.
Klostermann, O., & Wahle, P. (1999). Patterns of spontaneous activity and morphology of interneuron types in organotypic cortex and thalamus-cortex cultures. Neuroscience, 92, 1243–1259.
Lefort, S., Tomm, C., Floyd Sarria, J. C., & Petersen, C. C. H. (2009). The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron, 61, 301–316.
Litke, A., Bezayiff, N., Chichilnisky, E., Cunningham, W., Dabrowski, W., Grillo, A., … Kachiguine, S. (2004). What does the eye tell the brain? Development of a system for the large-scale recording of retinal output activity. IEEE Transactions on Nuclear Science, 51, 1434–1440.
Lizier, J. T., Prokopenko, M., & Zomaya, A. Y. (2010). Information modification and particle collisions in distributed computation. Chaos, 20(3), 037109.
Lizier, J. T., Bertschinger, N., Jost, J., & Wibral, M. (2018). Information decomposition of target effects from multi-source interactions: Perspectives on previous, current and future work. Entropy, 20(4), 307.
Maslov, S., & Sneppen, K. (2002). Specificity and stability in topology of protein networks. Science, 296, 910–913.
Mason, A., Nicoll, A., & Stratford, K. (1991). Synaptic transmission between individual pyramidal neurons of the rat visual cortex in vitro. Journal of Neuroscience, 11, 72–84.
Nigam, S., Shimono, M., Ito, S., Yeh, F. C., Timme, N., Myroshnychenko, M., … Beggs, J. M. (2016). Rich-club organization in effective connectivity among cortical neurons. Journal of Neuroscience, 36(4), 670–684.
Opsahl, T., Colizza, V., Panzarasa, P., & Ramasco, J. J. (2008). Prominence and control: The weighted rich-club effect. Physical Review Letters, 101, 168702.
Pica, G., Piasini, E., Chicharro, D., & Panzeri, S. (2017). Invariant components of synergy, redundancy, and unique information among three variables. Entropy, 19(9), 451.
Plenz, D., & Aertsen, A. (1996). Neural dynamics in cortex-striatum co-cultures—II. Spatiotemporal characteristics of neuronal activity. Neuroscience, 70, 893–924.
Rubinov, M., & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. NeuroImage, 52, 1059–1069.
Sarkar, P. (2000). A brief history of cellular automata. ACM Computing Surveys, 32(1), 80–107.
Schreiber, T. (2000). Measuring information transfer. Physical Review Letters, 85, 461–464.
Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly non-random features of synaptic connectivity in local cortical circuits. PLoS Biology, 3, e68.
Staal, J. A., Alexander, S. R., Liu, Y., Dickson, T. D., & Vickers, J. C. (2011). Characterization of cortical neuronal and glial alterations during culture of organotypic whole brain slices from neonatal and mature mice. PLoS One, 6, e22040.
Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. R., & Bialek, W. (1998). Entropy and information in neural spike trains. Physical Review Letters, 80(1), 197–200.
Swadlow, H. A. (1994). Efferent neurons and suspected interneurons in motor cortex of the awake rabbit: Axonal properties, sensory receptive fields, and subthreshold synaptic inputs. Journal of Neurophysiology, 71, 437–453.
Tang, A., Jackson, D., Hobbs, J., Chen, W., Smith, J. L., Patel, H., … Beggs, J. M. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. Journal of Neuroscience, 28, 505–518.
Timme, N. M., Ito, S., Myroshnychenko, M., Yeh, F. C., Hiolski, E., Litke, A. M., & Beggs, J. M. (2014). Multiplex networks of cortical and hippocampal neurons revealed at different timescales. PLoS One, 9, e115764.
Timme, N. M., Ito, S., Myroshnychenko, M., Nigam, S., Shimono, M., Yeh, F.-C., … Beggs, J. M. (2016). High-degree neurons feed cortical computations. PLoS Computational Biology, 12(5), e1004858.
van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. Journal of Neuroscience, 31(44), 15775–15786.
van den Heuvel, M. P., Sporns, O., Collin, G., Scheewe, T., Mandl, R. C., Cahn, W., … Kahn, R. S. (2013). Abnormal rich club organization and functional brain dynamics in schizophrenia. JAMA Psychiatry, 70(8), 783–792.
Wibral, M., Priesemann, V., Kay, J. W., Lizier, J. T., & Phillips, W. A. (2017). Partial information decomposition as a unified approach to the specification of neural goal functions. Brain and Cognition, 112, 25–38.
Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv:1004.2515.
Williams, P. L., & Beer, R. D. (2011). Generalized measures of information transfer. arXiv:1102.1507.

## Author notes

Competing Interests: The authors have declared that no competing interests exist.

Handling Editor: Martijn van den Heuvel

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
2021-05-18 11:13:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5101692080497742, "perplexity": 2717.4520398350696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00585.warc.gz"}
https://tex.stackexchange.com/questions/450870/quotes-in-pdf-bookmarks-when-using-polyglossia
# Quotes in PDF bookmarks when using polyglossia

I am creating a multilingual document, using polyglossia. I would like to include quoted text in a section title, using csquotes, and I would also like to have a PDF bookmark tree in the final file, one bookmark per section, using bookmark. Finally, I am also using polyglossia's babel-style shorthands (babelshorthands=true). The following MWE fails:

```
\documentclass{article}
\usepackage{polyglossia}
\setmainlanguage[babelshorthands=true]{german}
\usepackage{csquotes}
\usepackage{bookmark}
\begin{document}
\section{This works}
\section{\enquote{This breaks}}
\end{document}
```

The error message is as follows:

```
! Argument of \language@active@arg" has an extra }.
<inserted text>
\par
l.10 \section{\enquote{This breaks}}
[...]
```

Googling has led me to conclude that the babel shorthands are at issue here (indeed, not enabling them gets rid of the error), and that I should disable the " shorthand before the section title, then re-enable it, using \shorthandoff and \shorthandon respectively (these are babel commands also supplied by polyglossia; in fact, babel's own documentation specifically recommends these for the above error):

```
\documentclass{article}
\usepackage{polyglossia}
\setmainlanguage[babelshorthands=true]{german}
\usepackage{csquotes}
\usepackage{bookmark}
\begin{document}
\section{This works}
\shorthandoff{"}%
\section{\enquote{This breaks}}
\shorthandon{"}%
\end{document}
```

However, this still results in the same error message as above. Using babel itself instead of polyglossia works; the following:

```
\documentclass{article}
\usepackage[german]{babel}
\usepackage{csquotes}
\usepackage{bookmark}
\begin{document}
\section{This works}
\shorthandoff{"}%
\section{\enquote{This breaks}}
\shorthandon{"}%
\end{document}
```

produces the desired output without any errors. Alternatively, using polyglossia also works (even without disabling the " shorthand) so long as the bookmark package isn't loaded.

My questions are, thus:

1. Is this a bug in polyglossia that I should report (\shorthandoff and \shorthandon not working as desired), and/or in bookmark?
2. Since I would prefer to keep using both polyglossia and babel shorthands while also retaining the quote in the section title and the PDF bookmarks, is there another work-around for the original error that would allow this, or is this a case of "given X, pick any (X-1)"?

• (1) welcome, (2) the issue here is the bookmarks, it generally does not like handling latex macros, the normal \texorpdfstring{for PDF}{for bookmark} works with your example. Generally bookmarks want unicode only and thus attempts to filter away left behind macros. And it probably does not like active chars one bit. – daleif Sep 14 '18 at 14:59
• With babel in place of polyglossia you don't need \shorthandoff. But the bookmarks quotes are not the German ones (whether or not using \shorthandoff). – user4686 Sep 14 '18 at 15:18

Load the language later. The option babelshorthands=true makes the shorthand directly active and so disturbs packages loaded later:

```
\documentclass{article}
\usepackage{polyglossia}
\usepackage{csquotes}
\usepackage{bookmark}
\setmainlanguage[babelshorthands=true]{german}
\begin{document}
\section{This works}
\section{\enquote{This breaks}}
\end{document}
```

• There should be a tag for load-order or package-loading or package-order or something like that. It's pretty prominently important in some cases. – thymaro Sep 19 '18 at 22:19
2019-08-18 00:30:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940268158912659, "perplexity": 5301.890458396648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313536.31/warc/CC-MAIN-20190818002820-20190818024820-00506.warc.gz"}
https://codefreshers.com/red-versus-blue-solution-codeforces/
# [Solution] Red Versus Blue solution codeforces

Red Versus Blue solution codeforces – Team Red and Team Blue competed in a competitive FPS. Their match was streamed around the world. They played a series of n matches.

In the end, it turned out Team Red won r times and Team Blue won b times. Team Blue was less skilled than Team Red, so b was strictly less than r.

You missed the stream since you overslept, but you think that the match must have been neck and neck since so many people watched it. So you imagine a string of length n where the i-th character denotes who won the i-th match: it is R if Team Red won or B if Team Blue won. You imagine the string was such that the maximum number of times a team won in a row was as small as possible. For example, in the series of matches RBBRRRB, Team Red won 3 times in a row, which is the maximum.

You must find a string satisfying the above conditions. If there are multiple answers, print any.

## Input

The first line contains a single integer t (1 ≤ t ≤ 1000), the number of test cases. Each test case has a single line containing three integers n, r, and b (3 ≤ n ≤ 100, 1 ≤ b < r ≤ n, r + b = n).

## Output

For each test case, output a single line containing a string satisfying the given conditions. If there are multiple answers, print any.

## Examples

Input:

```
3
7 4 3
6 5 1
19 13 6
```

Output:

```
RBRBRBR
RRRBRR
RRBRRBRRBRRBRRBRRBR
```

Input:

```
6
3 2 1
10 6 4
11 6 5
10 9 1
10 8 2
11 9 2
```

Output:

```
RBR
RRBRBRBRBR
RBRBRBRBRBR
RRRRRBRRRR
RRRBRRRBRR
RRRBRRRBRRR
```

## Note

The first test case of the first example gives the optimal answer for the example in the statement. The maximum number of times a team wins in a row in RBRBRBR is 1. We cannot minimize it any further.

The answer for the second test case of the second example is RRBRBRBRBR. The maximum number of times a team wins in a row is 2, given by the RR at the beginning. We cannot minimize the answer any further.
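The page is titled "[Solution]" but stops at the problem statement, so here is one possible approach, sketched in Python rather than taken from the original page: split the r Red wins into b + 1 blocks whose sizes differ by at most one and join them with single Blue wins. This yields a maximum run of ceil(r / (b + 1)), which is the smallest achievable given b < r, and it reproduces the sample outputs above.

```python
import sys

def solve(n: int, r: int, b: int) -> str:
    # Split the r "R"s into b + 1 blocks differing in size by at most 1,
    # separated by single "B"s; since b < r, every block is nonempty.
    blocks = b + 1
    base, extra = divmod(r, blocks)
    parts = ["R" * (base + (1 if i < extra else 0)) for i in range(blocks)]
    return "B".join(parts)

def main() -> None:
    data = sys.stdin.read().split()
    t = int(data[0])
    out = []
    idx = 1
    for _ in range(t):
        n, r, b = map(int, data[idx:idx + 3])
        idx += 3
        out.append(solve(n, r, b))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```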
2022-05-21 08:36:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17423665523529053, "perplexity": 1158.234657533706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00765.warc.gz"}
http://www.coresatin.com/hart-of-amcwjfh/factorisation-questions-and-answers-189a48
Factorisation Questions and Answers

Factorising an expression is to write it as a product of its factors; factoring is the opposite of expanding. There are four main methods: taking out a common factor, the difference of two squares, trinomial/quadratic expressions, and completing the square.

Topics covered:
5.1 Factoring by Using the Distributive Property
5.2 Factoring the Difference of Two Squares
5.3 Factoring Trinomials of the Form x² + bx + c
5.4 Factoring Trinomials of the Form ax² + bx + c
5.5 Factoring, Solving Equations, and Problem Solving

Multiple choice questions:
1. Factorise 3t + 15. A) 3(t + 1) B) 3(t + 5) C) 3(t + 12) D) 3(t + 4) E) 3(t + 11)
2. Factorise 3a² + 9a. A) 3a(a + 3) B) 3(a + 3) C) a(a + 3) D) a(3a + 3) E) 3a(a + 6)
3. Factorise a²b + ab² + ab. A) ab(ab + 1) B) a(a + b + 1) C) b(a + b + 1) D) a(a + b² + 1) E) ab(a + b + 1)
4. Expand 3(t − 7).
5. Factor 56a³ − 8a. A) 8a²(56a³ − 8a) B) 8a(7a² − 1) C) 8a(7a³ − a) D) 8a²(35a² − a)

Factorise each of the following completely:
a) 5x + 15y  b) −3m − m²  c) 6xy − 2x  d) 15p − 20q  e) 15pq − 20q  f) 12st² + 15st  g) −18xy − 6x  h) at − at²  i) 7x²y + xy  j) a² + ab

Worked examples (taking out a common factor):
1) 4x + 8: here 4 is a common factor, so the factors are 4(x + 2).
2) 8x² + 4x: here 4x is a common factor, so the factors are 4x(2x + 1).
3) 3x³ + 6x² + 9: here 3 is a common factor, so the factors are 3(x³ + 2x² + 3).
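A quick way to check answers like the worked examples above is a computer algebra system. The following is a small sketch using sympy (assuming sympy is installed; it is not part of the original page).

```python
from sympy import symbols, factor, expand

x = symbols("x")

# Check the common-factor worked examples above.
print(factor(4*x + 8))              # 4*(x + 2)
print(factor(8*x**2 + 4*x))         # 4*x*(2*x + 1)
print(factor(3*x**3 + 6*x**2 + 9))  # 3*(x**3 + 2*x**2 + 3)

# The other methods named above also fall out of factor():
print(factor(x**2 - 25))        # (x - 5)*(x + 5), a difference of two squares
print(factor(x**2 - 4*x - 5))   # (x - 5)*(x + 1), a quadratic trinomial

# expand() reverses factor(), mirroring "factoring is the opposite of expanding".
print(expand((x - 5)*(x + 1)))  # x**2 - 4*x - 5
```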
Worked example (using an identity):
(i) a² + 8a + 16 can be factorised using the identity (a + b)² = a² + 2ab + b²: the factors are (a + 4)² = (a + 4)(a + 4).
(ii) p² − 10p + 25: factorise this in the same way.

Factorise fully:
2.1 5x − 3xy
2.2 a² − 25b²
2.3 a² − 14a + 24
2.4 4a²b − 6ab² + 12ab³
2.5 8x² − 32
2.6 m³(x − 1) + n²(1 − x)
2.7 −3x − 15x + 42
2.8 9(a + b)² − 25
2.9 m²(a − 2y) − 2m(a − 2y) − 15(a − 2y)
2.10 x⁶/4 − 16y⁸/(81a²)
2.12 (2x − y)(a − b) − 3ax + 3xb

Simplify:
3.1 (2x − 2y)/(x − y)
3.2 (4a + 2b)/(4a² − b²)

More differences of two squares: factorise each of the following by first taking out the highest common factor and then using the difference of two squares identity:
(a) 3x² − 27  (b) 2x² − 18  (c) 7x² − 28  (e) 3x² − 300  (f) 13x² − 52  (g) 128x² − 32  (h) 81x² − 36  (i) 50 − 2x²  (j) 72 − 2x²

Practice: factor each expression completely.
• x² − 4x − 5
• x² − 3x − 54
• 3u − 2u² + 6 − u³
• 2x² + 4x − 2x³
• 2x² − 72
• 64x² − 48x + 9
• 56a² + 31a + 3
• 2m⁴ + 26m³ + 80m²
• 9x² − 24xy + 16y²
• 9b² + 24b + 16
• 2x² + 5x + 2
• 3x² − 26x + 16
• 5y³ − 7y² − 15y + 21
• 5u³ − 2u² − 35u + 14
• 12y³ − 16y − 9y² + 12
• 8a² + 40ab + 50b²
• x³ − 8y³ and x⁶ + y⁶
• 16x² − 49 (multiply to check)
• 200x³ − 18x
• z⁴ − 625
• 49x² − 121
• 36a² − 84at + 49t²
• r² + 2rs − 3s²
• x² + x + 1/4
• 6x² − 21xy + 8xz − 28yz (factor by grouping)
• 3(x − 8)² − 5(x − 8) − 2
• Find the value of c that makes x² + 11x + c a perfect square trinomial, then factor it.
• Find the value of n so that t² − (14/3)t + n is a perfect square trinomial, then factor it.

Solve by factoring:
• x² − 13x = 30
• x² + 10x − 24 = 0
• x² − 9x = 18
• 4x² − 9x − 9 = 0
• x(x − 3) = 18
• x³ − 3x² + 4x − 12 = 0
• x³ + 2x² = 25x + 50
• m³ = 64m
• Given that −2 + √3·i is a zero of h(x) = x⁴ + x³ − 3x² − 13x + 14, find the remaining zeros.

Word problems:
• A woman is six times as old as her son. Two years ago, the product of their ages was 84. Find their present ages.
• 6400 notebooks were distributed among some children. Had there been 80 children less, each would have received 4 more books. Taking the number of books received by each child as x, frame an equation in x and solve it.
• A number consists of two digits whose product is 18. If 27 is added to the number, the digits interchange their places. Find the number.
• The difference between the squares of two integers is 2009. Find such a pair of integers.
• If 5000 = 5x + 5y and x and y are integers, what is the value of x + y?
• After t seconds, your height above the water is described by the polynomial −16t² + 16t + 32. Factor this polynomial completely.
• A cubic container with sides of length x inches has a volume equal to x³ cubic inches.

Prime factorisation:
• What is the prime factorization of 50? Answer: 50 = 2 × 5 × 5.
• The prime factors of 100 are 2 × 2 × 5 × 5.
• The smallest prime number that can divide 25 is 5.
• To find the prime factors of 81, break it into 9 × 9; each 9 is 3 × 3, so 81 = 3⁴.
• Which of the following is not a prime factorization? a) 20 = 2 × 10, b) 14 = 2 × 7, c) 64 = 4³, d) 120 = 2³ × 15. (A prime factorization involves only prime numbers.)
• Find the smallest number by which 192 must be divided to obtain a perfect cube.
• Find the least five-digit number which, when divided by 12, 15 and 20, leaves no remainder.
• Find the square root of 18662400.
• Find the relationship between the HCF and LCM of the pair 64 and 84.
Following problems a free, world-class education to anyone, anywhere polynomial representing the product of their wasÂ... Practice factor the following is equivalent to 16s^4 - 4t^2 your question +... Chapter 7 factorising algebraic expressions 177 factorise the quadratic polynomial are explained in detail and on. Also has a past GCSE question on it and a section for self-assessment 1/3 } - x!: 5 + 45y + 70y^2 - 15 given that 2 - i is a polynomial of. ) and sketch a graph of the graph of the graph of the variable by factoring of. X^ { 1/6 } + 4x -12, Carry out the indicated operations... 5 3 3 t 2 t... The answer b^2 + 20 c^2, Factorize: a, using radicals as needed at. Can learn from the questions someone else has already asked is x turning a sum of terms a. ^2 -5 ( x-8 ) ^2 -5 ( x-8 ) -2 its denominator, f ( x greater! A perfect square trinomial: x^2 + 15 + 9 q + 25 p, Factorize the equation... Zero = i and answers - math Discussion Recent Discussions on factors.php ) +4m a+b. Be a root of the function including accurate intercepts 3pq, what are the and! Well-Prepared for Class 8 Maths Chapter 14 Factorisation x 4 + 5 ) also has a volume equal to cubic... All zeros of the function f ( x ) = x^5 + 9x^3 + 8x^2 - 9x - 9 0... With 4 terms without grouping 3x^2 +8x 3 = 0 ( if expression... One zero = i, give the value of k 0 such that x+45 is a factor \tan! Math help message board -1,2, -4 the polynomial the zero product principle the real?... The product of linear factors x must be a root its lowest form x^2. Have asked on our free math help message board + 20 = ;. Factorisation: 3√5x²+25x-5√5=5√5 x-3 ) ^2 -5 ( x-8 ) ^2 - 36 in factored form +! Is 2 less than its denominator real solutions which are added well-prepared Class! Than 0 - 12ab + 16a, factor f ( x - 6 c! B - 56a^ { 2 }, Factorize the following expression completely: 8a^2 + +! These factorization questions that are explained in a way that 's easy for to..., one of the fraction increases by 1/18 equation 3x^ { 2 } { 3 }, factor the questions! Denominator, the sum difference of square 3x2 - 2x - 10 14xy 2x. The answer, 50 if it is not a prime number trinomial: x^2 18... Factors of 100 = 2 x 5 ( 2 - q ) is one factor the... } x + y ) ^6 then using the trial-and-error method polynomial -16t^2 + 16t + 32. a less. The integers, enter NF. 4, if possible, completely factor the polynomial k = 5 s 3!, your height above the water is described by the method of prime?. ) Multiples, factors and Primes practice questions Click here for questions Chapter 14 coordinates the... Factor by grouping: 7x^2 - 6x + 5 ) - 3 you can get zeros! + 3x2 - 2x - 2 ) ( sec2 theta - 3 ) = x^2 - +. Term, 2q have the 2 and x 2 y 7 ), factor the equation x^3 8Y^3. ) 5k^2-7k+12=4 ( -2k+3 ) \\, solve the quadratic equation: Trivia Quiz mx. Coefficients that has the given zero to completely factor p ( x ) = x^3 8Y^3., step-by-step worked solutions to all n5 Maths exam 2x^2 - 3x - 6 2. challenge question -- factor trinomial. Cubic inches the rational zero of the numerator and denominator of the turning points and authors for the... N5 Maths questions in terms of i. x^2 + 2 x 2 y 5 + 9, find the of. Are multiplied = 2x² + 6x [ remember x × x is x² ] ) 2! ’ ve seen already seen factorising into single brackets, the answer is now the answer write prime... M ( 0.39892 ) method of prime factorization of 50 + 241x - 50 13x + 30, by.
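For illustration, here are worked answers to the first two practice questions (standard computations, supplied here for convenience rather than quoted from the original page):

$50 = 2 \times 5^2$

$2x^2 + x - 6 = 2x^2 + 4x - 3x - 6 = 2x(x + 2) - 3(x + 2) = (2x - 3)(x + 2)$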
2022-05-20 00:58:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43761780858039856, "perplexity": 1186.7342006347187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00337.warc.gz"}
http://datahacker.rs/category/opencv/page/2/
# Category: OpenCV

### CamCal 002 Weak Perspective
Highlights: In this post we will talk about how human vision can be tricked with a Perspective Projection model, and we will present one new model, called Weak Perspective. Tutorial Overview: Human vision; Weak perspective. 1. Human vision: Humans are very sensitive to the structure of parallel lines and what they convey to us. Probably most of you have seen the Müller-Lyer illusion. Which one of these lines is longer? And you probably… Read more

### OpenCV #010 Circle Detection Using Hough Transform
Highlights: In this post we will learn how to analyse a given image to find the circles in it. Tutorial Overview: Intro; Detecting Circles with Hough; Hough Transform for Circles; Code. 1. Intro: In the previous post, we saw how we can detect and find lines in images using the Hough Transform. Now let's move to something just a little bit more complicated: circles. Let's start with the equation of a circle: $(x_i - a)^2 + (y_i - b)^2 = r^2$… Read more

### OpenCV #009 Line Detection Using Hough Transform
Highlights: In this post, we will learn how to analyze images and detect a basic feature: lines! We will describe the well-known Hough transform that will help us do this task. Let's roll! Tutorial Overview: Line Detection; Hough Space; Polar Representation for Lines; Code for Detecting Lines in Python and C++. 1. Line Detection: A human driver performs lane detection on any regular day. This is a crucial task in order to keep the… Read more

### OpenCV #008 Canny Edge Detector
Digital Image Processing using OpenCV (Python & C++). Highlights: In this post, we will learn about the Canny Edge Detector. In the last few posts, we explained why edges are important for a better understanding of an image, and how we can use the Laplacian and Sobel filters to detect them. In places where those methods do not give good results, we can use the Canny Edge Detector and dramatically improve the obtained results. Tutorial Overview:… Read more

### OpenCV #013 Harris Corner Detector – Experiment
Highlights: In this post we will continue working on the Harris Corner Detector. In the previous post we presented the basic idea behind this algorithm, and here we will wrap this up and show how this method works in OpenCV. Tutorial Overview: Interpreting the Second Moment Matrix; Interpreting the Eigenvalues; Harris Corner Response Function; Harris Detector Algorithm; Code. 1. Interpreting the Second Moment Matrix: Let's first consider the case where the gradient at every point in… Read more
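For reference, the two standard parameterisations that these excerpts build on (stated here for convenience; these are the textbook forms, not quotations from the full posts): a line in Hough space is represented in polar form as $\rho = x\cos\theta + y\sin\theta$, and for circle detection each edge point $(x_i, y_i)$ votes, for a candidate radius $r$ and angle $\theta$, for a centre at $a = x_i - r\cos\theta$, $b = y_i - r\sin\theta$.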
2020-01-18 14:12:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40090712904930115, "perplexity": 1636.4044032537654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00211.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-2-6
# How do you simplify 2^-6?
Following the exponent rule $a^{-n} = \frac{1}{a^n}$, we have
$2^{-6} = \frac{1}{2^6} = \frac{1}{64} = 0.015625$
2019-11-12 06:41:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6592564582824707, "perplexity": 832.5609268153967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00263.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-153-problem-24e-calculus-early-transcendentals-8th-edition/9781285741550/use-polar-coordinates-to-find-the-volume-of-the-given-solid-24-bounded-by-the-paraboloid-z-1/f5c8b496-52f3-11e9-8385-02ee952b546e
Chapter 15.3, Problem 24E

### Calculus: Early Transcendentals, 8th Edition
James Stewart
ISBN: 9781285741550

Textbook Problem: Use polar coordinates to find the volume of the given solid. 24. Bounded by the paraboloid z = 1 + 2x^2 + 2y^2 and the plane z = 7 in the first octant.

To determine: the volume of the given solid by using polar coordinates.

Explanation

Given: The region D is bounded by the paraboloid $z = 1 + 2x^2 + 2y^2$ and the plane $z = 7$ in the first octant.

Formula used: If f is defined on a polar rectangle R given by $0 \le a \le r \le b$, $\alpha \le \theta \le \beta$, where $0 \le \beta - \alpha \le 2\pi$, then

$\iint_R f(x, y)\, dA = \int_\alpha^\beta \int_a^b f(r\cos\theta, r\sin\theta)\, r\, dr\, d\theta$   (1)

If g(x) is a function of x only and h(y) is a function of y only, then

$\int_a^b \int_c^d g(x)\, h(y)\, dy\, dx = \int_a^b g(x)\, dx \cdot \int_c^d h(y)\, dy$   (2)

Calculation: Substitute z = 7 into z = 1 + 2x^2 + 2y^2:

$7 = 1 + 2x^2 + 2y^2 \implies 6 = 2x^2 + 2y^2 \implies 3 = x^2 + y^2$

So the value of r varies from 0 to $\sqrt{3}$, and (in the first octant) the value of $\theta$ varies from 0 to $\pi/2$.

Substitute $x = r\cos\theta$ and $y = r\sin\theta$ as in equation (1) and obtain the required volume:

$V = \int_0^{\pi/2} \int_0^{\sqrt{3}} \big(7 - (1 + 2r^2)\big)\, r\, dr\, d\theta = \int_0^{\pi/2} \int_0^{\sqrt{3}} (6r - 2r^3)\, dr\, d\theta$

Integrate the function with respect to r and $\theta$ by using equation (2).
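Completing the final integration (the excerpt cuts off at this point, so this last step is a direct evaluation supplied for convenience):

$V = \int_0^{\pi/2} d\theta \int_0^{\sqrt{3}} (6r - 2r^3)\, dr = \frac{\pi}{2} \left[ 3r^2 - \frac{r^4}{2} \right]_0^{\sqrt{3}} = \frac{\pi}{2} \left( 9 - \frac{9}{2} \right) = \frac{9\pi}{4}$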
2019-10-16 16:45:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4320527911186218, "perplexity": 3246.362596736826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669057.0/warc/CC-MAIN-20191016163146-20191016190646-00004.warc.gz"}
https://www.pims.math.ca/scientific-event/170731-icoptfsaia
## International Conference On Preconditioning Techniques For Scientific And Industrial Applications

• Start Date: 07/31/2017
• End Date: 08/02/2017

Location: University of British Columbia

Description: The innermost computational kernel of many large-scale scientific applications and industrial numerical simulations is often a large sparse matrix problem, which typically consumes a significant portion of the overall computational time required by the simulation. Many of the matrix problems are in the form of systems of linear equations, although other matrix problems, such as eigenvalue calculations, can occur too. Recent advances in technology have led to a dramatic growth in the size of the matrices to be handled, and iterative techniques are often used in such circumstances, especially when decompositional approaches would require prohibitive storage. Computational experience accumulated over the past couple of decades indicates that a good preconditioner holds the key to an effective iterative solver. The conference will bring researchers and application scientists in this field together to discuss the latest developments and progress made, to exchange findings, and to explore possible new directions.

### Conference Schedule

Invited Speakers:
• Michele Benzi, Emory University, USA
• Jed Brown, University of Colorado at Boulder, USA
• Jie Chen, IBM Research Center, USA
• Eric Darve, Stanford University, USA
• Tom Jönsthövel, Schlumberger Abingdon Technology Centre, UK
• Alison Ramage, Strathclyde University, UK
• Sander Rhebergen, University of Waterloo, Canada
• Nicole Spillane, Ecole Polytechnique, France

Topics covered include:
• Incomplete factorization preconditioners
• Domain decomposition preconditioners
• Approximate inverse preconditioners
• Block preconditioning
• Multi-level preconditioners
• Graph and mesh partitioners
• Preconditioners based on randomization
• Preconditioning techniques for finite element discretizations
• Preconditioning multiphysics problems
• Preconditioning techniques in eigenvalue computation and optimization problems
• Applications including but not limited to: computational fluid dynamics, materials science and nanosciences, image processing, the petroleum industry, semiconductor device simulations, computational finance, and data science

### Location: University of British Columbia

### Registration: Registration for this event is now closed, as we have reached capacity.

### Participant Accommodation: The organizers have blocked rooms with two on-campus hosts. We strongly encourage you to book with them so that you receive the best available rate.

Carey Centre: Twin and queen rooms with or without TV. Prices range from $95 to $103 before taxes, and rooms are available until July 17, 2017. Breakfast is available upon request for $9 per person. To book online, follow the booking link, select the check-in and check-out dates, and use the online Group Booking Code PIMSPC2017.

UBC Conferences and Accommodation: Studio rooms are available, including breakfast, at $136 CAD before tax. To book these rooms, please do so here.
2023-02-04 19:29:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4867522716522217, "perplexity": 4955.853366435044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00207.warc.gz"}
https://darrenjw.wordpress.com/tag/r/
## Unbiased MCMC with couplings

Yesterday there was an RSS Read Paper meeting for the paper Unbiased Markov chain Monte Carlo with couplings by Pierre Jacob, John O'Leary and Yves F. Atchadé. The paper addresses the bias in MCMC estimates due to lack of convergence to equilibrium (the "burn-in" problem), and shows how it is possible to modify MCMC algorithms in order to construct estimates which exactly remove this bias. The requirement is to couple a pair of MCMC chains so that they will at some point meet exactly and thereafter remain coupled. This turns out to be easier to do than one might naively expect. There are many reasons why we might want to remove bias from MCMC estimates, but the primary motivation in the paper was the application to parallel MCMC computation. The idea here is that many pairs of chains can be run independently on any available processors, and the unbiased estimates from the different pairs can be safely averaged to get an (improved) unbiased estimate based on all of the chains.

As a discussant of the paper, I've spent a bit of time thinking about this idea, and have created a small repository of materials relating to the paper which may be useful for others interested in understanding the method and how to use it in practice. The repo includes a page of links to related papers, blog posts, software and other resources relating to unbiased MCMC that I've noticed on-line.

Earlier in the year I gave an internal seminar at Newcastle giving a tutorial introduction to the main ideas from the paper, including runnable R code implementations of the examples. The talk was prepared as an executable R Markdown document. The R Markdown source code is available in the repo, but for the convenience of casual browsers I've also included a pre-built set of PDF slides. Code examples include code for maximal coupling of two (univariate) distributions, coupling Metropolis-Hastings chains, and coupling a Gibbs sampler for an AR(1) process.

I haven't yet finalised my written discussion contribution, but the slides I presented at the Read Paper meeting are also available. Again, there is source code and pre-built PDF slides. My discussion focused on seeing how well the technique works for Gibbs samplers applied to high-dimensional latent process models (an AR(1) process and a Gaussian Markov random field), and reflecting on the extent to which the technique really solves the burn-in/parallel MCMC problem.

The repo also contains a few stand-alone code examples. There are some simple tutorial examples in R (largely derived from my tutorial introduction), including implementation of (univariate) independent and reflection maximal couplings, and a coupled AR(1) process example. The more substantial example concerns a coupled Gibbs sampler for a GMRF. This example is written in the Scala programming language. There are a couple of notable features of this implementation. First, the code illustrates monadic coupling of probability distributions, based on the Rand type in the Breeze scientific library. This provides an elegant way to max couple arbitrary (continuous) random variables, and to create coupled Metropolis(-Hastings) kernels.
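Stripped of language-specific details, the independent maximal coupling used in these examples can be sketched in just a few lines of R. This sketch is mine, not code from the repo, and the argument names rp, dp, rq, dq (samplers and densities for two continuous distributions p and q) are hypothetical:

maximalCoupling = function(rp, dp, rq, dq) {
  x = rp()                      # X ~ p
  if (runif(1) < dq(x)/dp(x))   # accept with probability min(1, q(X)/p(X))
    return(c(x, x))             # the pair has met: return (X, X)
  repeat {                      # otherwise sample Y from the residual of q
    y = rq()
    if (runif(1) > dp(y)/dq(y)) return(c(x, y))
  }
}

# e.g. maximally coupling N(0,1) with N(1,1):
# maximalCoupling(function() rnorm(1), dnorm,
#                 function() rnorm(1, 1), function(y) dnorm(y, 1))

The monadic Scala version expresses essentially the same recipe compositionally.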
For example, a coupling of two distributions can be constructed as

def couple[T](p: ContinuousDistr[T], q: ContinuousDistr[T]): Rand[(T, T)] = {
  def ys: Rand[T] = for {
    y <- q
    w <- Uniform(0, 1)
    ay <- if (math.log(w) > p.logPdf(y) - q.logPdf(y)) Rand.always(y) else ys
  } yield ay
  val pair = for {
    x <- p
    w <- Uniform(0, 1)
  } yield (math.log(w) <= q.logPdf(x) - p.logPdf(x), x)
  pair flatMap { case (b, x) => if (b) Rand.always((x, x)) else (ys map (y => (x, y))) }
}

and then draws can be sampled from the resulting Rand[(T, T)] polymorphic type as required. Incidentally, this also illustrates how to construct an independent maximal coupling while working entirely on the log scale, without ever evaluating raw likelihoods. The other notable feature of the code is the use of a parallel comonadic image type for parallel Gibbs sampling of the GMRF, producing a (lazy) Stream of coupled MCMC samples.

## Introduction

In the previous post I gave a brief introduction to the third edition of my textbook, Stochastic modelling for systems biology. The algorithms described in the book are illustrated by implementations in R. These implementations are collected together in an R package on CRAN called smfsb. This post will provide a brief introduction to the package and its capabilities.

## Installation

The package is on CRAN – see the CRAN package page for details. So the simplest way to install it is to enter

install.packages("smfsb")

at the R command prompt. This will install the latest version that is on CRAN. Once installed, the package can be loaded with

library(smfsb)

The package is well-documented, so further information can be obtained with the usual R mechanisms, such as

vignette(package="smfsb")
vignette("smfsb")
help(package="smfsb")
?StepGillespie
example(StepCLE1D)

The version of the package on CRAN is almost certainly what you want. However, the package is developed on R-Forge – see the R-Forge project page for details. So the very latest version of the package can always be installed with

install.packages("smfsb", repos="http://R-Forge.R-project.org")

if you have a reason for wanting it.

## A brief tutorial

The vignette gives a quick introduction to the library, which I don't need to repeat verbatim here. If you are new to the package, I recommend working through that before continuing. Here I'll concentrate on some of the new features associated with the third edition.

### Simulating stochastic kinetic models

Much of the book is concerned with the simulation of stochastic kinetic models using exact and approximate algorithms. Although the primary focus of the text is the application to modelling of intra-cellular processes, the methods are also appropriate for population modelling of ecological and epidemic processes. For example, we can start by simulating a simple susceptible-infectious-recovered (SIR) disease epidemic model.

set.seed(2)
data(spnModels)
stepSIR = StepGillespie(SIR)
plot(simTs(SIR$M, 0, 8, 0.05, stepSIR), main="Exact simulation of the SIR model")

The focus of the text is stochastic simulation of discrete models, so that is the obvious place to start. But there is also support for continuous deterministic simulation.

plot(simTs(SIR$M, 0, 8, 0.05, StepEulerSPN(SIR)), main="Euler simulation of the SIR model")

My favourite toy population dynamics model is the Lotka-Volterra (LV) model, so I tend to use this frequently as a running example throughout the book. We can simulate this (exactly) as follows.
stepLV = StepGillespie(LV)
plot(simTs(LV$M, 0, 30, 0.2, stepLV), main="Exact simulation of the LV model")

### Stochastic reaction-diffusion modelling

The first two editions of the book were almost exclusively concerned with well-mixed systems, where spatial effects are ignorable. One of the main new features of the third edition is the inclusion of a new chapter on spatially extended systems. The focus is on models related to the reaction diffusion master equation (RDME) formulation, rather than individual particle-based simulations. For these models, space is typically divided into a regular grid of voxels, with reactions taking place as normal within each voxel, and additional reaction events included, corresponding to the diffusion of particles to adjacent voxels. So to specify such models, we just need an initial condition, a reaction model, and diffusion coefficients (one for each reacting species). So, we can carry out exact simulation of an RDME model for a 1D spatial domain as follows.

N=20; T=30
x0=matrix(0, nrow=2, ncol=N)
rownames(x0) = c("x1", "x2")
x0[,round(N/2)] = LV$M
stepLV1D = StepGillespie1D(LV, c(0.6, 0.6))
xx = simTs1D(x0, 0, T, 0.2, stepLV1D, verb=TRUE)
image(xx[1,,], main="Prey", xlab="Space", ylab="Time")
image(xx[2,,], main="Predator", xlab="Space", ylab="Time")

Exact simulation of discrete stochastic reaction diffusion systems is very expensive (and the reference implementation provided in the package is very inefficient), so we will often use diffusion approximations based on the CLE.

stepLV1DC = StepCLE1D(LV, c(0.6, 0.6))
xx = simTs1D(x0, 0, T, 0.2, stepLV1DC)
image(xx[1,,], main="Prey", xlab="Space", ylab="Time")
image(xx[2,,], main="Predator", xlab="Space", ylab="Time")

We can think of this algorithm as an explicit numerical integration of the obvious SPDE approximation to the exact model. The package also includes support for simulation of 2D systems. Again, we can use the Spatial CLE to speed things up.

m=70; n=50; T=10
data(spnModels)
x0=array(0, c(2,m,n))
dimnames(x0)[[1]]=c("x1", "x2")
x0[,round(m/2),round(n/2)] = LV$M
stepLV2D = StepCLE2D(LV, c(0.6,0.6), dt=0.05)
xx = simTs2D(x0, 0, T, 0.5, stepLV2D)
N = dim(xx)[4]
image(xx[1,,,N],main="Prey",xlab="x",ylab="y")
image(xx[2,,,N],main="Predator",xlab="x",ylab="y")

### Bayesian parameter inference

Although much of the book is concerned with the problem of forward simulation, the final chapters are concerned with the inverse problem of estimating model parameters, such as reaction rate constants, from data. A computational Bayesian approach is adopted, with the main emphasis being placed on "likelihood free" methods, which rely on forward simulation to avoid explicit computation of sample path likelihoods. The second edition included some rudimentary code for a likelihood free particle marginal Metropolis-Hastings (PMMH) particle Markov chain Monte Carlo (pMCMC) algorithm. The third edition includes a more complete and improved implementation, in addition to approximate inference algorithms based on approximate Bayesian computation (ABC).

The key function underpinning the PMMH approach is pfMLLik, which computes an estimate of marginal model log-likelihood using a (bootstrap) particle filter. There is a new implementation of this function with the third edition. There is also a generic implementation of the Metropolis-Hastings algorithm, metropolisHastings, which can be combined with pfMLLik to create a PMMH algorithm.
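Schematically, the combination works like the following minimal base-R sketch of a pseudo-marginal Metropolis-Hastings loop. This is my own illustration, not the package's metropolisHastings implementation; it assumes a flat prior over log-parameters and a random walk proposal, and mll stands for any noisy marginal log-likelihood estimator, such as one built with pfMLLik:

pmmhSketch = function(mll, init, iters=1000, tune=0.1) {
  p = length(init)
  mat = matrix(0, nrow=iters, ncol=p)
  curr = init
  currMll = mll(curr)               # noisy estimate at the current state
  for (i in 1:iters) {
    prop = curr + rnorm(p, 0, tune) # random walk on the log scale
    propMll = mll(prop)
    # crucially, the stored estimate for the current state is re-used,
    # not re-computed - this is what keeps pseudo-marginal MCMC exact
    if (log(runif(1)) < propMll - currMll) {
      curr = prop; currMll = propMll
    }
    mat[i,] = curr
  }
  mat
}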
PMMH algorithms are very slow, but a full demo of how to use these functions for parameter inference is included in the package and can be run with

demo(PMCMC)

Simple rejection-based ABC methods are facilitated by the (very simple) function abcRun, which just samples from a prior and then carries out independent simulations in parallel before computing summary statistics. A simple illustration of the use of the function is given below.

data(LVdata)
rprior <- function() { exp(c(runif(1, -3, 3),runif(1,-8,-2),runif(1,-4,2))) }
rmodel <- function(th) { simTs(c(50,100), 0, 30, 2, stepLVc, th) }
sumStats <- identity
ssd = sumStats(LVperfect)
distance <- function(s) {
    diff = s - ssd
    sqrt(sum(diff*diff))
}
rdist <- function(th) { distance(sumStats(rmodel(th))) }
out = abcRun(10000, rprior, rdist)
q=quantile(out$dist, c(0.01, 0.05, 0.1))
print(q)
##       1%       5%      10%
## 772.5546 845.8879 881.0573
accepted = out$param[out$dist < q[1],]
print(summary(accepted))
##        V1                V2                  V3
##  Min.   :0.06498   Min.   :0.0004467   Min.   :0.01887
##  1st Qu.:0.16159   1st Qu.:0.0012598   1st Qu.:0.04122
##  Median :0.35750   Median :0.0023488   Median :0.14664
##  Mean   :0.68565   Mean   :0.0046887   Mean   :0.36726
##  3rd Qu.:0.86708   3rd Qu.:0.0057264   3rd Qu.:0.36870
##  Max.   :4.76773   Max.   :0.0309364   Max.   :3.79220
print(summary(log(accepted)))
##        V1                V2                V3
##  Min.   :-2.7337   Min.   :-7.714   Min.   :-3.9702
##  1st Qu.:-1.8228   1st Qu.:-6.677   1st Qu.:-3.1888
##  Median :-1.0286   Median :-6.054   Median :-1.9198
##  Mean   :-0.8906   Mean   :-5.877   Mean   :-1.9649
##  3rd Qu.:-0.1430   3rd Qu.:-5.163   3rd Qu.:-0.9978
##  Max.   : 1.5619   Max.   :-3.476   Max.   : 1.3329

Naive rejection-based ABC algorithms are notoriously inefficient, so the library also includes an implementation of a more efficient, sequential version of ABC, often known as ABC-SMC, in the function abcSmc. This function requires specification of a perturbation kernel to "noise up" the particles at each algorithm sweep. Again, the implementation is parallel, using the parallel package to run the required simulations in parallel on multiple cores. A simple illustration of use is given below.

rprior <- function() { c(runif(1, -3, 3), runif(1, -8, -2), runif(1, -4, 2)) }
dprior <- function(x, ...) { dunif(x[1], -3, 3, ...) + dunif(x[2], -8, -2, ...) + dunif(x[3], -4, 2, ...) }
rmodel <- function(th) { simTs(c(50,100), 0, 30, 2, stepLVc, exp(th)) }
rperturb <- function(th){th + rnorm(3, 0, 0.5)}
dperturb <- function(thNew, thOld, ...){sum(dnorm(thNew, thOld, 0.5, ...))}
sumStats <- identity
ssd = sumStats(LVperfect)
distance <- function(s) {
    diff = s - ssd
    sqrt(sum(diff*diff))
}
rdist <- function(th) { distance(sumStats(rmodel(th))) }
out = abcSmc(5000, rprior, dprior, rdist, rperturb, dperturb, verb=TRUE, steps=6, factor=5)
## 6 5 4 3 2 1
print(summary(out))
##        V1                V2               V3
##  Min.   :-2.9961   Min.   :-7.988   Min.   :-3.999
##  1st Qu.:-1.9001   1st Qu.:-6.786   1st Qu.:-3.428
##  Median :-1.2571   Median :-6.167   Median :-2.433
##  Mean   :-1.0789   Mean   :-6.014   Mean   :-2.196
##  3rd Qu.:-0.2682   3rd Qu.:-5.261   3rd Qu.:-1.161
##  Max.   : 2.1128   Max.   :-2.925   Max.   : 1.706

We can then plot some results with

hist(out[,1],30,main="log(c1)")
hist(out[,2],30,main="log(c2)")
hist(out[,3],30,main="log(c3)")

Although the inference methods are illustrated in the book in the context of parameter inference for stochastic kinetic models, their implementation is generic, and can be used with any appropriate parameter inference problem.

## The smfsbSBML package

smfsbSBML is another R package associated with the third edition of the book.
This package is not on CRAN due to its dependency on a package not on CRAN, and hence is slightly less straightforward to install. Follow the available installation instructions to install the package. Once installed, you should be able to load the package with

library(smfsbSBML)

This package provides a function for reading in SBML files and parsing them into the simulatable stochastic Petri net (SPN) objects used by the main smfsb R package. Examples of suitable SBML models are included in the main smfsb GitHub repo. An appropriate SBML model can be read and parsed with a command like:

model = sbml2spn("mySbmlModel.xml")

The resulting value, model, is an SPN object which can be passed in to simulation functions such as StepGillespie for constructing stochastic simulation algorithms.

## Other software

In addition to the above R packages, I also have some Python scripts for converting between SBML and the SBML-shorthand notation I use in the book. See the SBML-shorthand page for further details.

Although R is a convenient language for teaching and learning about stochastic simulation, it isn't ideal for serious research-level scientific computing or computational statistics. So for the third edition of the book I have also developed scala-smfsb, a library written in the Scala programming language, which re-implements all of the models and algorithms from the third edition of the book in Scala, a fast, efficient, strongly-typed, compiled, functional programming language. I'll give an introduction to this library in a subsequent post, but in the meantime, it is already well documented, so see the scala-smfsb repo for further details, including information on installation, getting started, a tutorial, examples, API docs, etc.

## Source

This blog post started out as an RMarkdown document, the source of which can be found here.
• Updated R package, including code relating to all of the new material • New R package for parsing SBML models into simulatable stochastic Petri net models • New software library, written in Scala, replicating most of the functionality of the R packages in a fast, compiled, strongly typed, functional language ## New content Although some minor edits and improvements have been made throughout the text, there are two substantial new additions to the text in this new edition. The first is an entirely new chapter on spatially extended systems. The first two editions of the text focused on the implications of discreteness and stochasticity in chemical reaction systems, but maintained the well-mixed assumption throughout. This is a reasonable first approach, since discreteness and stochasticity are most pronounced in very small volumes where diffusion should be rapid. In any case, even these non-spatial models have very interesting behaviour, and become computationally challenging very quickly for non-trivial reaction networks. However, we know that, in fact, the cell is a very crowded environment, and so even at small spatial scales, many interesting processes are diffusion limited. It therefore seems appropriate to dedicate one chapter (the new Chapter 9) to studying some of the implications of relaxing the well-mixed assumption. Entire books can be written on stochastic reaction-diffusion systems, so here only a brief introduction is provided, based mainly around models in the reaction-diffusion master equation (RDME) style. Exact stochastic simulation algorithms are discussed, and implementations provided in the 1- and 2-d cases, and an appropriate Langevin approximation is examined, the spatial CLE. The second major addition is to the chapter on inference for stochastic kinetic models from data (now Chapter 11). The second edition of the book included a discussion of “likelihood free” Bayesian MCMC methods for inference, and provided a working implementation of likelihood free particle marginal Metropolis-Hastings (PMMH) for stochastic kinetic models. The third edition improves on that implementation, and discusses approximate Bayesian computation (ABC) as an alternative to MCMC for likelihood free inference. Implementation issues are discussed, and sequential ABC approaches are examined, concentrating in particular on the method known as ABC-SMC. ## New software and on-line resources Accompanying the text are new and improved on-line resources, all well-documented, free, and open source. ### New website/GitHub repo Information and materials relating to the previous editions were kept on my University website. All materials relating to this new edition are kept in a public GitHub repo: darrenjw/smfsb. This will be simpler to maintain, and will make it much easier for people to make copies of the material for use and studying off-line. ### Updated R package(s) Along with the second edition of the book I released an accompanying R package, “smfsb”, published on CRAN. This was a very popular feature, allowing anyone with R to trivially experiment with all of the models and algorithms discussed in the text. This R package has been updated, and a new version has been published to CRAN. The updates are all backwards-compatible with the version associated with the second edition of the text, so owners of that edition can still upgrade safely. 
I’ll give a proper introduction to the package, including the new features, in a subsequent post, but in the meantime, you can install/upgrade the package from a running R session with install.packages("smfsb") and then pop up a tutorial vignette with: vignette("smfsb") This should be enough to get you started. In addition to the main R package, there is an additional R package for parsing SBML models into models that can be simulated within R. This package is not on CRAN, due to its dependency on a non-CRAN package. See the repo for further details. There are also Python scripts available for converting SBML models to and from the shorthand SBML notation used in the text. ### New Scala library Another major new resource associated with the third edition of the text is a software library written in the Scala programming language. This library provides Scala implementations of all of the algorithms discussed in the book and implemented in the associated R packages. This then provides example implementations in a fast, efficient, compiled language, and is likely to be most useful for people wanting to use the methods in the book for research. Again, I’ll provide a tutorial introduction to this library in a subsequent post, but it is well-documented, with all necessary information needed to get started available at the scala-smfsb repo/website, including a step-by-step tutorial and some additional examples. ## Introduction To statisticians and data scientists used to working in R, the concept of a data frame is one of the most natural and basic starting points for statistical computing and data analysis. It always surprises me that data frames aren’t a core concept in most programming languages’ standard libraries, since they are essentially a representation of a relational database table, and relational databases are pretty ubiquitous in data processing and related computing. For statistical modelling and data science, having functions designed for data frames is much more elegant than using functions designed to work directly on vectors and matrices, for example. Trivial things like being able to refer to columns by a readable name rather than a numeric index makes a huge difference, before we even get into issues like columns of heterogeneous types, coherent handling of missing data, etc. This is why modelling in R is typically nicer than in certain other languages I could mention, where libraries for scientific and numerical computing existed for a long time before libraries for data frames were added to the language ecosystem. To build good libraries for statistical computing in Scala, it will be helpful to build those libraries using a good data frame implementation. With that in mind I’ve started to look for existing Scala data frame libraries and to compare them. ### A simple data manipulation task For this post I’m going to consider a very simple data manipulation task: first reading in a CSV file from disk into a data frame object, then filtering out some rows, then adding a derived column, then finally writing the data frame back to disk as a CSV file. We will start by looking at how this would be done in R. First we need an example CSV file. Since many R packages contain example datasets, we will use one of those. We will export Cars93 from the MASS package: library(MASS) write.csv(Cars93,"cars93.csv",row.names=FALSE) If MASS isn’t installed, it can be installed with a simple install.packages("MASS"). The above code snippet generates a CSV file to be used for the example. 
Typing ?Cars93 will give some information about the dataset, including the original source. Our analysis task is going to be to load the file from disk, filter out cars with EngineSize larger than 4 (litres), add a new column to the data frame, WeightKG, containing the weight of the car in KG, derived from the column Weight (in pounds), and then write back to disk in CSV format. This is the kind of thing that R excels at (pun intended): df=read.csv("cars93.csv") print(dim(df)) df = df[df$EngineSize<=4.0,] print(dim(df)) df$WeightKG = df$Weight*0.453592 print(dim(df)) write.csv(df,"cars93m.csv",row.names=FALSE) Now let’s see how a similar task could be accomplished using Scala data frames. ## Data frames and tables in Scala ### Saddle Saddle is probably the best known data frame library for Scala. It is strongly influenced by the pandas library for Python. A simple Saddle session for accomplishing this task might proceed as follows: val file = CsvFile("cars93.csv") val df = CsvParser.parse(file).withColIndex(0) println(df) val df2 = df.rfilter(_("EngineSize"). mapValues(CsvParser.parseDouble).at(0)<=4.0) println(df2) val wkg=df2.col("Weight").mapValues(CsvParser.parseDouble). mapValues(_*0.453592).setColIndex(Index("WeightKG")) val df3=df2.joinPreserveColIx(wkg.mapValues(_.toString)) println(df3) df3.writeCsvFile("saddle-out.csv") Although this looks OK, it’s not completely satisfactory, as the data frame is actually representing a matrix of Strings. Although you can have a data frame containing columns of any type, since Saddle data frames are backed by a matrix object (with type corresponding to the common super-type), the handling of columns of heterogeneous types always seems rather cumbersome. I suspect that it is this clumsy handling of heterogeneously typed columns that has motivated the development of alternative data frame libraries for Scala. ### Scala-datatable Scala-datatable is a lightweight minimal immutable data table for Scala, with good support for columns of differing types. However, it is currently really very minimal, and doesn’t have CSV import or export, for example. That said, there are several CSV libraries for Scala, so it’s quite easy to write functions to import from CSV into a datatable and write CSV back out from one. I’ve a couple of example functions, readCsv() and writeCsv() in the full code examples associated with this post. Now since datatable supports heterogeneous column types and I don’t want to write a type guesser, my readCsv() function expects information regarding the column types. This could be relaxed with a bit of effort. 
An example session follows: val colTypes=Map("DriveTrain" -> StringCol, "Min.Price" -> Double, "Cylinders" -> Int, "Horsepower" -> Int, "Length" -> Int, "Make" -> StringCol, "Passengers" -> Int, "Width" -> Int, "Fuel.tank.capacity" -> Double, "Origin" -> StringCol, "Wheelbase" -> Int, "Price" -> Double, "Luggage.room" -> Double, "Weight" -> Int, "Model" -> StringCol, "Max.Price" -> Double, "Manufacturer" -> StringCol, "EngineSize" -> Double, "AirBags" -> StringCol, "Man.trans.avail" -> StringCol, "Rear.seat.room" -> Double, "RPM" -> Int, "Turn.circle" -> Double, "MPG.highway" -> Int, "MPG.city" -> Int, "Rev.per.mile" -> Int, "Type" -> StringCol) val df=readCsv("Cars93",new FileReader("cars93.csv"),colTypes) println(df.length,df.columns.length) val df2=df.filter(row=>row.as[Double]("EngineSize")<=4.0).toDataTable println(df2.length,df2.columns.length) val oldCol=df2.columns("Weight").as[Int] val newCol=new DataColumn[Double]("WeightKG",oldCol.data.map{_.toDouble*0.453592}) val df3=df2.columns.add(newCol).get println(df3.length,df3.columns.length) writeCsv(df3,new File("out.csv")) Apart from the declaration of column types, the code is actually a little bit cleaner than the corresponding Saddle code, and the column types are all properly preserved and appropriately handled. However, a significant limitation of this data frame is that it doesn’t seem to have special handling of missing values, requiring some kind of manually coded “special value” approach from users of this data frame. This is likely to limit the appeal of this library for general statistical and data science applications. ### Framian Framian is a full-featured data frame library for Scala, open-sourced by Pellucid analytics. It is strongly influenced by R data frame libraries, and aims to provide most of the features that R users would expect. It has good support for clean handling of heterogeneously typed columns (using shapeless), handles missing data, and includes good CSV import: val df=Csv.parseFile(new File("cars93.csv")).labeled.toFrame println(""+df.rows+" "+df.cols) val df2=df.filter(Cols("EngineSize").as[Double])( _ <= 4.0 ) println(""+df2.rows+" "+df2.cols) val df3=df2.map(Cols("Weight").as[Int],"WeightKG")(r=>r.toDouble*0.453592) println(""+df3.rows+" "+df3.cols) println(df3.colIndex) val csv = Csv.fromFrame(new CsvFormat(",", header = true))(df3) new PrintWriter("out.csv") { write(csv.toString); close } This is arguably the cleanest solution so far. Unfortunately the output isn’t quite right(!), as there currently seems to be a bug in Csv.fromFrame which causes the ordering of columns to get out of sync with the ordering of the column headers. Presumably this bug will soon be fixed, and if not it is easy to write a CSV writer for these frames, as I did above for scala-datatable. ### Spark DataFrames The three data frames considered so far are all standard single-machine, non-distributed, in-memory objects. The Scala data frame implementation currently subject to the most social media buzz is a different beast entirely. A DataFrame object has recently been added to Apache Spark. I’ve already discussed the problems of first developing a data analysis library without data frames and then attempting to bolt a data frame object on top post-hoc. Spark has repeated this mistake, but it’s still much better to have a data frame in Spark than not. Spark is a Scala framework for the distributed processing and analysis of huge datasets on a cluster. I will discuss it further in future posts. 
If you have a legitimate need for this kind of set-up, then Spark is a pretty impressive piece of technology (though note that there are competitors, such as flink). However, for datasets that can be analysed on a single machine, then Spark seems like a rather slow and clunky sledgehammer to crack a nut. So, for datasets in the terabyte range and above, Spark DataFrames are great, but for datasets smaller than a few gigs, it’s probably not the best solution. With those caveats in mind, here’s how to solve our problem using Spark DataFrames (and the spark-csv library) in the Spark Shell: val df = sqlContext.read.format("com.databricks.spark.csv"). option("header", "true"). option("inferSchema","true"). load("cars93.csv") val df2=df.filter("EngineSize <= 4.0") val col=df2.col("Weight")*0.453592 val df3=df2.withColumn("WeightKG",col) df3.write.format("com.databricks.spark.csv"). option("header","true"). save("out-csv") ## Summary If you really need a distributed data frame library, then you will probably want to use Spark. However, for the vast majority of statistical modelling and data science tasks, Spark is likely to be unnecessarily complex and heavyweight. The other three libraries considered all have pros and cons. They are all largely one-person hobby projects, quite immature, and not currently under very active development. Saddle is fine for when you just want to add column headings to a matrix. Scala-datatable is lightweight and immutable, if you don’t care about missing values. On balance, I think Framian is probably the most full-featured “batteries included” R-like data frame, and so is likely to be most attractive to statisticians and data scientists. However, it’s pretty immature, and the dependence on shapeless may be of concern to those who prefer libraries to be lean and devoid of sorcery! I’d be really interested to know of other people’s experiences of these libraries, so please do comment if you have any views, and especially if you have opinions on the relative merits of the different libraries. The full source code for all of these examples, including sbt build files, can be found in a new github repo I’ve created for the code examples associated with this blog. ## Calling R from Scala sbt projects using rscala ### Overview In the previous post I showed how the rscala package (which has replaced the jvmr package) can be used to call Scala code from within R. In this post I will show how to call R from Scala code. I have previously described how to do this using jvmr. This post is really just an update to show how things work with rscala. Since I’m focusing here on Scala sbt projects, I’m assuming that sbt is installed, in addition to rscala (described in the previous post). The only “trick” required for calling back to R from Scala is telling sbt where the rscala jar file is located. You can find the location from the R console as illustrated by the following session: > library(rscala) > rscala::rscalaJar("2.11") [1] "/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.2/rscala/java/rscala_2.11-1.0.6.jar" This location (which will obviously be different for you) can then be added in to your sbt classpath by adding the following line to your build.sbt file: unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.2/rscala/java/rscala_2.11-1.0.6.jar") Once this is done, calling out to R from your Scala sbt project can be carried out as described in the rscala documentation. For completeness, a working example is given below. 
### Example In this example I will use Scala to simulate some data consistent with a Poisson regression model, and then push the data to R to fit it using the R function glm(), and then pull back the fitted regression coefficients into Scala. This is obviously a very artificial example, but the point is to show how it is possible to call back to R for some statistical procedure that may be “missing” from Scala. The dependencies for this project are described in the file build.sbt name := "rscala test" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/", "Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/" ) unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.2/rscala/java/rscala_2.11-1.0.6.jar") scalaVersion := "2.11.6" The complete Scala program is contained in the file PoisReg.scala import org.ddahl.rscala.callback._ import breeze.stats.distributions._ import breeze.linalg._ object ScalaToRTest { def main(args: Array[String]) = { // first simulate some data consistent with a Poisson regression model val x = Uniform(50,60).sample(1000) val eta = x map { xi => (xi * 0.1) - 3 } val mu = eta map { math.exp(_) } val y = mu map { Poisson(_).draw } // call to R to fit the Poission regression model val R = RClient() // initialise an R interpreter R.x=x.toArray // send x to R R.y=y.toArray // send y to R R.eval("mod <- glm(y~x,family=poisson())") // fit the model in R // pull the fitted coefficents back into scala val beta = DenseVector[Double](R.evalD1("mod$coefficients")) // print the fitted coefficents println(beta) } } If these two files are put in an empty directory, the code can be compiled and run by typing sbt run from the command prompt in the relevant directory. The commented code should be self-explanatory, but see the rscala documentation for further details. In particular, the rscala scaladoc is useful. ## Calling Scala code from R using rscala ### Introduction In a previous post I looked at how to call Scala code from R using a CRAN package called jvmr. This package now seems to have been replaced by a new package called rscala. Like the old package, it requires a pre-existing Java installation. Unlike the old package, however, it no longer depends on rJava, which may simplify some installations. The rscala package is well documented, with a reference manual and a draft paper. In this post I will concentrate on the issue of calling sbt-based projects with dependencies on external libraries (such as breeze). On a system with Java installed, it should be possible to install the rscala package with a simple install.packages("rscala") from the R command prompt. Calling library(rscala) will check that it has worked. The package will do a sensible search for a Scala installation and use it if it can find one. If it can’t find one (or can only find an installation older than 2.10.x), it will fail. In this case you can download and install a Scala installation specifically for rscala using the command rscala::scalaInstall() This option is likely to be attractive to sbt (or IDE) users who don’t like to rely on a system-wide scala installation. ### A Gibbs sampler in Scala using Breeze For illustration I’m going to use a Scala implementation of a Gibbs sampler. 
The Scala code, gibbs.scala is given below: package gibbs object Gibbs { import scala.annotation.tailrec import scala.math.sqrt import breeze.stats.distributions.{Gamma,Gaussian} case class State(x: Double, y: Double) { override def toString: String = x.toString + " , " + y + "\n" } def nextIter(s: State): State = { val newX = Gamma(3.0, 1.0/((s.y)*(s.y)+4.0)).draw State(newX, Gaussian(1.0/(newX+1), 1.0/sqrt(2*newX+2)).draw) } @tailrec def nextThinnedIter(s: State,left: Int): State = if (left==0) s else nextThinnedIter(nextIter(s),left-1) def genIters(s: State, stop: Int, thin: Int): List[State] = { @tailrec def go(s: State, left: Int, acc: List[State]): List[State] = if (left>0) go(nextThinnedIter(s,thin), left-1, s::acc) else acc go(s,stop,Nil).reverse } def main(args: Array[String]) = { if (args.length != 3) { println("Usage: sbt \"run <outFile> <iters> <thin>\"") sys.exit(1) } else { val outF=args(0) val iters=args(1).toInt val thin=args(2).toInt val out = genIters(State(0.0,0.0),iters,thin) val s = new java.io.FileWriter(outF) s.write("x , y\n") out map { it => s.write(it.toString) } s.close } } } This code requires Scala and the Breeze scientific library in order to build. We can specify this in a sbt build file, which should be called build.sbt and placed in the same directory as the Scala code. name := "gibbs" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/", "Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/" ) scalaVersion := "2.11.6" Now, from a system command prompt in the directory where the files are situated, it should be possible to download all dependencies and compile and run the code with a simple sbt "run output.csv 50000 1000" sbt magically manages all of the dependencies for us so that we don’t have to worry about them. However, for calling from R, it may be desirable to run the code without running sbt. There are several ways to achieve this, but the simplest is to build an “assembly jar” or “fat jar”, which is a Java byte-code file containing all code and libraries required in order to run the code on any system with a Java installation. To build an assembly jar first create a subdirectory called project (the name matters), and in it place two files. The first should be called assembly.sbt, and should contain the line addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0") Since the version of the assembly tool can depend on the version of sbt, it is also best to fix the version of sbt being used by creating another file in the project directory called build.properties, which should contain the line sbt.version=0.13.7 With these two files in place, run sbt assembly from the top-level project directory. If this works, it should create a fat jar target/scala-2.11/gibbs-assembly-0.1.jar. You can check it works by running java -jar target/scala-2.11/gibbs-assembly-0.1.jar output.csv 10000 10 Assuming that it does, you are now ready to try running the code from within R. #### Calling via R system calls Since this code takes a relatively long time to run, calling it from R via simple system calls isn’t a particularly terrible idea. For example, we can do this from the R command prompt with the following commands system("java -jar target/scala-2.11/gibbs-assembly-0.1.jar output.csv 50000 1000") out=read.csv("output.csv") library(smfsb) mcmcSummary(out,rows=2) This works fine, but is a bit clunky.
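For repeated runs it may be more convenient to wrap the invocation in a small helper function; here is a minimal sketch (runGibbs is a name I have made up, and the jar path is the one built above):

runGibbs <- function(iters, thin, out = "output.csv") {
  # build and execute the java command, then read the results back into R
  cmd <- sprintf("java -jar target/scala-2.11/gibbs-assembly-0.1.jar %s %d %d",
                 out, iters, thin)
  system(cmd)
  read.csv(out)
}
out <- runGibbs(50000, 1000)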
Tighter integration between R and Scala would be useful, which is where rscala comes in. #### Calling assembly Scala projects via rscala rscala provides a very simple way to embed a Scala interpreter within an R session, to be able to execute Scala expressions from R and to have the results returned back to the R session for further processing. The main issue with using this in practice is managing dependencies on external libraries and setting the Scala classpath correctly. By using an assembly jar we can bypass most of these issues, and it becomes trivial to call our Scala code direct from the R interpreter, as the following code illustrates. library(rscala) sc=scalaInterpreter("target/scala-2.11/gibbs-assembly-0.1.jar") sc%~%'import gibbs.Gibbs._' out=sc%~%'genIters(State(0.0,0.0),50000,1000).toArray.map{s=>Array(s.x,s.y)}' library(smfsb) mcmcSummary(out,rows=2) Here we call the genIters function directly, rather than via the main method. This function returns an immutable List of States. Since R doesn’t understand this, we map it to an Array of Arrays, which R then unpacks into an R matrix for us to store in the matrix out. ### Summary The CRAN package rscala makes it very easy to embed a Scala interpreter within an R session. However, for most non-trivial statistical computing problems, the Scala code will have dependence on external scientific libraries such as Breeze. The standard way to easily manage external dependencies in the Scala ecosystem is sbt. Given an sbt-based Scala project, it is easy to generate an assembly jar in order to initialise the rscala Scala interpreter with the classpath needed to call arbitrary Scala functions. This provides very convenient inter-operability between R and Scala for many statistical computing applications. ## Calling R from Scala sbt projects [Update: The jvmr package has been replaced by the rscala package. There is a new version of this post which replaces this one.] ### Overview In previous posts I’ve shown how the jvmr CRAN R package can be used to call Scala sbt projects from R and inline Scala Breeze code in R. In this post I will show how to call to R from a Scala sbt project. This requires that R and the jvmr CRAN R package are installed on your system, as described in the previous posts. Since I’m focusing here on Scala sbt projects, I’m also assuming that sbt is installed. The only “trick” required for calling back to R from Scala is telling sbt where the jvmr jar file is located. You can find the location from the R console as illustrated by the following session: > library(jvmr) > .jvmr.jar [1] "/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar" This location (which will obviously be different for you) can then be added to your sbt classpath by adding the following line to your build.sbt file: unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar") Once this is done, calling out to R from your Scala sbt project can be carried out as described in the jvmr documentation. For completeness, a working example is given below. ### Example In this example I will use Scala to simulate some data consistent with a Poisson regression model, and then push the data to R to fit it using the R function glm(), and then pull back the fitted regression coefficients into Scala. This is obviously a very artificial example, but the point is to show how it is possible to call back to R for some statistical procedure that may be “missing” from Scala.
The dependencies for this project are described in the file build.sbt name := "jvmr test" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/", "Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/" ) unmanagedJars in Compile += file("/home/ndjw1/R/x86_64-pc-linux-gnu-library/3.1/jvmr/java/jvmr_2.11-2.11.2.1.jar") scalaVersion := "2.11.2" The complete Scala program is contained in the file PoisReg.scala import org.ddahl.jvmr.RInScala import breeze.stats.distributions._ import breeze.linalg._ object ScalaToRTest { def main(args: Array[String]) = { // first simulate some data consistent with a Poisson regression model val x = Uniform(50,60).sample(1000) val eta = x map { xi => (xi * 0.1) - 3 } val mu = eta map { math.exp(_) } val y = mu map { Poisson(_).draw } // call to R to fit the Poisson regression model val R = RInScala() // initialise an R interpreter R.x=x.toArray // send x to R R.y=y.toArray // send y to R R.eval("mod <- glm(y~x,family=poisson())") // fit the model in R // pull the fitted coefficients back into scala val beta = DenseVector[Double](R.toVector[Double]("mod$coefficients")) // print the fitted coefficients println(beta) } } If these two files are put in an empty directory, the code can be compiled and run by typing sbt run from the command prompt in the relevant directory. The commented code should be self-explanatory, but see the jvmr documentation for further details. ## Inlining Scala Breeze code in R using jvmr and sbt [Update: The CRAN package “jvmr” has been replaced by a new package “rscala”. Rather than completely re-write this post, I’ve just created a github gist containing a new function, breezeInterpreter(), which works similarly to the function breezeInit() in this post. Usage information is given at the top of the gist.] ### Introduction In the previous post I showed how to call Scala code from R using sbt and jvmr. The approach described in that post is the one I would recommend for any non-trivial piece of Scala code – mixing up code from different languages in the same source code file is not a good strategy in general. That said, for very small snippets of code, it can sometimes be convenient to inline Scala code directly into an R source code file. The canonical example of this is a computationally intensive algorithm being prototyped in R which has a slow inner loop. If the inner loop can be recoded in a few lines of Scala, it would be nice to just inline this directly into the R code without having to create a separate Scala project. The CRAN package jvmr provides a very simple and straightforward way to do this. However, as discussed in the last post, most Scala code for statistical computing (even short and simple code) is likely to rely on Breeze for special functions, probability distributions, non-uniform random number generation, numerical linear algebra, etc. In this post we will see how to use sbt in order to make sure that the Breeze library and all of its dependencies are downloaded and cached, and to provide a correct classpath with which to initialise a jvmr scalaInterpreter session. ### Setting up Configuring your system to be able to inline Scala Breeze code is very easy. You just need to install Java, R and sbt. Then install the CRAN R package jvmr.
At this point you have everything you need except for the R function breezeInit, given at the end of this post. I’ve deferred the function to the end of the post as it is a bit ugly, and the details of it are not important. All it does is get sbt to ensure that Breeze is correctly downloaded and cached and then starts a scalaInterpreter with Breeze on the classpath. With this function available, we can use it within an R session as the following session illustrates: > b=breezeInit() > b['import breeze.stats.distributions._'] NULL > b['Poisson(10).sample(20).toArray'] [1] 13 14 13 10 7 6 15 14 5 10 14 11 15 8 11 12 6 7 5 7 > summary(b['Gamma(3,2).sample(10000).toArray']) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.2124 3.4630 5.3310 5.9910 7.8390 28.5200 > So we see that Scala Breeze code can be inlined directly into an R session, and if we are careful about return types, have the results of Scala expressions automatically unpack back into convenient R data structures. ### Summary In this post I have shown how easy it is to inline Scala Breeze code into R using sbt in conjunction with the CRAN package jvmr. This has many potential applications, with the most obvious being the desire to recode slow inner loops from R to Scala. This should give performance quite comparable with alternatives such as Rcpp, with the advantage being that you get to write beautiful, elegant, functional Scala code instead of horrible, ugly, imperative C++ code! 😉 ### The breezeInit function The actual breezeInit() function is given below. It is a little ugly, but very simple. It is obviously easy to customise for different libraries and library versions as required. All of the hard work is done by sbt which must be installed and on the default system path in order for this function to work. breezeInit<-function() { library(jvmr) sbtStr="name := \"tmp\" version := \"0.1\" libraryDependencies ++= Seq( \"org.scalanlp\" %% \"breeze\" % \"0.10\", \"org.scalanlp\" %% \"breeze-natives\" % \"0.10\" ) resolvers ++= Seq( \"Sonatype Snapshots\" at \"https://oss.sonatype.org/content/repositories/snapshots/\", \"Sonatype Releases\" at \"https://oss.sonatype.org/content/repositories/releases/\" ) scalaVersion := \"2.11.2\" lazy val printClasspath = taskKey[Unit](\"Dump classpath\") printClasspath := { (fullClasspath in Runtime value) foreach { e => print(e.data+\"!\") } } " tmps=file(file.path(tempdir(),"build.sbt"),"w") cat(sbtStr,file=tmps) close(tmps) owd=getwd() setwd(tempdir()) cpstr=system2("sbt","printClasspath",stdout=TRUE) cpst=cpstr[length(cpstr)] cpsp=strsplit(cpst,"!")[[1]] cp=cpsp[2:(length(cpsp)-1)] si=scalaInterpreter(cp,use.jvmr.class.path=FALSE) setwd(owd) si }
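As a further illustration of the inner-loop use case (my addition), a whole batch of non-uniform random draws can be pushed into a single inlined Scala expression and pulled back as an R vector; only the b['...'] evaluation form shown above is relied on here:

b <- breezeInit()
b['import breeze.stats.distributions._']
# inline Scala: 1e5 gamma draws with shape 3 and scale 2 (a sketch)
g <- b['Gamma(3.0, 2.0).sample(100000).toArray']
c(mean(g), var(g))  # should be close to 6 and 12 for shape 3, scale 2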
I could call R from Scala code or Scala from R code, or both. Fortunately, some software tools have been developed recently which make this much simpler than it used to be. The software is jvmr, and as explained at the website, it enables calling Java and Scala from R and calling R from Java and Scala. I have previously discussed calling Java from R using the R CRAN package rJava. In this post I will focus on calling Scala from R using the CRAN package jvmr, which depends on rJava. I may examine calling R from Scala in a future post. On a system with Java installed, it should be possible to install the jvmr R package with a simple install.packages("jvmr") from the R command prompt. The package has the usual documentation associated with it, but the draft paper describing the package is the best way to get an overview of its capabilities and a walk-through of simple usage. ### A Gibbs sampler in Scala using Breeze For illustration I’m going to use a Scala implementation of a Gibbs sampler which relies on the Breeze scientific library, and will be built using the simple build tool, sbt. Most non-trivial Scala projects depend on various versions of external libraries, and sbt is an easy way to build even very complex projects trivially on any system with Java installed. You don’t even need to have Scala installed in order to build and run projects using sbt. I give some simple complete worked examples of building and running Scala sbt projects in the github repo associated with my recent RSS talk. Installing sbt is trivial as explained in the repo READMEs. For this post, the Scala code, gibbs.scala is given below: package gibbs object Gibbs { import scala.annotation.tailrec import scala.math.sqrt import breeze.stats.distributions.{Gamma,Gaussian} case class State(x: Double, y: Double) { override def toString: String = x.toString + " , " + y + "\n" } def nextIter(s: State): State = { val newX = Gamma(3.0, 1.0/((s.y)*(s.y)+4.0)).draw State(newX, Gaussian(1.0/(newX+1), 1.0/sqrt(2*newX+2)).draw) } @tailrec def nextThinnedIter(s: State,left: Int): State = if (left==0) s else nextThinnedIter(nextIter(s),left-1) def genIters(s: State, stop: Int, thin: Int): List[State] = { @tailrec def go(s: State, left: Int, acc: List[State]): List[State] = if (left>0) go(nextThinnedIter(s,thin), left-1, s::acc) else acc go(s,stop,Nil).reverse } def main(args: Array[String]) = { if (args.length != 3) { println("Usage: sbt \"run <outFile> <iters> <thin>\"") sys.exit(1) } else { val outF=args(0) val iters=args(1).toInt val thin=args(2).toInt val out = genIters(State(0.0,0.0),iters,thin) val s = new java.io.FileWriter(outF) s.write("x , y\n") out map { it => s.write(it.toString) } s.close } } } This code requires Scala and the Breeze scientific library in order to build. We can specify this in a sbt build file, which should be called build.sbt and placed in the same directory as the Scala code.
name := "gibbs" version := "0.1" scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature") libraryDependencies ++= Seq( "org.scalanlp" %% "breeze" % "0.10", "org.scalanlp" %% "breeze-natives" % "0.10" ) resolvers ++= Seq( "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/", "Sonatype Releases" at "https://oss.sonatype.org/content/repositories/releases/" ) scalaVersion := "2.11.2" Now, from a system command prompt in the directory where the files are situated, it should be possible to download all dependencies and compile and run the code with a simple sbt "run output.csv 50000 1000" #### Calling via R system calls Since this code takes a relatively long time to run, calling it from R via simple system calls isn’t a particularly terrible idea. For example, we can do this from the R command prompt with the following commands system("sbt \"run output.csv 50000 1000\"") out=read.csv("output.csv") library(smfsb) mcmcSummary(out,rows=2) This works fine, but won’t work so well for code which needs to be called repeatedly. For this, tighter integration between R and Scala would be useful, which is where jvmr comes in. #### Calling sbt-based Scala projects via jvmr jvmr provides a very simple way to embed a Scala interpreter within an R session, to be able to execute Scala expressions from R and to have the results returned back to the R session for further processing. The main issue with using this in practice is managing dependencies on external libraries and setting the Scala classpath correctly. For an sbt project such as we are considering here, it is relatively easy to get sbt to provide us with all of the information we need in a fully automated way. First, we need to add a new task to our sbt build instructions, which will output the full classpath in a way that is easy to parse from R. Just add the following to the end of the file build.sbt: lazy val printClasspath = taskKey[Unit]("Dump classpath") printClasspath := { (fullClasspath in Runtime value) foreach { e =&gt; print(e.data+"!") } } Be aware that blank lines are significant in sbt build files. Once we have this in our build file, we can write a small R function to get the classpath from sbt and then initialise a jvmr scalaInterpreter with the correct full classpath needed for the project. An R function which does this, sbtInit(), is given below sbtInit&lt;-function() { library(jvmr) system2("sbt","compile") cpstr=system2("sbt","printClasspath",stdout=TRUE) cpst=cpstr[length(cpstr)] cpsp=strsplit(cpst,"!")[[1]] cp=cpsp[1:(length(cpsp)-1)] scalaInterpreter(cp,use.jvmr.class.path=FALSE) } With this function at our disposal, it becomes trivial to call our Scala code direct from the R interpreter, as the following code illustrates. sc=sbtInit() sc['import gibbs.Gibbs._'] out=sc['genIters(State(0.0,0.0),50000,1000).toArray.map{s=&gt;Array(s.x,s.y)}'] library(smfsb) mcmcSummary(out,rows=2) Here we call the getIters function directly, rather than via the main method. This function returns an immutable List of States. Since R doesn’t understand this, we map it to an Array of Arrays, which R then unpacks into an R matrix for us to store in the matrix out. ### Summary The CRAN package jvmr makes it very easy to embed a Scala interpreter within an R session. However, for most non-trivial statistical computing problems, the Scala code will have dependence on external scientific libraries such as Breeze. The standard way to easily manage external dependencies in the Scala ecosystem is sbt. 
Given an sbt-based Scala project, it is easy to add a task to the sbt build file and a function to R in order to initialise the jvmr Scala interpreter with the full classpath needed to call arbitrary Scala functions. This provides very convenient inter-operability between R and Scala for many statistical computing applications. ## One-way ANOVA with fixed and random effects from a Bayesian perspective This blog post is derived from a computer practical session that I ran as part of my new course on Statistics for Big Data, previously discussed. This course covered a lot of material very quickly. In particular, I deferred introducing notions of hierarchical modelling until the Bayesian part of the course, where I feel it is more natural and powerful. However, some of the terminology associated with hierarchical statistical modelling probably seems a bit mysterious to those without a strong background in classical statistical modelling, and so this practical session was intended to clear up some potential confusion. I will analyse a simple one-way Analysis of Variance (ANOVA) model from a Bayesian perspective, making sure to highlight the difference between fixed and random effects in a Bayesian context where everything is random, as well as emphasising the associated identifiability issues. R code is used to illustrate the ideas. ### Example scenario We will consider the body mass index (BMI) of new male undergraduate students at a selection of UK Universities. Let us suppose that our data consist of measurements of (log) BMI for a random sample of 1,000 males at each of 8 Universities. We are interested to know if there are any differences between the Universities. Again, we want to model the process as we would simulate it, so thinking about how we would simulate such data is instructive. We start by assuming that the log BMI is a normal random quantity, and that the variance is common across the Universities in question (this is quite a big assumption, and it is easy to relax). We assume that the mean of this normal distribution is University-specific, but that we do not have strong prior opinions regarding the way in which the Universities differ. That said, we expect that the Universities would not be very different from one another. ### Simulating data A simple simulation of the data with some plausible parameters can be carried out as follows. set.seed(1) Z=matrix(rnorm(1000*8,3.1,0.1),nrow=8) RE=rnorm(8,0,0.01) X=t(Z+RE) colnames(X)=paste("Uni",1:8,sep="") Data=stack(data.frame(X)) boxplot(exp(values)~ind,data=Data,notch=TRUE) Make sure that you understand exactly what this code is doing before proceeding. The boxplot showing the simulated data is given below. ### Frequentist analysis We will start with a frequentist analysis of the data. The model we would like to fit is $y_{ij} = \mu + \theta_i + \varepsilon_{ij}$ where i is an indicator for the University and j for the individual within a particular University. The “effect”, $\theta_i$ represents how the ith University differs from the overall mean. We know that this model is not actually identifiable when the model parameters are all treated as “fixed effects”, but R will handle this for us. > mod=lm(values~ind,data=Data) > summary(mod) Call: lm(formula = values ~ ind, data = Data) Residuals: Min 1Q Median 3Q Max -0.36846 -0.06778 -0.00069 0.06910 0.38219 Coefficients: Estimate Std. 
Error t value Pr(>|t|) (Intercept) 3.101068 0.003223 962.244 < 2e-16 *** indUni2 -0.006516 0.004558 -1.430 0.152826 indUni3 -0.017168 0.004558 -3.767 0.000166 *** indUni4 0.017916 0.004558 3.931 8.53e-05 *** indUni5 -0.022838 0.004558 -5.011 5.53e-07 *** indUni6 -0.001651 0.004558 -0.362 0.717143 indUni7 0.007935 0.004558 1.741 0.081707 . indUni8 0.003373 0.004558 0.740 0.459300 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.1019 on 7992 degrees of freedom Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353 F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16 We see that R has handled the identifiability problem using “treatment contrasts”, dropping the fixed effect for the first university, so that the intercept actually represents the mean value for the first University, and the effects for the other Universities represent the differences from the first University. If we would prefer to impose a sum constraint, then we can switch to sum contrasts with options(contrasts=rep("contr.sum",2)) and then re-fit the model. > mods=lm(values~ind,data=Data) > summary(mods) Call: lm(formula = values ~ ind, data = Data) Residuals: Min 1Q Median 3Q Max -0.36846 -0.06778 -0.00069 0.06910 0.38219 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.0986991 0.0011394 2719.558 < 2e-16 *** ind1 0.0023687 0.0030146 0.786 0.432048 ind2 -0.0041477 0.0030146 -1.376 0.168905 ind3 -0.0147997 0.0030146 -4.909 9.32e-07 *** ind4 0.0202851 0.0030146 6.729 1.83e-11 *** ind5 -0.0204693 0.0030146 -6.790 1.20e-11 *** ind6 0.0007175 0.0030146 0.238 0.811889 ind7 0.0103039 0.0030146 3.418 0.000634 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.1019 on 7992 degrees of freedom Multiple R-squared: 0.01439, Adjusted R-squared: 0.01353 F-statistic: 16.67 on 7 and 7992 DF, p-value: < 2.2e-16 This has 7 degrees of freedom for the effects, as before, but ensures that the 8 effects sum to precisely zero. This is arguably more interpretable in this case. ### Bayesian analysis We will now analyse the simulated data from a Bayesian perspective, using JAGS. #### Fixed effects All parameters in Bayesian models are uncertain, and therefore random, so there is much confusion regarding the difference between “fixed” and “random” effects in a Bayesian context. For “fixed” effects, our prior captures the idea that we sample the effects independently from a “fixed” (typically vague) prior distribution. We could simply code this up and fit it in JAGS as follows. require(rjags) n=dim(X)[1] p=dim(X)[2] data=list(X=X,n=n,p=p) init=list(mu=2,tau=1) modelstring=" model { for (j in 1:p) { theta[j]~dnorm(0,0.0001) for (i in 1:n) { X[i,j]~dnorm(mu+theta[j],tau) } } mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) autocorr.plot(output) pairs(as.matrix(output)) crosscorr.plot(output) On running the code we can clearly see that this naive approach leads to high posterior correlation between the mean and the effects, due to the fundamental lack of identifiability of the model. This also leads to MCMC mixing problems, but it is important to understand that this computational issue is conceptually entirely separate from the fundamental statistical identifiability issue.
Even if we could avoid MCMC entirely, the identifiability issue would remain. A quick fix for the identifiability issue is to use “treatment contrasts”, just as for the frequentist model. We can implement that as follows. data=list(X=X,n=n,p=p) init=list(mu=2,tau=1) modelstring=" model { for (j in 1:p) { for (i in 1:n) { X[i,j]~dnorm(mu+theta[j],tau) } } theta[1]<-0 for (j in 2:p) { theta[j]~dnorm(0,0.0001) } mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) autocorr.plot(output) pairs(as.matrix(output)) crosscorr.plot(output) Running this we see that the model now works perfectly well, mixes nicely, and gives sensible inferences for the treatment effects. Another source of confusion for models of this type is data formatting and indexing in JAGS models. For our balanced data there was no problem passing in data to JAGS as a matrix and specifying the model using nested loops. However, for unbalanced designs this is not necessarily so convenient, and so then it can be helpful to specify the model based on two-column data, as we would use for fitting using lm(). This is illustrated with the following model specification, which is exactly equivalent to the previous model, and should give identical (up to Monte Carlo error) results. N=n*p data=list(y=Data$values,g=Data$ind,N=N,p=p) init=list(mu=2,tau=1) modelstring=" model { for (i in 1:N) { y[i]~dnorm(mu+theta[g[i]],tau) } theta[1]<-0 for (j in 2:p) { theta[j]~dnorm(0,0.0001) } mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) As suggested above, this indexing scheme is much more convenient for unbalanced data, and hence widely used. However, since our data is balanced here, we will revert to the matrix approach for the remainder of the post. One final thing to consider before moving on to random effects is the sum-contrast model. We can implement this in various ways, but I’ve tried to encode it for maximum clarity below, imposing the sum-to-zero constraint via the final effect. data=list(X=X,n=n,p=p) init=list(mu=2,tau=1) modelstring=" model { for (j in 1:p) { for (i in 1:n) { X[i,j]~dnorm(mu+theta[j],tau) } } for (j in 1:(p-1)) { theta[j]~dnorm(0,0.0001) } theta[p] <- -sum(theta[1:(p-1)]) mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) Again, this works perfectly well and gives similar results to the frequentist analysis. #### Random effects The key difference between fixed and random effects in a Bayesian framework is that random effects are not independent, being drawn from a distribution with parameters which are not fixed. Essentially, there is another level of hierarchy involved in the specification of the random effects. This is best illustrated by example. A random effects model for this problem is given below.
data=list(X=X,n=n,p=p) init=list(mu=2,tau=1) modelstring=" model { for (j in 1:p) { theta[j]~dnorm(0,taut) for (i in 1:n) { X[i,j]~dnorm(mu+theta[j],tau) } } mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) taut~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","taut","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) The only difference between this and our first naive attempt at a Bayesian fixed effects model is that we have put a gamma prior on the precision of the effect. Note that this model now runs and fits perfectly well, with reasonable mixing, and gives sensible parameter inferences. Although the effects here are not constrained to sum-to-zero, like in the case of sum contrasts for a fixed effects model, the prior encourages shrinkage towards zero, and so the random effect distribution can be thought of as a kind of soft version of a hard sum-to-zero constraint. From a predictive perspective, this model is much more powerful. In particular, using a random effects model, we can make strong predictions for unobserved groups (e.g. a ninth University), with sensible prediction intervals based on our inferred understanding of how similar different universities are. Using a fixed effects model this isn’t really possible. Even for a Bayesian version of a fixed effects model using proper (but vague) priors, prediction intervals for unobserved groups are not really sensible. Since we have used simulated data here, we can compare the estimated random effects with the true effects generated during the simulation. > apply(as.matrix(output),2,mean) mu tau taut theta[1] theta[2] 3.098813e+00 9.627110e+01 7.015976e+03 2.086581e-03 -3.935511e-03 theta[3] theta[4] theta[5] theta[6] theta[7] -1.389099e-02 1.881528e-02 -1.921854e-02 5.640306e-04 9.529532e-03 theta[8] 5.227518e-03 > RE [1] 0.002637034 -0.008294518 -0.014616348 0.016839902 -0.015443243 [6] -0.001908871 0.010162117 0.005471262 We see that the Bayesian random effects model has done an excellent job of estimation. If we wished, we could relax the assumption of common variance across the groups by making tau a vector indexed by j, though there is not much point in pursuing this here, since we know that the groups do all have the same variance. #### Strong subjective priors The above is the usual story regarding fixed and random effects in Bayesian inference. I hope this is reasonably clear, so really I should quit while I’m ahead… However, the issues are really a bit more subtle than I’ve suggested. The inferred precision of the random effects was around 7,000, so now let's re-run the original, naive, “fixed effects” model with a strong subjective Bayesian prior on the distribution of the effects. data=list(X=X,n=n,p=p) init=list(mu=2,tau=1) modelstring=" model { for (j in 1:p) { theta[j]~dnorm(0,7000) for (i in 1:n) { X[i,j]~dnorm(mu+theta[j],tau) } } mu~dnorm(0,0.0001) tau~dgamma(1,0.0001) } " model=jags.model(textConnection(modelstring),data=data,inits=init) update(model,n.iter=1000) output=coda.samples(model=model,variable.names=c("mu","tau","theta"),n.iter=100000,thin=10) print(summary(output)) plot(output) This model also runs perfectly well and gives sensible inferences, despite the fact that the effects are iid from a fixed distribution and there is no hard constraint on the effects. Similarly, we can make sensible predictions, together with appropriate prediction intervals, for an unobserved group.
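To make the prediction point concrete, here is a minimal sketch (my addition; thetapred and ypred are made-up node names) of how the random effects model above could be extended to generate predictions for a ninth, unobserved university; for the strong-subjective-prior variant, replace taut with the fixed precision 7000:

modelstring="
model {
  for (j in 1:p) {
    theta[j]~dnorm(0,taut)
    for (i in 1:n) {
      X[i,j]~dnorm(mu+theta[j],tau)
    }
  }
  thetapred~dnorm(0,taut) # effect for a new, unobserved university
  ypred~dnorm(mu+thetapred,tau) # predictive log-BMI for one new student there
  mu~dnorm(0,0.0001)
  tau~dgamma(1,0.0001)
  taut~dgamma(1,0.0001)
}
"

Monitoring c("ypred","thetapred") in the call to coda.samples then gives draws from the posterior predictive distribution, from which prediction intervals can be read off directly.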
So it isn’t so much the fact that the effects are coupled via an extra level of hierarchy that makes things work. It’s really the fact that the effects are sensibly distributed and not just sampled directly from a vague prior. So for “real” subjective Bayesians the line between fixed and random effects is actually very blurred indeed…
2021-09-24 03:47:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46094974875450134, "perplexity": 2320.091316673045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00528.warc.gz"}
https://socratic.org/questions/58881db511ef6b125b0f5fe3
# Simplify 27^(-4/3)? ${27}^{- \frac{4}{3}} = \frac{1}{81}$ #### Explanation: We have ${27}^{- \frac{4}{3}}$. Let's talk about that power for a moment. There are three things it's asking us to do: • the numerator (4) is asking us to take the base number to that power • the denominator (3) is asking us to take the nth root (here, the cube root) of the base number • and lastly, because the power is negative, we are to put the base number into a fraction as the denominator (with 1 as the numerator) This all becomes a little easier by seeing that $27 = {3}^{3}$ Let's take the cube root first: ${27}^{\frac{1}{3}} = {\left({3}^{3}\right)}^{\frac{1}{3}} = {3}^{3 \times \left(\frac{1}{3}\right)} = 3$ Now let's take that to the 4th power: ${3}^{4} = 81$ Finally, the negative power turns this into a reciprocal: ${81}^{- 1} = \frac{1}{81}$ And so we can say that ${27}^{- \frac{4}{3}} = \frac{1}{81}$
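The same result in one line, using only the power-of-a-power rule: ${27}^{- \frac{4}{3}} = {\left({3}^{3}\right)}^{- \frac{4}{3}} = {3}^{3 \times \left(- \frac{4}{3}\right)} = {3}^{- 4} = \frac{1}{81}$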
2019-12-07 20:06:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515847563743591, "perplexity": 595.4538713816929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540501887.27/warc/CC-MAIN-20191207183439-20191207211439-00124.warc.gz"}
http://math.stackexchange.com/questions/137468/algebraic-multiplicity-for-eigenvalues-of-a-sturm-liouville-like-problem
# “Algebraic multiplicity” for eigenvalues of a Sturm-Liouville-like problem? Following Coddington-Levinson's book Theory of ordinary differential equations, chapter 7: "Self-adjoint problems on finite intervals", let us consider the eigenvalue problem $$\pi(l):\begin{cases} Lx(t)= lx(t) & t \in [a, b] \\ Ux=0 \end{cases}$$ where $Lx=p_0(t)x^{(n)}+p_1(t)x^{(n-1)}+\ldots + p_n(t)x(t)$ (with $p_0(t)\ne 0$, the problem is not singular) and $Ux=0$ stands for the boundary conditions $$U_jx=\sum_{k=1}^n(M_{jk}x^{(k-1)}(a)+N_{jk}x^{(k-1)}(b)),\qquad j=1\ldots n.$$ Also let $\pi$ be self-adjoint, meaning that $\int_a^b Lu\overline{v}\, dt=\int_a^bu\overline{Lv}\, dt$ for all $u, v \in C^n$ satisfying boundary conditions $Uu=Uv=0$. We say that $l\in \mathbb{C}$ is an eigenvalue of $\pi$ if $\pi(l)$ admits non trivial solutions. Coddington-Levinson's theorem 2.1 asserts that all eigenvalues are real and that they have no finite cluster point. What is interesting for this question is the proof: the authors start by taking a fundamental system $\{\varphi_j, j=1\ldots n\}$ of solutions of the linear equation $Lx=lx$, observing that each $\varphi_j$ depends analytically on $l$. Then they point out that the generic solution $$x=\sum_{j=1}^nc_j \varphi_j$$ of $Lx=lx$ satisfies the boundary conditions $Ux=0$ if and only if $$\tag{1} \sum_{j=1}^n c_j U_k\varphi_j=0 \qquad k=1\ldots n,$$ which is a system of $n$ homogeneous linear equations in $n$ unknowns $c_1 \ldots c_n$. The determinant $\Delta$ of $(1)$ is an entire function of $l$ and it vanishes exactly at the eigenvalues of $\pi$. At this point the authors finish off their proof, while we proceed to our question. Question This $\Delta$, being entire and vanishing at eigenvalues, might be regarded as an infinite-dimensional analogue of the characteristic polynomial of a matrix. Is there any relationship between the multiplicity of its zeros and the geometric multiplicity of the corresponding eigenvalues (i.e. the dimension of the associated eigenspaces)? Thank you. Yes, there is a relationship between these two multiplicities. If $\lambda$ is a zero of multiplicity $m$, then the multiplicity of the corresponding eigenvalue is $\le m$. There are even better (in terms of equalities) relationships between these multiplicities, but in order to describe them you have to take into account associated functions (i.e. not only eigenfunctions). You can find all these results together with their complete proofs in
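A worked example of my own (not from the question or answer) illustrating the stated bound: take $Lx=-x''$ on $[0,\pi]$ with boundary conditions $x(0)=x(\pi)=0$. The functions $\varphi_1(t)=\cos(\sqrt{l}\,t)$ and $\varphi_2(t)=\sin(\sqrt{l}\,t)/\sqrt{l}$ form a fundamental system, each entire as a function of $l$, and the determinant of $(1)$ is $$\Delta(l)=\frac{\sin(\pi\sqrt{l})}{\sqrt{l}},$$ an entire function of $l$ whose zeros are $l=n^2$, $n=1,2,\ldots$. Differentiating in $l$ gives $\Delta'(n^2)=(-1)^n\pi/(2n^2)\neq 0$, so every zero is simple, matching the one-dimensional eigenspaces spanned by $\sin(nt)$; here the inequality holds with equality, $m=1$.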
2015-11-29 04:03:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431583285331726, "perplexity": 75.47112971785636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455246.70/warc/CC-MAIN-20151124205415-00259-ip-10-71-132-137.ec2.internal.warc.gz"}
https://fivethirtyeight.com/features/come-on-down-and-escape-the-maze/
Come On Down And Escape The Maze Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,[1] and you may get a shoutout in next week’s column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter. ## Riddler Express From Andy Reigle, once more into the maze, dear friends: Those pesky enemies of Riddler Nation are at it again! A couple of weeks ago, they trapped you in a maze without walls. Most of you escaped, but the enemies remain undeterred. They’ve been hard at work building a new maze without walls, shown below. Before banishing you to the maze, they hand you a list of rules. 1. You can enter via any perimeter square. The goal is to reach the yellow star in the center with the lowest possible score, which is calculated by adding up all the numbers that appeared in any squares you crossed. 2. You can only move horizontally or vertically (not diagonally) to bordering squares. 3. If you enter a square with an arrow (↑), you have to exit that square in the direction the arrow indicates. Some squares have double arrows (↔), giving you a choice of two directions. 4. If you enter a square with a number, you must add it to your score, but you can exit in the direction of your choice. 5. If an arrow leads you to a square with a skull, you die. But you knew that already. What is the lowest score you can achieve while solving the maze and saving Riddler Nation? ## Riddler Classic From Josh Berry, come on down! You’re playing a “Price Is Right” game called Cover Up, which has contestants try to guess all five digits of the price of a brand new car. You have two numbers to choose from for the first digit, three numbers to choose from for the second digit, and so on, ending with six options for the fifth and final digit. You’re not winning any $100,000 cars in this game. First, you lock in a guess at the entire price of the car. If you get at least one digit correct on the first guess, the correct digit(s) are highlighted and you get to replace incorrect digits on a second guess. This continues on subsequent guesses until the price is guessed correctly. But if none of the new numbers you swapped in are correct, you lose. A contestant could conceivably win the car on the first guess or with five guesses, getting one additional correct digit highlighted on each guess. First question: If you’re guessing entirely by chance, what’s the likelihood of winning the car? Second question: Suppose you know a little bit about cars. Specifically, you are 100 percent certain about the digit in the ten-thousands place, but have to guess the remaining four digits by chance. What’s the best strategy, and what’s the likelihood of winning the car now?
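For the first Cover Up question, here is a quick Monte Carlo sketch (my addition) that estimates the pure-chance win probability; one modelling assumption the puzzle leaves implicit is spelled out in the comments:

# Simulate the Cover Up game under pure-chance guessing (R sketch).
# Assumption: you never re-guess a digit value already revealed as wrong.
set.seed(42)
play_once <- function() {
  n_opts <- 2:6  # number of options for digit positions 1..5
  target <- sapply(n_opts, function(k) sample(k, 1))
  tried <- lapply(n_opts, function(k) integer(0))
  solved <- rep(FALSE, 5)
  repeat {
    new_correct <- FALSE
    for (i in which(!solved)) {
      remaining <- setdiff(seq_len(n_opts[i]), tried[[i]])
      g <- if (length(remaining) == 1) remaining else sample(remaining, 1)
      tried[[i]] <- c(tried[[i]], g)
      if (g == target[i]) { solved[i] <- TRUE; new_correct <- TRUE }
    }
    if (all(solved)) return(TRUE)
    if (!new_correct) return(FALSE)
  }
}
mean(replicate(100000, play_once()))  # estimated win probability by chance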
To get there, we want to figure out the chances that XYZ contains at least two factors of 2 and at least two factors of 5 — 2×2×5×5 = 100. To get started with this calculation, brush off that elementary school lesson on factors. As solver Hector Pefo explained, note that the probability that this number has at least two factors of 2 is 1 minus the probability that it has zero factors of 2 minus the probability that it has one factor of 2. Half of all integers are divisible by 2. So the probability that XYZ has zero factors of 2 equals $$(1/2)^3 = 1/8$$. For XYZ to have exactly one factor of 2, one of its elements has to have exactly one factor of 2 and the other two have to be odd. Half of all even numbers, or a quarter of all numbers, have exactly one factor of 2 — that is, they are not divisible by 4. So the probability that XYZ has one factor of 2 equals $$3\cdot (1/4)(1/2)(1/2) = 3/16$$. So the probability that we’ll successfully get our factors of 2 into our mystery number is $$1 - 1/8 - 3/16 = 11/16$$. Now for the factors of 5. The logic is the same: The probability that it has at least two factors of 5 is 1 minus the probability that it has zero factors of 5 minus the probability that it has one factor of 5. That probability is given by $$1 - (4/5)^3 - 3\cdot (1/5 - 1/25)(4/5)^2 = 113/625$$. We can multiply those two probabilities together to find the chance that XYZ is divisible by 4 and 25, or, as we’ve wanted all along, 100. $$(11/16)(113/625) = 0.1243$$, or 12.43 percent. “Seasons of Love,” you got lucky. ## Solution to last week’s Riddler Classic Congratulations to 👏 Andrew Petersen 👏 of Cedar Rapids, Iowa, winner of last week’s Riddler Classic! Last week, you and I were playing a game. Spread out on a table in front of us, face up, were nine index cards with the numbers 1 through 9 on them. We took turns picking up cards and putting them in our hands. (There was no discarding.) The game ended in one of two ways. If we ran out of cards to pick up, the game was a draw. But if one player had a set of three cards in his or her hand that added up to exactly 15 before we ran out of cards, that player won. (For example, if you had 2, 4, 6 and 7, you would win with the 2, 6 and 7. However, if you had 1, 2, 3, 7 and 8, you hadn’t won because no set of three cards added up to 15.) Let’s say you went first. With perfect play, who would win and why? No one will win — the game will be a draw. Why? This is really tic-tac-toe in disguise. Specifically, it’s tic-tac-toe played on top of a magic square. Consider arranging the nine index cards in the following way: $$\begin{matrix} 6 & 1 & 8 \\ 7 & 5 & 3 \\ 2 & 9 & 4 \end{matrix}$$ The cards in every row, column and diagonal add to 15 — exactly what one of us needs to win. If picking up a card and putting it in one’s hand is akin to marking that card with an X or an O, then this game is just tic-tac-toe on this specific sort of board. With perfect play, it’s well known that tic-tac-toe is a guaranteed draw. So this game is, too! (But it’s still fun, I promise.) If you’re in the mood to take it to the next level, here’s an article from 2018 about how something called “ultimate tic-tac-toe” has plenty to tell us about modern politics. ## Want more riddles? Well, aren’t you lucky? There’s a whole book full of the best puzzles from this column and some never-before-seen head-scratchers. It’s called “The Riddler,” and it’s in stores now! ## Want to submit a riddle? Email me at oliver.roeder@fivethirtyeight.com. ## Footnotes 1.
Important small print: For you to be eligible, I need to receive your correct answer before 11:59 p.m. Eastern time on Sunday. Have a great weekend! Oliver Roeder was a senior writer for FiveThirtyEight. He holds a Ph.D. in economics from the University of Texas at Austin, where he studied game theory and political competition.
2021-12-03 14:17:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46679118275642395, "perplexity": 792.5876219300303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00515.warc.gz"}
https://zbmath.org/?q=an:0757.34018&format=complete
Difference methods for differential inclusions: A survey. (English) Zbl 0757.34018 In this survey the following initial value problem for ordinary differential inclusions is considered: “Let $$I=[t_0,T]$$ be a finite interval, $$y_0\in\mathbb{R}^n$$, and $$F$$ be a map from $$I\times\mathbb{R}^n$$ into the set of all subsets of $$\mathbb{R}^n$$. Find an absolutely continuous function $$y(\cdot)$$ on $$I$$ such that $$y(t_0)=y_0$$ and $$\dot y(t)\in F(t,y(t))$$ for almost all $$t\in I$$, where $$\dot y(\cdot)$$ is the derivative of $$y(\cdot)$$.” Using difference methods, there exist various closely connected approaches to approximating solutions $$y(\cdot)$$. Investigations of convergence properties are presented, and applications to an example with discontinuous right-hand side are given. The classical Euler method is treated as an introductory example. Under the assumption of right-hand sides satisfying a one-sided Lipschitz condition, Runge-Kutta schemes can be adapted to differential inclusions, too. The question of whether the limit function $$y(\cdot)$$ has additional desirable properties leads to selection strategies, which are illustrated by an example of a control system. Finally, error estimates and convergence properties of reachable sets are discussed. Many references are cited. MSC: 34A60 Ordinary differential inclusions 49M25 Discrete approximations in optimal control 65L05 Numerical methods for initial value problems involving ordinary differential equations 34-02 Research exposition (monographs, survey articles) pertaining to ordinary differential equations 65-02 Research exposition (monographs, survey articles) pertaining to numerical analysis 65J99 Numerical analysis in abstract spaces
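Since the review mentions the Euler method and an example with a discontinuous right-hand side, here is a small illustrative sketch of my own (not taken from the survey) of an explicit Euler scheme for a differential inclusion, with a simple selection strategy at the set-valued point:

# Euler scheme for dy/dt in F(y), where F(y) = {-sign(y)} for y != 0
# and F(0) = [-1, 1]; the selection picks 0 once |y| falls within h/2 of 0,
# producing the "sliding" solution instead of O(h) chattering around zero.
euler_inclusion <- function(y0, T, h) {
  n <- ceiling(T / h)
  y <- numeric(n + 1)
  y[1] <- y0
  for (k in 1:n) {
    f <- if (y[k] > h / 2) -1 else if (y[k] < -h / 2) 1 else 0
    y[k + 1] <- y[k] + h * f
  }
  y
}
y <- euler_inclusion(y0 = 1, T = 2, h = 0.01)
tail(y, 3)  # the iterates reach (numerically) zero and then stay there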
2021-07-31 09:21:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6853607892990112, "perplexity": 423.224870286501}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00453.warc.gz"}
https://www.projecteuclid.org/euclid.afa/1565078418
## Annals of Functional Analysis ### Nonlinear maps preserving mixed Lie triple products on factor von Neumann algebras #### Abstract We prove that every bijective map that preserves mixed Lie triple products from a factor von Neumann algebra $\mathcal{M}$ with $\dim \mathcal{M}> 4$ into another factor von Neumann algebra $\mathcal{N}$ is of the form $A\rightarrow \epsilon \Psi (A)$, where $\epsilon \in \{1,-1\}$ and $\Psi :\mathcal{M}\rightarrow \mathcal{N}$ is a linear $*$-isomorphism or a conjugate linear $*$-isomorphism. Also, we give the structure of this map when $\dim \mathcal{M}=4$. #### Article information Source: Ann. Funct. Anal., Volume 10, Number 3 (2019), 325-336. Accepted: 6 November 2018. First available in Project Euclid: 6 August 2019. https://projecteuclid.org/euclid.afa/1565078418 Digital Object Identifier: doi:10.1215/20088752-2018-0032 Mathematical Reviews number (MathSciNet): MR3989178 Zentralblatt MATH identifier: 07089120 #### Citation Yang, Zhujun; Zhang, Jianhua. Nonlinear maps preserving mixed Lie triple products on factor von Neumann algebras. Ann. Funct. Anal. 10 (2019), no. 3, 325-336. doi:10.1215/20088752-2018-0032. https://projecteuclid.org/euclid.afa/1565078418
2019-11-22 01:07:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 9, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34246426820755005, "perplexity": 1601.1191611180686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671053.31/warc/CC-MAIN-20191121231600-20191122015600-00247.warc.gz"}
https://eecs.engin.umich.edu/event/arun-jambulapati/
# Arun Jambulapati: Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers

WHERE: 3725 Beyster Building

The problem of solving Laplacian linear systems is of central importance to a wide class of recently developed efficient algorithms for problems on graphs. Starting from the first O(m poly(log n))-time algorithm for solving Laplacian linear systems, developed by Spielman and Teng, a sequence of follow-up works has led to much faster and simpler algorithms, culminating in the O(m \sqrt{log n} poly(log log n))-time algorithm of [CKM+14]. However, no further progress on this problem has been obtained since 2014.

In this paper we provide an O(m poly(log log n))-expected-time algorithm for solving Laplacian systems on n-node, m-edge graphs, settling the complexity of the problem up to poly(log log n) terms. To obtain this result we combine several techniques for constructing well-connected subgraphs to provide efficient constructions of ultrasparse subgraphs with improved stretch and sparsity bounds; as a consequence, we improve existing constructions of graph ultrasparsifiers in a large parameter regime.

Additionally, as motivation for this work, we show that for every set of vectors in R^d (not just those induced by graphs) and all k > 0 there exist ultrasparsifiers with d−1+O(d/k) re-weighted vectors of relative condition number at most k^2. For small k, this improves upon the previous best known multiplicative factor of k⋅Õ(log d), which is only known for the graph case.

Greg Bodwin, Euiwoong Lee
2022-08-12 06:23:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566263318061829, "perplexity": 1726.5143569348757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00239.warc.gz"}
https://www.gamedev.net/topic/602545-quaternions-and-latlon/
## Quaternions and Lat/Lon

5 replies to this topic

### #1 Jack Smith (Members)

Posted 20 May 2011 - 09:40 PM

Hi all,

For the past few days I have been struggling with a problem concerning a graphics program I am writing. It displays a 3d sphere with lat/lon coordinates for the user to see. An overlay of the US is also shown, with true 3d mapping. Everything works up to this point. Here is what I have:

* The globe can be zoomed in and out by adjusting the horizontal field of view angle
* The globe can also be rotated
* The latitude and longitude are shown for the point that the screen is centered on (this works correctly)

Now, here is what I want to have:

* Show the latitude and longitude for the point the mouse cursor is on (having issues with this)

Now, this is not like in other posts where people ask how to unproject a 2d point (the mouse coordinates) to 3d. This is different: I am asking about retrieving the latitude and longitude at the mouse coordinates, and this is feasible since we know the size of the sphere in its 3d space as well as the x,y coordinates and the field of view.

So, here is what I have done so far. I have managed to correctly calculate the change in horizontal angle and change in vertical angle from the center. This is accomplished using hx = tan(fov/2) * d, where d is the distance to the center of the sphere. hx gives me the horizontal length at the current field of view (which is necessary since, again, the sphere can be zoomed in and out). I then take arcsin(((cursor.x-400)*(hx/400))/SPHERE_RADIUS) to get the change in horizontal angle based on the horizontal deviation of the mouse cursor from the center of the screen.

So you might think: okay, he's got the latitude and longitude of what the screen is centered on, as well as the change in vertical and horizontal angles. So what's the problem? Can't he just subtract the horizontal angle change from the longitude, and similarly for latitude? As it turns out, it's not that simple. For example, when the user is looking directly at latitude 90 degrees (the North pole), any mouse deviation from the center of the screen necessitates a change in latitude even if only horizontal mouse movement has taken place. But when looking at latitude 0 degrees (the equator), horizontal mouse movement only affects longitude. Herein lies the problem: what to do when the user is looking somewhere between those two extremes, say latitude 45 degrees? I had managed to use cos(latitude)*(change in vertical) plus sin(latitude)*(a combination of latitude and longitude) to mix the two, and eventually got it working at lat=0 and lat=90, but never for in-between.

Going back to how I arrived at displaying the centered point's latitude and longitude in the first place, I use a quaternion for the rotation:

```cpp
// Get orig quaternion
LatLonToQuaternion(g_angRotateX, g_angRotateY, x, y, z, w);
```

That gives me a quaternion from my centered position's latitude and longitude. Yes, the lat/lon are flipped in the function's input, but the output quaternion is correct; I have verified it. So then, now that I have a quaternion... I should be able to multiply that by my change in latitude and longitude, yes?
```cpp
// Rotate by lonX
RotationXYZToQuaternion(0.0f, lonX, 0.0f, tx, ty, tz, tw);
QuaternionMultiply(tx, ty, tz, tw, x, y, z, w, ux, uy, uz, uw);
x = ux; y = uy; z = uz; w = uw;

// Rotate by latX
RotationXYZToQuaternion(latX, 0.0f, 0.0f, tx, ty, tz, tw);
QuaternionMultiply(tx, ty, tz, tw, x, y, z, w, ux, uy, uz, uw);
QuaternionToLatLon(ux, uy, uz, uw, lonF, latF);
```

The above code uses a function called RotationXYZToQuaternion to give me a quaternion from a given X,Y,Z rotation. So I first get a quaternion for the change in horizontal angle. Then I multiply that quaternion by the original quaternion that I derived from the lat/lon at the center of the screen. Next, I do the same thing for the change in vertical angle, and finally convert the result back into a latitude and longitude. In this case the above code works at lat=0, but not at lat=90 or anywhere else, in that I do not get back the expected latitude and longitude for where the user is pointing the mouse cursor.

So, this is where I am after about four days of struggling with it; I have managed to get this far, but I am really stuck. Basically it boils down to this problem. Given:

* x = longitude at screen center
* y = latitude at screen center
* qx = change in the sphere's horizontal angle, as calculated from the deviation of the mouse cursor's x position from screen center (not the same thing as a change in longitude, unless lat=0)
* qy = change in the sphere's vertical angle, as calculated from the deviation of the mouse cursor's y position from screen center (also not the same as a change in latitude, unless lat=0)

determine the new latitude and longitude. So I ask, if anyone can help me on this, how do I solve the problem? What do I do when the user is not centered at latitude=0? How is the change in vertical and horizontal angles applied when latitude=30 degrees, for example? Any help would be immensely appreciated! Thanks.

### #2 scgames (Members)

Posted 20 May 2011 - 11:00 PM

I admit that I'd have to read that again (and maybe again after that ;) in order to understand what it is you're doing, but meanwhile, let me ask this. If the objective is to determine the latitude and longitude corresponding to the cursor position, why not simply raycast against the sphere and compute the latitude and longitude from the intersection point?

### #3 haegarr (Members)

Posted 21 May 2011 - 02:21 AM

> ... Now, this is not like in other posts where people ask how to unproject a 2d point (the mouse coordinates) to 3d. This is different: I am asking about retrieving the latitude and longitude at the mouse coordinates, and this is feasible since we know the size of the sphere in its 3d space as well as the x,y coordinates and the field of view. ...

Well, it is AFAIS not really different; it just needs 2 more steps. So I second jyk's implicit suggestion:

1. Compute a ray in global space that describes all locations that are projected onto the pixel covered by the mouse pointer's hot spot.
2. Transform the ray using the inverse of the local-to-global transformation of the sphere.
3. Compute the nearest intersection point of the ray with the sphere. This gives you a Cartesian co-ordinate.
4. Convert the Cartesian co-ordinate into a spherical one.
5. Adapt the azimuthal angle to yield the latitude.

The above should be sufficient as long as a sphere is used.
When using an ellipsoid, a non-uniform scaling incorporated into the object's transformation should do the trick, although the terms longitude / latitude become ambiguous; you have to decide which kind of longitude / latitude to use then. (But if you want to use the spherical harmonics series to approximate the real shape of the earth ...)

### #4 Jack Smith (Members)

Posted 21 May 2011 - 10:14 AM

haegarr, jyk: Thank you for your replies. From what I understand, ray intersection functions provided by Direct3D (the graphics library I am using) work with polygons. So for example, my sphere would need to be made out of, say, triangles to compute the intersection from the ray to the sphere. Is this correct? See, my "sphere" is actually just the shape to which my 3d map of line segments conforms. In other words, I do not have any triangles that would intersect against the ray. Even if I did, I think I would lose accuracy, because only if my sphere were made of trillions of triangles would I get any accuracy as to which lat/lon the mouse cursor has hit. So, if I do go the route of ray intersection, it would be nice, I admit, since maybe that will get me what I want, but I need some help on implementing it. What I gather at this point is that I can get a 3d ray, and would need to come up with the Cartesian coordinate at which it intersects with the sphere. So if I know the radius of the sphere in its 3d space, and am given a 3d ray, how would I get these coordinates of intersection? Thanks again.

### #5 scgames (Members)

Posted 21 May 2011 - 11:21 AM

> From what I understand, ray intersection functions provided by Direct3D (the graphics library I am using) work with polygons. So for example, my sphere would need to be made out of, say, triangles to compute the intersection from the ray to the sphere. Is this correct?

Nope; you can compute the intersection with the sphere directly (you don't have to use a mesh representation).

> So if I know the radius of the sphere in its 3d space, and am given a 3d ray, how would I get these coordinates of intersection?

Google/search for (e.g.) 'ray sphere intersection', 'line sphere intersection', 'sphere raytrace', or 'sphere raycast'. (The algorithm is straightforward and is well documented online.)

### #6 Jack Smith (Members)

Posted 23 May 2011 - 02:13 PM

Thank you so much! I have now come up with a function that gives me the Latitude and Longitude of where the user is pointing the cursor on the sphere.
In case anyone is interested:

```cpp
BOOL FindIntersection(POINT &curPos, D3DVIEWPORT9 &vpMap, D3DXMATRIX &matProjection,
                      D3DXMATRIX &matView, D3DXMATRIX &matWorld, FLOAT &lat, FLOAT &lon)
{
    D3DXVECTOR3 rayOrigin;   // cursor unprojected onto the near plane
    D3DXVECTOR3 rayFar;      // cursor unprojected onto the far plane
    D3DXVECTOR3 rayDir;
    D3DXVECTOR3 v;
    FLOAT determinant;
    FLOAT theta, phi, rho, S;
    FLOAT a, b, c;
    FLOAT t, t1, t2;
    bool bDoesIntersect;

    vpMap.Width  = 792;
    vpMap.Height = 546;

    D3DXVECTOR3 inP1((FLOAT)curPos.x, (FLOAT)curPos.y, 0.0f);
    D3DXVec3Unproject(&rayOrigin, &inP1, &vpMap, &matProjection, &matView, &matWorld);

    D3DXVECTOR3 inP2((FLOAT)curPos.x, (FLOAT)curPos.y, 1.0f);
    D3DXVec3Unproject(&rayFar, &inP2, &vpMap, &matProjection, &matView, &matWorld);

    // Ray direction from the near plane towards the far plane
    D3DXVec3Normalize(&rayDir, D3DXVec3Subtract(&v, &rayFar, &rayOrigin));

    // p = td + p0, sphere centered at the origin:
    // a = d*d
    // b = 2d*(p0-pc)
    // c = (p0-pc)*(p0-pc) - r^2
    a = D3DXVec3Dot(&rayDir, &rayDir);
    b = D3DXVec3Dot(D3DXVec3Scale(&v, &rayDir, 2), &rayOrigin);
    c = D3DXVec3Dot(&rayOrigin, &rayOrigin) - pow(SPHERE_SIZE, 2);

    // Calculate determinant
    determinant = pow(b, 2) - 4*a*c;
    if (determinant >= 0)
    {
        // There is at least one point of intersection
        t1 = (-b + sqrt(determinant)) / (2*a);
        t2 = (-b - sqrt(determinant)) / (2*a);

        // Plug the smaller root into p = td + p0: that is the hit closest to the viewer
        t = (t2 < t1) ? t2 : t1;
        D3DXVec3Scale(&v, &rayDir, t);
        D3DXVec3Add(&v, &v, &rayOrigin);

        // Now convert from Cartesian to spherical coordinates
        rho = sqrt(pow(v.x,2) + pow(v.z,2) + pow(v.y,2));
        S   = sqrt(pow(v.x,2) + pow(v.z,2));
        phi = acos(v.y/rho);
        if (v.x >= 0)
        {
            theta = asin(v.z/S);
        }
        else
        {
            theta = PI - asin(v.z/S);
        }
        if (theta >= PI)
        {
            theta -= 2*PI;
        }

        bDoesIntersect = true;
        lat = phi;
        lon = theta;
    }
    else
    {
        // Cursor does not intersect with sphere
        bDoesIntersect = false;
    }

    return bDoesIntersect;
}
```

Note that the vector 'v' has the y and z flipped; that is just how it works in my 3d world. Once I get the lat/lon, which are returned as radians, I convert them to degrees and add the appropriate offset (in my case I subtract 90 degrees from the latitude, and if the longitude exceeds 180 I subtract 360 from it). In any case the above code is specific to my software, but could be adapted to someone else's application if they needed it. So now, regardless of zoom level and rotation, the user can see the latitude and longitude that they have the cursor pointed at. "no intersection" is displayed whenever the cursor hovers outside the sphere. So it works perfectly. Thanks again!!!
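The quadratic at the heart of that function is independent of Direct3D. Below is a small, self-contained check of the same ray/sphere math, sketched in Python (an editorial addition, not from the thread; the function name and the y-up latitude convention are illustrative assumptions):

```python
# Library-free check of the ray/sphere math used in the C++ function above.
import math

def ray_sphere_latlon(origin, direction, radius):
    """Nearest hit of a ray on an origin-centered sphere, as (lat, lon) in radians."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (dx*ox + dy*oy + dz*oz)        # 2 d . p0, sphere center at the origin
    c = ox*ox + oy*oy + oz*oz - radius*radius
    det = b*b - 4.0*a*c
    if det < 0.0:
        return None                           # ray misses the sphere entirely
    t = (-b - math.sqrt(det)) / (2.0*a)       # smaller root: hit closest to the viewer
    if t < 0.0:
        t = (-b + math.sqrt(det)) / (2.0*a)   # viewer is inside the sphere
    if t < 0.0:
        return None                           # sphere is behind the viewer
    x, y, z = ox + t*dx, oy + t*dy, oz + t*dz
    return math.asin(y / radius), math.atan2(z, x)   # y treated as the polar axis

# Looking down the -z axis at a unit sphere: hit point (0, 0, 1), so lat = 0, lon = pi/2.
print(ray_sphere_latlon((0.0, 0.0, 10.0), (0.0, 0.0, -1.0), 1.0))
```

Taking the smaller non-negative root mirrors the "closest to the viewer" choice in the C++ version.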
2017-02-27 20:30:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49134889245033264, "perplexity": 1194.777062786301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00342-ip-10-171-10-108.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/301106/how-many-different-game-situations-has-connect-four
# How many different game situations has connect four?

In the game connect four with a $7 \times 6$ grid like in the image below, how many game situations can occur?

Rules: Connect Four [...] is a two-player game in which the players first choose a color and then take turns dropping colored discs from the top into a seven-column, six-row vertically-suspended grid. The pieces fall straight down, occupying the next available space within the column. The object of the game is to connect four of one's own discs of the same color next to each other vertically, horizontally, or diagonally before your opponent.

Source: Wikipedia

Image source: http://commons.wikimedia.org/wiki/File:Connect_Four.gif

Lower bound: $7 \cdot 6 = 42$, as it is possible to fill the grid completely without either player winning, and every move along the way produces a new situation.

Upper bound: every cell of the grid can be in one of three states: empty, red disc, or yellow disc. Hence we can have at most $3^{7 \cdot 6} = 3^{42} = 109418989131512359209 < 1.1 \cdot 10^{20}$ game situations. The true number is not much smaller than that, but it can be reduced a little: for example, you can't have four yellows in a row at the bottom, which rules out $3^{7 \cdot 6 - 4} = 1350851717672992089$ of these configurations. This gives the better upper bound $108068137413839367120$.

How many situations are there? I think it might be possible to calculate this by subtracting all impossible combinations, so I could try to find all possible ways to place four in a row / column / diagonal. But I guess many combinations would be counted more than once.

---

The number of possible Connect-Four game situations after $n$ plies ($n$ turns) is tabulated at OEIS A212693. The total is 4531985219092. A more in-depth explanation can be found at the links provided by the OEIS site (e.g. John's Connect Four Playground).

- John's Connect Four Playground didn't provide much more information. He seems to compute all possible games, but I can't see how he did this. Additionally, he mentions a paper which states that there were 70728639995483 situations in total (appendix C). – moose, Feb 12 '13 at 12:17
- That paper by Victor Allis did not calculate the exact number, but rather provided 70728639995483 as an upper bound. See page 10 of 91: "In the calculations we are going to make, we do not rule out positions in which are illegal for the reason mentioned above." – Ivan Loh, Feb 12 '13 at 12:34
- Also see tzi.de/~edelkamp/publications/conf/ki/EdelkampK08-1.pdf and, if you understand German, tzi.de/~edelkamp/lectures/ae/slide/AE-SymbolischeSuche.pdf might be useful as well. – Ivan Loh, Feb 12 '13 at 12:43
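The arithmetic in the question is easy to reproduce; the sketch below (an editorial addition) uses Python because the quantities exceed 64-bit integers:

```python
# Reproducing the bounds from the question; Python ints are arbitrary precision.
naive_upper = 3 ** (7 * 6)                 # each of the 42 cells: empty, red, or yellow
assert naive_upper == 109418989131512359209

impossible = 3 ** (7 * 6 - 4)              # fix four bottom cells as yellow, fill the rest
assert impossible == 1350851717672992089
assert naive_upper - impossible == 108068137413839367120

exact = 4531985219092                      # OEIS A212693 total, quoted in the answer
print(exact / (naive_upper - impossible))  # ~4e-8: the counting bound is very loose
```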
2015-11-29 21:16:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6767042279243469, "perplexity": 724.3565839736284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459875.44/warc/CC-MAIN-20151124205419-00245-ip-10-71-132-137.ec2.internal.warc.gz"}
https://nvctr.ansperformance.eu/reference/n_EA_E_and_p_AB_E2n_EB_E.html
Given the n-vector for position A (n_EA_E) and the position vector from position A to position B (p_AB_E), the output is the n-vector of position B (n_EB_E) and the depth of B (z_EB).

n_EA_E_and_p_AB_E2n_EB_E(n_EA_E, p_AB_E, z_EA = 0, a = 6378137, f = 1/298.257223563)

## Arguments

* n_EA_E: n-vector of position A, decomposed in E (3x1 vector) (no unit)
* p_AB_E: position vector from A to B, decomposed in E (3x1 vector) (m)
* z_EA: depth of system A, relative to the ellipsoid (z_EA = -height) (m, default 0)
* a: semi-major axis of the Earth ellipsoid (m, default [WGS-84] 6378137)
* f: flattening of the Earth ellipsoid (no unit, default [WGS-84] 1/298.257223563)

## Value

A list with the n-vector of position B, decomposed in E (3x1 vector) (no unit), and the depth of system B, relative to the ellipsoid (z_EB = -height).

## Details

The calculation is exact, taking the ellipticity of the Earth into account. It is also nonsingular, as both the n-vector and the p-vector are nonsingular (except for the center of the Earth). The default ellipsoid model used is WGS-84, but other ellipsoids (or spheres) may be specified.

## References

Kenneth Gade, A Nonsingular Horizontal Position Representation. The Journal of Navigation, Volume 63, Issue 03, pp. 395-417, July 2010.

## See also

n_EA_E_and_n_EB_E2p_AB_E, p_EB_E2n_EB_E and n_EB_E2p_EB_E

## Examples

```r
# Position and orientation of B is given:
n_EB_E <- unit(c(1, 2, 3))   # unit() to get a vector of unit length
z_EB <- -400
p_BC_E <- c(3000, 2000, 100) # delta vector from B to C, decomposed in E
(n_EB_E <- n_EA_E_and_p_AB_E2n_EB_E(n_EB_E, p_BC_E, z_EB))
#> $n_EB_E
#> [1] 0.2667916 0.5343565 0.8020507
#>
#> $z_EB
```
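As a rough illustration of what the function computes (an editorial sketch, not from the nvctr documentation): for a perfectly spherical Earth of radius r the operation reduces to scaling the n-vector to a position vector, adding p_AB_E, and re-normalizing. Python is used below; the real function performs the exact ellipsoidal version governed by the a and f parameters.

```python
# Spherical-Earth sketch of n_EA_E_and_p_AB_E2n_EB_E (illustrative only).
import math

def n_and_delta_to_n_sphere(n_EA_E, p_AB_E, z_EA=0.0, r=6371e3):
    p_EA_E = [(r - z_EA) * c for c in n_EA_E]          # depth z = -height below the surface
    p_EB_E = [pa + d for pa, d in zip(p_EA_E, p_AB_E)] # shift by the A-to-B vector
    norm = math.sqrt(sum(c * c for c in p_EB_E))
    return [c / norm for c in p_EB_E], r - norm        # (n_EB_E, z_EB)

n_EB_E = [c / math.sqrt(14.0) for c in (1.0, 2.0, 3.0)]  # unit(c(1, 2, 3))
print(n_and_delta_to_n_sphere(n_EB_E, [3000.0, 2000.0, 100.0], z_EA=-400.0))
```

The printed n-vector agrees with the documented example output to within the spherical approximation.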
2019-08-19 16:46:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.304384708404541, "perplexity": 8601.357194938944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00137.warc.gz"}
http://openstudy.com/updates/4ff25eefe4b03c0c488af62c
## Trigo. question #2

Callisto: Suppose in ΔABC, $a = k\sin A$, $b = k\sin B$ and $c = k\sin C$, where k is a real constant with k > 0. By using compound angle formulas, prove that $a^2 = b^2 + c^2 - 2bc\cos A$.

Solution: $C = \pi - A - B \Rightarrow \sin C = \sin(A+B)$.

$$\frac{b^{2}+c^{2}-a^{2}}{2bc} = \frac{k^{2}(\sin^{2}B+\sin^{2}C-\sin^{2}A)}{2k^{2}\sin B\sin C}$$

Using the identity $\sin^{2}C-\sin^{2}A=\sin(C+A)\sin(C-A)$, this becomes

$$\frac{\sin^{2}B+\sin(C+A)\sin(C-A)}{2\sin B\sin C}$$

Since $C+A = \pi - B \Rightarrow \sin(C+A)=\sin B$,

$$=\frac{\sin B\,(\sin B + \sin(C-A))}{2\sin B\sin C} = \frac{2\sin\frac{B+C-A}{2}\cos\frac{B-C+A}{2}}{2\sin C}$$

by the sum-to-product formula. Finally, $B+C-A=\pi-2A$ and $B-C+A=\pi-2C$ give $\sin\frac{B+C-A}{2}=\cos A$ and $\cos\frac{B-C+A}{2}=\sin C$, so the expression equals

$$\frac{\cos A\sin C}{\sin C} = \cos A,$$

which rearranges to $a^2 = b^2 + c^2 - 2bc\cos A$.
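A quick numerical spot-check of the identity just proved (an editorial addition, sketched in Python): sides built as $a = k\sin A$, etc., from any angles summing to π satisfy the relation to machine precision.

```python
# Verify a^2 = b^2 + c^2 - 2*b*c*cos(A) for sides a = k sin A, b = k sin B, c = k sin C.
import math

A, B = 1.1, 0.7                 # arbitrary angles with A + B < pi
C = math.pi - A - B
k = 2.5                         # arbitrary k > 0
a, b, c = k*math.sin(A), k*math.sin(B), k*math.sin(C)
print(abs(a**2 - (b**2 + c**2 - 2*b*c*math.cos(A))) < 1e-12)   # True
```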
2014-08-20 14:37:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6980295181274414, "perplexity": 4048.9899382924623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500809686.31/warc/CC-MAIN-20140820021329-00012-ip-10-180-136-8.ec2.internal.warc.gz"}
https://forsaljningavaktiernkiyu.netlify.app/35825/81994.html
# n3 molecular orbital diagram - Den Levande Historien

Why can't there be more than one sigma bond in a ...

Table of Contents: 1 Frontier Orbital Theory of Quantum Mechanics Thinks in Terms of Wave Functions; 1.1 Difference Between Bonding and Antibonding Orbitals; 1.2 The Anti-Bonding Orbitals (LUMO) Have Nodes and High Energy; 1.3 The Reason Why Helium Doesn't Become a Molecule Has to Do with Its Anti-Bonding Orbital; 2 HOMO and LUMO by σ and σ* Bonds in p Orbitals.

This is how you will determine the hybridization of any atom in a structure. Knowing how many pi bonds are present will tell you how many 2p orbitals are being used in those pi bonds. The remaining s and 2p orbitals must be mixed together in hybrid orbitals (in this example, only an s and a 2p remain to form two sp hybrid orbitals).

(Figure: the combination 2px(A) + 2px(B) of p orbitals on atoms A and B; the orbital is perpendicular to the bond (z) axis, with a nodal plane containing A and B.)

If the carbonyl is going to donate electrons, the electrons will come from the HOMO. In this case, that refers to the non-bonding electrons.

## Spin dynamics in the block orbital-selective Mott phase

For a pi bond, the number of nodes depends on the number and symmetry of the atomic orbitals the π orbital itself comes from. Pi star (π*) denotes an antibonding molecular orbital: normally this orbital is empty, but if it should be occupied, the wave nature of the electron density is out of phase (destructive interference) and canceling in nature. There is a second node between the bonding atoms, in addition to that of the normal 2p orbital. The π* orbital of ethylene's carbon-carbon pi bond has four orbital lobes (two lobes on each sp2 carbon atom); it is an antibonding molecular orbital.

### Nodal bilayer splitting controlled by spin-orbit coupling

The π(2py) and π(2pz) orbitals are both bonding; π* denotes the antibonding counterpart. The group orbitals are linear combinations of atomic orbitals from all the atoms bonded to the central atom. According to Molecular Orbital Theory, these two orbitals can be combined to form a π bonding orbital and a π* antibonding orbital, which produces the energy-level diagram shown in the figure "Molecular Orbitals". The symmetry of a molecular orbital is determined by rotating the orbital about a line perpendicular to it: if the sign of the lobes remains the same, the orbital is gerade, and if the sign changes, the orbital is ungerade. π bonding orbitals are ungerade whereas π antibonding orbitals are gerade; also, σ antibonding orbitals are ungerade. Molecular orbitals are of three types: bonding orbitals, which have an energy lower than the energy of the atomic orbitals which formed them and thus promote the chemical bonds which hold the molecule together; antibonding orbitals, which have an energy higher than the energy of their constituent atomic orbitals and so oppose the bonding of the molecule; and nonbonding orbitals, which have the same energy as their constituent atomic orbitals. In short, the energy of antibonding molecular orbitals is greater than that of bonding molecular orbitals: bonding molecular orbitals have lower energy than the parent atomic orbitals, while antibonding molecular orbitals have higher energy than the parent atomic orbitals.

These electrons are found on the oxygen. In both molecules the pi symmetry molecular orbitals are the same. The 2px orbitals on each atom combine to make a pi bonding and a pi antibonding molecular orbital in the xz plane.
Perpendicular to these in the yz plane, the 2py orbitals on each atom combine to make a pi bonding and a pi antibonding molecular orbital. Here is the full molecular orbital diagram for N2. It looks at forming sigma and pi bonding and antibonding MOs from a variety of atomic orbitals and orientations, and the basic concepts of in-phase and out-of-phase (antibonding) combinations. Furthermore, it was claimed that orbital-selective Mott physics (OSMP) [21] is consistent with ... Nevertheless, the static (π, 0) stripe AFM ordering was identified, with contributions to the SSF deriving from the bonding (qy = 0) and antibonding (qy = π) regions. The presence of electrons in this orbital lowers the total energy of the molecule; the orbital obtained by subtracting the wave functions is called antibonding. Covalent bonds are of two types: σ (sigma) and π (pi) bonds. (b) Molecular orbital diagram showing the calculated d-orbital splitting in terms of e_σ and e_π, quantifying Co-N σ- and π-antibonding interactions, respectively. [...] the entire molecule, and two groups of molecular orbitals (or pseudobands) are produced, corresponding to predominantly bonding (π) and antibonding (π*) character. In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms; the Hückel molecular orbital method (HMO) is used for determining the MO energies of pi electrons. Antibonding orbitals are denoted by the addition of an asterisk.

Normally, bonding orbitals are more stable than antibonding orbitals in terms of energy, and thus a molecule is stable unless sufficient electrons occupy the antibonding orbitals. Using hybridisation (the mixing of atomic orbitals of the same atom to create a set of degenerate hybrid atomic orbitals, e.g. hybridising the 2s and 2p orbitals of carbon in methane to form 4 equivalent sp3 HAOs), these can interact with an AO from another atom such as H to form the bonding and antibonding MOs. The probability of finding electrons is lower in antibonding molecular orbitals, and there is a node in the antibonding molecular orbital between the two nuclei, where the electron density is zero. These orbitals are formed by combining the + with + and the - with - parts of the electron waves. Pi bonding with p orbitals: parallel p orbitals can overlap to produce bonding and antibonding combinations, and the resulting orbitals contain nodes along the bond axis. π bonding orbitals are ungerade whereas π antibonding orbitals are gerade (Averill & Eldredge 2012). Orbitals that overlap sideways form so-called π orbitals, as can be seen in the figure. π orbitals across a polymer can overlap to give further electronic interaction (Figure 2.4), resulting in additional bonding and antibonding orbitals (E. O. Gabrielsson, 2014). CO is a good pi acceptor (Lewis acid) because of its empty π* orbitals: when bonding to a metal, the ligand (in this case CO) sigma-donates into an empty d orbital of the metal, and it also accepts electrons from the metal through its (antibonding) pi orbitals.

### Group Theory and Symmetries in Particle Physics - Grebović

Pi bonds are formed by the overlap of atomic p orbitals in the molecule. Outline: sigma bonds with sp3 orbitals; sigma bonds with sp and sp2 orbitals; pi bonds; homework. The figure at right shows the 2s, 2px, 2py, and 2pz orbitals combining into four 2sp3 orbitals.
## KVANTKEMI - Andreas Ehnbom

As you can see in the lower diagram, there are four bonding orbitals and four antibonding orbitals. The nuclear repulsions are greater, so the energy of the molecule increases. Antibonding orbitals are at higher energy levels than bonding orbitals. Antibonding sigma orbitals have higher energy levels and less electron density between the nuclei. Antibonding pi orbitals likewise have higher energy levels and less electron density between the nuclei.
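One concrete payoff of this bonding/antibonding bookkeeping is the bond order, (bonding electrons - antibonding electrons) / 2. For the N2 diagram mentioned above, a standard textbook case, the count goes as follows (this small sketch is an editorial addition):

```python
# Bond order of N2 from its molecular orbital occupation (10 valence electrons):
# sigma2s(2), sigma2s*(2), pi2p(4), sigma2p(2); the pi2p* and sigma2p* levels stay empty.
bonding = 2 + 4 + 2      # sigma2s, the two degenerate pi2p, sigma2p
antibonding = 2          # sigma2s*
print((bonding - antibonding) / 2)   # 3.0 -> the triple bond of N2
```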
2023-03-22 06:36:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6968813538551331, "perplexity": 5491.1078486251945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00483.warc.gz"}
https://hellenicaworld.com/Science/Physics/en/Millisecondpulsar.html
# Millisecond pulsar

A millisecond pulsar (MSP) is a pulsar with a rotational period smaller than about 10 milliseconds. Millisecond pulsars have been detected in the radio, X-ray, and gamma-ray parts of the electromagnetic spectrum. The leading theory for the origin of millisecond pulsars is that they are old, rapidly rotating neutron stars that have been spun up or "recycled" through accretion of matter from a companion star in a close binary system.[1][2] For this reason, millisecond pulsars are sometimes called recycled pulsars.

Millisecond pulsars are thought to be related to low-mass X-ray binary systems. It is thought that the X-rays in these systems are emitted by the accretion disk of a neutron star produced by the outer layers of a companion star that has overflowed its Roche lobe. The transfer of angular momentum from this accretion event can theoretically increase the rotation rate of the pulsar to hundreds of times per second, as is observed in millisecond pulsars. However, there has been recent evidence that the standard evolutionary model fails to explain the evolution of all millisecond pulsars, especially young millisecond pulsars with relatively high magnetic fields, e.g. PSR B1937+21. Bülent Kiziltan and S. E. Thorsett showed that different millisecond pulsars must form by at least two distinct processes.[3] But the nature of the other process remains a mystery.[4]

(Image: the stellar grouping Terzan 5.)

Many millisecond pulsars are found in globular clusters. This is consistent with the spin-up theory of their formation, as the extremely high stellar density of these clusters implies a much higher likelihood of a pulsar having (or capturing) a giant companion star. Currently there are approximately 130 millisecond pulsars known in globular clusters.[5] The globular cluster Terzan 5 alone contains 37 of these, followed by 47 Tucanae with 22 and M28 and M15 with 8 pulsars each.

Millisecond pulsars, which can be timed with high precision, have a stability comparable to atomic-clock-based time standards when averaged over decades.[6][7] This also makes them very sensitive probes of their environments. For example, anything placed in orbit around them causes periodic Doppler shifts in their pulses' arrival times on Earth, which can then be analyzed to reveal the presence of the companion and, with enough data, provide precise measurements of the orbit and the object's mass. The technique is so sensitive that even objects as small as asteroids can be detected if they happen to orbit a millisecond pulsar. The first confirmed exoplanets, discovered several years before the first detections of exoplanets around "normal" solar-like stars, were found in orbit around a millisecond pulsar, PSR B1257+12. These planets remained for many years the only Earth-mass objects known outside the Solar System. One of them, PSR B1257+12 D, has an even smaller mass, comparable to that of our Moon, and is still today the smallest-mass object known beyond the Solar System.[8]

## Pulsar rotational speed limits

The first millisecond pulsar, PSR B1937+21, was discovered in 1982 by Backer et al.[9] Spinning roughly 641 times per second, it remains the second fastest-spinning millisecond pulsar of the approximately 200 that have been discovered.[10] Pulsar PSR J1748-2446ad, discovered in 2005, is, as of 2012, the fastest-spinning pulsar currently known, spinning 716 times per second.[11][12]

Current theories of neutron star structure and evolution predict that pulsars would break apart if they spun at a rate of c.
1500 rotations per second or more,[13][14] and that at a rate above about 1000 rotations per second they would lose energy by gravitational radiation faster than the accretion process would speed them up.[15] However, in early 2007 data from the Rossi X-ray Timing Explorer and INTEGRAL spacecraft discovered a neutron star, XTE J1739-285, rotating at 1122 Hz.[16] The result is not statistically significant, with a significance level of only 3 sigma, so while it is an interesting candidate for further observations, current results are inconclusive. Still, it is believed that gravitational radiation plays a role in slowing the rate of rotation. Furthermore, one X-ray pulsar that spins at 599 revolutions per second, IGR J00291+5934, is a prime candidate for helping to detect such gravitational waves in the future (most such X-ray pulsars spin at only around 300 rotations per second).

## References

[1] Bhattacharya & van den Heuvel (1991), "Formation and evolution of binary and millisecond radio pulsars", Physics Reports 203, 1.

[2] Tauris & van den Heuvel (2006), "Formation and evolution of compact stellar X-ray sources", in: Compact Stellar X-ray Sources, ed. Walter Lewin and Michiel van der Klis, Cambridge Astrophysics Series, pp. 623–665, doi:10.2277/0521826594.

[3] Kızıltan, Bülent; Thorsett, S. E. (2009), "Constraints on Pulsar Evolution: The Joint Period-Spin-down Distribution of Millisecond Pulsars", The Astrophysical Journal Letters 693 (2): L109–L112, arXiv:0902.0604, doi:10.1088/0004-637X/693/2/L109.

[4] Naeye, Robert (2009), "Surprising Trove of Gamma-Ray Pulsars", Sky & Telescope.

[5] Freire, Paulo, "Pulsars in globular clusters", Arecibo Observatory, retrieved 2007-01-18.

[6] Matsakis, D. N.; Taylor, J. H.; Eubanks, T. M. (1997), "A Statistic for Describing Pulsar and Clock Stabilities", Astronomy and Astrophysics 326: 924–928, retrieved 2010-04-03.

[7] Hartnett, John G.; Luiten, Andre N. (2011), "Colloquium: Comparison of astrophysical and terrestrial frequency standards", Reviews of Modern Physics 83 (1): 1–9, arXiv:1004.0115, doi:10.1103/revmodphys.83.1.

[8] Rasio, Frederic (2011), "Planet Discovery near Pulsars", Science.

[9] Backer, D. C.; Kulkarni, S. R.; Heiles, C.; Davis, M. M.; Goss, W. M. (1982), "A millisecond pulsar", Nature 300 (5893): 615–618, doi:10.1038/300615a0.

[10] "The ATNF Pulsar Database", retrieved 2009-05-17.

[11] Hessels, Jason; Ransom, Scott M.; Stairs, Ingrid H.; Freire, Paulo C. C.; Kaspi, Victoria M.; Camilo, Fernando (2006), "A Radio Pulsar Spinning at 716 Hz", Science 311 (5769): 1901–1904, arXiv:astro-ph/0601337, doi:10.1126/science.1123430.

[12] Naeye, Robert (2006-01-13), "Spinning Pulsar Smashes Record", Sky & Telescope, retrieved 2008-01-18.

[13] Cook, G. B.; Shapiro, S. L.; Teukolsky, S. A. (1994), "Recycling Pulsars to Millisecond Periods in General Relativity", Astrophysical Journal Letters 423: 117–120, doi:10.1086/187250.

[14] Haensel, P.; Lasota, J. P.; Zdunik, J. L. (1999), "On the minimum period of uniformly rotating neutron stars", Astronomy and Astrophysics 344: 151–153.

[15] Chakrabarty, D.; Morgan, E. H.; Muno, M. P.; Galloway, D. K.; Wijnands, R.; van der Klis, M.; Markwardt, C. B. (2003), "Nuclear-powered millisecond pulsars and the maximum spin frequency of neutron stars", Nature
424 (6944): 42–44, arXiv:astro-ph/0307029, doi:10.1038/nature01732.

[16] Kiziltan, Bulent; Thorsett, Stephen E. (2007-02-19), "Integral points to the fastest spinning neutron star", Spaceflight Now / European Space Agency, arXiv:0902.0604, doi:10.1088/0004-637X/693/2/L109, retrieved 2007-02-20.

Further reading: "How Millisecond Pulsars Spin So Fast", Universe Today; "Fast-Spinning Star Could Test Gravitational Waves", New Scientist; "Astronomical whirling dervishes hide their age well", Astronomy Now. Audio: Cain/Gay, "Pulsars", Astronomy Cast, Nov 2009.
2021-05-17 10:21:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8861560821533203, "perplexity": 7791.842782351086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992159.64/warc/CC-MAIN-20210517084550-20210517114550-00467.warc.gz"}
http://openstudy.com/updates/526f1985e4b0e209601e4686
## osanseviero: Determine whether the following sequence has a limit (prove it) and, if so, find it

1. osanseviero: $(-1)^{n} \left( \frac{ 5n+4 }{ 2n } \right)$

2. osanseviero:

3. osanseviero: :/

4. tkhunny: As n increases in the positive direction, $$\dfrac{5n+4}{2n}$$ approaches $$\dfrac{5}{2}$$. Follow with your mind as n increases. The little 4 on the end of the numerator becomes less and less significant. The terms do NOT approach zero.

5. osanseviero: Oh... I think I see it now... but how can I prove that that is its limit?

6. osanseviero: But there is also the $(-1)^n$.

7. osanseviero: So it approaches 5/2 and -5/2?

8. tkhunny: The typical demonstration is a division by n. For n > 0, $$\dfrac{5n+4}{2n} = \dfrac{5 + \dfrac{4}{n}}{2}$$. In this form, it is relatively obvious that the limit is 5/2 as n increases. The FIRST criterion for convergence is terms that approach ZERO. Nothing else will do. These terms do not approach zero, therefore, we do not care about the alternating sign. If the terms approach zero, THEN we'll worry about the sign.

9. osanseviero: What I mean is that there is a $(-1)^n$ multiplying all of that... so -5/2 is also a limit.

10. tkhunny: No, this is not a limit. Limits come alone, not in pairs. The terms, without the sign, approach 5/2. I may have stated that carelessly before. The actual terms, including the sign, do not have a limit. They are oscillating.

11. osanseviero: Oh... okk

12. osanseviero: So for this there isn't a limit, and neither for $\frac{ 1 }{ 2 },2^{2}, \frac{ 1 }{ 2^{3} }$

13. tkhunny:
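A quick numerical look at tkhunny's point (an editorial addition): the even-indexed terms approach 5/2 while the odd-indexed terms approach -5/2, so the sequence oscillates and has no limit.

```python
# Terms of a_n = (-1)^n (5n + 4) / (2n) for a few small and large n.
def a(n):
    return (-1)**n * (5*n + 4) / (2*n)

print([round(a(n), 4) for n in (10, 11, 1000, 1001)])
# [2.7, -2.6818, 2.502, -2.502]
```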
2014-07-24 16:58:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204109072685242, "perplexity": 2354.6500727645785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889379.27/warc/CC-MAIN-20140722025809-00114-ip-10-33-131-23.ec2.internal.warc.gz"}
https://csvlint.io/validation/5be40fdef12be40004000036
# Validation Results

## https://moodle.htw-berlin.de/pluginfile.php/638590/mod_resource/content/1/iris.data

Sorry, your CSV did not pass validation. Please review the errors and warnings below:

Total Rows Processed = 308

211 Errors, 2 Warnings, 1 Message

| Category  | Errors | Warnings | Messages |
|-----------|--------|----------|----------|
| Structure | 210    | 1        | 1        |
| Schema    | 0      | 0        | 0        |
| Context   | 1      | 0        | 0        |

Your CSV file appears to only contain a single column. This may indicate that it is being incorrectly parsed. You can try resubmitting it using a different dialect.

## 211 Errors, 2 Warnings

Context problem: Incorrect content type. Your CSV file is being delivered with an incorrect Content-Type of text/html; charset=utf-8. We recommend that you configure your server to deliver CSV files with a Content-Type header of text/csv; charset=utf-8.

The structural errors below quote rows of the returned document, which is an HTML login page rather than CSV data. For every "Unexpected whitespace" error the validator gives the same advice: quoted columns in the CSV should not have any leading or trailing whitespace, so remove any spaces, tabs or other whitespace from either side of the delimiters in the row.

Structural problem: Unexpected whitespace on row 2: `<html dir="ltr" lang="de" xml:lang="de">`

Structural problem: Unexpected whitespace on row 5: `<link rel="shortcut icon" href="https://moodle.htw-berlin.de/theme/image.php/morehtw/theme/1541756447/favicon" />`

Structural problem: Unexpected whitespace on row 6: `<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />`

Structural problem: Unexpected whitespace on row 7: `<meta name="keywords" content="moodle, Moodle @ HTW Berlin: Hier können Sie sich anmelden" />`

Structural problem: Unexpected whitespace on row 8: `<link rel="stylesheet" type="text/css" href="https://moodle.htw-berlin.de/theme/yui_combo.php?rollup/3.17.2/yui-moodlesimple-min.css" /><script id="firstthemesheet" type="text/css">/** Required in order to fix style inclusion problems in IE with YUI **/</script><link rel="stylesheet" type="text/css" href="https://moodle.htw-berlin.de/theme/styles.php/morehtw/1541756447_1535979645/all" />`

Structural problem: Unexpected whitespace on row 9: `<script type="text/javascript">`

Structural problem: Unexpected whitespace on row 13: `M.cfg = {"wwwroot":"https:\/\/moodle.htw-berlin.de", ...}; var yui1ConfigFn = function(me) {...};` (Moodle JavaScript configuration, truncated here)
Structural problem: Missing Columns on row 14: `var yui2ConfigFn = function(me) {...}` (a line of Moodle JavaScript, truncated here). Row 14 contains a different number of columns to the first row in the CSV file. This may indicate a problem with the data, e.g. an incorrectly escaped value, or that you are mixing together different tables of information.

Structural problem: Unexpected whitespace on row 16: `YUI_config = {...};` (several thousand characters of Moodle YUI module-loader configuration, truncated here)

Structural problem: Empty row on row 18. Remove the empty row from your CSV file.

Structural problem: Empty row on row 21. Remove the empty row from your CSV file.
If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 22 <meta name="robots" content="noindex" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 24 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 25 <body id="page-login-index" class="format-site path-login dir-ltr lang-de yui-skin-sam yui3-skin-sam moodle-htw-berlin-de pagelayout-login course-1 context-1 notloggedin content-only layout-option-langmenu"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 26 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 27 <div class="skiplinks"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 28 <a href="#maincontent" class="skip">Zum Hauptinhalt</a> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 29 </div><script type="text/javascript" src="https://moodle.htw-berlin.de/theme/yui_combo.php?rollup/3.17.2/yui-moodlesimple-min.js"></script><script type="text/javascript" src="https://moodle.htw-berlin.de/lib/javascript.php/1541756447/lib/javascript-static.js"></script> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 30 <script type="text/javascript"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 35 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Empty row on row 36 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 37 <!--div style="background-color: red; text-align: center;color: white;">Wartungsarbeiten zur Semesterumstellung am Donnerstag, 13.9. zwischen 11:00 und 14:00 Uhr</div--> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 38 Remove the empty row from your CSV file. 
If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 39 <header role="banner" class="navbar navbar-fixed-top navbar-inverse moodle-has-zindex"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 40 <nav role="navigation" class="navbar-inner"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 41 <div class="container-fluid"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 42 <a class="small-logo-container" title="Startseite" href="https://moodle.htw-berlin.de/"><img class="small-logo" src="https://moodle.htw-berlin.de/pluginfile.php/1/theme_morehtw/smalllogo/1541756447/htwlogo.png" alt="Site logo" /></a> <a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse"><span class="icon-bar"></span> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 43 <span class="icon-bar"></span> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 44 <span class="icon-bar"></span></a> <div class="usermenu"><span class="login">Sie sind nicht angemeldet.</span></div> <div class="nav-collapse collapse"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. 
Structural problem: Unexpected whitespace on row 45 <ul class="nav"><li class="dropdown"><a href="#cm_submenu_1" class="dropdown-toggle" data-toggle="dropdown" title="Support">Support<b class="caret"></b></a><ul class="dropdown-menu"><li><a title="Support-Anfragen und Material" href="https://moodle.htw-berlin.de/course/view.php?id=12281">Support-Anfragen und Material</a></li><li><a title="Datenschutz und Privatsph��re" href="https://moodle.htw-berlin.de/course/view.php?id=12205">Datenschutz und Privatsph��re</a></li></ul><li class="dropdown langmenu"><a href="" class="dropdown-toggle" data-toggle="dropdown" title="Sprache">Deutsch ���(de)���<b class="caret"></b></a><ul class="dropdown-menu"><li><a title="Deutsch ���(de)���" href="https://moodle.htw-berlin.de/login/index.php?lang=de">Deutsch ���(de)���</a></li><li><a title="English ���(en)���" href="https://moodle.htw-berlin.de/login/index.php?lang=en">English ���(en)���</a></li><li><a title="Espa��ol - Internacional ���(es)���" href="https://moodle.htw-berlin.de/login/index.php?lang=es">Espa��ol - Internacional ���(es)���</a></li><li><a title="Fran��ais ���(fr)���" href="https://moodle.htw-berlin.de/login/index.php?lang=fr">Fran��ais ���(fr)���</a></li><li><a title="�������������� ���(ru)���" href="https://moodle.htw-berlin.de/login/index.php?lang=ru">�������������� ���(ru)���</a></li></ul></ul> <ul class="nav pull-right"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 52 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 53 <div id="page" class="container-fluid"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 54 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 55 <header id="page-header" class="clearfix"><div class="page-context-header"><div class="page-header-headings"><h1>Moodle @ HTW Berlin</h1></div></div><div class="clearfix" id="page-navbar"><div class="breadcrumb-nav"><span class="accesshide" id="navbar-label">Seitenpfad</span><nav aria-labelledby="navbar-label"><ul class="breadcrumb"><li><span itemscope="" itemtype="http://data-vocabulary.org/Breadcrumb"><a itemprop="url" href="https://moodle.htw-berlin.de/"><span itemprop="title">Startseite</span></a></span> <span class="divider"> <span class="accesshide " ><span class="arrow_text">/</span>&nbsp;</span><span class="arrow sep">&#x25BA;</span> </span></li><li><span tabindex="0">Hier k��nnen Sie sich anmelden</span></li></ul></nav></div><div class="breadcrumb-button"></div></div><div id="course-header"></div></header> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 56 <div id="page-content" class="row-fluid"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. 
Structural problem: Unexpected whitespace on row 57 <section id="region-main" class="span12"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 58 <span class="notifications" id="user-notifications"></span><div role="main"><span id="maincontent"></span><div class="loginbox clearfix twocolumns"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 59 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 60 <div class="loginpanel"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 61 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Empty row on row 63 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 64 <div class="subcontent loginsub"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 65 <form action="https://moodle.htw-berlin.de/login/index.php" method="post" id="login"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 66 <div class="loginform"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 67 <div class="form-label"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 68 <label for="username"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 72 <div class="form-input"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 73 <input type="text" name="username" id="username" size="15" value=""> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 75 <div class="clearer"><!-- --></div> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. 
Structural problem: Unexpected whitespace on row 76 <div class="form-label"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 77 <label for="password">Passwort</label> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 79 <div class="form-input"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 80 <input type="password" name="password" id="password" size="15" value=""> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 83 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Unexpected whitespace on row 84 <div class="clearer"><!-- --></div> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 85 <div class="rememberpass"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 86 <input type="checkbox" name="rememberusername" id="rememberusername" value="1" /> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 87 <label for="rememberusername">Benutzernamen merken</label> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 89 <div class="clearer"><!-- --></div> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 90 <input id="anchor" type="hidden" name="anchor" value="" /> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 92 <input type="submit" id="loginbtn" value="Login" /> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Unexpected whitespace on row 93 <div class="forgetpass"> Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. 
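Where stray whitespace really does belong to the data rather than to page markup, it can be stripped mechanically. Below is a minimal Python sketch of that cleanup; the file names "input.csv" and "cleaned.csv" are placeholders, not names from the report.

```python
import csv

# Minimal sketch: strip leading/trailing whitespace from every field.
# "input.csv" and "cleaned.csv" are hypothetical file names.
with open("input.csv", newline="", encoding="utf-8") as src, \
     open("cleaned.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        # str.strip() removes spaces, tabs and other whitespace
        # from both sides of each field.
        writer.writerow(field.strip() for field in row)
```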
Structural problem: Empty row (rows 18, 21, 24, 26, 35-36, 38, 52, 54, 59, 61, 63, 83, 97, 105, 107, 118, 120, 126, 130, 135, 144, 150, 158, 165, 260, 263, 268, 278, 306)
Remove the empty rows from your CSV file. If you were not expecting any empty rows, this may indicate a problem with your data; here they correspond to the blank lines of the HTML source.
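Dropping blank rows is a one-line filter on top of the same read/write loop. A minimal sketch, again with placeholder file names:

```python
import csv

# Minimal sketch: copy a CSV while skipping rows that are completely
# empty (no fields, or only blank fields). File names are placeholders.
with open("input.csv", newline="", encoding="utf-8") as src, \
     open("no_blanks.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if any(field.strip() for field in row):
            writer.writerow(row)
```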
Structural problem: Missing columns (rows 139, 141-143, 146-147, 149, 155, 157, 160, 171, 264, 269)
Each of these rows contains a different number of columns to the first row in the CSV file. This may indicate a problem with the data, e.g. an incorrectly escaped value, or that different tables of information are mixed together. Here the commas inside the page's inline JavaScript configuration (the require.js and video.js setup) are being read as field delimiters.
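The validator's column-count check is easy to reproduce locally before re-uploading. A minimal sketch that flags every row whose width differs from the header row; "input.csv" is a placeholder:

```python
import csv

# Minimal sketch: report rows whose column count differs from the
# first (header) row, mirroring the "missing columns" check.
with open("input.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    expected = len(next(reader))  # width of the first row
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print(f"row {lineno}: expected {expected} columns, "
                  f"got {len(row)}")
```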
Structural problem: Unexpected whitespace on row 302 M.util.js_pending('random5bebb35f63ac14'); Y.on('domready', function() { M.util.js_complete("init"); M.util.js_complete('random5bebb35f63ac14'); }); Quoted columns in the CSV should not have any leading or trailing whitespace. Remove any spaces, tabs or other whitespace from either side of the delimiters in the row. Structural problem: Empty row on row 306 Remove the empty row from your CSV file. If you were not expecting any empty rows then this may indicate a problem with your data Structural problem: Inconsistent Line Breaks Your CSV has inconsistent line breaks (or your schema specified one line break style and the file uses another). You should make sure all line breaks are in the same form (i.e. CR-LF, or just LF). We recommend using CR-LF for maximum compatibility. Dialect: Non standard dialect Although your CSV validates, to make it as easy as possible for your data to be reused, we recommend using commas as delimiters, double quotes to enclose fields, and autodetecting line endings. Structural problem: Check CSV parsing options Your CSV file appears to only contain a single column. This may indicate that it is being incorrectly parsed. We recommend using commas as delimiters and escaping values in columns where necessary. Structural problem: Non-standard Line Breaks on row 1 Your CSV appears to use line-breaks. While this will be fine in most cases, RFC 4180 specifies that CSV files should use CR-LF (a carriage-return and line-feed pair, e.g. \r\n). This may be labelled as "Windows line endings" on some systems. ### Next Steps Publish and transform your data using DataGraft, either as enhanced CSV or Linked Data.
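Before publishing, the clean-up the report asks for can be scripted. A minimal sketch (not part of the validator's output; the filenames are hypothetical and Python's standard csv module is assumed):

```python
import csv

# Strip leading/trailing whitespace from every quoted field and drop empty
# rows, writing RFC 4180-style output (comma-delimited, CR-LF line endings).
with open("input.csv", newline="", encoding="utf-8") as src, \
     open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, lineterminator="\r\n")
    for row in csv.reader(src):
        cleaned = [field.strip() for field in row]
        if any(cleaned):  # skip rows that are entirely empty
            writer.writerow(cleaned)
```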
2018-11-14 05:32:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2658438980579376, "perplexity": 9080.40399425581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741628.8/warc/CC-MAIN-20181114041344-20181114063344-00086.warc.gz"}
https://math.meta.stackexchange.com/questions/9367/no-offense-intended-but
# No offense intended, but… I haven't been a member here for all that long, but I would like to point out some things that I have observed that some may be overlooking. I would also mention that I had visited the site a large number of times before joining. First of all, from the comments to most of the posts I have seen over the last couple of days on meta, nobody really wants to see anything from anyone new on MSE. Some are belligerent about it while a few others are perhaps less aware of their attitude, but it exists. I came here because I was already in the Stack Exchange community, a cross-affiliation from the time I spend working with Ubuntu. I am also an applied math major, and at my school that means taking grad classes regularly, so I'm not that far down the totem pole in that respect either. There seems to be a lot of bitterness, especially regarding the homework problems situation. While I understand that many people may be looking for a handout, it is unfair to assume that is the case for anybody asking a homework question, or asking for help understanding or clarifying some aspect of a particular area. Some people do make an effort, just saying. Call me crazy, but I kinda feel unwanted here, and I may be, in which case I will gladly leave. MSE should be as much about camaraderie as about solving particular problems. I have seen many of my own friendships grow with peers as we explore some particular problem together. I would suggest that perhaps as a community, more "communing" could be done. Anyway, if you read this far, thanks; I hope you'll consider what I've said if you've unfairly been a jerk to someone who didn't deserve it recently. • Your observation about "a lot of bitterness" is correct. This is a function of time, so there's hope it reached its maximum already and will decay over time. ¶ I don't see why you personally feel unwanted, though. Clicking through to your main site activity, I found one question, which already had an upvote from me and was in my "favorites". (I tend to mark questions as favorites and then hope someone else answers... I'm lazy like that.) – 75064 May 5 '13 at 4:24 • It is basic etiquette all over the world that when a stranger asks you a question, the reply should begin with some variation on the words "what have you tried?", "is this homework?", "what are your thoughts?", CLOSED, "do your own work", "you have not met our quality standards", or "mend your ways or don't come back". People on MSE sometimes forget that. If only those greetings were used more often, I think everyone could feel right at home. – zyx May 5 '13 at 6:37 • And, of course, to get help from your professor, you should walk into his office, place your homework sheet in front of him, and silently wait for him to give you the answer. – Hurkyl May 5 '13 at 12:17 • @zyx Do you know the feeling when you are at a party, sipping beer, and a stranger approaches you and says "Prove that every continuous function from the closed unit interval to the reals is uniformly continuous."? What do you do in these situations? – Michael Greinecker May 5 '13 at 13:23 • I have no problem with people asking OPs "what have you tried". I just wish they would do this more politely! (I find copied-and-pasted comments rude, but they are better than just silently closing the question...) – user1729 May 5 '13 at 13:41 • I should also say that I regularly recommend that my students ask their questions here. They do not want to because they see the community as too snobbish.
I think we have a problem... – user1729 May 5 '13 at 13:53 • In particular, the "does not meet our quality standards" comment makes me shudder, partly because of the bureaucratese, but mainly because it could be interpreted as speaking for me. – André Nicolas May 5 '13 at 15:09 • @user1729 That does sound like a problem. Here's a possible fix: suggest that they answer 2-3 questions before posting their first question. Nobody tells WHYT to answerers, and in the process they will get used to the site and also get rid of the rep=1 indicator. Answering questions here may be more useful anyway. ¶ #Tristen: Since your profile says "send me a line", I'll point out that SE sites have no direct messaging feature, and the email address you entered is not shown to other users. So, if no SE users ever contact you directly, it's not necessarily because they are antisocial. – 75064 May 5 '13 at 15:13 • @user1729: I see your problem. But this is the way internet forums work. Whichever forum you join, you are well advised to lurk for a little while. Then you will learn about "house rules" and about the way the regulars post. If you just "crash the party", you may easily cross a few lines, and it makes sense that the regulars will let you know about your faux pas! I know that there are many "anything goes" chat forums on the web, but Math.SE is not one of those. I kind of like it that way. – Jyrki Lahtonen May 5 '13 at 16:04 • @JyrkiLahtonen I understand this, and I too like the rigid structure of MSE. I was initially surprised when my students told me that they were afraid to come here, but when I thought about it a bit more I began to understand why. I do not think the structure is the problem. Personally, I think the site is too harsh to those coming here. My view of MSE is of a lecturer's or professor's office hours. In such situations the student has obligations but so does the professor. They should be polite and helpful, and so on...(cont.) – user1729 May 5 '13 at 16:19 • ...and I wonder if MSE comes across as the professor who is unwilling to answer questions unless they are of a sufficient difficulty, or the professor who is constantly looking down his nose at students...I dunno. I cannot quite put my finger on anything specific, but taking a step back I think I can understand my students' fears and reservations about this site. – user1729 May 5 '13 at 16:20 • The difference is that when visiting a professor's office, you know who's gonna answer. Here it may be one of the bullies among the upper classmen looking for his breakfast victim. I don't know the ideal way of resolving this, and my own feelings about this are continuously swinging. The resentment towards copy/pasted questions is genuine. Maybe it is also about fear? Fear of the site, once largely populated by people taking math seriously, becoming overrun by a horde? More advanced questions getting buried under the pile of calculus questions? – Jyrki Lahtonen May 5 '13 at 16:33 • @Jyrki: I have no trouble finding more questions that interest me and require non-trivial effort than I have time to think about — and being retired, I’ve lots of time to think about them. – Brian M. Scott May 5 '13 at 18:23 • @zyx Faculty are obliged to help their students; they are paid for it. The SE network has no obligation to publish user-contributed content, such as questions. Their stated goal is to "build libraries of high-quality questions and answers".
SE leaves the determination of what is a high-quality question to the members of each SE community. The members may reasonably decide that, e.g., "Find the inverse Laplace transform of $(3s+4)/(s^2-9)$" is not a high-quality question, and should not be included in the library. This is why they are given closing & deletion tools by SE. – 75064 May 5 '13 at 22:38 • Whenever I'm new to any site I do a whole ton of reading before I do very much writing at all. Every site is a community, like a bunch of friends. There are inside jokes, ways of talking, etc. The best way to get involved is to watch and listen, then contribute when you know you have something new and valuable to contribute. On a site like this that means a lot of searching to see if someone has already asked an equivalent question or given a better answer. It's not just Math.SE that can seem cold to newcomers. Most forums I read are like that. – Todd Wilcox May 14 '13 at 19:44 ## 5 Answers Some people do make an effort, just saying. This -- along with some means of actually making this visible in one's question -- is all people have been asking for. We (meaning the "bitter" people) want people to ask questions like this, rather than like this. The bitterness you refer to comes from a recent upheaval: it was realized that the opinion that we've allowed far too much junk on MSE finally has enough support to break the long-standing deadlock with the opinion that we want everything on MSE, and that realization didn't go pleasantly. And so we're still in a transition period in the wake of that argument. I expect things to settle down as people get more acclimated to the change and can devote more effort towards working out the nuances of which questions require action and what that action should be. • Students can get stuck and have no idea how to proceed. For those who have been doing this a while, the idea of using a contour integral to find a Fourier Transform is almost second nature. However, for those just starting out, a gentle hint is all that is needed to answer the second question. Once they have tried the contour integral, they might amend their question with more specific difficulties. – robjohn May 5 '13 at 16:42 • That's quite a self-serving summary of the "recent upheaval", but even if correct, the fact of a transition period (which I agree is taking place, at least on some matters) would suggest allowing the transition to take effect on its own and not posting threads whose apparent intent is to manipulate moderator elections. – zyx May 5 '13 at 20:10 • @zyx: "threads whose apparent intent is to manipulate moderator elections"... what are you smoking and where can I get some? – Willie Wong May 6 '13 at 8:57 • @Willie: I’m not convinced that this question was posted with conscious intent to manipulate the moderator elections, but it is certainly open to that interpretation and could have that effect whether intentionally so or not. – Brian M. Scott May 6 '13 at 12:24 • How is discussing whether something ought to be a topic for an election "manipulating" that election? Manipulation has a distinct negative connotation here (as in, involving methods that ought not be used, for legal or ethical reasons). – Tobias Kildetoft May 6 '13 at 13:29 • @Tobias: If you read the comments over there, you know the answer to your question. If you didn’t, your question is premature. (Whether you agree with those comments is a separate issue.) – Brian M. Scott May 6 '13 at 13:34 • @BrianM.Scott I have been reading the comments.
And I don't disagree that the thread seems like an attempt to influence the election. But to me, using the word manipulation carries an implication of wrongdoing to some extent. – Tobias Kildetoft May 6 '13 at 13:39 • @Tobias: I think that zyx intended the negative implication, and I think that it’s defensible, though — as I said in my comment — I’m not convinced that it’s correct. – Brian M. Scott May 6 '13 at 13:43 • (Are we getting derailed here?) TBH, I'm not even sure exactly what I'm supposed to be accused of, and haven't been particularly inclined to humor zyx and prompt for clarification. The (intended) extent of my "manipulations" was to throw out the idea of separating the moderator election from the PSQ topic(s), after having developed a concern that being a recent hot-button topic might cause it to influence the elections more than it actually deserves, and I am surprised at the apparent negative reaction (unrelated (?) to zyx's accusations) to what I actually stated along those lines. – Hurkyl May 6 '13 at 18:13 • @BrianM.Scott: I thought zyx was referring to this post here. If that's what he/she intended, it would've been clearer had he/she linked to the thread being referred to. – Willie Wong May 7 '13 at 7:34 • @WillieWong, I spoke of "posting threads", which in SE parlance means addition of new questions to the meta. So it seemed clear enough that it refers to something beyond this answer (which is also troubled, as it pre-empts large swaths of the discussions on meta by smugly declaring everything to have been settled, with only a few minor details left to be arranged). If you no longer stand behind your first inflammatory comment, you have the power to edit or delete it. It doesn't matter all that much how the comment votes go, but trolling for "this guy is crazy" upvotes doesn't help anything. – zyx May 8 '13 at 19:51 • @BrianM.Scott, I thought of explaining the remark with an answer in the other thread, but lacked the time and inclination. If this comment discussion is an indication of interest I might do it. The present answer is worse in some ways, and probably more worthy of comment, but is not out of line with the way the site works. – zyx May 8 '13 at 22:12 I browsed the list of new users and took a look at how they were received. Most have not posted anything yet, but almost all of those who did received non-negative scores, and more often than not, answers. Even some less-than-stellar questions such as Find Laplace Transform of the function were answered. One question was downvoted and then upvoted twice: How can a matrix be Hermitian, unitary, and diagonal all at once. On the first two pages of the "New Users" list I found just one user whose question was downvoted and closed (link removed, since the question was reopened). Overall, I think the treatment of new users is not as rough as it appears from your post. (I admit that I have no data regarding unregistered users, who do not appear in the list.) MSE should be as much about camaraderie as about solving particular problems. I have seen many of my own friendships grow with peers as we explore some particular problem together. I would suggest that perhaps as a community, more "communing" could be done. Here I disagree. First of all, SE is not a social network. Even so, there is quite a bit of socializing going on in the main chat room, and it seems that a number of frequent users maintain contact via non-SE tools (facebook, email, skype, etc).
Given that the channels for networking are so plentiful, it should be easy for us to stick to math on the main site. • The rest of your points are fine, but I disagree about the last paragraph. Math.SE is very much about social networking. At least for me (I don't have a facebook account and don't plan to get one). – Jyrki Lahtonen May 5 '13 at 9:43 • The community sense should extend from the common bond of interest in mathematics. – Tristen May 5 '13 at 11:07 • I agree with your last comment. However, I actually disagree with your assessment of how new users are treated. Specifically, with regard to the question you cite which was downvoted and closed. Currently, the time stamps are: Question posed (10 hours ago). OP asked what they have tried (10 hours ago). OP provides satisfactory response (10 hours ago). This means, roughly, that within the hour the OP was showing a willingness to engage. And yet the question is closed! It does not make MSE seem friendly... (looking at the edits - moderator intervention? Seriously?!?) – user1729 May 5 '13 at 13:49 • My point is, the default does seem to be "close". Even if this isn't the majority default, the fact that it happens enough for new users (for example, Tristen, the OP here) to notice means we have a problem. – user1729 May 5 '13 at 13:51 • @user1729 Tristen's observations apparently come "from the comments to most of the posts I have seen over the last couple of days on meta". The second comment under the question also states that. Sure, reading meta debates is not something I would recommend to new users. (Eric Naslund said something of the sort in his resignation post). – 75064 May 5 '13 at 14:31 • @user75064 Ah, forgot about that part of his post. However, as I said in my comment to the question, whenever I suggest to my students that they ask questions here they do not want to. – user1729 May 5 '13 at 14:34 • @user1729 I wonder if your students' reluctance has to do with Math.se specifically or online fora in general. Are they comfortable with asking at Yahoo Answers or Art of Problem Solving? – 75064 May 5 '13 at 14:36 • @user75064 Just MSE. (And it isn't just a small subset of a single year group, nor are they studying basic calculus.) – user1729 May 5 '13 at 14:53 • @JyrkiLahtonen Like editing Wikipedia, contributing to an SE site is a social activity. But SE is certainly not designed with social networking in mind; there is no way to send a message to another user, for example. So, if the OP finds the site not very social, it may be by design: "This site is all about getting answers. It's not a discussion forum. There's no chit-chat." Of course I do not advocate going from "not very social" to "anti-social". – 75064 May 5 '13 at 14:59 • I think sometimes there is quite a bit of chit-chat going on. I don't usually visit our chatroom (which is designed for that) because my time zone is what it is. But in the comments many jokes and anecdotes are shared, mutual cheering for each other takes place, professional courtesies are exchanged and such. Some would like the site to be all business, but we also like to add some levity given half a chance. Granted, the platform is not ideal for that, but it is good enough. – Jyrki Lahtonen May 5 '13 at 16:15 • But conceding the point about the difference between social activity and social networking.
– Jyrki Lahtonen May 14 '13 at 19:49 Keep it short: this question, Vector field on an odd sphere, by this user: https://math.stackexchange.com/users/52042/b11 He "pinged" me six times to answer his question. I had already given everything necessary in my first comment. After I said I had done enough, he serially downvoted me and Zev. This person sees MSE as an alternative to ever reading his book or working. I think Asaf would know how to make up a list of probable students asking way too many questions. The worry here seems to be about new users. Fine. With no visible track record, it would help to know the book and what material is in the chapter immediately preceding the question asked. This would tell me that the person asking has actually read the material immediately preceding the question. Here is another winner: https://math.stackexchange.com/users/12796/victor Little interaction with me, I got weary early. • I sympathize: repeated pings of this kind are extremely obnoxious, apart from any integrity issues. There should be a way to walk away from a thread and never hear from it again. Hypothetically speaking: if you deleted your comment under the question, would the OP be able to ping you again? I could not find an answer in the meta.SO thread on @replies. I know it's impossible to ping a user who never participated in the thread; I don't know what happens if the user participated but then removed their input from the thread. – 75064 May 5 '13 at 18:24 • @user75064, I left it there. I won't say the pings failed to piss me off, but I did not have much trouble deciding what to do, which was to say that I had done enough. Years ago, I would regularly get caught giving some answer, then the OP (on MO or here) would say "You haven't done enough, write more", and I would. Lately, not as much. – Will Jagy May 5 '13 at 18:45 • I’m afraid that my reaction is So what? There are obnoxious people everywhere, and B11 is most unrepresentative of my experience. And I’ve little sympathy with complaints about Victor: he accepts answers, and he’s answered $13$ questions himself, with six acceptances. – Brian M. Scott May 5 '13 at 18:48 • Asaf doesn't know SQL (and has very little motivation to learn it too), so he is limited to the available data.SE queries, with the occasional minor modification. – Asaf Karagila May 5 '13 at 18:48 • @Brian, I was aware that you and I disagree on most MSE issues, certainly after your comments on Qiaochu's verbatim homework Meta proposals. I don't expect to convince many people of my viewpoint, but I do like to say it now and then. You hold the majority view, including the continuing moderators. Want to be moderator? It would involve less friction for you than for many others. – Will Jagy May 5 '13 at 19:01 • I have had a lot of multiple-ping experiences (these are called help vampires on MSO) but this has been more the case with the eager, show-maximum-effort posters who are being held up as the model of good question etiquette. I am coming more and more to two ideas from experience on the site: -- not answering questions where there is any ambiguity about what is a complete answer --- and viewing "shown work" as a negative unless the OP wants the work analyzed (so if somebody does post their efforts, it is as logical as WHYT to ask, "do you want that examined or is it decoration?"). – zyx May 5 '13 at 19:24 • @zyx, what is MSO? oh, and what is WHYT, perhaps what have you tried? I don't see any zyx on MO, do you use a different name?
– Will Jagy May 5 '13 at 19:38 • Meta Stack Overflow ( meta.stackoverflow.com/search?q=vampires ) and, yes, What Have You Tried. – zyx May 5 '13 at 19:42 • Will, I am a frequent MO user but under a different name. Maybe the two profiles will get combined when MO changes to SE 2.0, maybe not. Since these sites are different religions, in some sense, I separate the identities. – zyx May 5 '13 at 19:53 • I think we can save ourselves a lot of heartburn by simply recognizing why we are here on M.SE: because we enjoy math and people who enjoy math. I answer certain questions here because I love to solve the problems posed in those questions, and I love to interact with others on this site. As soon as it is no longer fun, I am done. I do not have a set limit, but when someone makes it clear that they are here because they do not like math, I will note their name and disappear. – Ron Gordon May 5 '13 at 19:53 • Now, that same user has started pinging people in other completely unrelated questions for help: math.stackexchange.com/a/382465/19440 – mrf May 5 '13 at 20:21 • @mrf, there is also a Chat feature for MSE. This has, probably against system design, remained a single room where people drop in and later drop out. There are plenty of newcomers who put a question on Main and then bug people on Chat about it forever. – Will Jagy May 5 '13 at 20:48 • @Ron: I sympathize, but I don’t myself actually mind helping those who don’t like mathematics if they’re willing to work at it. Some folks may remember a beginning calculus student whose name began with J who complained bitterly (and somewhat unreasonably) about the unreasonable expectations of his instructor and the author of his textbook but who posted a lot of questions and did actually persevere, if not always to very good effect; I can work with someone like that, though I could have done with less moaning. – Brian M. Scott May 6 '13 at 3:44 • @user1729, "They all hated maths, all claimed they were rubbish at it, and they are all going on to be maths teachers" - I now wonder how many "teachers" past and present were actually that way. *shudders* If you did manage to get through 'em, then good on you! – J. M. is a poor mathematician May 6 '13 at 10:46 • @BrianM.Scott: I think we are on the same page. I was complaining about the posters who do not want to work, but just want everything done for them. I totally enjoy working with people who are having a rough time grasping this or that concept, but are clearly giving it a go. – Ron Gordon May 8 '13 at 23:32 I am sorry that you have not been feeling welcome here. I hope that this feeling will go away when you see how much "good" is actually going on here. And I hope that we might convince you that your feelings are mostly a misunderstanding. You write: ...from the comments to most of the posts I have seen over the last couple of days on meta, nobody really wants to see anything from anyone new on MSE Remember that we are all people here and we make mistakes. Sometimes our mistake is that we speak/write before we think. However, I don't see how comments given recently could give you the impression that we don't want to see anything from anyone new. So let me be somebody who is now telling you that I want to see something from, in particular, new people. There seems to be a lot of bitterness, especially regarding the homework problems situation. That is true, and again, the words spoken aren't always the best.
As mentioned, we don't have anything against "new people", but it can be a bit frustrating when you see a question that just commands "do my homework". For me personally, there is an ethical problem here. As a teacher, I take cheating very seriously, and I would not want my students seeking out someone else to actually do their homework. It is fine helping people with homework questions. In my opinion a student doesn't learn much from just being given a solution. Learning happens when the student has to think for him/herself. And it is hard for me to be a part of something if I have a feeling that it is unethical. If the word on the street is that "Over on Math.SE they just give the answer away, and I don't have to do anything. It is great, I don't even have to do my own homework anymore", then do I really want my name associated with that? While I understand that many people may be looking for a handout, it is unfair to assume that is the case for anybody asking a homework question, or asking for help understanding or clarifying some aspect of a particular area. Some people do make an effort, just saying. Yes! Lots of people do make an effort. I see plenty of good homework questions and I see a lot of good answers. No one (that I know of) is saying that all homework questions are bad in and of themselves. The problem is the question that shows absolutely no effort. If I am to try and help you, then the best thing you can do is to tell me what the issue is. If you have a (homework) problem that you can't do, then tell me what it is about the problem that is causing trouble. If you have absolutely no thoughts about the problem, you could tell me what class the problem is from, where you came across the problem, and so on. Those questions, I would say, you should always be able to answer. And writing just a bit about where the problem comes from and a bit about your background ... well, that is effort in my mind. I don't know your background. If you have never heard about the product rule, then it might not be that helpful for me just to show how you solve the problem using the product rule. I might want to tell you about the product rule. Oh, and no offence taken :) • It has been a great pleasure to see how this discussion has played out. – Tristen May 5 '13 at 22:19 Instead of "What have you tried?" perhaps we should write "If you show us what you have tried, perhaps we can figure out how that can be the start of a solution." Oral statements have an associated tone of speech. Written statements lose the tone. A statement of some sort needs to indicate the tone. • The words "quality standards" have a similar effect. To see how current practices are working, the daily closed list has examples like this ( math.stackexchange.com/questions/391115/convergence-tests ) and this ( math.stackexchange.com/questions/385739/graduate-linear ) . There is unnecessary antagonism and drama that can be avoided by silent methods like downvote, close vote, do nothing (and let others decide if they want to answer), add filterable tag. – zyx May 14 '13 at 19:13 • @zyx I do not see how a downvote is helpful in such situations. – user1729 May 16 '13 at 10:10
2019-06-20 12:12:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41155150532722473, "perplexity": 1027.7378891089443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999210.22/warc/CC-MAIN-20190620105329-20190620131329-00256.warc.gz"}
https://mathsmartinthomas.wordpress.com/2017/10/14/maths-prize-for-3-nov-2017-geometry-with-complex-numbers/
# Maths prize for 3 Nov 2017: geometry with complex numbers Geometry with complex numbers What are the coordinates of the reflection of the point (1,0) in the line $y=mx$? A Freddo and fame for every well-reasoned attempt at a solution. Answers to Mr Thomas by Friday 3 November.
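(A worked sketch appended for reference; it is not part of the original post, so treat it as one possible approach rather than Mr Thomas's intended solution.) Writing points of the plane as complex numbers, the line $y=mx$ is the line through the origin at angle $\theta$ with $\tan\theta = m$, and reflection in that line is conjugation followed by rotation through $2\theta$: $z \mapsto e^{2i\theta}\bar{z}$. For $z = 1$ this gives $e^{2i\theta} = \cos 2\theta + i\sin 2\theta = \frac{1-m^2}{1+m^2} + i\,\frac{2m}{1+m^2}$, so the reflection of $(1,0)$ would be $\left(\frac{1-m^2}{1+m^2},\ \frac{2m}{1+m^2}\right)$.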
2018-06-18 22:53:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6315188407897949, "perplexity": 3538.01467536201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861456.51/warc/CC-MAIN-20180618222556-20180619002556-00340.warc.gz"}
https://newey.me/manually-calculating-an-svms-weight-vector/
# ./backlog: Charlie's blog ## Manually Calculating an SVM's Weight Vector Note: This post assumes a level of familiarity with basic machine learning and support vector machine concepts. Let's say that we have two sets of points, each corresponding to a different class. Let's call these classes positive and negative (but really they could be any binary class - pink or not pink, cucumber or not cucumber, and so on). For simplicity, I'm assuming two dimensions, but really the only requirement is that your data is separable by a hyperplane. So, let's make up some input data: $positive = [(1, 1), (2, 2), (2, 0)]$ $negative = [(0, 0), (1, 0), (0, 1)]$ Let's say that we want to train a support vector machine on these input classes and then determine the weight vector of the optimal decision hyperplane ("optimal decision hyperplane" is SVM-speak for "the line that separates the two input classes with the biggest distance between them" - this will become clear in a minute). The weight vector is simply a vector perpendicular to the optimal decision hyperplane. Let's plot our points. Are they linearly separable? (Yes, they are). Let's draw the positive and negative canonical hyperplanes, and then fit our optimal decision hyperplane. Notice that the red line (the optimal decision hyperplane) maximises the perpendicular distance between the positive and negative classes. So, what's our weight vector? In this example, it's actually pretty straightforward to determine by inspection (we're simply looking for the normal vector to the optimal decision hyperplane), but that's no fun. However, it's good to know what result we're after - so to give you an intuition, I've gone ahead and plotted it. We're looking for the vector that passes through (0, 0) and (1, 1). So now we know what we're looking for, let's go ahead and figure it out. We know that the equation of a line in an SVM follows this constraint: $w_1 x + w_2 y + w_3 = 0$ Our weight vector is composed of w1 and w2. If we examine our positive and negative canonical hyperplanes, we can see that there are two points that lie on each. These are the basis for our support vectors - the vectors that allow the SVM classifier to decide on the optimal decision hyperplane. $supportVectors_{positive} = [(1, 1), (2, 0)]$ $supportVectors_{negative} = [(1, 0), (0, 1)]$ So, we can plug these points into our line equation, and solve linearly for w1, w2, and w3. Note that for the positive examples the expression is set equal to 1, and for the negative examples to -1. This is just SVM convention (it's arbitrary but it works). $pos_{1} = 1w_{1} + 1w_{2} + w_{3} = 1$ $pos_{2} = 2w_{1} + 0w_{2} + w_{3} = 1$ $neg_{1} = 1w_{1} + 0w_{2} + w_{3} = -1$ $neg_{2} = 0w_{1} + 1w_{2} + w_{3} = -1$ We can try to solve this by subtracting neg1 from pos1. The w3 and w1 terms cancel out, leaving us with... $pos_{1} - neg_{1} \implies w_{2} = 2$ We can now substitute the value for w2 into neg2... $2 + w_{3} = -1 \implies w_{3} = -3$ We can now substitute the values for w2 and w3 into pos1... $w_{1} + 2 - 3 = 1 \implies w_{1} - 1 = 1 \implies w_{1} = 2$ This gives us our new line equation - which can be simplified down to... $2x + 2y - 3 = 0 \implies x + y - \frac{3}{2} = 0$ This implies that our vector passes through (0, 0) and (1, 1), because the coefficients of x and y (the weight vector components) are 1. Note that you don't need to divide the line equation by 2, as above.
Either (1, 1) or (2, 2) is a perfectly sensible answer for the weight vector components, because the weight vector is normal to the optimal decision hyperplane - that is, it simply encodes a direction. So we now have the components of our SVM weight vector. The SVM weight vector is composed of w1 and w2, so... $w = \begin{bmatrix} w_{1} \\ w_{2} \end{bmatrix} \implies \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ And you're done. Does this fit with what we predicted earlier? (Yes, it does).
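(A quick numerical cross-check of the linear solve above, appended for reference; this is not Charlie's own code, and it assumes NumPy is available.)

```python
import numpy as np

# Each row is (x, y, 1) for one support vector; the targets are +1 / -1.
A = np.array([[1.0, 1.0, 1.0],   # positive support vector (1, 1)
              [2.0, 0.0, 1.0],   # positive support vector (2, 0)
              [1.0, 0.0, 1.0],   # negative support vector (1, 0)
              [0.0, 1.0, 1.0]])  # negative support vector (0, 1)
b = np.array([1.0, 1.0, -1.0, -1.0])

# Four equations, three unknowns: overdetermined but consistent,
# so a least-squares solve recovers the exact solution.
w1, w2, w3 = np.linalg.lstsq(A, b, rcond=None)[0]
print(w1, w2, w3)  # ~2.0 2.0 -3.0, i.e. weight vector direction (1, 1)
```

The same direction (up to scaling) should fall out of a hard-margin linear SVM fitted to the six points, e.g. scikit-learn's SVC with a linear kernel and a large C.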
2019-02-16 21:32:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7450310587882996, "perplexity": 427.10399550816663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481122.31/warc/CC-MAIN-20190216210606-20190216232606-00562.warc.gz"}
https://www.mersenneforum.org/showthread.php?s=7bb5275ddf206f227dca72a361093e00&t=25395
mersenneforum.org Work for chosen exponent 2020-03-22, 11:23 #1 Petnek     Mar 2020 Czechia, Tábor 2^4 Posts Work for chosen exponent Hello, is it possible to make calculations for a chosen exponent? For example this one. It's the closest exponent to my date of birth and I would appreciate a chance to take care of it Is there a way to make calculations for it? If yes, how? And in which order? For example, TF is not possible to reserve due to the current active range. Thank you 2020-03-22, 14:28 #2 LaurV Romulan Interpreter     Jun 2011 Thailand 2×13×337 Posts That exponent generates a Mersenne number that already has a factor, therefore there is nothing more to calculate for it. If you want to stay as close to it as possible, go to the link you posted, place your mouse over the little arrows in the upper-left table (where the exponent is printed in big letters) and look for "next exponent without factor" respectively "previous exponent without factor". And pick those, or some other from the respective range. You will have no competition there (for those who didn't want to click the link, the range is 198M), because most other users are working either below 110M, or at 332M to hunt the EFF prize, so most probably there will be no other user competing for it, and you can get that exponent as simply as using the manual test page and filling in what exponent you want, or just adding it to worktodo.txt using a text editor. Last fiddled with by LaurV on 2020-03-22 at 14:30 2020-03-22, 16:01 #3 Petnek     Mar 2020 Czechia, Tábor 2^4 Posts OK, where a factor is found, there is no more work. So I chose this one. Until when is it reasonable to do TF? OK, this I found: for exponents up to 227,300,000 it's 2^76, but here, next to GPU72, it's already 2^78. Why? TF I can reserve for GPU, but for CPU I'm able to assign nothing. Why is that? I'm getting this error msg: Code: Error code: 40 Error text: No assignment available meeting CPU, program code and work preference requirements, cpu_id: 2233562, cpu # = 0, user_id = 28284 You wrote I can just add it to worktodo.txt. How can I get the hash-like part of the line written into that file? I'm new to primes, so I'm not sure what is for what. Is there some sequence of tests? For example first TF, then P-1, PRP, ECM and finally LL? OK, this I found too: TF > P-1 > LL > DC. What are PRP and ECM for, then? Last fiddled with by Petnek on 2020-03-22 at 16:30 2020-03-22, 18:15 #5 Petnek     Mar 2020 Czechia, Tábor 2^4 Posts Yes, I have a GPU and I'm already TFing. In these times not so fast GTX960 The sequence of tests is clear to me, I hope When or how is it possible to assign a P-1 test to that exponent? When I can do that: is it related to not enough TF tests, or am I somehow not meeting some requirements for categories? I'm getting that error msg mentioned in the previous post. 2020-03-22, 18:27 #6 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 4,463 Posts Welcome to the forum. You might find some of the background or reference info at https://mersenneforum.org/showthread.php?t=24607 useful. Historically LL was used for primality testing. Because of the excellent error detection of the Gerbicz error check, PRP is recommended instead of LL. Assuming no factors are found, the order is typically TF to the level merited by effort versus odds of finding a factor, P-1 factoring similarly, PRP and eventually PRP DC.
Last fiddled with by kriesel on 2020-03-22 at 18:30 2020-03-22, 18:30 #7 LaurV Romulan Interpreter     Jun 2011 Thailand 2·13·337 Posts (I edited my post above, crosspost, to add a few phrases) There should be no issue in getting a manual assignment in that range, for either P-1 or LL/PRP. You just go to the GIMPS manual assignments page and fill in P-1 and the range close to your exponent. I just got this right now (which I am not going to do, most probably I will unreserve, but just to test the reservation form): Code: PFactor=33360C59858ED75AF8680544E5387FA8,1,2,198000067,-1,76,2 You can also add such lines manually to your worktodo, and let P95 communicate with the server to record them (use N/A for the assignment key when you add manual work). The P-1 work should be done AFTER the TF, or (depending on your system) before the last bit of TF, otherwise the work is in vain in case TF would have found a factor faster. So, if you decide to TF to 82 bits, the sequence should be TF to 81, then P-1 for about a 3% to 5% chance of a factor, then TF to 82, then PRP (or LL, but right now LL is frowned upon around here; people want to switch to PRP for newer assignments, due to better "tolerance" for errors). Last fiddled with by LaurV on 2020-03-22 at 18:34 2020-03-22, 19:03 #8 Petnek Mar 2020 Czechia, Tábor 2^4 Posts Quote: Originally Posted by kriesel Welcome to the forum. You might find some of the background or reference info at https://mersenneforum.org/showthread.php?t=24607 useful. Historically LL was used for primality testing... Thank you for the welcome. I'm just going through all of these threads; interesting information indeed. OK, PRP is better than LL. Quote: Originally Posted by LaurV There should be no issue in getting a manual assignment in that range I'm getting that error msg when trying to assign that exact exponent, no range around it. Is that not a good way? How do I assign P-1 to just the chosen exponent? Thank you both for the further explanation of the sequence of tests. I'm just getting through all those recommended threads, so much info... 2020-03-22, 22:46 #9 Prime95 P90 years forever!     Aug 2002 Yeehaw, FL 71×101 Posts Getting a PRP assignment implies that you have a P-1 assignment too. Both prime95 and the latest gpuOwl will do the P-1 automatically before running the PRP test. Last fiddled with by Prime95 on 2020-03-22 at 22:46 2020-03-23, 10:12 #10 Petnek     Mar 2020 Czechia, Tábor 2^4 Posts Thanks for all your answers. So far I haven't found an answer to this one... For which tests is it wise to use a GPU? Are GPUs faster at anything other than TFing? In my case I have a GTX960, Ryzen 5 2600, i5-2410M and partially an i7-6820HQ Last fiddled with by Petnek on 2020-03-23 at 10:12 2020-03-23, 16:49 #11 Uncwilly 6809 > 6502     Aug 2003 101×103 Posts 2^2×3×7×103 Posts In general: GPU for TF, it is almost like they were designed for this. P-1 is good on a CPU. Primality checks (PRP or LL): do them on a CPU. This will best help the project along. As always, do what gives you pleasure. There are times that using a GPU to do a first time check is warranted (as a platform independent check of a reported prime).
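To make LaurV's tip in post #7 concrete (an illustrative addition, not part of the original thread): the same reservation line can be added to worktodo.txt by hand, with the assignment key replaced by N/A, e.g.
Code:
PFactor=N/A,1,2,198000067,-1,76,2
Prime95 should then register the work with the server the next time it communicates, as LaurV describes.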
2020-10-01 23:37:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4404037892818451, "perplexity": 4236.154696025472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402132335.99/warc/CC-MAIN-20201001210429-20201002000429-00781.warc.gz"}
https://www.physicsforums.com/threads/eliminating-the-parameter.509857/
# Homework Help: Eliminating the parameter 1. Jun 26, 2011 ### autre I have the parametric function x(t) = (1-t^2)/(1+t^2), y(t) = 2t/(1+t^2) and need to eliminate the parameter and find a Cartesian equation. I've tried to substitute t = tan u, then x(t) = cos(2u) and y(t) = tan(2u). From that I get y = sin(2x)/x. However, when I entered the original parametric function into a grapher, I get an entirely different graph. Where did I go wrong? 2. Jun 26, 2011 ### eumyang I'm not getting that for the y(t) equation. I think it's because you got your identity confused. $$\frac{2\tan u}{1 - \tan^2 u} = \tan 2u$$ (minus in the denominator) But here we have: $$y(t) = \frac{2\tan u}{1 + \tan^2 u} = \frac{2\tan u}{\sec^2 u} = ...$$
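(A note appended for reference, not part of the original thread.) Carrying eumyang's hint one step further: $\frac{2\tan u}{\sec^2 u} = 2\sin u\cos u = \sin 2u$, so with $x = \cos 2u$ and $y = \sin 2u$ the Cartesian equation is $x^2 + y^2 = 1$; the parametrization traces the unit circle, with the point $(-1, 0)$ only approached in the limit $t \to \pm\infty$.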
2018-06-18 10:08:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8620043396949768, "perplexity": 2425.27051119491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860168.62/warc/CC-MAIN-20180618090026-20180618110026-00308.warc.gz"}
https://www.electro-tech-online.com/threads/aux-battery-project.95922/
# Aux battery project Status Not open for further replies. #### yurizilla ##### New Member Good day, I'm working on a project for my Moped. I'm looking to add an additional headlight to the bike; it will be used at a weekend house for fun. My problem is that I'm trying to find the cheapest, yet effective, way to make this work. My research so far has led me to the following equation: For 12V systems: Headlamp bulb rated 55W, $40. Motorcycle battery (Autozone 14Ah 12N14-3A), $46. 35W/12V = 2.9 amps (draw). 14Ah/20hrs = 0.7 amps @ 20 hrs, 1.4 amps @ 10 hrs, 2.8 amps @ 5 hrs. 12V battery charger, $40. So if I buy a motorcycle battery and hook it up to a car fog light that is rated 35W, I can use it for about 5 hours and then recharge it overnight and during the next day (the battery will be removable). If I want to put back what I use during the night, then I reverse the math: say I need a charger that can deliver 1.4 amps over 10 hours charging time, and the battery must be able to accept this much. What specs will list this, if it is a correct assumption for charging? I've also read about deep-cycle batteries; it sounds to me like they are used from full charge to discharge rather than for cranking. Is this a necessity or will the motorcycle battery suffice? So I'm looking at adding a headlight for about $120. Could I make this project cheaper with some compromise? I'm thinking: Moped headlight 25W, $15; 6V charger, $20. But how many Ah would I need in a 6V battery? Is it 25W/6V = 4.1 amps draw, and 21Ah/5 = 4.2? So I need to find a 6V battery with 21Ah; for the same wattage as the 35W fog light I'd need a 30Ah 6V. I see 4.5Ah 6V batteries for 20 bucks, so I would need five of them, which then costs 100 bucks, so I break even at $135; in fact I lose 10W of lighting. Does anyone know where to get cheap 6V batteries? They seem close to 12V prices in the bottom range, or they seem small, but I can't find the rating. I hope the post isn't too long or has too many questions all at once. Thanks #### birdman0_o ##### Active Member Just to point something out: make sure they are deep-cycle lead acid, not the normal car battery type, as those are not meant for continuous draw. #### MikeMl ##### Well-Known Member Right, most lead-acid "motorcycle" batteries are just a small starting battery, and are not suitable for deep discharge. #### marcbarker ##### New Member Hey, many many years back I used to have a bicycle with a really bright headlamp; it had a 6 V (40W?) motorcycle headlight bulb. I used C cell NiCad batteries to power it, and I used the tiny little generator to charge the batteries. There was a light that went out when the batteries were fully charged so I could click the generator away from the tyre. I only used it for shortcuts through dark alleyways and unlit sidestreets. #### Ghosty_Ghoul ##### New Member It's now law here for cyclists to have lights in the dark. #### audioguru ##### Well-Known Member A little 14Ah lead-acid battery will supply 0.7A for 20 hours or 1.4A for 10 hours. It might power your 2.9A light bulb for a couple of hours. If you turn the light on and off a lot then the battery will last for a shorter time because a 2.9A bulb draws 29A for a moment each time it is turned on. #### marcbarker ##### New Member A little 14Ah lead-acid battery will supply 0.7A for 20 hours or 1.4A for 10 hours. If you turn the light on and off a lot then the battery will last for a shorter time Does the "20 hr Ah rate" still hold true at the 10 hr rate?
There's probably not much difference between the two... How many times does the lamp need to be switched on to use up an Ah of capacity? #### audioguru ##### Well-Known Member If the battery is rated at a 20 hour rate then it will last less than half the time for double the current: 0.7A for 20 hours, or 1.4A for 8 hours, or 2.9A for maybe 2 hours. The light will dim slowly the entire time. Don't turn it on and off a lot. There is a huge strain on an incandescent light bulb when it turns on. #### marcbarker ##### New Member I'd better not use my car's turn signal lamps then; they flash on and off at 1.5 Hz the whole time I'm using them! If you turn the light on and off a lot then the battery will last for a shorter time So anyway, how many switch-on surges do you think would amount to 1 Ah's worth of extra drain? Last edited: #### audioguru ##### Well-Known Member A light bulb lasts 1000 hours if it is not turned on and off too much. It might last "only" 100 hours if it is flashing all the time. 100 hours is a long time in a car. Instead of wasting battery power heating incandescent light bulbs, why not use cool LEDs? #### marcbarker ##### New Member Yes, I use red LEDs in my rear lights. Much nicer looking! I connected a load of them in series-parallel, with a few ohms in series with the ground return of both lamps. If you turn the light on and off a lot then the battery will last for a shorter time So anyway, how many switch-on surges do you think would amount to 1 Ah's worth of extra drain? #### audioguru ##### Well-Known Member So anyway, how many switch-on surges do you think would amount to 1 Ah's worth of extra drain? You won't know unless you find out how quickly the hot filament cools and how fast the flashing is. #### marcbarker ##### New Member Oops, cross purposes, sorry... If you turn the light on and off a lot then the battery will last for a shorter time because a 2.9A bulb draws 29A for a moment each time it is turned on. I was asking about how much shorter a time this is with the OP's battery. That's what I mean when I ask how many of these 'switch-on surges' (of the OP's battery) do you think would amount to 1 Ah's worth of extra drain? I'm interested, since you'd specifically mentioned "29 A" in posting #6 Last edited: #### marcbarker ##### New Member It's now law here for cyclists to have lights in the dark. I thought that had been the law in the UK here for the last 50 years? Not that it makes much difference; nearly every night cyclist where I am has no lights! #### audioguru ##### Well-Known Member I think I saw cars without headlights at night in the city in England? Here cars must drive at night toward traffic with low beams. High beams can be used if there is no oncoming traffic. Cars since 1989 have "daytime running lights" that turn on automatically. Most cars have them as dimmed low beams or "front turn signal lights". But Chrysler cars (and Jeeps) use full-brightness high beams to blind oncoming traffic on cloudy days. Most bicycles here use very dim and almost useless LED lights at night. Status Not open for further replies.
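(An illustrative aside appended for reference, not from the thread: a rough sketch of audioguru's point that the rated 14Ah shrinks at higher discharge currents, using Peukert's law with an assumed exponent; real batteries vary, so treat the numbers as indicative only.)
Code:
# Rough runtime estimate via Peukert's law: t = H * (C / (I * H)) ** k.
# C = rated capacity (Ah) at the H-hour rate, I = actual draw (A).
# k = 1.2 is an ASSUMED exponent; lead-acid batteries typically run ~1.1-1.3.
def runtime_hours(current_a, capacity_ah=14.0, rated_hours=20.0, k=1.2):
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

print(runtime_hours(0.7))  # ~20 h at the rated 0.7 A draw
print(runtime_hours(2.9))  # ~3.6 h, well under the naive 14/2.9 = 4.8 h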
2022-10-03 23:24:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3427046537399292, "perplexity": 2972.092329443655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00592.warc.gz"}
https://stats.stackexchange.com/questions/304919/what-is-the-virtue-of-a-causal-arma
# What is the virtue of a causal ARMA?

An ARMA whose autoregressive polynomial has no roots on the unit circle has a unique stationary solution, and it is of the form $\sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}$, where the $Z_i$'s are white noise, and where $\sum |\psi_j|<\infty$. A stationary ARMA process with no roots on the unit circle is called causal if it can be further written as $\sum_{j=0}^{\infty} \psi_j Z_{t-j}$, where the $Z_i$'s are white noise, and where $\sum |\psi_j|<\infty$.

What is the virtue of a causal ARMA? In what way is it more helpful than merely having an ARMA with no roots on the unit circle?

For example, some people define an ARIMA as a time series that is a causal ARMA after $d$ many instances of differencing. Why causal? Why not define an ARIMA as being a time series that is an ARMA with no roots on the unit circle after $d$ many instances of differencing?

Bonus question: same question but for invertibility.
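For a concrete picture of what causality rules out, consider the standard AR(1) example (a worked illustration added here, not part of the original question): take $X_t = \phi X_{t-1} + Z_t$ with $|\phi| > 1$. The AR polynomial $1 - \phi z$ has its root $1/\phi$ strictly inside the unit circle, so the process has a unique stationary solution, namely

$$X_t = -\sum_{j=1}^{\infty} \phi^{-j} Z_{t+j},$$

but this representation depends on future noise. A causal process, by contrast, is a function of past and present shocks only, which is what makes recursive forecasting and simulation from past data meaningful.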
2021-10-28 05:44:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8827075958251953, "perplexity": 235.2798569640315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00376.warc.gz"}
https://socratic.org/questions/what-are-some-common-mistakes-students-make-with-chemical-equations
# What are some common mistakes students make with chemical equations?

May 30, 2014

Perhaps the most common mistake is treating familiar molecules, such as $CO_2$, as if they were already balanced in a chemical equation. Here's an example of what I mean:

$C_6H_{12}O_6 \to C_2H_5OH + CO_2$

Balanced, it is:

$C_6H_{12}O_6 \to 2C_2H_5OH + 2CO_2$

Students tend to forget that $CO_2$ needs to be balanced; the fact that it has a familiar chemical formula doesn't help. It is also really common to forget that $CO_2$ has two atoms of oxygen, which is an essential piece of knowledge when balancing equations. The subscript does not mean that there are two molecules of $CO_2$; it merely means there are two atoms of oxygen present in the molecule.
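A quick atom tally confirms the balance (my own worked check, not part of the original answer):

$\text{C: } 6 = 2(2) + 2(1)$
$\text{H: } 12 = 2(6)$
$\text{O: } 6 = 2(1) + 2(2)$

Here each $C_2H_5OH$ contributes 2 C, 6 H, and 1 O, and each $CO_2$ contributes 1 C and 2 O, so every element matches across the arrow.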
2019-10-22 16:02:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6802663803100586, "perplexity": 395.40046810575507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00025.warc.gz"}
https://open.oregonstate.education/intermediatemicroeconomics/chapter/module-22/
22 Asymmetric Information

The Policy Question: Should the Government Mandate the Purchase of Health Insurance?

A key component of the Affordable Care Act, passed by Congress and signed into law by President Obama in 2010, was the individual mandate. This compelled individuals to purchase health insurance or face a large financial penalty. This provision proved to be the most controversial part of the new law, yet many economists believed it to be the crucial piece that guaranteed the law's success.

Exploring the Policy Question

1. Why do many economists think it essential that the individual mandate be a part of the Affordable Care Act?
2. Is the individual mandate the only way to achieve universal health insurance? What other ways do societies accomplish this?

Learning Objectives

22.1 The Market-for-Lemons Problem
Learning Objective 22.1: Describe the lemons problem in markets with asymmetric information.

22.2 Adverse Selection
Learning Objective 22.2: Explain the term adverse selection and how it affects insurance markets.

22.3 Principal-Agent Models
Learning Objective 22.3: Describe how asymmetric information can affect principal-agent relationships.

22.4 Moral Hazard
Learning Objective 22.4: Explain moral hazard and how it can affect the efficiency of markets.

22.5 Policy Example: The Affordable Care Act
Learning Objective 22.5: Explain how adverse selection models explain the importance economists place on the individual mandate in the Affordable Care Act.

22.1 The Market-for-Lemons Problem

Learning Objective 22.1: Describe the lemons problem in markets with asymmetric information.

One of the assumptions of the efficient-markets hypothesis is that buyers and sellers are completely informed; they know everything there is to know about the goods for sale in a market, like their quality. This is critical for achieving efficiency, since a buyer's willingness to pay for a good depends on them knowing the value of the good to themselves. If a buyer, for example, overestimates the value of a good, perhaps by overestimating its quality, then they might end up buying it at a price that exceeds their willingness to pay for the good once its quality is revealed. This would be an exchange that lowers total surplus, and the market therefore cannot be efficient.

Asymmetric information describes a situation where one side of an exchange, the buyer or the seller, knows more about the product than the other. Generally, we might expect the sellers of goods to know more about them than buyers, but not always: a collector of antiques might know more about the value of an item they see for sale by someone who found an old item in their attic. Items whose value is not immediately known to a buyer or seller are said to have hidden characteristics.

Asymmetric information can also arise when agents' actions are not visible to all parties. For example, a customer may hire a mechanic to fix their car, but they do not observe the actions the mechanic actually takes. The mechanic might say they replaced a part when, in fact, they did not. We call these hidden actions.

The classic example of this is called the market for lemons, after George Akerlof's 1970 paper of the same name. In this scenario, sellers of used cars possess more information about the true quality of the cars than buyers do, since the owners of used cars have been using them and know their faults.
In a simplified version of this model, consider a world in which there are only two types of cars: good quality cars and bad quality cars (or "lemons," which in the United States is slang for a bad car). There are equal numbers of both in the marketplace, and both buyers and sellers know this. Buyers value good used cars at $10,000 and lemons at $5,000. Sellers are willing to sell good used cars for $8,000 and lemons for $3,000. For clarity, let's suppose that there are exactly one hundred cars of each type and over two hundred buyers, each willing to buy either car, given the right price.

Immediately, we can see that in a world of full information, an efficient outcome will arise: owners of good cars will sell them at a price between $8,000 and $10,000, and owners of lemons will sell them at a price between $3,000 and $5,000. Each car sold will generate a total of $2,000 in total surplus regardless of the price agreed to, as this is the difference between the sellers' minimum willingness to accept and the buyers' maximum willingness to pay. In the end, the sale of two hundred cars will yield a total surplus of $400,000 (or two hundred cars times $2,000 surplus for each).

Now consider a world of asymmetric information, where the sellers know the quality of the cars they are selling but the buyers do not. Sellers of both types of cars have an incentive to claim that their cars are of good quality. Buyers understand the incentive to misrepresent the true quality of the car and therefore do not believe the sellers' claims. So what happens in the market? Let's keep things simple by assuming that buyers are risk neutral. In this case, the buyers know that with equal amounts of both types of cars in the market, choosing one at random will yield an expected value of $7,500.

$\text{Probability of a good car} = .5,\ \text{value of a good car} = 10,000$

$\text{Probability of a bad car} = .5,\ \text{value of a lemon} = 5,000$

$\text{Expected value} = (.5)(10,000) + (.5)(5,000) = 7,500$

This means that no buyer is willing to pay more than $7,500 for a used car. Since this is true, no seller of a good car is willing to sell, as $7,500 is lower than their minimum willingness to accept. Because of this, no owner of a good-quality used car will sell it, and the only cars for sale on the market will be the lemons. Both buyers and sellers can figure this out, and in the end, buyers know that only lemons are for sale and will not offer more than $5,000. This leaves a market for only lemons in which all one hundred lemons will be sold for a price between $3,000 and $5,000. Each transaction generates $2,000 in total surplus, for a total of $200,000. This is the market failure: the asymmetric information problem leads to a deadweight loss of $200,000, or the difference between the total surplus in the full-information marketplace and the total surplus in the market with asymmetric information. The fact that the good-quality cars disappear from the market is called adverse selection, a topic we will focus on in the next section.

22.2 Adverse Selection

Learning Objective 22.2: Explain the term adverse selection and how it affects insurance markets.

Adverse selection occurs when the more desirable participants or products in a market withdraw due to asymmetric information. This could be better-quality products, better-quality consumers, or better-quality sellers. The result is that consumers and producers may not make transactions that are socially beneficial, transactions that would yield positive producer and/or consumer surplus.
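The used-car numbers above make this concrete. Here is a minimal numeric sketch (the values mirror the chapter's example; the code itself is an added illustration, not part of the text):

```python
# Two-type lemons market: total surplus with full vs. asymmetric information.
wtp = {"good": 10_000, "lemon": 5_000}   # buyers' willingness to pay
wta = {"good": 8_000,  "lemon": 3_000}   # sellers' willingness to accept
n_each = 100                             # cars of each type

# Full information: every car trades; surplus per car = WTP - WTA.
full_info = sum(n_each * (wtp[q] - wta[q]) for q in ("good", "lemon"))

# Asymmetric information: risk-neutral buyers only know the 50/50 mix.
expected_value = 0.5 * wtp["good"] + 0.5 * wtp["lemon"]   # 7,500
good_cars_trade = expected_value >= wta["good"]           # False: good cars exit
asym = n_each * (wtp["lemon"] - wta["lemon"])             # only lemons trade

print(expected_value, good_cars_trade, full_info, asym, full_info - asym)
# 7500.0 False 400000 200000 200000  -> the $200,000 deadweight loss
```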
The ability of the more informed agents to exploit their advantage in this way is the market failure associated with asymmetric information.

Insurance markets are a classic example of market failures from information asymmetries. Information asymmetries exist in insurance markets because there are both hidden actions and hidden characteristics: insurers cannot monitor the actions and private information of the insured. For example, in car insurance, the insurer does not know how carefully a driver actually drives. In health insurance, the insurer might not know about pre-existing conditions and the lifestyle of their insured. Because of this information asymmetry, insurers have to charge a price that is an average of the costs of their insured. This price is often too high for the most desirable consumers, causing them to not purchase insurance. This decision to stay out of the market makes the situation worse, as the average cost of the insured increases when the lowest-cost consumers exit, further raising the price of insurance and forcing even more of the healthier consumers from the market. This outcome is inefficient because if insurance companies had full information about their clients, they could charge each a price that is above the cost of insurance but less than that cost plus the risk premium (the amount above the cost that consumers are willing to pay to avoid risk), which would allow these healthier clients to purchase insurance and create more surplus in society. (A small simulation of this premium spiral appears at the end of the chapter.)

22.3 Principal-Agent Models

Learning Objective 22.3: Describe how asymmetric information can affect principal-agent relationships.

Principal-agent relationships are situations in which one person, the principal, pays another person to perform a task for them. In its most basic form, this describes the employee-employer relationship. But it can also describe a situation in which a car owner pays a mechanic to fix their car, a homeowner hires a housecleaner, or many other everyday situations. These relationships are often subject to asymmetric information because agents can act in ways that are unobserved by the principal. If the individual incentives are not aligned, the agents might take actions contrary to the interests of the principal.

For example, the owner of a car in need of repair would like the car to be repaired properly and as inexpensively as possible. The mechanic's incentives might be to repair the car properly but to maximize their revenue from the situation. So, for example, the mechanic might like to perform extra, unnecessary work in order to earn more money. Since the car owner does not know that the extra work is unnecessary, the mechanic can claim it is necessary and earn the extra revenue.

Similarly, an employer's incentive is to have each employee work hard and efficiently. But a worker's incentive might be to expend as little effort as possible while still performing their assigned duties. If the employer cannot detect that the employee is not giving their maximum effort, the employee is free to give a diminished effort. In these situations, it would be unsurprising for employees to give less than their full effort. The key to these situations of misaligned incentives is the presence of hidden actions: efforts (or lack thereof) on the part of one or both parties that are unobserved by the other.
For example, if a job simply requires an agent to put two component pieces of a computer together on an assembly line, the principal can presumably observe the agent's actions by monitoring the number of parts they assemble in an hour. They can also test to make sure the connection of each set of two parts is good. In this case, it is easy to write a contract to align incentives: specify the number of parts the agent is required to join properly in an hour. But consider the situation where the principal cannot observe the effort of the agent. For example, in retail sales, it might be difficult to observe each interaction with customers. We call these hidden actions, and they make it difficult or impossible to write a contract specifying a particular level of effort.

Principal-agent relationships where there is hidden action can be addressed through contracts that seek to align incentives by rewarding performance instead of effort. A typical one in the case of retail sales is the use of commissions, where the agent gets a percentage of every sale they make (or a percentage of sales over a certain threshold). This helps align the incentives of the principal, who wants to make as much revenue as possible, with those of the agent, who, under a commission contract, wants to maximize the value of their own sales. Another example is CEO pay, where the CEO of a company is compensated partially based on the performance of the firm itself.

22.4 Moral Hazard

Learning Objective 22.4: Explain moral hazard and how it can affect the efficiency of markets.

Another consequence of hidden action is moral hazard, where people who have entered into a contract to mitigate the cost of risk engage in riskier behavior because the costs have diminished. An insurance contract that covers the cost of damage from an automobile collision might cause the holder of the contract to drive in a less cautious manner, increasing the risk of an accident. Borrowers who have limited-liability contracts might be more willing to take risks with the money they borrow. If the actions of the contract holder could be monitored, the problem would go away, as the contract could be written to take behavior into account and either prohibit it or charge a price based on the nature of the actions the contract holder engages in. It is the hidden-action aspect of these contracts that gives rise to moral hazard.

The consequence of contracts that reduce the cost of risk-taking, and therefore increase the incentives to take risks, is that they become more expensive. An insurer who sells auto insurance will have to charge higher premiums as a consequence of moral hazard, because its costs go up with riskier driving on the part of the policyholders. This causes premiums to go up for all holders, making insurance too expensive for the most careful drivers and causing a market failure.

22.5 Policy Example: The Affordable Care Act

Learning Objective 22.5: Explain how adverse selection models explain the importance economists place on the individual mandate in the Affordable Care Act.

Social insurance contracts like the Affordable Care Act rest on the principle that adverse events, like an injury, accident, or health crisis, can happen to anyone and that by pooling resources, the unlucky can be cared for from the shared resources of society. When participation isn't universal, the cost to the participants depends on the average risk of the insured.
If participation is optional, adverse selection tells us who is most likely to stay in the pool and who is most likely to leave: the most risky and the least risky, respectively. This will increase the cost of insurance for those who remain and threaten the viability of the system as care becomes less "affordable."

Review: Topics and Related Learning Outcomes

22.1 The Market-for-Lemons Problem
Learning Objective 22.1: Describe the lemons problem in markets with asymmetric information.

22.2 Adverse Selection
Learning Objective 22.2: Explain the term adverse selection and how it affects insurance markets.

22.3 Principal-Agent Models
Learning Objective 22.3: Describe how asymmetric information can affect principal-agent relationships.

22.4 Moral Hazard
Learning Objective 22.4: Explain moral hazard and how it can affect the efficiency of markets.

22.5 Policy Example: The Affordable Care Act
Learning Objective 22.5: Explain how adverse selection models explain the importance economists place on the individual mandate in the Affordable Care Act.

LEARN: KEY TOPICS

Terms

Hidden characteristics
Items whose value is not immediately known to a buyer or seller are said to have hidden characteristics.

Hidden actions
When agents' actions are not visible to all parties (for example, a customer may hire a mechanic to fix their car, but they do not observe the actions the mechanic actually takes), we call these hidden actions.

Adverse selection
Adverse selection refers to the situation where asymmetric information on the part of one party in an economic transaction leads to a desirable good remaining unsold, even though it would be sold in a market with full information.

Moral hazard
Moral hazard is a consequence of hidden action in which people who have entered into a contract to mitigate the cost of risk engage in riskier behavior because the costs have diminished.

Principal-agent relationships
Principal-agent relationships are situations in which one person, the principal, pays another person to perform a task for them.
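The premium spiral from section 22.2 can also be seen mechanically. Here is a minimal simulation (the per-person cost figures and the flat risk premium are invented for illustration; the chapter gives no specific numbers):

```python
# Adverse-selection spiral: the insurer prices at the average cost of the
# current pool; anyone whose willingness to pay (own cost + risk premium)
# is below that price drops out, and the price is recomputed.
costs = [100, 200, 300, 400, 500]   # expected annual claims per person
risk_premium = 60                   # amount above cost each will pay to avoid risk

pool, price = list(costs), None
while pool:
    price = sum(pool) / len(pool)                       # break-even premium
    stay = [c for c in pool if c + risk_premium >= price]
    if stay == pool:                                    # pool has stabilized
        break
    pool = stay

print(pool, price)   # [400, 500] 450.0 -- the three healthiest are priced out
```

Even though every person here would happily pay their own cost plus the premium, average-cost pricing unravels the pool, which is the efficiency argument for a mandate.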
2023-01-28 23:19:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2743569314479828, "perplexity": 1592.1215354485116}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00048.warc.gz"}
https://harmony.cs.cornell.edu/docs/textbook/leader/
Leader election is the problem of electing a unique leader in a network of processors. Typically this is challenging because the processors have only limited information. In the version that we present, each processor has a unique identifier. The processors are organized in a ring, but each processor only knows its own identifier and the identifier of its successor on the ring. Having already looked into the problem of how to make the network reliable, we assume here that each processor can reliably send messages to its successor. The protocol that we present elects as leader the processor with the highest identifier and works in two phases: in the first phase, each processor sends its identifier to its successor. When a processor receives an identifier that is larger than its own identifier, it forwards that identifier to its successor as well. If a processor receives its own identifier, it discovers that it is the leader. That processor then starts the second phase by sending a message around the ring notifying the other processors of the leader's identifier.

const NIDS = 5          # number of identifiers

network = {}            # the network is a set of messages
leader = 0              # used for checking correctness

def send(m):
    atomically network |= { m }

def receive(self):
    result = { (id, found) for (dst, id, found) in network where dst == self }

def processor(self, succ):
    send(succ, self, False)
    var working = True
    while working:
        atomically when exists (id, found) in receive(self):
            if id == self:
                assert id == leader
                send(succ, id, True)
            elif id > self:
                send(succ, id, found)
            if found:
                working = False

var ids, nprocs, procs = { 1 .. NIDS }, choose({ 1 .. NIDS }), []
for i in { 0 .. nprocs - 1 }:
    let next = choose(ids):
        ids -= { next }
        if next > leader:
            leader = next
        procs += [ next, ]
for i in { 0 .. nprocs - 1 }:
    spawn processor(procs[i], procs[(i + 1) % nprocs])

Figure 25.1 describes the protocol and its test cases in Harmony. In Harmony, processors can be modeled by threads and there are a variety of ways in which one can model a network using shared variables. Here, we model the network as a set of messages. The send method atomically adds a message to this set. Messages are tuples with three fields: (dst, id, found). dst is the identifier of the destination processor; id is the identifier that is being forwarded; and found is a boolean indicating the second phase of the protocol. The receive(self) method looks for all messages destined for the processor with identifier self.

To test the protocol, the code first chooses the number of processors and generates an identifier for each processor, chosen non-deterministically from a set of NIDS identifiers. It also keeps track, in the variable leader, of what the highest identifier is, so it can later be checked.

Method processor(self, succ) is the code for a processor with identifier self and successor succ. It starts simply by sending its own identifier to its successor. The processor then loops until it discovers the identifier of the leader in the second phase of the protocol. A processor waits for a message using the Harmony atomically when exists statement. This statement takes the form

atomically when exists v in s: statement block

where s is a set and v is a variable that is bound to an element of s. The properties of the statement are as follows:

• it waits until s is non-empty;
• it is executed atomically;
• v is selected non-deterministically, like in the choose operator.

If a processor receives its own identifier, it knows it's the leader. The Harmony code checks this using an assertion.
In real code the processor could not do this as it does not know the identifier of the leader, but assertions are only there to check correctness. The processor then sends a message to its successor that the leader has been found. If the processor receives an identifier higher than its own, the processor knows that it cannot be the leader. In that case, it simply forwards the message. A processor stops when it receives a message that indicates that the leader has been identified. Note that there is a lot of non-determinism in the specification, leading to a lot of executions that must be checked. First, every possible permutation of identifiers for the processors is tried. When there are multiple messages to receive by a processor, every possible order is tried (including receiving the same message multiple times). Fortunately, the atomically when exists statement is executed atomically, otherwise the body of the statement could lead to additional thread interleavings. Because in practice the different processors do not share memory, it is not necessary to check those interleavings. Exercises 25.1 Check if the code finds a unique leader if identifiers are not unique. 25.2 Messages are added atomically to the network. Is this necessary? What happens if you remove the atomically keyword? Explain what happens.
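For readers without the Harmony tools, the same two-phase ring protocol can be mocked up in ordinary Python. This is my own sketch, not code from the book: scheduling non-determinism is approximated with random choices rather than exhaustive model checking.

```python
# Ring leader election (Chang-Roberts style) with per-process FIFO inboxes.
from collections import deque
import random

def elect(ids):
    n = len(ids)
    inbox = [deque() for _ in range(n)]
    for i, pid in enumerate(ids):              # phase 1: send own id to successor
        inbox[(i + 1) % n].append((pid, False))
    learned = [None] * n                       # leader id, once known
    while any(v is None for v in learned):
        i = random.randrange(n)                # crude nondeterministic scheduler
        if not inbox[i]:
            continue
        pid, found = inbox[i].popleft()
        succ = (i + 1) % n
        if found:                              # phase 2: leader announcement
            learned[i] = pid
            if pid != ids[i]:                  # stop once it returns to the leader
                inbox[succ].append((pid, True))
        elif pid == ids[i]:                    # own id came back: I am the leader
            learned[i] = pid
            inbox[succ].append((pid, True))
        elif pid > ids[i]:                     # forward only larger identifiers
            inbox[succ].append((pid, False))
    return learned

ids = [3, 1, 4, 5, 2]
assert elect(ids) == [max(ids)] * len(ids)
```

Feeding in a list with duplicate identifiers is a quick, informal way to explore Exercise 25.1 outside the model checker.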
2023-01-29 12:05:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19173690676689148, "perplexity": 1670.7334674611711}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00353.warc.gz"}
http://codereview.stackexchange.com/questions/43891/autoloader-functions
# Autoloader functions

Every time I write a new PHP page, I usually need to include this at the top:

<?php
require_once(__DIR__ . '/../libraries/global.lib.php');

function load_classes($class) {
    // appears to get all the names of the classes that are needed in this script
    $file_name = './classes/' . $class . '.class.php';
    if (file_exists($file_name)) {
        require_once($file_name);
    }
}

function load_interfaces($interface) {
    $file_name = './classes/' . $interface . '.interface.php';
    if (file_exists($file_name)) {
        require_once($file_name);
    }
}

spl_autoload_register('load_interfaces');
spl_autoload_register('load_classes');
?>

Is there any way to condense this? Would putting this in a separate PHP file work?

- Moving this to an include would work. You will have to be careful about paths like __DIR__ since they apply directly to the file they are contained in. – willoller Mar 10 '14 at 1:07

## 1 Answer

Yes, you could put that into a separate file and include_once('header_file.php');. You could try something such as:

function loadFile($name, $isInterface = false) {
    // one loader covers both classes and interfaces
    $type = ($isInterface == true) ? 'interface' : 'class';
    $path = sprintf('./classes/%s.%s.php', $name, $type);
    if (file_exists($path)) {
        require_once($path);
    }
}
2016-02-11 13:06:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076776266098022, "perplexity": 6366.681906687614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161946.96/warc/CC-MAIN-20160205193921-00282-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.dlubal.com/en/support-and-learning/support/faq/002837
FAQ 002837 EN
9 April 2019

Is it possible that the formula editor has a problem with units in combination with dimensionless parameters?

When using formulas that combine values with units and unitless values, special attention should be paid to the syntax. If brackets are not used, a space must be inserted between the unit and the arithmetic operators.
2019-08-17 23:10:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035003542900085, "perplexity": 1904.5170647156854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313501.0/warc/CC-MAIN-20190817222907-20190818004907-00271.warc.gz"}
https://www.prepanywhere.com/prep/textbooks/functions-11-mcgraw-hill/chapters/chapter-7-financial-applications/materials/7-5-present-values-of-an-annuity
7.5 Present Values of an Annuity

Chapter 7, Section 7.5

Q1. Calculate the present value of the annuity shown.

Q2. Brandon plans to withdraw $1000 at the end of every year, for 4 years, from an account that earns 8% interest, compounded annually.
(a) Draw a time line to represent this annuity.
(b) Determine the present value of the annuity.

Q3. Lauren plans to withdraw $650 at the end of every 3 months, for 5 years, from an account that earns 6.4% interest, compounded quarterly.
(a) Draw a time line to represent this annuity.
(b) Determine the present value of the annuity.
(c) How much interest is earned?

Q4. An annuity has an initial balance of $8000 in an account that earns 5.75% interest, compounded annually. What amount can be withdrawn at the end of each of the 6 years of this annuity?

Q5. After graduating from high school, Karen works for a few years to save $40 000 for university. She deposits her savings into an account that will earn 6% interest, compounded quarterly. What quarterly withdrawals can Karen make for the 4 years that she will be at university?

Q6. How much should be in an account today so that withdrawals in the amount of $15 000 can be made at the end of each year for 20 years, if interest in the account is earned at a rate of 7.5% per year, compounded annually?

Q7. Julie just won $200 000 in a lottery! She estimates that to live comfortably she will need to withdraw $5000 per month for the next 50 years. Her savings account earns 4.25% annual interest, compounded monthly.
(a) Can she afford to retire and live off her lottery winnings?
(b) What is the minimum amount that Julie must win to retire in comfort immediately? Discuss any assumptions you must make.

Q8. An annuity has an initial balance of $5000. Annual withdrawals are made in the amount of $800 for 9 years, at which point the account balance is zero. What annual rate of interest, compounded annually, was earned over the duration of this annuity?

Q9. Shen has invested $15 000 into an annuity from which she plans to withdraw $500 per month for the next 3.5 years. If at the end of this time period the balance of the annuity is zero, what annual rate of interest, compounded monthly, did this account earn?

Q10. Jordan has $6000 to invest in an annuity from which he plans to make regular withdrawals over the next 3 years. He is considering two options:
Option A: Withdrawals are made every quarter and interest is earned at a rate of 8%, compounded quarterly.
Option B: Withdrawals are made every month and interest is earned at a rate of 7.75%, compounded monthly.
(a) Determine the regular withdrawal for each option.
(b) Determine the total interest earned for each option.

Q11. Abe and Bob need to take out a small loan to help expand their new business in tech. They estimate that they can afford to pay back $250 monthly for 3 years. If interest is 6%, compounded monthly, how much of a loan can Abe and Bob afford?

Q12. Tamara took out a loan for $940 at an annual rate of 11.5% simple interest. When she repaid the loan, the amount was $1100. How long did Tamara hold this loan?

Q13. Josie plans to invest $10 000 at the end of each year for the 25 years leading up to her retirement. After she retires, she plans to make regular withdrawals for 25 years.
Assume that the interest rate over the next 50 years remains constant at 7% per year, compounded annually.
(a) Once she retires, which amount do you predict that Josie will be able to withdraw per year?
• less than $10 000
• $10 000
• more than $10 000
Explain your answer.
(b) Estimate how much she will be able to withdraw. Provide reasoning for your estimate.
(c) Determine the amount of Josie's investment annuity on the day she retires.
(d) Use this amount to determine the regular withdrawal she can make at the end of each year for 25 years after retirement.

A mortgage of $150 000 is amortized over 25 years with an interest rate of 6.7%, compounded semi-annually.
(a) What is the monthly payment?
(b) Suppose you choose to make weekly payments instead of monthly payments. What is the weekly payment?
(c) Calculate the total interest paid with the weekly payments.
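For reference, the present value of an ordinary annuity paying $R$ per period for $n$ periods at a periodic rate $i$ is

$$PV = R \cdot \frac{1 - (1+i)^{-n}}{i}$$

As a worked illustration (my own check, not the textbook's posted solution), Q2 has $R = 1000$, $i = 0.08$, and $n = 4$, so

$$PV = 1000 \cdot \frac{1 - 1.08^{-4}}{0.08} \approx \$3312.13$$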
2021-06-16 22:58:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32678669691085815, "perplexity": 1586.7078602902434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00349.warc.gz"}
https://search.r-project.org/CRAN/refmans/cutpointr/html/roc.html
roc {cutpointr} R Documentation

## Calculate a ROC curve

### Description

Given a data.frame with a numeric predictor variable and a binary outcome variable, this function returns a data.frame that includes all elements of the confusion matrix (true positives, false positives, true negatives, and false negatives) for every unique value of the predictor variable. Additionally, the true positive rate (tpr), false positive rate (fpr), true negative rate (tnr) and false negative rate (fnr) are returned.

### Usage

roc(data, x, class, pos_class, neg_class, direction = ">=", silent = FALSE)

### Arguments

- data: A data.frame or matrix. Will be converted to a data.frame.
- x: The name of the numeric predictor variable.
- class: The name of the binary outcome variable.
- pos_class: The value of class that represents the positive cases.
- neg_class: The value of class that represents the negative cases.
- direction: (character) One of ">=" or "<=". Specifies if the positive class is associated with higher values of x (default).
- silent: If FALSE and the ROC curve contains no positives or negatives, a warning is generated.

### Details

To enable classifying all observations as belonging to only one class, the predictor values will be augmented by Inf or -Inf. The returned object can be plotted with plot_roc.

This function uses tidyeval to support unquoted arguments. For programming with roc, the operator !! can be used to unquote an argument; see the examples.

### Value

A data frame with the columns x.sorted, tp, fp, tn, fn, tpr, tnr, fpr, and fnr.

### Source

Forked from the ROCR package

### See Also

Other main cutpointr functions: add_metric(), boot_ci(), boot_test(), cutpointr(), multi_cutpointr(), predict.cutpointr()

### Examples

roc_curve <- roc(data = suicide, x = dsi, class = suicide,
                 pos_class = "yes", neg_class = "no", direction = ">=")
roc_curve
plot_roc(roc_curve)
auc(roc_curve)

## Unquoting an argument
myvar <- "dsi"
roc(suicide, x = !!myvar, suicide, pos_class = "yes", neg_class = "no")

[Package cutpointr version 1.1.2 Index]
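To make the computation concrete, here is a small language-agnostic sketch in Python of what roc() tabulates: one confusion-matrix row per threshold, with the Inf/-Inf augmentation described under Details. The column set mirrors the R output; the toy data is made up, and this is not the package's own implementation.

```python
import numpy as np

def roc_points(x, y, pos_class, direction=">="):
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    pos = (y == pos_class)
    P, N = int(pos.sum()), int((~pos).sum())   # assumes both classes present
    # augment thresholds so "all positive" and "none positive" rows appear
    cuts = np.r_[np.inf, np.unique(x)[::-1], -np.inf]
    rows = []
    for c in cuts:
        pred = (x >= c) if direction == ">=" else (x <= c)
        tp = int((pred & pos).sum())
        fp = int((pred & ~pos).sum())
        rows.append((c, tp, fp, N - fp, P - tp, tp / P, fp / N))
    return rows   # (x.sorted, tp, fp, tn, fn, tpr, fpr)

for row in roc_points([1, 2, 3, 4], ["no", "no", "yes", "yes"], "yes"):
    print(row)
```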
2022-05-21 02:17:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22529835999011993, "perplexity": 10086.589287482175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534773.36/warc/CC-MAIN-20220521014358-20220521044358-00167.warc.gz"}
https://www.wyzant.com/resources/answers/788924/mathematic-algebra
Jannet S.

# Mathematic Algebra

The proceeds from a concession stand that sold hamburgers and hot dogs at a baseball game were $575.50. The price of a hot dog was $2.50 and the price of a hamburger was $3.00. If the total number of hot dogs and hamburgers sold was 213, how many of each kind were sold?
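One standard setup (a worked sketch added here, since no solution is posted with the question): let $h$ be the number of hot dogs and $b$ the number of hamburgers. Then

$$h + b = 213, \qquad 2.50h + 3.00b = 575.50$$

Substituting $h = 213 - b$ into the second equation gives $532.50 + 0.50b = 575.50$, so $b = 86$ hamburgers and $h = 127$ hot dogs. Check: $2.50(127) + 3.00(86) = 317.50 + 258.00 = 575.50$.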
2021-06-19 22:52:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2907133400440216, "perplexity": 9355.19964247355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00313.warc.gz"}
http://mathhelpforum.com/pre-calculus/7971-composition-functions.html
1. ## composition functions

1. Find the Inverse of: $f(x)=\frac{2x-1}{x-6}$

$f(x)=\frac{6x-1}{x-2}$

Is this correct?

2. Find $f{\circ}g$ and $g{\circ}f$ when $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x=1}$

I got: $(x+1)^{1/18}-1$

$\text{Thanks for the Help!!!}$

2. Originally Posted by qbkr21
1. Find the Inverse of: $f(x)=\frac{2x-1}{x-6}$ $f(x)=\frac{6x-1}{x-2}$ Is this correct?

Yes, but this would look better if you did not use $f$ for both functions, so you would have: $f(x)=\frac{2x-1}{x-6}$ and its inverse as: $g(x)=\frac{6x-1}{x-2}$. Also, as a personal preference, I would use a different variable name in this second definition, so I would write: $g(y)=\frac{6y-1}{y-2}$ is the inverse function of $f(x)=\frac{2x-1}{x-6}$.

RonL

3. Originally Posted by qbkr21
2. Find $f{\circ}g$ and $g{\circ}f$ when $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x+1}$

$(f \circ g)(x)=(g(x))^3-1=(\sqrt[3]{x+1})^3-1=(x+1)-1=x$

and:

$(g \circ f)(x)=\sqrt[3]{f(x)+1}=\sqrt[3]{x^3-1+1}=x$

RonL

4. ## Re:

Ok, but I definitely did not get those as answers; I got new answers, but none of yours:

1. $f{\circ}g$ = $\sqrt{x+1}$
2. $g{\circ}f$ = $\sqrt[6]{x^3}$

PURE CALCULATOR

$\text{In your spare time could you please tell me what I did wrong}$

5. Originally Posted by qbkr21
... 2. Find $f{\circ}g$ and $g{\circ}f$ when $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x=1}$ I got: $(x+1)^{1/18}-1$ $\text{Thanks for the Help!!!}$

Hello, first I assume that you used the shift key where you better shouldn't have used it. That means your problem reads: $f(x)=x^3-1$ and $g(x)=\sqrt[3]{x+1}$

$f{\circ}g$ means you have to calculate $f(g(x))$. Therefore you have to plug the term of the function g in the place of the x in function f: $f{\circ}g=f(g(x))=(\sqrt[3]{x+1})^3-1=x+1-1=x$. That's the result CaptainBlack has already told you.

Same procedure: $g{\circ}f=g(f(x))=\sqrt[3]{x^3-1+1}=\sqrt[3]{x^3}=x$. That's the result CaptainBlack has already told you.

To be honest: I can't guess what you have done.

EB

6. Originally Posted by qbkr21
Ok, but I definitely did not get those as answers... 1. $f{\circ}g$ = $\sqrt{x+1}$ 2. $g{\circ}f$ = $\sqrt[6]{x^3}$ PURE CALCULATOR $\text{In your spare time could you please tell me what I did wrong}$

Earboth has explained in more detail how we think this goes. If you want us to tell you where you went wrong, you will need to describe what you did.

RonL

7. ## Re:

I must have slipped up on my calculator, thanks for the advice...

8. Originally Posted by qbkr21
I must have slipped up on my calculator, thanks for the advice...

How are you doing this on a calculator? And surely your professor wants you to work out how you got your answer?

-Dan

9. ## Re:

I keep getting the same answers, so here is my work:

1. $f(g(x))$ = ${(\sqrt[3]{x+1})^3}-1$ = ${(x+1)^{1/18}}-1$

$\text{and}$

2. $g(f(x))$ = $\sqrt[3]{(x^3-1)+1}$ = $(x^3)^{1/6}$

10. ## Re:

Texas Instruments Voyage 200, the bling bling of all calcs

11. Originally Posted by qbkr21
I keep getting the same answers, so here is my work: 1. $f(g(x))$ = ${(\sqrt[3]{x+1})^3}-1$ = ${(x+1)^{1/18}}-1$ $\text{and}$

The power rule is: $(x^a)^b = x^{ab}$

So $\left ( \sqrt[3]{x+1} \right )^3 = \left ( (x + 1)^{1/3} \right )^3 = (x + 1)^{\frac{1}{3} \cdot 3} = x + 1$

Originally Posted by qbkr21
2. $g(f(x))$ = $\sqrt[3]{(x^3-1)+1}$ = $(x^3)^{1/6}$

For the same reason as above: $\sqrt[3]{(x^3-1)+1} = \sqrt[3]{x^3} = (x^3)^{1/3} = x^{3 \cdot \frac{1}{3}} = x$

-Dan

12. Originally Posted by qbkr21
I keep getting the same answers, so here is my work: 1.
$f(g(x))$ = ${(\sqrt[3]{x+1})^3}-1$ = ${(x+1)^{1/18}}-1$ ...

Hello, from your result I believe that you used the sqrt( command. Then you have calculated $\left( \left( (x+1)^{\frac{1}{2}}\right)^{\frac{1}{3}}\right)^{\frac{1}{3}}$, which will indeed give your result.

Code:
((x+1)^(1/3))^3-1

You'll get the correct result.

EB

13. ## Re:

OK, great, I got $x$ for my answer, but if you were a teacher grading tests, would you accept the answer given? Do you think he or she would?

14. Originally Posted by qbkr21
OK, great, I got $x$ for my answer, but if you were a teacher grading tests, would you accept the answer given? Do you think he or she would?

If you are speaking of the answers here:

Originally Posted by qbkr21
I keep getting the same answers, so here is my work: 1. $f(g(x))$ = ${(\sqrt[3]{x+1})^3}-1$ = ${(x+1)^{1/18}}-1$ $\text{and}$ 2. $g(f(x))$ = $\sqrt[3]{(x^3-1)+1}$ = $(x^3)^{1/6}$

No. ${(x+1)^{1/18}}-1 = \sqrt[18]{x+1} - 1$ and $(x^3)^{1/6} = \sqrt[6]{x^3} = \sqrt{x}$, which are not the same as the correct answers.

-Dan

15. ## Re:

$fine$
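For anyone who wants to double-check the algebra symbolically rather than on a calculator, here is a small sketch using sympy (assuming it is installed; I restrict to $x>0$ to keep the cube roots on the principal branch, though the identity holds for all real $x$ with the real cube root):

```python
from sympy import symbols, cbrt, simplify

x = symbols('x', positive=True)

f_of_g = simplify(cbrt(x + 1)**3 - 1)        # f(g(x)) with f = x^3 - 1
g_of_f = simplify(cbrt((x**3 - 1) + 1))      # g(f(x)) with g = cbrt(x + 1)

print(f_of_g, g_of_f)                        # both reduce to x
```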
2017-08-18 11:25:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 76, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8331928253173828, "perplexity": 1854.2675641303276}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104634.14/warc/CC-MAIN-20170818102246-20170818122246-00316.warc.gz"}
https://support.bioconductor.org/p/25322/
TilingArray Normalization

@anjan-purkayastha-3096

Hi, I am trying to use the tilingArray normalization package for the following array:

total number of probes: 6490 (these are a set of non-overlapping, contiguous 60mer probes that span the vaccinia virus genome, ~200 kb)
total number of perfect matches: 6490
total number of background probes: 1275 (the vaccinia genome is very densely packed, so there are few untranscribed regions)

The above-mentioned array is being used to assay the transcription of the viral genes over an infection time-course.

Issues:
The probe intensities from the DNA hybs used for normalization are on average as strong as the non-normalized probe intensities from transcription assays.
The "normalized" signal after the tilingArray normalization seems to be *noisier* than the non-normalized input.

I know there may not be enough material in this email for the gurus to provide advice. I'd be happy to provide more details (genomic maps of normalized and non-normalized signal, etc.), but there seems to be a size limit to attachments for this forum. Please let me know if you have any specific questions regarding the experiment and/or the data.

All advice/questions will be appreciated.
Anjan

--
===========================================
anjan purkayastha, phd
bioinformatics analyst
whitehead institute for biomedical research
nine cambridge center
cambridge, ma 02142
purkayas [at] wi [dot] mit [dot] edu
703.740.6939

@wolfgang-huber-3550, EMBL European Molecular Biology Laboratory

Hi Anjan,

it is quite crucial for the "normalizeByReference" method that most of the "background" probes really have untranscribed sequence. Does that seem to be the case when you plot their intensities? What does the histogram of their intensities look like compared to the known transcribed regions?

Also, the goal of that method is not to reduce noise, but to increase the signal-to-noise ratio. That is a subtle, but important difference. Have a look at Figure 5 of http://bioinformatics.oxfordjournals.org/cgi/reprint/22/16/1963.pdf

Can you be more precise than "The 'normalized' signal after the tilingArray normalization seems to be *noisier* than the non-normalized input"? How do you measure this? Do you have a plot to support this? Perhaps Fig. 5 and the quantity $\Delta\mu / \sigma$ in that paper might be useful for that purpose.

Thank you and best wishes
Wolfgang

----------------------------------------------------
Wolfgang Huber, EMBL-EBI, http://www.ebi.ac.uk/huber

Anjan Purkayastha wrote:
> Hi,
> I am trying to use the tilingArray normalization package for the
> following array:
> total number of probes: 6490. (these are a set of non-overlapping,
> contiguous 60mer probes that span the vaccinia virus genome-~200kb)
> total number of perfect matches: 6490.
> total number of background probes: 1275.( the vaccinia genome is very
> densely packed so there are few untranscribed regions).
> The above mentioned array is being use to assay the transcription of the
> viral genes over an infection time-course.
>
> Issues:
> The probe intensities from DNA-hybs used for normalization are on
> average as strong as the non-normalized probe intensities from
> transcription assays.
> The "normalized" signal after the tilingArray normalization seem to be
> *noisier* than the non-normalized input.
>
> I know there may not be enough material in this email for the Gurus to
> provide advice.
> I'd be happy to provide more details- genomic map of normalized and > non-normalized signal etc, but there seems to be a size limit to > attachments for this forum. > Please let me know if you have any specific questions regarding the > experiment and/or the data. > > All advice/questions will be appreciated. > Anjan >
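Wolfgang's suggested figure of merit, $\Delta\mu/\sigma$, is easy to compute for both the raw and the normalized data. A generic sketch follows (one reasonable formulation of the separation statistic; the arrays here are simulated placeholders, not tilingArray objects):

```python
import numpy as np

def separation(expressed, background):
    """Mean separation over pooled spread: larger means better signal/noise."""
    delta_mu = np.mean(expressed) - np.mean(background)
    sigma = np.sqrt(0.5 * (np.var(expressed) + np.var(background)))
    return delta_mu / sigma

# substitute real probe intensities for these simulated ones
rng = np.random.default_rng(0)
expressed, background = rng.normal(8, 1, 500), rng.normal(6, 1, 500)
print(separation(expressed, background))
```

Comparing this number before and after normalizeByReference is a more direct check than eyeballing "noisiness": normalization can raise the variance yet still improve the separation.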
2021-10-19 21:39:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6960731744766235, "perplexity": 3098.4976386322705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00233.warc.gz"}
https://nbviewer.org/github/VadimSokolov/ISLRNoteBook/blob/master/2_introduction.ipynb
In [208]: # Chapter 2 Lab: Introduction to R # Basic Commands x <- c(1,3,2,5) x 1. 1 2. 3 3. 2 4. 5 In [209]: x = c(1,6,2) x 1. 1 2. 6 3. 2 In [210]: y = c(1,4,3) length(x) length(y) 3 3 In [211]: x+y 1. 2 2. 10 3. 5 In [212]: ls() 1. 'A' 2. 'Auto' 3. 'cylinders' 4. 'df' 5. 'f' 6. 'fa' 7. 'q' 8. 'quit' 9. 'x' 10. 'y' In [213]: rm(x,y) ls() 1. 'A' 2. 'Auto' 3. 'cylinders' 4. 'df' 5. 'f' 6. 'fa' 7. 'q' 8. 'quit' In [214]: rm(list=ls()) In [215]: ?matrix matrix {base} R Documentation ## Matrices ### Description matrix creates a matrix from the given set of values. as.matrix attempts to turn its argument into a matrix. is.matrix tests if its argument is a (strict) matrix. ### Usage matrix(data = NA, nrow = 1, ncol = 1, byrow = FALSE, dimnames = NULL) as.matrix(x, ...) ## S3 method for class 'data.frame' as.matrix(x, rownames.force = NA, ...) is.matrix(x) ### Arguments data an optional data vector (including a list or expression vector). Non-atomic classed R objects are coerced by as.vector and all attributes discarded. nrow the desired number of rows. ncol the desired number of columns. byrow logical. If FALSE (the default) the matrix is filled by columns, otherwise the matrix is filled by rows. dimnames A dimnames attribute for the matrix: NULL or a list of length 2 giving the row and column names respectively. An empty list is treated as NULL, and a list of length one as row names. The list can be named, and the list names will be used as names for the dimensions. x an R object. ... additional arguments to be passed to or from methods. rownames.force logical indicating if the resulting matrix should have character (rather than NULL) rownames. The default, NA, uses NULL rownames if the data frame has ‘automatic’ row.names or for a zero-row data frame. ### Details If one of nrow or ncol is not given, an attempt is made to infer it from the length of data and the other parameter. If neither is given, a one-column matrix is returned. If there are too few elements in data to fill the matrix, then the elements in data are recycled. If data has length zero, NA of an appropriate type is used for atomic vectors (0 for raw vectors) and NULL for lists. is.matrix returns TRUE if x is a vector and has a "dim" attribute of length 2) and FALSE otherwise. Note that a data.frame is not a matrix by this test. The function is generic: you can write methods to handle specific classes of objects, see InternalMethods. as.matrix is a generic function. The method for data frames will return a character matrix if there is only atomic columns and any non-(numeric/logical/complex) column, applying as.vector to factors and format to other non-character columns. Otherwise, the usual coercion hierarchy (logical < integer < double < complex) will be used, e.g., all-logical data frames will be coerced to a logical matrix, mixed logical-integer will give a integer matrix, etc. The default method for as.matrix calls as.vector(x), and hence e.g. coerces factors to character vectors. When coercing a vector, it produces a one-column matrix, and promotes the names (if any) of the vector to the rownames of the matrix. is.matrix is a primitive function. The print method for a matrix gives a rectangular layout with dimnames or indices. For a list matrix, the entries of length not one are printed in the form integer,7 indicating the type and length. ### Note If you just want to convert a vector to a matrix, something like dim(x) <- c(nx, ny) dimnames(x) <- list(row_names, col_names) will avoid duplicating x. 
### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. data.matrix, which attempts to convert to a numeric matrix. A matrix is the special case of a two-dimensional array. ### Examples is.matrix(as.matrix(1:10)) !is.matrix(warpbreaks) # data.frame, NOT matrix! warpbreaks[1:10,] as.matrix(warpbreaks[1:10,]) # using as.matrix.data.frame(.) method ## Example of setting row and column names mdat <- matrix(c(1,2,3, 11,12,13), nrow = 2, ncol = 3, byrow = TRUE, dimnames = list(c("row1", "row2"), c("C.1", "C.2", "C.3"))) mdat [Package base version 3.3.1 ] In [216]: x=matrix(data=c(1,2,3,4), nrow=2, ncol=2) x 1 3 2 4 In [217]: x=matrix(c(1,2,3,4),2,2) matrix(c(1,2,3,4),2,2,byrow=TRUE) sqrt(x) x^2 1 2 3 4 1 1.73205 1.41421 2 1 9 4 16 In [218]: x=rnorm(50) y=x+rnorm(50,mean=50,sd=.1) cor(x,y) 0.995230520811311 In [219]: set.seed(1303) rnorm(50) 1. -1.14397631447974 2. 1.34212936561501 3. 2.18539047574276 4. 0.536392517923731 5. 0.0631929664685468 6. 0.502234482468979 7. -0.000416724686432643 8. 0.565819840539162 9. -0.572522688962623 10. -1.11022500727696 11. -0.0486871233624514 12. -0.695656217619366 13. 0.828917480303335 14. 0.206652855081802 15. -0.235674509102427 16. -0.556310491381104 17. -0.364754357080585 18. 0.862355034263622 19. -0.63077153536771 20. 0.313602125215739 21. -0.931495317661393 22. 0.823867618473952 23. 0.523370702077482 24. 0.706921411979056 25. 0.420204325601679 26. -0.269052154682033 27. -1.51031729990999 28. -0.69021247657504 29. -0.143471952443572 30. -1.0135274099044 31. 1.57327373614751 32. 0.0127465054882014 33. 0.872647049887217 34. 0.422066190530336 35. -0.0188157916578866 36. 2.61574896890584 37. -0.693140174826871 38. -0.266321780991085 39. -0.720636441231524 40. 1.36773420645149 41. 0.264007332160512 42. 0.632186807367191 43. -1.33065098578719 44. 0.0268888182209596 45. 1.0406363207788 46. 1.31202379854711 47. -0.0300020766733214 48. -0.250025712488174 49. 0.0234144856913592 50. 1.65987065574227 In [220]: set.seed(3) y=rnorm(100) In [221]: mean(y) var(y) sqrt(var(y)) sd(y) 0.0110355710943715 0.732867501277449 0.856076808047881 0.856076808047881 In [222]: # Graphics x=rnorm(100) y=rnorm(100) plot(x,y) In [223]: plot(x,y,xlab="this is the x-axis",ylab="this is the y-axis",main="Plot of X vs Y") In [224]: pdf("Figure.pdf") plot(x,y,col="green") dev.off() pdf: 2 In [225]: x=seq(1,10) x 1. 1 2. 2 3. 3 4. 4 5. 5 6. 6 7. 7 8. 8 9. 9 10. 10 In [226]: x=1:10 x 1. 1 2. 2 3. 3 4. 4 5. 5 6. 6 7. 7 8. 8 9. 9 10. 10 In [227]: x=seq(-pi,pi,length=50) y=x f=outer(x,y,function(x,y)cos(y)/(1+x^2)) contour(x,y,f) In [228]: fa=(f-t(f))/2 contour(x,y,fa,nlevels=15) In [229]: image(x,y,fa) In [230]: persp(x,y,fa) In [231]: persp(x,y,fa,theta=30)
2022-05-23 13:46:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3517150580883026, "perplexity": 6670.377566133144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00206.warc.gz"}
http://openstudy.com/updates/55aeec8ce4b0d48ca8edc2b0
## Summersnow8 one year ago

Please help!

A bridge with mass 9000. kg supports a truck with mass 3000. kg that is stopped in the middle of the bridge. What mass (in kg) must each pier of the bridge support?
= (9000 kg + 3000 kg) / 2 = 6000 kg

If the truck in the preceding problem stops 9.00 m from the northern end of the 32.0-m bridge, what mass (in kg) must the northern pier of the bridge support?
F1 + F2 = (9000 kg * 9.8 m/s^2) + (3000 kg * 9.8 m/s^2)
F1 + F2 = 88200 + 29400
F1 + F2 = 117,600 N
F1 (32.0 m) = (88200 N) (????? m) + (29400 N) (9.00 m)

@nikato @michele_Laino @lightgrav @radar @Greg_D @peachpi — the answers are supposed to be 5220 kg, 6580 kg

1. anonymous: I tried this last night. Got the same numbers you did. 5220 kg doesn't even make sense for the first part because it's not enough support. The part you're missing for the 2nd part is 16 m. Weight acts at the center of the bridge, then you can solve for F1.
2. Summersnow8: @peachpi, yeah the answer for the first problem is 6000 kg, but there are 2 answers for the second problem... I need help with the second one only
3. anonymous: Is there another question for the 2nd problem?
4. anonymous: They only ask for mass as far as I can tell
5. Summersnow8: they are asking for the mass of the northern pier of the bridge... so I don't know which answer it is or how you get the answers
6. anonymous: [drawing omitted]
7. Summersnow8: how do I figure out which one is north and which one is south?
8. anonymous: North is the one closest to the truck. That's why I switched to N and S as subscripts to make it a little easier. If you take the moment about the north side it's $\sum M_N = -(9~\text{m})(29400~\text{N}) - (16~\text{m})(88200~\text{N}) - (32~\text{m})F_S = 0$. Solving that gives the force on the south support.
9. anonymous: That should be +32 F_S
10. anonymous: For the north you can do the same, take the moment about the south end: $\sum M_S = (23~\text{m})(29400~\text{N}) + (16~\text{m})(88200~\text{N}) - (32~\text{m})F_N = 0$
11. Summersnow8: I got this: F1 (32.0 m) = (88200 N)(16 m) + (29400 N)(9.00 m); F1 (32.0 m) = 1,675,800 N·m; F1 = 1,675,800 / 32.0 m; F1 = 52,368.75 N
12. anonymous: yeah that's what I got; then when you divide by 9.8 you'll get 5343.75 kg
13. anonymous: that's for the south pier
14. anonymous: for the north pier you can use the equation above or just subtract from 12000
15. Summersnow8: but the answers are 5220 kg & 6580 kg, not what we got
16. anonymous: that's what I was saying above, those answers don't make sense. They don't even add up to 12,000 kg, which is the amount of mass that has to be supported
17. Summersnow8: well, those are the answers in the back of the book... are you sure our equation is right? why do we divide by gravity? I am not understanding it
18. anonymous: I divided by gravity because they asked for mass, not force. The equation is right. The answer makes no sense. The mass of the bridge and truck is 9000 + 3000 = 12000 kg. The answers for the supports are 5220 + 6580 = 11800 kg. This would mean the mass on the bridge is 200 kg more than it can support. Those answers cannot be right, unless something is missing from the problem
19. Summersnow8: I put the wrong answer; in the back of the book, the first problem asks what mass each pier of the bridge must support, for which the correct answer is 6000 kg
20. anonymous: I'm confused. that's for the 1st question, right?
21. Summersnow8: sorry, I am confusing myself; yes, that is for the 1st, but the book says 5220 kg, 6580 kg for the second question... I don't get it...
22. Summersnow8: so did you get 5343.75 (south), and how do you solve for north?
23. anonymous: You can set up a moment equation from the south side, like I did above. Or you can just subtract the south from 12000
24. anonymous: since we know they have to add up to 12000
25. Summersnow8: so 6656.25 (north)
26. anonymous: yes
27. Summersnow8: okay, so the answer is 6660? I hope we solved this correctly :/
28. anonymous: yeah ok if you round
29. Summersnow8: hmmm, okay. thanks for your help
30. mtimko: The correct answer (at least for my purposes) ended up being 6655 +/- 10
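For reference, the moment balance the thread converges on (taking g = 9.8 m/s² and the bridge's weight acting at the 16.0 m midpoint, as assumed above) works out as:

$$F_N \cdot (32.0~\text{m}) = (29400~\text{N})(23.0~\text{m}) + (88200~\text{N})(16.0~\text{m}) \;\Rightarrow\; F_N = 65{,}231.25~\text{N} \approx 6656~\text{kg}$$

$$F_S = \frac{(29400~\text{N})(9.0~\text{m}) + (88200~\text{N})(16.0~\text{m})}{32.0~\text{m}} = 52{,}368.75~\text{N} \approx 5344~\text{kg}$$

The two masses sum to the full 12,000 kg, and the northern value matches mtimko's 6655 ± 10; the book's 5220 kg / 6580 kg answers do not balance.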
2017-01-19 17:35:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5635278224945068, "perplexity": 1143.4560912782158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00123-ip-10-171-10-70.ec2.internal.warc.gz"}
https://cameramath.com/expert-q&a/Algebra/Show-your-work-thanks-2-SOLVE-each-trigonometric-function-for-all-possible
Algebra Question

Solve each trigonometric equation for all possible values of $x \in [0, 2\pi]$ (KU/TI: 5 marks):

i) $\cos(x) + \sqrt{3} = -\cos(x)$
ii) $2\sin^2(x) - 3\sin(x) + 1 = 0$
iii) $(\sin\theta + 1)^2 = (\cos\theta)^2$
iv) $\tan^2(x) = 3$
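A brief worked sketch of the solutions on $[0, 2\pi]$:

i) $2\cos x = -\sqrt{3} \Rightarrow \cos x = -\tfrac{\sqrt{3}}{2} \Rightarrow x = \tfrac{5\pi}{6}, \tfrac{7\pi}{6}$

ii) $(2\sin x - 1)(\sin x - 1) = 0 \Rightarrow \sin x = \tfrac{1}{2} \text{ or } 1 \Rightarrow x = \tfrac{\pi}{6}, \tfrac{5\pi}{6}, \tfrac{\pi}{2}$

iii) $\sin^2\theta + 2\sin\theta + 1 = 1 - \sin^2\theta \Rightarrow 2\sin\theta(\sin\theta + 1) = 0 \Rightarrow \theta = 0, \pi, 2\pi, \tfrac{3\pi}{2}$

iv) $\tan x = \pm\sqrt{3} \Rightarrow x = \tfrac{\pi}{3}, \tfrac{2\pi}{3}, \tfrac{4\pi}{3}, \tfrac{5\pi}{3}$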
2022-08-17 02:09:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6283187866210938, "perplexity": 175.1502438351358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00298.warc.gz"}
https://docs.nvidia.com/tao/tao-toolkit-archive/tao-30-2108/text/object_detection/ssd.html
# SSD

With SSD, the following tasks are supported:

• dataset_convert
• train
• evaluate
• prune
• inference
• export

These tasks may be invoked from the TAO Toolkit Launcher using the following convention on the command line:

tao ssd <sub_task> <args_per_subtask>

where args_per_subtask are the command-line arguments required for a given subtask. Each of these subtasks is explained in detail below.

## Data Input for Object Detection

The object detection apps in TAO Toolkit expect data in KITTI format for training and evaluation.

## Pre-processing the Dataset

The ssd dataloader supports raw KITTI-formatted data as well as TFRecords. To use TFRecords for optimized iteration across the data batches, the KITTI-formatted data must first be converted to the TFRecords format. This can be done using the dataset_convert subtask.

The dataset_convert tool requires a configuration file as input. Details of the configuration file and examples are included in the following sections.

### Configuration File for Dataset Converter

The dataset_convert tool provides several configurable parameters. The parameters are encapsulated in a spec file to convert data from the KITTI format to the TFRecords format, which the SSD trainer can ingest. This is a prototxt-format file with 3 global parameters:

• kitti_config: A nested prototxt configuration with multiple input parameters
• image_directory_path: The path to the dataset root. The image_dir_name is appended to this path to get the input images and must be the same path specified in the experiment spec file.
• target_class_mapping: The prototxt dictionary that maps the class names in the tfrecords to the target class to be trained in the network.

Here are descriptions of the configurable parameters for the kitti_config field:

• root_directory_path (string): The path to the dataset root directory.
• image_dir_name (string): The relative path from root_directory_path to the directory containing images.
• label_dir_name (string): The relative path from root_directory_path to the directory containing labels.
• partition_mode (string; supported values: random, sequence): The method employed when partitioning the data into multiple folds. Two methods are supported:
  - Random partitioning: The data is divided into two folds, train and val. This mode requires that the val_split parameter be set.
  - Sequence-wise partitioning: The data is divided into n partitions (defined by the num_partitions parameter) based on the number of sequences available.
• num_partitions (int; default: 2 if partition_mode is random): The number of partitions (folds) to split the data into. This field is ignored when the partition mode is set to random, as by default only two partitions are generated: val and train. In sequence mode, the data is split into n folds; the number of partitions should ideally be fewer than the total number of sequences in the kitti_sequence_to_frames file (n = 2 for random partitioning; n < number of sequences in the kitti_sequence_to_frames_file for sequence mode).
• image_extension (str; default: .png; supported values: .png, .jpg, .jpeg): The extension of the images in the image_dir_name directory.
• val_split (float; default: 20; valid range: 0 <= x < 100): The percentage of data to be set aside for validation. This only works under "random" partition mode. This partition is available in fold 0 of the generated TFrecords.
• kitti_sequence_to_frames_file (str): The name of the KITTI sequence-to-frame mapping file. This file must be present within the dataset root specified by root_directory_path.
• num_shards (int; default: 10; valid range: 1-20): The number of shards per fold.

The sample configuration file shown below assigns 100% of the dataset to training:

kitti_config {
  root_directory_path: "/workspace/tao-experiments/data/training"
  image_dir_name: "image_2"
  label_dir_name: "label_2"
  image_extension: ".png"
  partition_mode: "random"
  num_partitions: 2
  val_split: 0
  num_shards: 10
}
image_directory_path: "/workspace/tao-experiments/data/training"
target_class_mapping { key: "car" value: "car" }
target_class_mapping { key: "pedestrian" value: "pedestrian" }
target_class_mapping { key: "cyclist" value: "cyclist" }
target_class_mapping { key: "van" value: "car" }
target_class_mapping { key: "person_sitting" value: "pedestrian" }

### Sample Usage of the Dataset Converter Tool

The dataset_convert tool is invoked as follows:

tao ssd dataset_convert [-h] -d DATASET_EXPORT_SPEC -o OUTPUT_FILENAME [-v]

You can use the following arguments:

• -h, --help: Show this help message and exit
• -d, --dataset-export-spec: The path to the detection dataset spec containing the config for exporting .tfrecord files
• -o, --output_filename: The output filename
• -v: Enable verbose mode to show debug messages

The following example shows how to use the command with the dataset:

tao ssd dataset_convert -d /path/to/spec.txt -o /path/to/tfrecords/train

## Creating a Configuration File

Below is a sample SSD spec file. It has six major components: ssd_config, training_config, eval_config, nms_config, augmentation_config, and dataset_config. The format of the spec file is a protobuf text (prototxt) message, and each of its fields can be either a basic data type or a nested message.

random_seed: 42
ssd_config {
  aspect_ratios: "[[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]]"
  scales: "[0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05]"
  two_boxes_for_ar1: true
  clip_boxes: false
  variances: "[0.1, 0.1, 0.2, 0.2]"
  arch: "resnet"
  nlayers: 18
  freeze_bn: false
  freeze_blocks: 0
}
training_config {
  batch_size_per_gpu: 16
  num_epochs: 80
  enable_qat: false
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-5
      max_learning_rate: 2e-2
      soft_start: 0.15
      annealing: 0.8
    }
  }
  regularizer {
    type: L1
    weight: 3e-5
  }
}
eval_config {
  validation_period_during_training: 10
  average_precision_mode: SAMPLE
  batch_size: 16
  matching_iou_threshold: 0.5
}
nms_config {
  confidence_threshold: 0.01
  clustering_iou_threshold: 0.6
  top_k: 200
}
augmentation_config {
  output_width: 300
  output_height: 300
  output_channel: 3
  image_mean { key: 'b' value: 103.9 }
  image_mean { key: 'g' value: 116.8 }
  image_mean { key: 'r' value: 123.7 }
}
dataset_config {
  data_sources: {
    # option 1
    tfrecords_path: "/path/to/train/tfrecord"
    # option 2
    # label_directory_path: "/path/to/train/labels"
    # image_directory_path: "/path/to/train/images"
  }
  include_difficult_in_training: true
  target_class_mapping { key: "car" value: "car" }
  target_class_mapping { key: "pedestrian" value: "pedestrian" }
  target_class_mapping { key: "cyclist" value: "cyclist" }
  target_class_mapping { key: "van" value: "car" }
  target_class_mapping { key: "person_sitting" value: "pedestrian" }
  validation_data_sources: {
    label_directory_path: "/path/to/val/labels"
    image_directory_path: "/path/to/val/images"
  }
}

The top-level structure of the spec file is summarized in the sections below.
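A note on the scales field in the sample spec above: the SSD config section below explains that, instead of listing scales explicitly, min_scale and max_scale can be supplied and the scales are then generated by evenly splitting that interval. A minimal Python sketch of that idea — the linear even split and the helper name are illustrative assumptions based on the wording of the doc, not TAO's verified internals:

# Illustrative only: even split of [min_scale, max_scale] into
# num_layers + 1 anchor scales, as described for ssd_config.
def generate_scales(min_scale, max_scale, num_layers):
    n = num_layers + 1  # one more scale than predictor layers
    return [min_scale + i * (max_scale - min_scale) / (n - 1) for i in range(n)]

print(generate_scales(0.1, 0.85, 6))
# [0.1, 0.225, 0.35, 0.475, 0.6, 0.725, 0.85]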
### Training Config

The training configuration (training_config) defines the parameters needed for training, evaluation, and inference:

• batch_size_per_gpu (unsigned int, positive): The batch size for each GPU; the effective batch size is batch_size_per_gpu * num_gpus.
• num_epochs (unsigned int, positive): The number of epochs to train the network.
• enable_qat (boolean): Whether to use quantization-aware training. Note: SSD does not support loading a pruned non-QAT model and retraining it with QAT enabled, or vice versa. For example, to get a pruned QAT model, perform the initial training with QAT enabled (enable_qat=True).
• learning_rate (message): Only soft_start_annealing_schedule is supported, with these nested parameters:
  - min_learning_rate: The minimum learning rate during the entire experiment
  - max_learning_rate: The maximum learning rate during the entire experiment
  - soft_start: The time to lapse before warm-up (expressed as a fraction of progress between 0 and 1)
  - annealing: The time to start annealing the learning rate
• regularizer (message; recommended: L1): Configures the regularizer used while training, with the following nested parameters:
  - type: The type of regularizer to use. NVIDIA supports NO_REG, L1, and L2.
  - weight: The floating-point value of the regularizer weight.
  Note: NVIDIA suggests using the L1 regularizer when training a network before pruning, as L1 regularization helps make the network weights more prunable.
• max_queue_size (unsigned int, positive): The number of prefetch batches in data loading.
• n_workers (unsigned int, positive): The number of workers for data loading (set to less than 4 when using tfrecords).
• use_multiprocessing (boolean): Whether to use the multiprocessing mode of the Keras sequence data loader.

Note: The learning rate is automatically scaled with the number of GPUs used during training; the effective learning rate is learning_rate * n_gpu.

### Evaluation Config

The evaluation configuration (eval_config) defines the parameters needed for evaluation, either during training or as a standalone procedure:

• validation_period_during_training (unsigned int, positive; typical: 10): The number of training epochs per validation.
• average_precision_mode (enum: SAMPLE or INTEGRATE; typical: SAMPLE): The Average Precision (AP) calculation mode. SAMPLE is used as the VOC metric for VOC 2009 and earlier; INTEGRATE is used for VOC 2010 and later.
• matching_iou_threshold (float; typical: 0.5): The lowest IoU between a predicted box and a ground-truth box that can be considered a match.

### NMS Config

The NMS configuration (nms_config) defines the parameters needed for NMS post-processing. The NMS configuration applies to the NMS layer of the model in training, validation, evaluation, inference, and export:

• confidence_threshold (float; typical: 0.01): Boxes with a confidence score less than confidence_threshold are discarded before applying NMS.
• clustering_iou_threshold (float; typical: 0.6): The IoU threshold below which boxes will go through the NMS process.
• top_k (unsigned int; typical: 200): top_k boxes will be output after the NMS keras layer. If the number of valid boxes is less than k, the returned array will be padded with boxes whose confidence score is 0.
• infer_nms_score_bits (int in the interval [1, 10]; default: 0): The number of bits used to represent the score values in the NMS plugin in TensorRT OSS. The valid range is integers in [1, 10]; setting any other value makes it fall back to ordinary NMS. Currently this optimized NMS plugin is only available in FP16, but it should also be selected by the INT8 data type, as there is no INT8 NMS in TensorRT OSS and hence this fastest FP16 implementation will be selected. If falling back to ordinary NMS, the actual data type when building the engine decides the exact precision (FP16 or FP32) to run at.

### Augmentation Config

The augmentation_config parameter defines the image size after preprocessing. The augmentation methods in the SSD paper are performed during training, including random flip, zoom-in, zoom-out, and color jittering, and the augmented images are resized to the output shape defined in augmentation_config. During evaluation, only the resize is performed.

Note: The details of the augmentation methods can be found in sections 2.2 and 3.6 of the paper.

• output_channel (integer): The output image channel of the augmentation pipeline.
• output_width (integer, multiple of 32): The width of the preprocessed images and the network input.
• output_height (integer, multiple of 32): The height of the preprocessed images and the network input.
• random_crop_min_scale (float; default: 0.3): Minimum patch scale of the RandomCrop augmentation.
• random_crop_max_scale (float; default: 1.0): Maximum patch scale of the RandomCrop augmentation.
• random_crop_min_ar (float > 0; default: 0.5): Minimum aspect ratio of the RandomCrop augmentation.
• random_crop_max_ar (float > 0; default: 2.0): Maximum aspect ratio of the RandomCrop augmentation.
• zoom_out_min_scale (float >= 1.0; default: 1.0): Minimum scale of the ZoomOut augmentation.
• zoom_out_max_scale (float >= 1.0; default: 4.0): Maximum scale of the ZoomOut augmentation.
• brightness (integer >= 0; default: 32): Brightness delta in the color jittering augmentation.
• contrast (float in [0, 1); default: 0.5): Contrast delta factor in the color jittering augmentation.
• saturation (float in [0, 1); default: 0.5): Saturation delta factor in the color jittering augmentation.
• hue (integer >= 0; default: 18): Hue delta in the color jittering augmentation.
• random_flip (float in [0, 1); default: 0.5): Probability of performing a random horizontal flip.
• image_mean (dict): A key/value pair to specify image mean values. If omitted, the ImageNet mean will be used for image preprocessing. If set, depending on output_channel, either 'r/g/b' or 'l' key/value pairs must be configured.

Note: If random_crop_min_scale = random_crop_max_scale = 1.0, the RandomCrop augmentation is disabled. Similarly, if zoom_out_min_scale = zoom_out_max_scale = 1, the ZoomOut augmentation is disabled. If all color jitter delta values are set to 0, the color jittering augmentation is disabled.

### Dataset Config

The dataset_config parameter defines the paths to the training and validation datasets and the target_class_mapping:

• data_sources (message): The path to the training dataset. When using tfrecords for dataset ingestion, set tfrecords_path (the path to the tfrecords). When using raw KITTI labels and images, set label_directory_path (the path to the label directory) and image_directory_path (the path to the image directory).
• include_difficult_in_training (bool; typical: true): Specifies whether to include difficult objects in the labels (the Pascal VOC "difficult" label or KITTI occluded objects).
• validation_data_sources (message): The path to the validation dataset images and labels.
• target_class_mapping (message): A mapping of the classes in the labels to the target classes.

Note: data_sources and validation_data_sources are both repeated fields; multiple datasets can be added as sources.

### SSD Config

The SSD configuration (ssd_config) defines the parameters needed for building the SSD model:

• aspect_ratios_global (string; example: "[1.0, 2.0, 0.5, 3.0, 0.33]"): Anchor boxes of the aspect ratios defined in aspect_ratios_global will be generated for each feature layer used for prediction. Note that either the aspect_ratios_global or the aspect_ratios parameter is required; you don't need to specify both.
• aspect_ratios (string; example: "[[1.0,2.0,0.5], [1.0,2.0,0.5], [1.0,2.0,0.5], [1.0,2.0,0.5], [1.0,2.0,0.5], [1.0, 2.0, 0.5, 3.0, 0.33]]"): The aspect ratios of anchor boxes for the different SSD feature layers. Note: either the aspect_ratios_global or the aspect_ratios parameter is required; you don't need to specify both.
• two_boxes_for_ar1 (boolean; typical: True): If True, two boxes will be generated with an aspect ratio of 1: one with the scale for this layer, and one with a scale that is the geometric mean of the scale for this layer and the scale for the next layer.
• clip_boxes (boolean; typical: False): If true, all corner anchor boxes will be truncated so they are fully inside the feature images.
• scales (string; example: "[0.05, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85]"): A list of positive floats containing scaling factors per convolutional predictor layer. This list must be one element longer than the number of predictor layers so that, if two_boxes_for_ar1 is true, the second aspect-ratio-1.0 box for the last layer can have a proper scale. Except for the last element in this list, each positive float is the scaling factor for boxes in that layer. For example, if for one layer the scale is 0.1, then the generated anchor box with aspect ratio 1 for that layer (the first aspect-ratio-1 box if two_boxes_for_ar1 is set to True) will have its height and width equal to 0.1*min(img_h, img_w).
• min_scale/max_scale (two positive floats): If both appear in the config, the program can automatically generate the scales by evenly splitting the space between min_scale and max_scale.
• variances: A list of 4 positive floats. The four floats, in order, represent the variances for box center x, box center y, log box height, and log box width. The box offsets for the box center (cx, cy) and log box size (height/width) w.r.t. the anchor will be divided by their respective variance values. Therefore, larger variances result in less significant differences between two different boxes' encoded offsets.
• steps (string): An optional list inside quotation marks whose length is the number of feature layers used for prediction. The elements should be floats or tuples/lists of two floats. The steps define how many pixels apart the anchor-box center points should be. If an element is a float, the vertical and horizontal margins are the same; otherwise, the first value is step_vertical and the second value is step_horizontal. If steps are not provided, anchor boxes are distributed uniformly inside the image.
• offsets (string): An optional list of floats inside quotation marks with length equal to the number of feature layers used for prediction. The first anchor box will have a margin of offsets[i]*steps[i] pixels from the left and top borders. If offsets are not provided, 0.5 is used as the default value.
• arch (string; typical: resnet): The backbone used for feature extraction. Currently "resnet", "vgg", "darknet", "googlenet", "mobilenet_v1", "mobilenet_v2", and "squeezenet" are supported.
• nlayers (unsigned int): The number of conv layers in a specific arch. For "resnet", 10, 18, 34, 50, and 101 are supported. For "vgg", 16 and 19 are supported. For "darknet", 19 and 53 are supported. All other networks don't have this configuration, and users should delete this parameter from the config file.
• freeze_bn (boolean; typical: False): Whether to freeze all batch normalization layers during training.
• freeze_blocks (list of repeated integers): The list of block IDs to be frozen in the model during training. You can choose to freeze some of the CNN blocks in the model to make the training more stable and/or easier to converge. The definition of a block is heuristic for a specific architecture (for example, by stride or by logical blocks in the model). However, the block ID numbers identify the blocks in the model in sequential order, so you don't have to know the exact locations of the blocks when you train. As a general principle, the smaller the block ID, the closer it is to the model input; the larger the block ID, the closer it is to the model output. You can divide the whole model into several blocks and optionally freeze a subset of them. Note that for FasterRCNN, you can only freeze the blocks that come before the ROI pooling layer; any layer after the ROI pooling layer will not be frozen in any case. For different backbones, the number of blocks and the valid block IDs differ:
  - ResNet series: any subset of [0, 1, 2, 3] (inclusive)
  - VGG series: any subset of [1, 2, 3, 4, 5] (inclusive)
  - GoogLeNet: any subset of [0, 1, 2, 3, 4, 5, 6, 7] (inclusive)
  - MobileNet V1: any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] (inclusive)
  - MobileNet V2: any subset of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] (inclusive)
  - DarkNet 19 / DarkNet 53: any subset of [0, 1, 2, 3, 4, 5] (inclusive)

## Training the Model

Train the SSD model using this command:

tao ssd train [-h] -e <experiment_spec>
              -r <output_dir>
              -k <key>
              [--gpus <num_gpus>]
              [--gpu_index <gpu_index>]
              [--use_amp]
              [--log_file <log_file>]
              [-m <resume_model_path>]
              [--initial_epoch <initial_epoch>]

### Required Arguments

• -r, --results_dir: The path to the folder where the experiment output is written.
• -k, --key: The encryption key to decrypt the model.
• -e, --experiment_spec_file: The experiment specification file used to set up the training experiment.

### Optional Arguments

• --gpus num_gpus: The number of GPUs to use and processes to launch for training. The default is 1.
• --gpu_index: The GPU indices used to run the training. Use this to specify which GPUs to train on when the machine has multiple GPUs installed.
• --use_amp: A flag to enable AMP training.
• --log_file: The path to the log file. Defaults to stdout.
• -m, --resume_model_weights: The path to a pre-trained model, or a model to continue training from.
• --initial_epoch: The epoch number to resume from.
• --use_multiprocessing: Enable multiprocessing mode in the data generator.
• -h, --help: Show this help message and exit.

### Input Requirement

• Input size: C * W * H (where C = 1 or 3, W >= 128, H >= 128)
• Image format: JPG, JPEG, PNG
• Label format: KITTI detection

### Sample Usage

Here's an example of using the train command on an SSD model:

tao ssd train --gpus 2 -e /path/to/spec.txt -r /path/to/result -k $KEY

## Evaluating the Model

Use the following command to run evaluation for an SSD model:

tao ssd evaluate [-h] -m <model>
                 -e <experiment_spec_file>
                 [-k <key>]
                 [--gpu_index <gpu_index>]
                 [--log_file <log_file>]

### Required Arguments

• -m, --model: The .tlt model or TensorRT engine to be evaluated.
• -e, --experiment_spec_file: The experiment spec file used to set up the evaluation experiment. This should be the same as the training spec file.

### Optional Arguments

• -h, --help: Show this help message and exit.
• -k, --key: The encoding key for the .tlt model.
• --gpu_index: The index of the GPU to run evaluation on (useful when the machine has multiple GPUs installed). Note that evaluation can only run on a single GPU.
• --log_file: The path to the log file. The default path is stdout.

Here is a sample command to evaluate an SSD model:

tao ssd evaluate -m /path/to/trained_tlt_ssd_model -k <model_key> -e /path/to/ssd_spec.txt

## Running Inference on the Model

The inference command for SSD networks can be used to visualize bboxes or to generate frame-by-frame KITTI-format labels on a directory of images. Here's an example of using this tool:

tao ssd inference [-h] -i <input directory>
                  -o <output annotated image directory>
                  -e <experiment spec file>
                  -m <model file>
                  -k <key>
                  [-l <output label directory>]
                  [-t <bbox filter threshold>]
                  [--gpu_index <gpu_index>]
                  [--log_file <log_file>]

### Required Arguments

• -m, --model: The path to the pretrained model (TAO model).
• -i, --in_image_dir: The directory of input images for inference.
• -o, --out_image_dir: The directory path to output annotated images.
• -k, --key: The key to load the model.
• -e, --config_path: The path to an experiment spec file for training.

### Optional Arguments

• -t, --threshold: The threshold for drawing a bbox and dumping a label file (default: 0.3).
• -h, --help: Show this help message and exit.
• -l, --out_label_dir: The directory to output KITTI labels to.
• --gpu_index: The index of the GPU to run inference on (useful when the machine has multiple GPUs installed). Note that inference can only run on a single GPU.
• --log_file: The path to the log file. The default path is stdout.

Here is a sample of using inference with the SSD model:

tao ssd inference -i /path/to/input/images_dir -o /path/to/output/dir -m /path/to/trained_tlt_ssd_model -k <model_key> -e /path/to/ssd_spec.txt

## Pruning the Model

Pruning removes parameters from the model to reduce the model size without compromising the integrity of the model itself. The prune command includes these parameters:

tao ssd prune [-h] -m <pretrained_model>
              -o <output_file>
              -k <key>
              [-n <normalizer>]
              [-eq <equalization_criterion>]
              [-pg <pruning_granularity>]
              [-pth <pruning threshold>]
              [-nf <min_num_filters>]
              [-el <excluded_list>]
              [--gpu_index <gpu_index>]
              [--log_file <log_file>]

### Required Arguments

• -m, --pretrained_model: The path to the pretrained model.
• -o, --output_file: The path to output checkpoints to.
• -k, --key: The key to load a .tlt model.

### Optional Arguments

• -h, --help: Show this help message and exit.
• -n, --normalizer: max to normalize by dividing each norm by the maximum norm within a layer; L2 to normalize by dividing by the L2 norm of the vector comprising all kernel norms (default: max).
• -eq, --equalization_criterion: The criterion used to equalize the stats of inputs to an element-wise op layer or a depth-wise convolutional layer. This parameter is useful for resnets and mobilenets. Options are arithmetic_mean, geometric_mean, union, and intersection (default: union).
• -pg, --pruning_granularity: The number of filters to remove at a time (default: 8).
• -pth: The threshold to compare the normalized norm against (default: 0.1).
• -nf, --min_num_filters: The minimum number of filters to keep per layer (default: 16).
• -el, --excluded_layers: A list of excluded layers, e.g. -el item1 item2 (default: []).
• --gpu_index: The index of the GPU to run pruning on (useful when the machine has multiple GPUs installed). Note that pruning can only run on a single GPU.
• --log_file: The path to the log file. Defaults to stdout.

Here's an example of using the prune command:

tao ssd prune -m /workspace/output/weights/resnet_003.tlt \
              -o /workspace/output/weights/resnet_003_pruned.tlt \
              -eq union \
              -pth 0.7 -k $KEY

After pruning, the model needs to be retrained. See Re-training the Pruned Model for more details.

## Re-training the Pruned Model

Once the model has been pruned, there might be a slight decrease in accuracy. This happens because some previously useful weights may have been removed. To regain accuracy, NVIDIA recommends that you retrain this pruned model over the same dataset. To do this, use the tao ssd train command with an updated spec file that points to the newly pruned model as the pretrained model file.

Users are advised to turn off the regularizer in the training_config for SSD to recover accuracy when retraining a pruned model. You may do this by setting the regularizer type to NO_REG, as mentioned here. All the other parameters may be retained in the spec file from the previous training.

Note: SSD does not support loading a pruned non-QAT model and retraining it with QAT enabled, or vice versa. For example, to get a pruned QAT model, perform the initial training with QAT enabled (enable_qat=True).

## Exporting the Model

The TAO Toolkit includes the export command to export and prepare TAO models for Deploying to DeepStream. The export command optionally generates the calibration cache for TensorRT INT8 engine calibration.

Exporting the model decouples the training process from inference and allows conversion to TensorRT engines outside the TAO environment.
TensorRT engines are specific to each hardware configuration and should be generated for each unique inference environment. This may be interchangeably referred to as the .trt or .engine file. The same exported TAO model may be used universally across training and deployment hardware; this is referred to as the .etlt file, or encrypted TAO file. During model export, the TAO model is encrypted with a private key. This key is required when you deploy the model for inference.

### INT8 Mode Overview

TensorRT engines can be generated in INT8 mode to improve performance, but they require a calibration cache at engine creation time. The calibration cache is generated using a calibration tensor file if export is run with the --data_type flag set to int8. Pre-generating the calibration information and caching it removes the need for calibrating the model on the inference machine. Moving the calibration cache is usually much more convenient than moving the calibration tensorfile, since it is a much smaller file and can be moved with the exported model. Using the calibration cache also speeds up engine creation, as building the cache can take several minutes depending on the size of the tensorfile and the model itself.

The export tool can generate an INT8 calibration cache by ingesting training data using the following method:

• Pointing the tool to a directory of images that you want to use to calibrate the model. For this option, make sure to create a sub-sampled directory of random images that best represent your training dataset.

### FP16/FP32 Model

The calibration.bin is only required if you need to run inference at INT8 precision. For FP16/FP32-based inference, the export step is much simpler: all you need to do is provide a model from the train step to export to convert it into an encrypted TAO model.

### Exporting command

Use the following command to export an SSD model:

tao ssd export [-h] -m <path to the .tlt model file generated by tao train>
               -k <key>
               -e <path to experiment spec file>
               [-o <path to output file>]
               [--cal_data_file <path to tensor file>]
               [--cal_image_dir <path to the directory of images to calibrate the model>]
               [--cal_cache_file <path to output calibration file>]
               [--data_type <data type for the TensorRT backend during export>]
               [--batches <number of batches to calibrate over>]
               [--max_batch_size <maximum trt batch size>]
               [--max_workspace_size <maximum workspace size>]
               [--batch_size <batch size to TensorRT engine>]
               [--engine_file <path to the TensorRT engine file>]
               [--strict_type_constraints]
               [--force_ptq]
               [--gen_ds_config]
               [--gpu_index <gpu_index>]
               [--log_file <log_file_path>]
               [--verbose]

#### Required Arguments

• -m, --model: The path to the .tlt model file to be exported using export.
• -k, --key: The key used to save the .tlt model file.
• -e, --experiment_spec: The path to the spec file.

#### Optional Arguments

• -o, --output_file: The path to save the exported model to. The default path is ./<input_file>.etlt.
• --data_type: The desired engine data type. The options are "fp32", "fp16", and "int8". The default value is "fp32". If int8 is used, the INT8 arguments below are required.
• -s, --strict_type_constraints: A Boolean flag indicating whether to apply the TensorRT strict_type_constraints when building the TensorRT engine. Note that this is only for applying the strict type of INT8 mode.
• --gen_ds_config: A Boolean flag indicating whether to generate the template DeepStream-related configuration ("nvinfer_config.txt") as well as a label file ("labels.txt") in the same directory as the output_file. Note that the config file is NOT a complete configuration file and requires the user to update the sample config files in DeepStream with the parameters generated.
• --gpu_index: The index of the (discrete) GPU used for exporting the model. You can specify the GPU index to run export on if the machine has multiple GPUs installed. Note that export can only run on a single GPU.
• --log_file: The path to the log file. Defaults to stdout.

### INT8 Export Mode Required Arguments

• --cal_data_file: The tensorfile generated from tlt-int8-tensorfile for calibrating the engine. This can also be an output file if used with --cal_image_dir.
• --cal_image_dir: The directory of images to use for calibration.

Note: The --cal_image_dir parameter applies the necessary preprocessing to generate a tensorfile at the path mentioned in the --cal_data_file parameter, which is in turn used for calibration. The number of generated batches in the tensorfile is obtained from the value set to the --batches parameter, and the batch_size is obtained from the value set to the --batch_size parameter. Ensure that the directory mentioned in --cal_image_dir has at least batch_size * batches images in it. The valid image extensions are .jpg, .jpeg, and .png. In this case, the input_dimensions of the calibration tensors are derived from the input layer of the .tlt model.

### INT8 Export Optional Arguments

• --cal_cache_file: The path to save the calibration cache file to. The default value is ./cal.bin.
• --batches: The number of batches to use for calibration and inference testing. The default value is 10.
• --batch_size: The batch size to use for calibration. The default value is 8.
• --max_batch_size: The maximum batch size of the TensorRT engine. The default value is 16.
• --max_workspace_size: The maximum workspace size of the TensorRT engine. The default value is 1073741824 = 1<<30.
• --engine_file: The path to the serialized TensorRT engine file. Note that this file is hardware specific and cannot be generalized across GPUs. Use this argument to quickly test your model accuracy using TensorRT on the host. As the TensorRT engine file is hardware specific, you cannot use this engine file for deployment unless the deployment GPU is identical to the training GPU.
• --force_ptq: A Boolean flag to force post-training quantization on the exported .etlt model.
• --gpu_index: The index of the (discrete) GPU used for exporting the model if the machine has multiple GPUs installed. Note that export can only run on a single GPU.
• --log_file: The path to the log file. The default path is stdout.

Note: When exporting a model that was trained with QAT enabled, the tensor scale factors used to calibrate the activations are peeled out of the model and serialized to a TensorRT-readable cache file defined by the cal_cache_file argument. However, the current version of QAT doesn't natively support DLA INT8 deployment on Jetson. To deploy this model on Jetson with DLA INT8, use the --force_ptq flag to use TensorRT post-training quantization to generate the calibration cache file.

### Exporting a Model

Here's a sample command using the --cal_image_dir option:
tao ssd export -m $USER_EXPERIMENT_DIR/data/ssd/ssd_kitti_retrain_epoch12.tlt \
               -o $USER_EXPERIMENT_DIR/data/ssd/ssd_kitti_retrain.int8.etlt \
               -e $SPECS_DIR/ssd_kitti_retrain_spec.txt \
               --key $KEY \
               --cal_image_dir $USER_EXPERIMENT_DIR/data/KITTI/val/image_2 \
               --data_type int8 \
               --batch_size 8 \
               --batches 10 \
               --cal_data_file $USER_EXPERIMENT_DIR/data/ssd/cal.tensorfile \
               --cal_cache_file $USER_EXPERIMENT_DIR/data/ssd/cal.bin \
               --engine_file $USER_EXPERIMENT_DIR/data/ssd/detection.trt

## Deploying to DeepStream

The deep learning and computer vision models that you've trained can be deployed on edge devices such as a Jetson Xavier or Jetson Nano, on a discrete GPU, or in the cloud with NVIDIA GPUs. TAO Toolkit has been designed to integrate with DeepStream SDK, so models trained with TAO Toolkit work out of the box with DeepStream SDK.

DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. This section describes how to deploy your trained model to DeepStream SDK.

To deploy a model trained by TAO Toolkit to DeepStream, we have two options:

• Option 1: Integrate the .etlt model directly in the DeepStream app. The model file is generated by export.
• Option 2: Generate a device-specific optimized TensorRT engine using tao-converter. The generated TensorRT engine file can also be ingested by DeepStream.

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration. If the TensorRT or CUDA libraries of the inference environment are updated (including minor version updates), or if a new model is generated, new engines need to be generated. Running an engine that was generated with a different version of TensorRT and CUDA is not supported and will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.

Option 1 is very straightforward. The .etlt file and calibration cache are directly used by DeepStream. DeepStream will automatically generate the TensorRT engine file and then run inference. TensorRT engine generation can take some time depending on the size of the model and the type of hardware.

Engine generation can be done ahead of time with Option 2. With Option 2, the tao-converter is used to convert the .etlt file to TensorRT; this file is then provided directly to DeepStream. See the Exporting the Model section for more details on how to export a TAO model.

### TensorRT Open Source Software (OSS)

A TensorRT OSS build is required for SSD models, because several TensorRT plugins that these models require are only available in the TensorRT open source repo and not in the general TensorRT release. Specifically, SSD needs the batchTilePlugin and NMSPlugin.

If the deployment platform is x86 with an NVIDIA GPU, follow the instructions for x86; if your deployment is on an NVIDIA Jetson platform, follow the instructions for Jetson.

#### TensorRT OSS on x86

Building TensorRT OSS on x86:

1. Install cmake (>= 3.13).

Note: TensorRT OSS requires cmake >= v3.13, so install cmake 3.13 if your cmake version is lower than 3.13.

sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

2. Get the GPU architecture. The GPU_ARCHS value can be retrieved with the deviceQuery CUDA sample:

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery

If /usr/local/cuda/samples doesn't exist on your system, you can download deviceQuery.cpp from this GitHub repo. Compile and run deviceQuery:

nvcc deviceQuery.cpp -o deviceQuery
./deviceQuery

This command will output something like the following, which indicates that GPU_ARCHS is 75 based on the CUDA Capability major/minor version:

Detected 2 CUDA Capable device(s)
Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5

3. Build TensorRT OSS:

git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build

Note: Make sure your GPU_ARCHS from step 2 is in the TensorRT OSS CMakeLists.txt. If GPU_ARCHS is not in the TensorRT OSS CMakeLists.txt, add -DGPU_ARCHS=<VER> as below, where <VER> represents GPU_ARCHS from step 2.

/usr/local/bin/cmake .. -DGPU_ARCHS=xy -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace the original libnvinfer_plugin.so*:

sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak   // backup original libnvinfer_plugin.so.x.y
sudo cp $TRT_SOURCE/`pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig

#### TensorRT OSS on Jetson (ARM64)

1. Install cmake (>= 3.13).

Note: TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2. Upgrade cmake using:

sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

2. Get the GPU architecture based on your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table.

| Jetson Platform | GPU_ARCHS |
| --- | --- |
| Nano/Tx1 | 53 |
| Tx2 | 62 |
| AGX Xavier/Xavier NX | 72 |

3. Build TensorRT OSS:

git clone -b 21.03 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build

Note: The -DGPU_ARCHS=72 below is for Xavier or NX; for other Jetson platforms, change 72 to the GPU_ARCHS value from step 2.

/usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

After the build completes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.

4. Replace "libnvinfer_plugin.so*" with the newly generated library:

sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak   // backup original libnvinfer_plugin.so.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig

### Generating an Engine Using tao-converter

#### Instructions for x86

1. Export the following environment variables:

$ export TRT_LIB_PATH="/usr/lib/x86_64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/x86_64-linux-gnu"

2. Run the tao-converter using the sample command below and generate the engine.
3. Instructions to build TensorRT OSS on x86 can be found in the TensorRT OSS on x86 section above or in this GitHub repo.

Note: Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.

#### Instructions for Jetson

For the Jetson platform, the tao-converter is available for download in the NVIDIA developer zone. You may choose the version you wish to download as listed in the overview section.

Once the tao-converter is downloaded, follow the instructions below to generate a TensorRT engine:

1. Unzip the zip file on the target machine.
2. Install the OpenSSL package using the command:

sudo apt-get install libssl-dev

3. Export the following environment variables:

$ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

4. For Jetson devices, TensorRT 7.1 comes pre-installed with JetPack. If you are using an older JetPack, upgrade to JetPack 4.4 or JetPack 4.5.
5. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.
6. Run the tao-converter using the sample command below and generate the engine.

Note: Make sure to follow the output node names as mentioned in the Exporting the Model section of the respective model.

#### Using the tao-converter

tao-converter [-h] -k <encryption_key>
              -d <input_dimensions>
              -o <comma separated output nodes>
              [-c <path to calibration cache file>]
              [-e <path to output engine>]
              [-b <calibration batch size>]
              [-m <maximum batch size of the TRT engine>]
              [-t <engine datatype>]
              [-w <maximum workspace size of the TRT engine>]
              [-i <input dimension ordering>]
              [-p <optimization_profiles>]
              [-s]
              [-u <DLA_core>]
              input_file

##### Required Arguments

• input_file: The path to the .etlt model exported using tao ssd export.
• -k: The key used to encode the .tlt model when doing the training.
• -d: A comma-separated list of input dimensions that should match the dimensions used for tao ssd export.
• -o: A comma-separated list of output blob names that should match the output configuration used for tao ssd export. For SSD, set this argument to NMS.

##### Optional Arguments

• -e: The path to save the engine to (default: ./saved.engine).
• -t: The desired engine data type; generates a calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.
• -w: The maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).
• -i: The input dimension ordering; all other TAO commands use NCHW. The default value is nchw. The options are {nchw, nhwc, nc}. For SSD, it can be omitted (defaults to nchw).
• -p: Optimization profiles for .etlt models with dynamic shape. This is a comma-separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format <n>x<c>x<h>x<w>. It can be specified multiple times if there are multiple input tensors for the model. This is only useful for new models introduced in TAO Toolkit 3.21.08; it is not required for models that already existed in version 2.0.
• -s: TensorRT strict type constraints. A Boolean to apply TensorRT strict type constraints when building the TensorRT engine.
• -u: Use DLA core. Specifies the DLA core index when building the TensorRT engine on Jetson devices.

##### INT8 Mode Arguments

• -c: The path to the calibration cache file; only used in INT8 mode. The default value is ./cal.bin.
• -b: The batch size used during the export step for INT8 calibration cache generation (default: 8).
• -m: The maximum batch size for the TensorRT engine (default: 16). If you run into an out-of-memory issue, decrease the batch size accordingly. This parameter is not required for .etlt models generated with dynamic shape (which is only possible for new models introduced in TAO Toolkit 3.21.08).

##### Sample Output Log

Here is a sample log for exporting an SSD model:
tao-converter -k$KEY \ -d 3,384,1248 \ -o NMS \ -e /export/trt.fp16.engine \ -t fp16 \ -i nchw \ -m 1 \ /ws/ssd_resnet18_epoch_100.etlt .. [INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output. [INFO] Detected 1 inputs and 2 output network tensors. ### Integrating the model to DeepStream There are 2 options to integrate models from TAO with DeepStream: • Option 1: Integrate the model (.etlt) with the encrypted key directly in the DeepStream app. The model file is generated by tao ssd export. • Option 2: Generate a device specific optimized TensorRT engine, using tao-converter. The TensorRT engine file can also be ingested by DeepStream. For SSD, we will need to build TensorRT Open source plugins and custom bounding box parser. The instructions are provided below in the TensorRT OSS section above and the required code can be found in this GitHub repo. In order to integrate the models with DeepStream, you need the following: 1. Download and install DeepStream SDK. The installation instructions for DeepStream are provided in the DeepStream Development Guide. 2. An exported .etlt model file and optional calibration cache for INT8 precision. 3. A labels.txt file containing the labels for classes in the order in which the networks produces outputs. 4. A sample config_infer_*.txt file to configure the nvinfer element in DeepStream. The nvinfer element handles everything related to TensorRT optimization and engine creation in DeepStream. DeepStream SDK ships with an end-to-end reference application which is fully configurable. Users can configure input sources, inference model and output sinks. The app requires a primary object detection model, followed by an optional secondary classification model. The reference application is installed as deepstream-app. The graphic below shows the architecture of the reference application. There are typically 2 or more configuration files that are used with this app. In the install directory, the config files are located in samples/configs/deepstream-app or sample/configs/tlt_pretrained_models. The main config file configures all the high level parameters in the pipeline above. This would set input source and resolution, number of inferences, tracker and output sinks. The other supporting config files are for each individual inference engine. The inference specific config files are used to specify models, inference resolution, batch size, number of classes and other customization. The main config file will call all the supporting config files. Here are some config files in samples/configs/deepstream-app for your reference. • source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt: Main config file • config_infer_primary.txt: Supporting config file for primary detector in the pipeline above • config_infer_secondary_*.txt: Supporting config file for secondary classifier in the pipeline above The deepstream-app will only work with the main config file. This file will most likely remain the same for all models and can be used directly from the DeepStream SDK will little to no change. User will only have to modify or create config_infer_primary.txt and config_infer_secondary_*.txt. #### Integrating an SSD Model To run a SSD model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the TensorRT 7+ Open source software and SSD bounding box parser for DeepStream. 
A DeepStream sample with documentation on how to run inference using the trained SSD models from TAO Toolkit is provided on GitHub here. ##### Prerequisite for SSD Model 1. SSD requires batchTilePlugin and NMS_TRT. This plugin is available in the TensorRT open source repo, but not in TensorRT 7.0. Detailed instructions to build TensorRT OSS can be found in TensorRT Open Source Software (OSS). 2. SSD requires custom bounding box parsers that are not built-in inside the DeepStream SDK. The source code to build custom bounding box parsers for SSD is available here. The following instructions can be used to build bounding box parser: Step1: Install git-lfs (git >= 1.8.2) Copy Copied! curl -s https://packagecloud.io/install/repositories/github/git-lfs/ script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install Copy Copied! git clone -b release/tlt3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps Step 3: Build Copy Copied! // or Path for DS installation export CUDA_VER=10.2 // CUDA version, e.g. 10.2 make This generates libnvds_infercustomparser_tlt.so in the directory post_processor. ### Label File The label file is a text file containing the names of the classes that the SSD model is trained to detect. The order in which the classes are listed here must match the order in which the model predicts the output. During the training, TAO SSD will specify all class names in lower case and sort them in alphabetical order. For example, if the dataset_config is: Copy Copied! dataset_config { data_sources: { label_directory_path: "/workspace/tao-experiments/data/training/label_2" image_directory_path: "/workspace/tao-experiments/data/training/image_2" } target_class_mapping { key: "car" value: "car" } target_class_mapping { key: "person" value: "person" } target_class_mapping { key: "bicycle" value: "bicycle" } validation_data_sources: { label_directory_path: "/workspace/tao-experiments/data/val/label" image_directory_path: "/workspace/tao-experiments/data/val/image" } } Then the corresponding ssd_labels.txt file would be: Copy Copied! background bicycle car person ### DeepStream Configuration File The detection model is typically used as a primary inference engine. It can also be used as a secondary inference engine. To run this model in the sample deepstream-app, you must modify the existing config_infer_primary.txt file to point to this model. Option 1: Integrate the model (.etlt) directly in the DeepStream app. For this option, users will need to add the following parameters in the configuration file. The int8-calib-file is only required for INT8 precision. Copy Copied! tlt-encoded-model=<TLT exported .etlt> tlt-model-key=<Model export key> int8-calib-file=<Calibration cache file> The tlt-encoded-model parameter points to the exported model (.etlt) from TLT. The tlt-model-key is the encryption key used during model export. Option 2: Integrate TensorRT engine file with DeepStream app. Step 1: Generate TensorRT engine using tao-converter. Detailed instructions are provided in the Generating an engine using tao-converter section above. Step 2: Once the engine file is generated successfully, modify the following parameters to use this engine with DeepStream. Copy Copied! model-engine-file=<PATH to generated TensorRT engine> All other parameters are common between the two approaches. To use the custom bounding box parser instead of the default parsers in DeepStream, modify the following parameters in [property] section of primary infer configuration file: Copy Copied! 
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<PATH to libnvds_infercustomparser_tlt.so>

Add the label file generated above using:

labelfile-path=<ssd labels>

For all the options, see the sample configuration file below. To learn what all the parameters are used for, refer to the DeepStream Development Guide.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=<Path to ssd_labels.txt>
tlt-encoded-model=<Path to SSD etlt model>
tlt-model-key=<Key to decrypt model>
infer-dims=3;384;1248
uff-input-order=0
maintain-aspect-ratio=1
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<Path to libnvds_infercustomparser_tlt.so>

[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
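With the label file and configuration file in place, inference can be run through the reference application. As a minimal, illustrative invocation (not from the original page; the main config file must first be edited so that its primary GIE points at the nvinfer config above):

deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt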
2023-03-29 21:45:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29591473937034607, "perplexity": 6205.1394769234175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00370.warc.gz"}
https://www.prexam.com/ExamPractice.php?question=An+employer+recruits+experienced+%28x%29+and+fresh+%28y%29++workmen+for+his+firm.+If+he+cannot+afford+to+employ+more+than+9+people%2C+then
# Practice Test

Q. An employer recruits experienced (x) and fresh (y) workmen for his firm. If he cannot afford to employ more than 9 people, then:

A)
B) $x+y\le 9$
C) $x+y\ge 9$
D) none of these

Correct option is B

Explanation: "not more than" $⇒$ "less than or equal to". Therefore, the required condition is $x+y\le 9$.

SIMILAR QUESTIONS

Q. A solution of the inequality 3x - 2y > 3 is
Q. If x > 2, y > -1, then which of the following holds good?
Q. If x and y are integers, then the equation 5x + 19y = 64 has
2022-06-25 11:23:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6051170229911804, "perplexity": 1736.8832767231272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00047.warc.gz"}
https://answers.gazebosim.org/answers/8056/revisions/
# Revision history [back]

The key to having realistic simulations is to provide a correct <inertial> ... </inertial> section. A proper section should look like this:

<inertial>
  <inertia> <!-- inertias are tricky to compute -->
    <ixx>0.000033719</ixx> <!-- for a box: ixx = 0.083 * mass * (y*y + z*z) -->
    <ixy>0.0</ixy>         <!-- for a box: ixy = 0 -->
    <ixz>0.0</ixz>         <!-- for a box: ixz = 0 -->
    <iyy>0.001305175</iyy> <!-- for a box: iyy = 0.083 * mass * (x*x + z*z) -->
    <iyz>0.0</iyz>         <!-- for a box: iyz = 0 -->
    <izz>0.001322294</izz> <!-- for a box: izz = 0.083 * mass * (x*x + y*y) -->
  </inertia>
</inertial>

Steps to follow:

0- Use Blender to automatically set the center of mass and arrange the axis of the object at that center (this will make later steps easier)
1- Use the tutorial chapulina has suggested to calculate the inertial parameters and set the model. Don't forget to set a correct mass value (the one you have now seems unrealistic)
2- Use Gazebo's option View->Center of mass/Inertia to observe if your inertias are correct (see figures attached for a wrong one and a correct one, for a spoon)
3- If they do not look correct, iterate

INCORRECT INERTIAS

CORRECT INERTIAS
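As a quick helper (mine, not from the original answer), the box formulas quoted in the comments above can be computed directly; 0.083 there is just a rounded 1/12:

import math

def box_inertia(mass, x, y, z):
    """Return (ixx, iyy, izz) for a solid box of dimensions x, y, z (meters)
    and the given mass (kg), about axes through its center of mass."""
    ixx = mass * (y * y + z * z) / 12.0
    iyy = mass * (x * x + z * z) / 12.0
    izz = mass * (x * x + y * y) / 12.0
    return ixx, iyy, izz

# Example with placeholder dimensions; substitute your model's values.
print(box_inertia(0.1, 0.25, 0.03, 0.02))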
2020-08-14 08:38:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6301690340042114, "perplexity": 8741.905680412263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00454.warc.gz"}
http://www.cs.umd.edu/projects/softchat/20060410.html
Test designers widely believe that the overall effectiveness and cost of software testing depends largely on the type and number of test cases executed on the software. This paper shows that the "test oracle", a mechanism that determines whether the software executed correctly for a test case, also significantly impacts the fault-detection effectiveness and cost of a test case. Graphical user interfaces (GUIs), which have become ubiquitous for interacting with today's software, have created new challenges for test oracle development. Test designers manually "assert" the expected values of specific properties of certain GUI widgets in each test case; during test execution, these assertions are used as test oracles to determine whether the GUI executed correctly. Since a test case for a GUI is a sequence of events, a test designer must decide (1) what to assert, and (2) how frequently to check an assertion, e.g., after each event in the test case or after the entire test case has completed execution. Variations of these two factors significantly impact the fault-detection ability and cost of a GUI test case. A technique to declaratively specify different types of automated GUI test oracles is described. Six instances of test oracles are developed and compared in an experiment on four software systems. The results show that test oracles do affect the fault-detection ability of test cases in different and interesting ways: (1) test cases significantly lose their fault-detection ability when using "weak" test oracles, (2) in many cases, invoking a "thorough" oracle at the end of test case execution yields the best cost-benefit ratio, (3) certain test cases detect faults only if the oracle is invoked during a small "window of opportunity" during test execution, and (4) using thorough and frequently-executing test oracles can make up for not having long test cases.
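To make the two invocation frequencies concrete, here is an illustrative sketch (mine, not code from the paper); the gui object with execute(event) and state() methods, and the assertions mapping, are assumptions for illustration:

def run_test_case(events, gui, assertions, check_every_event=True):
    """Execute a sequence of GUI events; return True if all assertions hold.

    'assertions' maps a widget property name to its expected value.
    With check_every_event=True the oracle runs after each event; otherwise
    it runs once, "thoroughly", after the whole test case.
    """
    for event in events:
        gui.execute(event)
        if check_every_event and any(
            gui.state().get(prop) != expected
            for prop, expected in assertions.items()
        ):
            return False  # fault detected mid-execution
    # Oracle invoked at the end of test case execution.
    return all(gui.state().get(p) == v for p, v in assertions.items())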
2019-04-19 10:45:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47628116607666016, "perplexity": 1824.3535698462201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527566.44/warc/CC-MAIN-20190419101239-20190419123239-00537.warc.gz"}
https://physics.stackexchange.com/questions/312935/euler-equation-and-conservation-of-angular-momentum-rigid-body
# Euler equation and conservation of angular momentum (rigid body)

I am a beginner in this field, so let me ask a simple question that confuses me. Please consider the following:

1. Conservation of angular momentum about fixed point $o$: $\dot{H}_o = M$. $M$: the total external torque applied to the body about $o$.
2. Euler equation: $I\dot{\omega}+\omega\times I \omega = M$. $I$: moment of inertia in matrix form (suppose diagonal $I$ for simplicity.)

My question: If there is no external torque ($M=0$), then from 1., we know $\dot{H}_o=0$ and, by $H=I\omega$, we know $\dot{\omega}=0$ (since the body is rigid, $I$ is constant). However, by 2., if $M=0$, then $I\dot{\omega}=-\omega\times I \omega$, so $\dot{\omega}\ne 0$. This confuses me. Where am I wrong?

• The Euler equation is written in the body frame of reference. – Abhijeet Melkani Feb 18 '17 at 20:40
• @A.Melkani It seems to remind me of something. Could you explain it clearly? How does that fact solve my problem? I am very weak at identifying this. – sleeve chen Feb 18 '17 at 20:46

So, in the case of zero torque, the physical angular velocity vector of a rigid body (i.e., as expressed in the space frame of reference) is indeed constant. But $\omega$ in Euler's equations refers to the angular velocity vector expressed in the (moving) body frame of reference. And because the frame of reference is moving, the description of the vector is non-constant.

• So how does this fact influence the results of $\dot{\omega}\neq 0$ and the existence of external torque? – sleeve chen Feb 18 '17 at 20:48
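A small numerical check of the answer's point (my own sketch, not from the thread): integrating Euler's equations for a torque-free rigid body shows the body-frame omega changing while the space-frame angular momentum stays (approximately, up to first-order integration error) fixed.

import numpy as np

I = np.diag([1.0, 2.0, 3.0])        # principal moments of inertia
I_inv = np.linalg.inv(I)
omega = np.array([0.5, 1.0, 0.2])   # body-frame angular velocity
R = np.eye(3)                        # body-to-space rotation matrix
dt, steps = 1e-4, 50000

for _ in range(steps):
    # Euler's equations in the body frame: I*domega/dt = -omega x (I*omega)
    omega = omega + dt * (I_inv @ (-np.cross(omega, I @ omega)))
    # Orientation update: dR/dt = R * [omega]_x (skew-symmetric matrix)
    W = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    R = R + dt * (R @ W)

print("body-frame omega has changed:", omega)
print("space-frame H (should stay near [0.5, 2.0, 0.6]):", R @ (I @ omega))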
2019-08-20 01:45:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132802486419678, "perplexity": 473.32653793172835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00099.warc.gz"}
https://www.physicsforums.com/threads/do-you-want-to-be-a-lefty.40512/
# Do you want to be a LEFTY

1. Aug 25, 2004

### difference

Do you want to be a LEFTY!!

hi, boys and girls. as we sit down in front of the computer day after day, do you feel pain in your wrist, arm and shoulder? if you can't endure it, there is a simple and effective way: just be a LEFTY, use your other hand. additionally, as we all know, it can exercise our right brain. do you want to have a try, or tell me what you are anxious about?

2. Aug 25, 2004

### ArmoSkater87

umm...what does this have to do with general physics?

3. Aug 25, 2004

### Miles

Unless this was moved I would say it's in the right forum. It would also be nice if I could use both my hands and be ambasomething or other, have no idea how to spell that word.

4. Aug 25, 2004

### Moonbear

Staff Emeritus

Ambidextrous is the word you're looking for. And that's far better than just switching from right to left handedness (or vice versa). I've taught myself to do a number of things left-handed...still not as steady as with my right hand, but enough to function if I ever broke my right arm or something like that. This probably should earn me a few extra points on that geek quiz posted around here somewhere!

5. Aug 25, 2004

### jimmy p

I used to be ambidextrous but got sick of being half backward so I switched to my proper RIGHT arm.

6. Aug 25, 2004

### Gza

I don't think i'll ask. :rofl:

7. Aug 26, 2004

### Gokul43201

Staff Emeritus

I too have gotten quite good at doing some things with my left hand: I can count, point, wave, scratch my right hand, rotate my upper arm to make my watch look in the general direction of my eyes, press CTRL + ALT, press ALT + TAB and a host of other complex activities - all with my left hand. Nifty, wot?

8. Aug 26, 2004

### recon

My father constantly reminds me to use my left hand for controlling the mouse. I'm not going to ignore him completely - he's got Carpal Tunnel Syndrome (CTS) from more than 20 years of using his computer at work. But, hey, I'm in the prime of my youth and I love Minesweeper, and if you use your left hand while playing the game, you just CAN'T WIN!!! (this is only if you're a right-hander like me.)

9. Aug 26, 2004

### recon

By the way, does using one's left hand really make one smarter?

10. Aug 26, 2004

### Galileo

I'm like a semi-quasi-pseudo lefty. I write with my left hand, but my strongest arm is my right. I throw/kick objects with my right limbs. When I walk next to my bicycle, I walk on the right side (so the stand is always on the other side). I hold my telephone to my left ear. In general, things that require a certain precision I do left-handed. Things that require strength are done right-handed. It can be a pain when I need both; you should see me writing on the blackboard, it's a disgrace. (and I'm giving student assistant lectures in the coming semester, this'll be fun).

I think statistical research has been done on this. It seems that people who are left-handed are generally more creative than people who are right-handed. Don't know the details though.

11. Aug 26, 2004

### Chi Meson

I think that we should ask Aron Ralston what it is like to become a lefty. (Do a google of that name. You'll know what I mean)
2017-03-26 13:42:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33563753962516785, "perplexity": 2668.632571869818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189239.16/warc/CC-MAIN-20170322212949-00151-ip-10-233-31-227.ec2.internal.warc.gz"}
http://scistatcalc.blogspot.com/2013/10/cauchy-distribution-cdf-and-quantile.html
## Sunday, 27 October 2013

### Cauchy Distribution CDF and Quantile Calculator

An implementation of the Cauchy Distribution CDF and Quantile function Calculator appears below. The Cauchy distribution is also known as the Cauchy-Lorentz distribution, and for a real location parameter $-\infty < x_0 <\infty$ and scale parameter $\gamma>0$ its density is:

$\Large \frac{1}{\pi\gamma[1+(\frac{x-x_0}{\gamma})^2]}$

where the variable $-\infty < x <\infty$ is a real number.

The $x_0$ and $\gamma$ parameter fields have to be filled in, as well as two out of the three fields labelled Lower Limit, Upper Limit and Probability. The lower limit field needs to contain either a real number or the string -inf (for minus infinity). The upper limit field needs to contain either a real number or the string inf (for plus infinity). The probability field must contain a number only.

$x_0$: $\gamma$: Lower limit: Upper limit: Probability:

(Plot of distribution $f(x)$ values against $x$ values.)
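A minimal stand-alone sketch (not the calculator's actual source) of the Cauchy CDF and its inverse, which is all the calculator needs to relate limits and probabilities:

import math

def cauchy_cdf(x, x0=0.0, gamma=1.0):
    """P(X <= x) for a Cauchy(x0, gamma) random variable."""
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

def cauchy_quantile(p, x0=0.0, gamma=1.0):
    """Inverse CDF: the x with P(X <= x) = p, for 0 < p < 1."""
    return x0 + gamma * math.tan(math.pi * (p - 0.5))

# Probability between lower and upper limits, then a round trip.
lo, hi = -1.0, 1.0
print(cauchy_cdf(hi) - cauchy_cdf(lo))   # 0.5 for the standard Cauchy
print(cauchy_quantile(0.75))             # ~1.0, since cauchy_cdf(1.0) = 0.75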
2018-12-10 09:44:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6122357249259949, "perplexity": 627.3343735118401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823320.11/warc/CC-MAIN-20181210080704-20181210102204-00583.warc.gz"}
https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.Window.rowsBetween.html
# pyspark.sql.Window.rowsBetween

static Window.rowsBetween(start, end)[source]

Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).

Both start and end are relative positions from the current row. For example, "0" means "current row", while "-1" means the row before the current row, and "5" means the fifth row after the current row. We recommend users use Window.unboundedPreceding, Window.unboundedFollowing, and Window.currentRow to specify special boundary values, rather than using integral values directly.

A row-based boundary is based on the position of the row within the partition. An offset indicates the number of rows above or below the current row at which the frame for the current row starts or ends. For instance, given a row-based sliding frame with a lower bound offset of -1 and an upper bound offset of +2, the frame for the row with index 5 would range from index 4 to index 7.

New in version 2.1.0.

Parameters

start : int
    boundary start, inclusive. The frame is unbounded if this is Window.unboundedPreceding, or any value less than or equal to -9223372036854775808.
end : int
    boundary end, inclusive. The frame is unbounded if this is Window.unboundedFollowing, or any value greater than or equal to 9223372036854775807.

Examples

>>> from pyspark import SparkContext
>>> from pyspark.sql import Window
>>> from pyspark.sql import functions as func
>>> from pyspark.sql import SQLContext
>>> sc = SparkContext.getOrCreate()
>>> sqlContext = SQLContext(sc)
>>> tup = [(1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b")]
>>> df = sqlContext.createDataFrame(tup, ["id", "category"])
>>> window = Window.partitionBy("category").orderBy("id").rowsBetween(Window.currentRow, 1)
>>> df.withColumn("sum", func.sum("id").over(window)).sort("id", "category", "sum").show()
+---+--------+---+
| id|category|sum|
+---+--------+---+
|  1|       a|  2|
|  1|       a|  3|
|  1|       b|  3|
|  2|       a|  2|
|  2|       b|  5|
|  3|       b|  3|
+---+--------+---+
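As a further illustration (mine, not part of the official docstring) of the special boundary constants recommended above, a running total per partition combines unboundedPreceding with currentRow, reusing the df, Window and func names from the example:

>>> running = Window.partitionBy("category").orderBy("id").rowsBetween(
...     Window.unboundedPreceding, Window.currentRow)
>>> df.withColumn("running_sum", func.sum("id").over(running)).show()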
2022-01-16 18:58:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7288939952850342, "perplexity": 7252.476464605116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00261.warc.gz"}
http://mineralspringscampground.com/jabari-parker-vvrgqj/tangent-of-a-circle-calculator-182dc0
The calculator will generate a step-by-step explanation and a circle graph.

A tangent to a circle is a straight line that touches the circle at exactly one point, called the point of tangency, and at that point the tangent is perpendicular to the radius. There can be only one tangent at a given point of a circle. Lines that intersect the circle in one single point are tangents; if all sides of a polygon are tangent to a circle, the polygon is called circumscribed, and two circles are tangent if they touch each other at exactly one point.

Equations of the tangent to a circle:
1.1. For the circle $x^2+y^2=a^2$ and a line $y = mx + c$, the tangent is $y = mx \pm a\sqrt{1+m^2}$.
1.2. The tangent to $x^2+y^2=a^2$ at $(x_1, y_1)$ is $xx_1+yy_1=a^2$.
1.3. The tangent to $x^2+y^2+2gx+2fy+c=0$ at $(x_1, y_1)$ is $xx_1+yy_1+g(x+x_1)+f(y+y_1)+c=0$.
1.4. The tangent to $x^2+y^2=a^2$ at $(a\cos\theta, a\sin\theta)$ is $x\cos\theta + y\sin\theta = a$.

Tangent Secant Theorem: if a secant and a tangent of a circle are drawn from a point outside the circle, then the product of the lengths of the secant and its external segment equals the square of the length of the tangent segment. The Tangent Secant Theorem Calculator finds the tangent length segment when a secant and a tangent intersect at a point outside the circle.

Length of a tangent from an exterior point: the square of the length of the tangent segment equals the difference of the square of the distance between the circle center and the exterior point and the square of the radius, so the length can be calculated from the circle radius and that distance. Example: a circle has a radius of 7 cm and the distance between the exterior point and its center is 12 cm, so the length of the tangent is $\sqrt{12^2 - 7^2} = \sqrt{144 - 49} = 9.7468$ cm.

Tangent points from a point: for the circle $(x-2)^2+(y+5)^2=9$ (so $a = 2$, $b = -5$, $r = 3$) and the external point $(7, 1)$, solving gives tangency points with $x_1 = 4.87$, $y_1 = -5.89$ and $x_2 = 0.61$, $y_2 = -2.34$. Similarly, for a circle with center $(0, 0)$ and radius 2, one can ask for the tangent lines that pass through $(5, 3)$.

Tangent to a curve: to determine the equation of a tangent, find the derivative using the rules of differentiation; insert $x$ into the function, so you get the point where the tangent touches; insert $x$ into the derivative, so you get the slope $m$ of the tangent; then insert $m$ and the point into $y = mx + b$ to find $b$. A tangent just touches the graph at one point; a line that intersects the graph without just touching it is no tangent. Mind the special cases: a tangent line at an inflection point does cross the graph of the function, and a line tangent at one point may very well intersect the graph at some other point.

The tangent function: in a right-angled triangle, the tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side, so called because it can be represented as a line segment tangent to the circle, that is, the line that touches the circle (from Latin linea tangens or touching line; tangere, to touch). For a given angle $\theta$ each ratio stays the same no matter how big or small the triangle is. A tangent of an angle $\alpha$ is also equal to the ratio between its sine and cosine, $\tan\alpha = \sin\alpha/\cos\alpha$, and a cotangent is $\cot\alpha = \cos\alpha/\sin\alpha$; for a right triangle with legs $a$ and $b$, $\tan(\alpha) = a/b$, $\tan(\beta) = b/a$, and $\cot\alpha = b/a$. On the unit circle, $\cos(0) = 1$ and $\sin(0) = 0$, so $\tan(0) = \sin(0)/\cos(0) = 0/1 = 0$. To calculate the tangent of 60, enter tan(60); after calculation, the result $\sqrt{3}$ is returned. Example with sides: if the tangent of $A$ (60°) is the opposite side (26) divided by the adjacent side $AB$, then $AB$ is the one we are trying to find; because a lot of pre-calculus work involves trigonometric functions, you need to understand such ratios, e.g. a trigonometric equation for a tower held by a wire, set up using the information from the picture.

Circle measures: the area of a circle of radius $r$ is $\pi r^2$. For example, a radius of 5 inches gives $\pi \times 5^2 = 3.14159 \times 25 = 78.54$ sq in, and a diameter of 12 cm gives $\pi \times (12/2)^2 = 3.14159 \times 36 = 113.1$ cm². Given a triangle with sides A, B, C and area $A$, the radius of a circle circumscribing the triangle is $r = ABC/4A$, and for a circle inscribed in the triangle $r = A/S$ where $S = (A + B + C)/2$. A circular segment is an area of a circle which is "cut off" from the rest of the circle by a secant (chord); the calculator can find its arc length, chord length, height and perimeter from the radius and angle (on the picture: $L$ is the arc length, $h$ the height, $c$ the chord, $R$ the radius, $a$ the angle).

Angle theorems: an angle formed by a chord and a tangent that intersect on a circle is half the measure of the intercepted arc, $x = \frac{1}{2}\cdot m\,\overparen{ABC}$; like inscribed angles, when the vertex is on the circle itself, the angle formed is half the measure of the intercepted arc. Related circle theorems include the Radius Tangent Theorem, the Two-Tangent Theorem (one point, two equal tangents), common internal and external tangents, the angle in a semi-circle, angles in the same segment, the angle at the centre, cyclic quadrilaterals and the Alternate Segment Theorem.

To use the calculator, enter the lengths of the legs and the angle at the arrowhead (for the circle tangent arrow, the circular segment is removed from this triangle), choose the number of decimal places, then click Calculate; the other values will be calculated. Angles are calculated and displayed in degrees, and you can convert angle units. Scientific notation is accepted, e.g. 5e3, 4e-8, 1.45e12. To graph a circle, visit the circle graphing calculator (choose the "Implicit" option).
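As a small sketch (mine, not the page's source code), two of the computations described above can be implemented directly: the tangent length from an exterior point, and the tangency points on a circle seen from an external point.

import math

def tangent_length(d, r):
    """Length of a tangent from a point at distance d from the center of a
    circle of radius r (requires d >= r)."""
    return math.sqrt(d * d - r * r)

def tangent_points(cx, cy, r, px, py):
    """Tangency points on the circle with center (cx, cy) and radius r for
    tangents drawn from the external point (px, py)."""
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy
    a = r * r / d2                        # projection factor toward the point
    b = r * math.sqrt(d2 - r * r) / d2    # offset perpendicular to that line
    return ((cx + a * dx - b * dy, cy + a * dy + b * dx),
            (cx + a * dx + b * dy, cy + a * dy - b * dx))

print(round(tangent_length(12, 7), 4))        # 9.7468, as in the text
for p in tangent_points(2, -5, 3, 7, 1):      # example circle from the text
    print(tuple(round(v, 2) for v in p))      # (0.61, -2.34), (4.87, -5.89)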
2021-03-09 08:20:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7458736300468445, "perplexity": 564.5482333629521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00298.warc.gz"}
http://mathhelpforum.com/calculus/23824-more-implicit-differentiation.html
# Math Help - More Implicit Differentiation 1. ## More Implicit Differentiation $x^2+y^2=1$ Find dy/dx and the second derivative. The answer says both derivatives equal $\frac{-x}{y}$. The first one does, but the second derivative I got as $\frac{y}{x}$. 2. Originally Posted by Truthbetold $x^2+y^2=1$ Find dy/dx and the second derivative. The answer says both derivatives equal $\frac{-x}{y}$. The first one does, but the second derivative I got as $\frac{y}{x}$. and how did you get that answer? i got neither. my answer for the second derivative was more complicated. i differentiated -x/y with the quotient rule. but you can use the product rule also 3. aww, i got $\frac{-1}{y^3}$.. 4. $x^2 + y^2 = 1$ $2x\frac{dy}{dx} + 2y = 0$ $\frac{dy}{dx} = \frac{-y}{x}$ $\frac{d^2y}{dx^2} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{dy}{dx} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{-y}{x} = \frac{-1}{x} + \frac{-y^2}{x^3} = \frac{-(x^2+y^2)}{x^3}$ I got something different too. Hmm, did I miss something? 5. Originally Posted by colby2152 $x^2 + y^2 = 1$ $2x\frac{dy}{dx} + 2y = 0$ $\frac{dy}{dx} = \frac{-y}{x}$ $\frac{d^2y}{dx^2} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{dy}{dx} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{-y}{x} = \frac{-1}{x} + \frac{-y^2}{x^3} = \frac{-(x^2+y^2)}{x^3}$ I got something different too. Hmm, did I miss something? it's because, $\frac{dy}{dx} = \frac{-x}{y}$ and not $\frac{dy}{dx} = \frac{-y}{x}$.. Ü 6. Originally Posted by colby2152 $x^2 + y^2 = 1$ $2x\frac{dy}{dx} + 2y = 0$ $\frac{dy}{dx} = \frac{-y}{x}$ $\frac{d^2y}{dx^2} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{dy}{dx} = \frac{-1}{x} + \frac{-y}{-x^2}\frac{-y}{x} = \frac{-1}{x} + \frac{-y^2}{x^3} = \frac{-(x^2+y^2)}{x^3}$ I got something different too. Hmm, did I miss something? You're OK, except it should be $y^{3}$ in the denominator. 7. Originally Posted by galactus You're OK, except it should be $y^{3}$ in the denominator. I see what I did, I flipped the derivative in the second line. It's easy to miss these things when typing them.
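For completeness, here is the worked second derivative (my own summary of where the thread lands, starting from the thread's $\frac{dy}{dx} = \frac{-x}{y}$):

$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{-x}{y}\right) = \frac{-y + x\frac{dy}{dx}}{y^2} = \frac{-y + x\left(\frac{-x}{y}\right)}{y^2} = \frac{-(y^2 + x^2)}{y^3} = \frac{-1}{y^3}$

using $x^2 + y^2 = 1$ in the last step, which matches the $\frac{-1}{y^3}$ from post 3.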
2015-03-29 09:50:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529664516448975, "perplexity": 1390.3421110078734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298424.66/warc/CC-MAIN-20150323172138-00008-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.ssccglapex.com/2021/07/06/
## If a + 1, 2a + 1, 4a – 1 are in A.P

If $a+1$, $2a+1$, $4a-1$ are in A.P., then the value of $a$ is:
A. 1  B. 2  C. 3  D. 4
Answer: Option B…

## If an A.P. has a = 1, tn = 20 and sn = 399..

If an A.P. has $a=1$, $t_n=20$ and $S_n=399$, then the value of $n$ is:
A. 20  B. 32  C. 38  D. 40
Answer: Option…

## 15th term of A.P., x – 7, x – 2..

The 15th term of the A.P. $x-7$, $x-2$, $x+3$, ........ is:
A. x + 63  B. x + 73  C. x + 83  D. x +…

## Which term of the A.P. 24, 21, 18..

Which term of the A.P. 24, 21, 18, ............ is the first negative term?
A. 8th  B. 9th  C. 10th  D. 12th
Answer: Option C
Solution (By Apex Team)…

## For A.P. T18 – T8 = …….. ?

For an A.P., $T_{18}-T_{8}$ = ........ ?
A. d  B. 10d  C. 26d  D. 2d
Answer: Option B

Solution (By Apex Team):
$$\begin{aligned}T_{n}&=a+(n-1)d\\ T_{18}&=a+17d\\ T_{8}&=a+7d\\ T_{18}-T_{8}&=17d-7d\\ &=10d\end{aligned}$$

## For an A.P. if a25 – a20 = 45, then d..

For an A.P., if $a_{25}-a_{20}=45$, then $d$ equals:
A. 9  B. -9  C. 18  D. 23
Answer: Option A…
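The final answer above is shown without its solution in the extract; using the standard term formula $a_n=a+(n-1)d$, the derivation is one line (consistent with Option A):

$$a_{25}-a_{20}=(a+24d)-(a+19d)=5d=45\ \Rightarrow\ d=9.$$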
2022-01-17 06:59:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7567432522773743, "perplexity": 3004.733956416879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300343.4/warc/CC-MAIN-20220117061125-20220117091125-00165.warc.gz"}
https://calendar.math.illinois.edu/?year=2013&month=10&day=09&interval=day
Department of Mathematics

# Seminar Calendar

Seminar calendar for events the day of Wednesday, October 9, 2013. Questions regarding events or the calendar should be directed to Tori Corkery.

Wednesday, October 9, 2013

3:00 pm in 347 Altgeld Hall, Wednesday, October 9, 2013

#### Complete Segal Spaces

###### Nerses Aramian (UIUC Math)

Abstract: In this talk we will introduce the notion of complete Segal spaces. This is yet another model for (∞,1)-categories, which means that it has to have a connection with quasicategories. In the talk we will discuss the way one can go back and forth between these two notions. Incidentally, this gives an intuitive idea of how one ought to think about complete Segal spaces.

4:00 pm in 245 Altgeld Hall, Wednesday, October 9, 2013

###### Matthew Mastroeni, Meghan Galiardi, Daniel Hockensmith

Abstract: REGS Day presentations; a pizza party and awarding of prizes follows.

Matthew Mastroeni, Matrix Factorizations and Singularity Categories in Codimension Two: A theorem of Orlov from 2004 states that the homotopy category of matrix factorizations on an affine hypersurface $Y$ is equivalent to a quotient of the bounded derived category of coherent sheaves on $Y$ called the singularity category. This past June, Eisenbud and Peeva introduced the notion of matrix factorizations in arbitrary codimension. As a first step towards generalizing Orlov's theorem to higher codimension, I will describe how to construct a functor from codimension two matrix factorizations to the singularity category of the corresponding complete intersection.

Meghan Galiardi, Evolutionary Dynamics in Finite Populations: Game theory is used to construct a Markov chain for a game between two populations of finite size. By looking at the large-number limit, the Markov chain is approximated by a 1-parameter family of deterministic differential equations. All possible bifurcation diagrams for these differential equations are categorized, and this result is compared with the initial Markov chain.

Daniel Hockensmith, Folded Symplectic Geometry: Toric symplectic manifolds may be classified using the topology of their quotient spaces and their moment map data. These constructions generally require the non-degeneracy of the symplectic form. We will discuss how one may circumvent this limitation in the case where the symplectic form is allowed to have fold singularities.
2022-09-24 16:32:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5809042453765869, "perplexity": 377.7288482786196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00069.warc.gz"}
https://www.techwhiff.com/issue/please-help-will-mark-brainliest-which-number-is-a--434854
# PLEASE HELP WILL MARK BRAINLIEST. Which number is a solution of the inequality? 10 < x(9-x)

###### Question:

PLEASE HELP WILL MARK BRAINLIEST. Which number is a solution of the inequality? 10 < x(9-x)

a. 0
b. 1
c. 5
d. 10
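The site's answer is not included in the extract, but checking each option directly shows that only choice c works:

$$x=0:\ 0(9)=0,\qquad x=1:\ 1(8)=8,\qquad x=5:\ 5(4)=20,\qquad x=10:\ 10(-1)=-10.$$

Only $20>10$, so $x=5$ (choice c) is a solution.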
2023-01-28 23:44:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2101060450077057, "perplexity": 2288.8963445911318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00431.warc.gz"}
http://math.stackexchange.com/questions/780747/approximating-the-erf-function
# Approximating the erf function

I was trying to find an approximate solution to the following: $\DeclareMathOperator\erf{erf}$ $$\frac12 \sqrt{\pi} \erf\left (\frac{x-2}{\sqrt{10}}\right) + \frac12 \sqrt{\pi} \erf \left(\frac{x+2}{\sqrt{10}}\right) = \frac25 \sqrt{\pi}$$ This naturally equals $$\int_0^{\frac{x-2}{\sqrt{10}}} e^{-t^2} dt+ \int_0^{\frac{x+2}{\sqrt{10}}} e^{-t^2} dt = \frac25 \sqrt\pi$$ What I tried was to use the Taylor approximation and then solve the resulting polynomial equations, but the results I obtained were not consistently getting close to the correct solution ($x \sim 1.71$), obtained with Wolfram Alpha.

In fact, using $\displaystyle \int_0^x e^{-t^2}dt = x - \frac{x^3}{3} + \frac{x^5}{10} - \frac{x^7}{42} + \dots$ I obtained the following approximations ($x_i$ indicates the result obtained by keeping the terms up to $x^i$): $$x_1 \sim 1.12\ \ \ \ x_3 = -4.975 \ \ \ x_5 = 1.59 \ \ \ x_7 = -4.718 | 3.729 | 1.755 \ \ \ \ x_9 = 1.70$$

Questions

1) If I take the whole series into account, what assures me that only one solution will be found (with $x_7$, for example, I find $3$ real solutions)?

2) I understand that as long as I am near $0$, the solution will be a good approximation. But a priori I don't know what values $x$ is going to take, so the error can grow as large as $x$ does, and the Taylor approximation is useless.

3) Why does the error (eventually) go to $0$ if one takes the whole series? I don't think I have a clear understanding of the passage from a Taylor polynomial (valid only near a point $x_0$) to the series: why exactly is it convergent everywhere (at least in the case of $e^x$) if we consider an infinite number of terms?

- Yes, but the error in the Taylor series is $o((x-x_0)^n)$. If $x \gg x_0$, the error can become huge. So why does it go to $0$ even when $x \gg x_0$? – Ant May 5 '14 at 19:53
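Not from the original post: a quick numerical cross-check of the root with SciPy (the bracketing interval [0, 5] is my own choice; the function g below is increasing, so the bracket contains exactly one root):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def g(x):
    # Left-hand side minus right-hand side of the equation above.
    lhs = 0.5 * np.sqrt(np.pi) * (erf((x - 2) / np.sqrt(10))
                                  + erf((x + 2) / np.sqrt(10)))
    return lhs - 0.4 * np.sqrt(np.pi)

root = brentq(g, 0.0, 5.0)  # g(0) < 0 < g(5), so the root is bracketed
print(root)                 # ~1.71, matching the Wolfram Alpha value above
```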
2016-07-25 00:28:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258777499198914, "perplexity": 246.62284327578837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824201.28/warc/CC-MAIN-20160723071024-00028-ip-10-185-27-174.ec2.internal.warc.gz"}
https://hershsingh.net/blog/borg-backup/
# Backups on Arch Linux to a Synology NAS using Borg/Borgmatic

Yesterday, I thought my SSD crashed. While I was able to get it working by simply removing it and plugging it back, it was a solid reminder that I don't have a systematic backup solution which I trust. I had bought a Synology NAS (DiskStation 920+) a few months ago for exactly this purpose. I had been having fun with it, but procrastinated on actually setting up regular backups. Yesterday's scare finally got me to figure this out.

I have a dual boot system with Arch Linux and Windows, although all of my work is on Arch. I don't care much about backing up the Windows system, although I might decide to back it up later as well. Primarily, I want a system which does a daily backup of my home directory. After exploring my options, I ended up using Borg and Borgmatic. Here I wanted to share my setup for daily backups using Borg to my Synology NAS from my laptop running Arch Linux.

## Ingredients

I have the following setup:

• A laptop running Arch Linux. I want to automate backups for my /home/ directory.
• A Synology NAS on my local network where the backups will be stored.

## Setting up Borg on the Synology NAS

On the DiskStation, we can install the Borg package from the SynoCommunity repository. I already have ssh access set up on my NAS. So I logged into the NAS, and tested that borg is working.

    [arch]$ ssh nas
    [nas]$ borg -V
    borg 1.1.17
    [nas]$ which borg
    /usr/local/bin/borg

where nas is an alias which I have set up for my NAS in the ~/.ssh/config file. The output of which borg will be useful later.

Now I create a new shared folder on the NAS called backup using the File Station app, under Create > Create New Shared Folder. Once you create it, it can be accessed at /volume1/backup/. This is where we will keep all the backups.

Now we need to create a Borg repository and set up our regular backups, which we do on Arch.

## Setting up Borg and Borgmatic on Arch

First, let's install borg and borgmatic on my laptop.

    [arch]$ yay borg borgmatic

Now we can initialize a repository called arch in the backup remote folder:

    [arch]$ borg init --remote-path=/usr/local/bin/borg --encryption=repokey nas:/volume1/backup/arch -v=1 --progress

Note that I had to explicitly specify the path to the borg executable on the remote NAS with the --remote-path argument. (You can find this path by running the command which borg on the NAS.) Otherwise, it seemed like borg couldn't find the remote executable for some reason. The --encryption=repokey argument is to set up encryption, which is always a good idea. (You can read more about all the encryption modes in the Borg documentation.) The -v=1 option specifies the verbosity level, and --progress shows a progress bar.

Now, we would like to use the awesome Borgmatic for automating the backups. We just need to set up one simple config file. Let's generate a sample one.

    [arch]$ mkdir ~/.config/borgmatic.d/
    [arch]$ generate-borgmatic-config --destination ~/.config/borgmatic.d/config.yaml

This generates a configuration file. The only places where I had to edit the default configuration were:

    location:
        source_directories:
            - /home/hersh/

        repositories:
            - nas:/volume1/backup/arch

        remote_path: /usr/local/bin/borg

        exclude_patterns:
            - '*.pyc'
            - /home/*/.cache

        exclude_if_present:
            - .nobackup

    retention:
        keep_daily: 7

    storage:
        encryption_passphrase:

You can check if there are any errors by running validate-borgmatic-config. That's it.
Now you can make a backup by just running

    [arch]$ borgmatic -v=1 --progress

## Scheduling using systemd timers

Let's automate this so that we get a daily backup. On Arch, we can do this by using systemd timers. We need to make two files in the ~/.config/systemd/user/ directory. Also refer to the borgmatic documentation.

In the ~/.config/systemd/user/borgmatic.timer file:

    [Unit]
    Description=Run borgmatic backup

    [Timer]
    OnCalendar=*-*-* 23:00
    Persistent=true

    [Install]
    WantedBy=timers.target

This runs the borgmatic service every night at 11 PM, which is set up by the file ~/.config/systemd/user/borgmatic.service with the following contents:

    [Unit]
    Description=borgmatic backup
    Wants=network-online.target
    After=network-online.target
    ConditionACPower=true

    [Service]
    Type=oneshot

    # Certain borgmatic features like Healthchecks integration need MemoryDenyWriteExecute to be off.
    # But you can try setting it to "yes" for improved security if you don't use those features.
    MemoryDenyWriteExecute=no

    # Lower CPU and I/O priority.
    Nice=19
    CPUSchedulingPolicy=batch
    IOSchedulingClass=best-effort
    IOSchedulingPriority=7
    IOWeight=100

    Restart=no

    # Prevent rate limiting of borgmatic log events. If you are using an older version of systemd that
    # doesn't support this (pre-240 or so), you may have to remove this option.
    LogRateLimitIntervalSec=0

    # Delay start to prevent backups running during boot. Note that systemd-inhibit requires dbus and
    # dbus-user-session to be installed.
    ExecStartPre=sleep 1m
    ExecStart=systemd-inhibit --who="borgmatic" --why="Prevent interrupting scheduled backup" /usr/bin/borgmatic --syslog-verbosity 1

Finally, we enable the systemd timer by

    systemctl enable --user --now borgmatic.timer

That's it. We now have automated backups which run daily at 11 PM.

## Monitoring backups

Currently, I don't have a sophisticated system for notifying me when something goes wrong. I have been simply checking the logs. To see the log from the last backup, I have been using

    [arch]$ journalctl --user -u borgmatic.service

Some other useful commands are borgmatic list and borgmatic --info. Perhaps I will set up a notification system too, but this is working well for me and has significantly reduced my anxiety over losing data.
2022-12-10 06:10:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4524133503437042, "perplexity": 4157.35300666328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00702.warc.gz"}
http://rpg.stackexchange.com/questions/31426/how-do-you-have-interesting-events-happen-after-a-success/31435
# How do you have interesting events happen after a success?

I know how to make failures interesting. But recently I have had a hard time making the events following a successful roll interesting.

For instance, the party is exploring a tomb, where a trap is rigged up so that poison gas will leak into the area if it is tripped, requiring the party to hurry up and locate whatever it is they need in the tomb. However, if the thief made a good roll - or a great roll - the trap is not triggered, and there is less tension in the scene after that. Or the party, infiltrating an enemy camp, succeeds so well that they could make it all the way up to the enemy commander and have an overwhelming advantage against him.

How do you make things interesting for the players in such cases, without nullifying the fact that they have rolled well? We start the game in the understanding that this will be a heroic game, with over-the-top scenes happening, but when the PCs are doing well with their rolls, it's hard to inject something that 1) is challenging and 2) also takes their good rolls into consideration.

The system is 13th Age, for context.

- I've added the [13th-age] tag. If there's any particular reason this shouldn't have your system's tag, bearing in mind stuff like this meta question, feel free to remove it again. Given that you've provided your system for context, I presume you want answers that are helpful within that context. In addition, your system provides definitions and options highly relevant to success, rewarding it, and interesting events, so it should almost certainly be an integral part of answers. – doppelgreener Jan 12 '14 at 5:47
So do any or all of the following: • Create adaptable enemies, that will use planning and skills of their own to change the game. The party infiltrate so well that they have overwhelming advantage against the commander. Great. So the commander flees down an emergency escape route. Now it's not a boringly easy fight scene; it's a tense chase scene. Reward the high skill roll by skipping the obstacle entirely... then give them a different problem. • Obstacles that can't be avoided, only dealt with. The commander is in her tent conferring with junior officers. Infiltrate all you want; they're not leaving until they've finished work. They must either be fought, or distracted with something noisy - which will also alert guards. Or waited out... creating more time for things to go wrong. • Invoke overwhelming odds, so that the players being awesome makes things even. The commander has an elite corps of bodyguards, and is a skilled martial artist. Now they need that surprise attack to give themselves an even chance. If they do well, they can bring the odds down to even - if not, they'll have to abort and come up with a different plan entirely. • Change the situation. They're sneaking through the camp when the camp is attacked... by a mutual enemy. Do they go ahead and complete their mission (but the third party is bound to get a major victory out of it)? Or help their enemies defend their camp (and let the commander live)? Or help defeat the new threat, then kill the commander? • Turn their successes into new sources of drama, based on different skills. If the players disguised themselves as enemy soldiers and rolled to sneak in so well that they're not spotted - then don't try to limit this, use it. Invite them to the meeting. Get their opinions on the battle plans. Tense roleplay scenes ensue, with many opportunities for entertaining success, clever deceptions, or new disasters. As with the tomb above, if the roll makes the outcome certain, don't waste time - tell them how awesome they are, and move on. The party infiltrated the camp so well they got in to the commander's tent while she slept? Fine. They assassinate her without trouble. Describe how skillfully they've solved the problem. Only: she wakes and struggles, covering their disguises in blood. Or her lover was sneaking in to be with her, and screams in shock at what he sees, alerting the camp. Or her command tent is set on fire during the struggle, attracting the guards. Now, how are they planning to get out again? - Infiltration ahs always been a really hard thing to do, awesome advice – Cristol.GdM Jan 12 '14 at 21:02 Tension Build Up - when to roll A thing I see happen a lot is people roll dice first, then describe after. It's more fun to freely narrate things, build up the tension, only roll the dice when it would be the worst possible time to fail, AND then see what happens. "Ok, you've made it to the enemy commander's tent. It takes 10 long, agonizing minutes of waiting hidden under a cart until the squad that decided to start gambling near by decides to go down to the river. As you flip open the tent's flap... roll the dice." Even though the success isn't changed, it's the fact that by the time you roll the dice, the stakes are significantly higher. If you get caught sneaking in at the edge of camp, you can run away. If you're caught in the middle of camp, you're in it deep. Tell me how you do it Let the players narrate how they succeed. 
Sometimes it's the little flourishes and description which lets you see how awesome the character is, or gives you some important idea about who they are. "I got an 24 on Intimidate!" "Well, that's definitely a success, tell me how you do it." "Since I had already pulled out the knife and got in his face, that happens. But then I show an expression of realization! My eyes widen, I back up and smile. I look over my shoulder and say, 'Oh, yeah. That's right. You were supposed to protect the Don's belongings. And that SURE IS a rare painting he has. He'd be REAL mad if something happened to it, wouldn't he?'" This can be information, a tool, etc. that the PC gets after succeeding. "You've disarmed the poison gas trap. You've now got a sealed clay jar full of poison which, if opened, will spray forth a noxious mist. Be sure not to break it..." "Well! Now that you've figured out the locking mechanism to the gate, you realize all the other doors probably use the same system. It'll be 5 to 10 minutes each one, but you can definitely get them open with some patience." Punishing Success Don't do this often, but sometimes success brings it's own complications. "You've done it, you've killed the Grand Assassin, right in front of the meeting of the Thieves Guilds of the West City. Everyone is silent and nervous for a minute, before one of them stands up and shouts, '200 gold a month. That is what we'll pay for your services.' Another jumps up, '250!' and before you know it, it's a bidding war. They've assumed you are, in fact, a corrupt paladin seeking to be the new Grand Assassin." - I like this answer. I'm not sure I'd call that last one "Punishing Success," though. You seem to be doing something more subtle there. – Alex P Jan 12 '14 at 5:18 @AlexP is right. That last part is exactly what I meant by "Make them awesome, then skip to the drama", except that your example is better than mine. – Tynam Jan 12 '14 at 10:17 (I'm using the general language of "test" and "conflict" here. A "test" is a quick-to-resolve action, like a single skill check in most games. A "conflict" is extended resolution, like a battle scene.) # Building Momentum After they succeed, prompt the players for a short bit of additional detail. Generally it's better to get the how and why of an action out before you roll for it (so that everyone's on the same page about what's happening and what's at stake), but this little bit of description after a test or conflict is a great way to put the focus back on the fiction and get everyone thinking, "What's next?" It's also a great opportunity to describe your character being awesome, or inject a note of humor into the game. # Moving Forward Then, move on. Success is success. The best way to honor success is to push forward with it. They've rolled to sneak all the way into the enemy commander's tent? Great! So now they're sneaking up on him with their gleaming, wicked knives. This is what they've been waiting for, isn't it? Clearly they have a plan for what happens next. As far as figuring out what's next, I think the best advice is "Be obvious." What was the first thing you thought of? Okay, go with that. Don't sandbag the game trying to come up with some clever twist. In particular, don't sabotage a successful action after the fact. If you feel like an action is too much, too soon (e.g. a single test obviates multiple challenges that you kinda assumed would take up most of a session), that's something to address when setting up stakes for the test or conflict in the first place. 
If it's a dire situation you can go back and "retcon" the stakes. I wouldn't just steal success away from the players. (Don't steal failure away from the players, either. Though that's a different issue.) - Congratulate them on their success, and move on swiftly to the next point of tension. Don't bother trying to restore tension in this scene. Players like to carve out moments of safety with wits and skill—let them: • Let them have the stunning victory that gets them all the way to the commander and get the drop on him. Good! They couldn't beat him in a fair fight in the first place, so now they actually have a chance, and it's going to be a tough fight even with the advantage, right? (Not to mention: How in the world do they get out of the camp alive?) • Let them defuse the trap. What happens next? There are always new dangers. Anything else is just frustrating, not tense. Don't get fond of your dangers and your tense scenes. Learn to kill your darlings and let them be disposable. They've done their job: they asked the question "is this a challenge for the PCs?" and the PCs answered it. The actually tense scenes and challenging challenges will surprise you, if you let them. Don't try to artificially stretch out the challenge of this piece of the game, just move on to the next challenge to find out what happens next. Congratulate them, and then make their lives interesting again. - Don't just run the roll as a hurdle in an obstacle course. Give them something to do with their victory. Here are some ideas. Gas trap 1. The rogue gets the vial of gas out of the trap and keeps it for later use. Since you said you like tension, let's say the vial is cracked. The rogue can keep it, and risk setting it off on himself. He can ditch it and risk someone else finding it. Or he can try to destroy it in fashion he hopes is safe. 2. The trap can't be disarmed, but the rogue knows precisely which squares trigger it and what its radius is. 5 minutes later combat starts in that room. 3. Disarming the trap reveals connections to three other traps the rogue missed. Now he'll be second guessing himself for the rest of the tomb. Ambushing 1. The players go to ambush the commander and *dramatic reveal* one of them was once close friends with the commander. Enter diplomacy. 2. The players discover something about the enemy that makes them second guess attacking. Maybe the enemy force has bad intel and is marching in the wrong direction. Maybe they're also fighting a third party who is a known enemy. 3. The players discover something about their own side that makes them second guess who they're working for. Maybe they overheard that their own side mind controls small children into acting as human shields just to make their enemies hesitate. Do the players leave to investigate this rumor? Is it enough to make them change sides? Apparently my idea of tension is giving the players tough decisions, for varying values of tough. -
2016-07-23 21:16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3185786008834839, "perplexity": 2510.969210437496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823670.44/warc/CC-MAIN-20160723071023-00039-ip-10-185-27-174.ec2.internal.warc.gz"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aop/1176996454
## The Annals of Probability

### $I$-Divergence Geometry of Probability Distributions and Minimization Problems

I. Csiszar

#### Abstract

Some geometric properties of PD's are established, Kullback's $I$-divergence playing the role of squared Euclidean distance. The minimum discrimination information problem is viewed as that of projecting a PD onto a convex set of PD's, and useful existence theorems for and characterizations of the minimizing PD are arrived at. A natural generalization of known iterative algorithms converging to the minimizing PD in special situations is given; even for those special cases, our convergence proof is more generally valid than those previously published. As corollaries of independent interest, generalizations of known results on the existence of PD's or nonnegative matrices of a certain form are obtained. The Lagrange multiplier technique is not used.

#### Article information

Source: Ann. Probab., Volume 3, Number 1 (1975), 146-158.
First available in Project Euclid: 19 April 2007
Permanent link: http://projecteuclid.org/euclid.aop/1176996454
Digital Object Identifier: doi:10.1214/aop/1176996454
Mathematical Reviews number (MathSciNet): MR365798
Zentralblatt MATH identifier: 0318.60013

#### Citation

Csiszar, I. $I$-Divergence Geometry of Probability Distributions and Minimization Problems. Ann. Probab. 3 (1975), no. 1, 146-158. doi:10.1214/aop/1176996454. http://projecteuclid.org/euclid.aop/1176996454.
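For readers unfamiliar with the term: Kullback's $I$-divergence (relative entropy) of a probability distribution $P$ from a distribution $Q$, the quantity playing the role of squared Euclidean distance in the abstract above, is in the discrete case

$$I(P\,\|\,Q)=\sum_{i} p_{i}\log\frac{p_{i}}{q_{i}},$$

which is nonnegative and vanishes exactly when $P=Q$, though it is neither symmetric nor a metric.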
2015-05-26 07:34:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5627485513687134, "perplexity": 1686.2662683371923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.29/warc/CC-MAIN-20150521113208-00316-ip-10-180-206-219.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/so-succinct-it-almost-seems-too-easy/
# So succinct. It almost seems too easy

For all positive integers $n>1$, prove that $n^5+n-1$ has at least two distinct prime factors.

Note by Sharky Kesa
2 years ago

Sort by:

Hint: The polynomial can be factored. - 2 years ago

The factoring is really easy: $n^5+n-1=(n^2-n+1)(n^3+n^2-1)$ (this can be done manually by comparing coefficients). The next thing to note is that, to avoid having two distinct prime factors, both factors would have to be powers of the same prime. We can easily check from here that this is impossible, and thus $n^5+n-1$ must have at least two distinct prime factors. I can post the proof on request, but for now it is left as an exercise to the reader. @Sharky Kesa now onto your functional equation. - 2 years ago

If you want, you can post the final part of the proof as a DM to me on Slack. - 2 years ago
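As a quick check of the comment above, expanding the factorization term by term recovers the original polynomial, and for $n>1$ both factors are greater than $1$:

$$(n^2-n+1)(n^3+n^2-1)=\bigl(n^5+n^4-n^2\bigr)-\bigl(n^4+n^3-n\bigr)+\bigl(n^3+n^2-1\bigr)=n^5+n-1.$$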
2017-11-24 05:47:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997463047504425, "perplexity": 4754.3516231421945}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807089.35/warc/CC-MAIN-20171124051000-20171124071000-00656.warc.gz"}
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Timestamp.to_period.html
# pandas.Timestamp.to_period

Timestamp.to_period: Return a period of which this timestamp is an observation.
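A minimal usage sketch to complement the stub above (the freq strings are standard pandas offset aliases; exact reprs may vary slightly by version):

```python
import pandas as pd

ts = pd.Timestamp("2018-10-21 18:00:29")

# Convert the timestamp to the period (time span) that contains it.
print(ts.to_period("M"))  # Period('2018-10', 'M'), a monthly period
print(ts.to_period("D"))  # Period('2018-10-21', 'D'), a daily period
```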
2018-10-21 18:00:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4477507770061493, "perplexity": 14726.154518306388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514162.67/warc/CC-MAIN-20181021161035-20181021182535-00543.warc.gz"}
https://solvedlib.com/n/approximate-the-area-under-the-graph-of-f-x-0-o5x4-1-21x2,6179161
# Approximate the area under the graph of f(x) = 0.05x^4 + 1.21x^2 + 78 over the interval [2,6] by dividing the interval
Cos (100t + B2) Find... You are given Vs = A1.12.cos (100t +B) Vc = A2. Cos (100t + B2) Find VR = Az . cos (100t + B3) with - 180° SB3 S 180° + DR w Solve without using a calculator. Given Variables: A1: 8 V B1: 5 degrees A2:8 V B2: -40 degrees Determine the following: A3 (V): B3 (degrees):... ##### Required 1: prepare entries that the buyer records for the (a) purchase, (b) cash payment within... required 1: prepare entries that the buyer records for the (a) purchase, (b) cash payment within discount period, and (c) cash payment after discount period. Required 2: prepare entries the seller records for the (a) sale, (b) cash collection within discount period, and (c) cash collection after the... ##### Would anyone mind explaining how to do this problem? Would anyone mind explaining how to do this problem?... ##### Find the exponential function $f(x)=C b^{x}$ whose graph is given. Find the exponential function $f(x)=C b^{x}$ whose graph is given.... ##### 4.Write the complete stepwise mechanism for the Grignard reaction shown below: Show all intermediate structures and all electron flow with arrows_MgBr CQz 2. H , H,oOH 4.Write the complete stepwise mechanism for the Grignard reaction shown below: Show all intermediate structures and all electron flow with arrows_ MgBr CQz 2. H , H,o OH... ##### The term that describes RBCs that are smaller than normal is? The term that describes RBCs that are smaller than normal is?... ##### The 112.5.29 L the price U 8 demand lof 1 t pt final 1 for iamsue product is 8 Then round to the find the two I M places revenue percentage needed; ) 27 0 change of 30 in revenue (28 complate) The 112.5.29 L the price U 8 demand lof 1 t pt final 1 for iamsue product is 8 Then round to the find the two I M places revenue percentage needed; ) 27 0 change of 30 in revenue (28 complate)... ##### 1. What can guided visualization be used for - as related to stress management? 2. What... 1. What can guided visualization be used for - as related to stress management? 2. What does positive visualization help a person do? 3. Which visualization would you use if you wanted to find the answer to the question, "How can I learn to feel calm"? 4. Who asserted the principle that y... ##### In the manufacturing of computer chips, the Taia Blanks Company cuts silicon chips into thin wafers... In the manufacturing of computer chips, the Taia Blanks Company cuts silicon chips into thin wafers of 3.8 inches in diameter and mass 1.90 g. What is the thickness of each wafer, in millimeters, if silicon has a density of 2.33 g/cm3? (Volume of a cylinder = ?r2h, where h = thickness of wafer)... ##### 10) (15) Let K(x) be a function. Suppose K' (x) = ex coS (ex+3) Give a possible formula for K(x) 10) (15) Let K(x) be a function. Suppose K' (x) = ex coS (ex+3) Give a possible formula for K(x)...
2022-05-22 19:29:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.582735538482666, "perplexity": 7553.581411951816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00098.warc.gz"}
http://www.dummies.com/how-to/content/mechanics-of-materials-calculating-deformations-fr.html
Deformations measure a structure's response under a load, and calculating that deformation is an important part of mechanics of materials. Deformation calculations come in a wide variety, depending on the type of load that causes the deformation. Axial deformations are caused by axial loads, and angles of twist are caused by torsion loads. The elastic curve for flexural members is actually a differential equation. The following list shows some of the most commonly used deformation expressions you encounter in mechanics of materials:

• Axial deformation: $\delta = \dfrac{PL}{AE}$, for a prismatic member of length L and cross-sectional area A carrying a constant internal axial force P, where E is the modulus of elasticity.

• Angle of twist for torsion: $\phi = \dfrac{TL}{JG}$, for a shaft of length L with polar moment of inertia J carrying a constant internal torque T, where G is the shear modulus.

• Double integrating to find deformations of beams: You can approximate y(x), the equation of the elastic curve as a function of x, by the following differential equation:

$EI\,\dfrac{d^{2}y}{dx^{2}} = M(x)$

You need to first find the generalized moment equation M at all locations along the beam as a function of position x. Solve this equation by integrating twice and applying boundary conditions to solve for the constants of integration (known support displacements y and rotations θ). Remember,
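As a concrete illustration of the axial-deformation formula, a tiny helper function; the example numbers are illustrative choices of mine, not values from the article:

```python
def axial_deformation(P, L, A, E):
    """Elongation delta = P*L/(A*E) of a prismatic bar under constant axial load."""
    return P * L / (A * E)

# Example: a 2 m steel bar (A = 500 mm^2, E = 200 GPa) carrying 50 kN, in SI units.
delta = axial_deformation(P=50e3, L=2.0, A=500e-6, E=200e9)
print(delta)  # 0.001 m, ie, 1 mm of elongation
```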
2016-07-24 09:43:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706727027893066, "perplexity": 966.0665393402618}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823989.0/warc/CC-MAIN-20160723071023-00024-ip-10-185-27-174.ec2.internal.warc.gz"}
http://www.jstor.org/stable/10.1086/655466
# Prioritizing Healthcare Worker Vaccinations on the Basis of Social Network Analysis

Philip M. Polgreen, MD, MPH; Troy Leo Tassier, PhD; Sriram Venkata Pemmaraju, PhD; and Alberto Maria Segre, PhD

Infection Control and Hospital Epidemiology, Vol. 31, No. 9 (September 2010), pp. 893-900
DOI: 10.1086/655466
Stable URL: http://www.jstor.org/stable/10.1086/655466
Page Count: 8

Original Article

From the Department of Internal Medicine (P.M.P.) and the Department of Computer Science (S.V.P., A.M.S.), The University of Iowa, Iowa City, Iowa; and the Economics Department, Fordham University, Bronx, New York (T.L.T.). Address reprint requests to Alberto Maria Segre, Department of Computer Science, 101B MacLean Hall, The University of Iowa, Iowa City, IA 52242.

## Objective.

To use social network analysis to design more effective strategies for vaccinating healthcare workers against influenza.

## Design.

An agent-based simulation.

## Setting.

A simulation based on a 700-bed hospital.

## Methods.

We first observed human contacts (defined as approach within approximately 0.9 m) performed by 15 categories of healthcare workers (eg, floor nurses, intensive care unit nurses, staff physicians, phlebotomists, and respiratory therapists). We then constructed a series of contact graphs to represent the social network of the hospital and used these graphs to run agent-based simulations to model the spread of influenza. A targeted vaccination strategy that preferentially vaccinated more "connected" healthcare workers was compared with other vaccination strategies during simulations with various base vaccination rates, vaccine effectiveness, probability of transmission, duration of infection, and patient length of stay.

## Results.

We recorded 6,654 contacts by 148 workers during 606 hours of observations from January through December 2006. Unit clerks, X-ray technicians, residents and fellows, transporters, and physical and occupational therapists had the most contacts. When repeated contacts with the same individual were excluded, transporters, unit clerks, X-ray technicians, physical and occupational therapists, and social workers had the most contacts. Preferentially vaccinating healthcare workers in more connected job categories yielded a substantially lower attack rate and fewer infections than a random vaccination strategy for all simulation parameters.

## Conclusions.

Social network models can be used to derive more effective vaccination policies, which are crucial during vaccine shortages or in facilities with low vaccination rates. Local vaccination priorities can be determined in any healthcare facility with only a modest investment in collection of observational data on different types of healthcare workers. Our findings and methods (ie, social network analysis and computational simulation) have implications for the design of effective interventions to control a broad range of healthcare-associated infections.

Healthcare workers (HCWs) are at high risk of contracting influenza1 and, once infected, can spread it to patients under their care.2-4 Two features of influenza make it difficult to control in hospitals.
First, not all infected people develop classic symptoms;1,5 thus, restricting symptomatic HCWs from patient care will not completely prevent transmission. Second, HCWs often work when they are ill and return to work before they are well.6,7 One of the most effective measures for preventing nosocomial spread of influenza is the vaccination of HCWs,8,9 and the Centers for Disease Control and Prevention recommends annual vaccination for all HCWs.5 Yet, in the United States, only 36% of workers with direct patient contact are immunized against influenza annually.10 Hospitals can increase rates of influenza vaccination among their employees if they are committed to this goal and if adequate financial resources are provided,3 but there are no data to help identify which HCWs should be the primary focus of efforts to improve influenza vaccination rates.

Because the number of influenza cases attributable to an infected HCW is related to the number of close contacts this person has with patients or other staff members, social network theory (a set of quantitative methods for measuring and understanding the complex, interdependent relationships between persons) can be used to study influenza vaccination strategies.11-16 Thus far, only preliminary social networking studies have been performed in a hospital environment,17,18 complementing a few studies based on compartmentalized epidemiological models.19,20 However, understanding these issues becomes particularly important when vaccine shortages occur, such as the 2004-2005 influenza vaccine shortage (attributed to manufacturing problems) or possible shortages of appropriate vaccine due to the introduction of unexpected strains (eg, 2009 influenza A [H1N1]).

In this article, we use data on person-to-person contacts collected in a hospital to develop a network model that describes the interactions of HCWs and patients. We then explore, using agent-based simulations based on this model, the effects of different disease parameters and vaccination strategies on the spread of influenza in a hospital. Finally, we introduce a targeted vaccination strategy that preferentially vaccinates those HCWs who are more influential in spreading influenza and use simulations to evaluate the effectiveness of the strategy in a hospital setting.

## Methods

The University of Iowa Hospitals and Clinics (UIHC) is an approximately 700-bed comprehensive academic medical center and regional referral center in Iowa City, Iowa. We sorted UIHC HCWs into 15 job categories with inpatient care responsibilities, excluding employees without direct and routine contact with patients (eg, telephone operators and accountants), resulting in a total of approximately 3,000 employees.

### Data Collection

With approval from our institutional review board, data were collected by selecting a sample of workers from each of the 15 job categories and assigning an infection control research assistant to "shadow" the 148 selected employees, recording their every human contact for 606 hours of direct observation (approximately 40 hours per job category in 30-minute blocks; see Table 1). A total of 6,654 contacts were observed during January through December 2006 (during the 2006-2007 "influenza season"), where a contact is defined as 2 individuals coming within approximately 0.9 m of each other, a convenient approximation of the respiratory droplet range.
For each contact, the research assistant recorded the type of agents involved, location, duration, whether physical contact was made, whether hand washing or sanitizing occurred, and whether the contact was a repeated contact (ie, the same individual within the 30-minute block).

Table 1. Summary of Healthcare Worker Contacts According to Job Category

| Job category | N | Hours of observation | Within category, no. (%) | Across categories, no. (%) | With patients, no. (%) | With others, no. (%) | Total, no. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Staff physicians | 11 | 41.5 | 22 (4.4) | 374 (75.1) | 88 (17.7) | 14 (2.8) | 498 |
| Residents and fellows | 8 | 40.0 | 168 (32.0) | 252 (48.0) | 91 (17.3) | 14 (2.7) | 525 |
| Floor nurses | 8 | 40.5 | 105 (19.8) | 204 (38.6) | 182 (34.4) | 38 (7.2) | 529 |
| Intensive care nurses | 8 | 41.0 | 169 (29.5) | 176 (30.8) | 185 (32.3) | 42 (7.3) | 572 |
| Nurse assistants | 12 | 40.0 | 30 (6.4) | 226 (48.4) | 198 (42.4) | 13 (2.8) | 467 |
| Physical and occupational therapists | 10 | 41.5 | 36 (8.5) | 242 (56.8) | 123 (28.9) | 25 (5.9) | 426 |
| Respiratory therapists | 11 | 40.0 | 129 (22.3) | 297 (51.4) | 98 (17.0) | 54 (9.3) | 578 |
| Phlebotomists | 12 | 40.0 | 19 (4.9) | 45 (11.6) | 323 (83.5) | 0 (0.0) | 387 |
| Social workers | 8 | 42.5 | 18 (3.9) | 367 (78.6) | 38 (8.1) | 44 (9.4) | 467 |
| Unit clerks | 10 | 40.5 | 18 (2.5) | 620 (86.5) | 25 (3.5) | 54 (7.5) | 717 |
| X-ray technicians | 15 | 40.0 | 146 (29.7) | 100 (20.4) | 153 (31.2) | 92 (18.7) | 491 |
| Pharmacists | 8 | 39.5 | 15 (4.9) | 216 (69.9) | 23 (7.4) | 55 (17.8) | 309 |
| Transporters | 7 | 39.5 | 32 (14.2) | 79 (35.0) | 111 (49.1) | 4 (1.8) | 226 |
| Food service personnel | 12 | 39.5 | 46 (13.4) | 161 (46.9) | 110 (32.1) | 26 (7.6) | 343 |
| Housekeepers | 8 | 40.0 | 28 (23.5) | 73 (61.3) | 14 (11.8) | 4 (3.4) | 119 |
| Total | 148 | 606.0 | 981 (14.7) | 3,432 (51.6) | 1,762 (26.5) | 479 (7.2) | 6,654 |

The data are aggregated to produce a $16\times 16$ contact matrix C, where each entry $c_{jk}$ represents the average number of contacts observed per unit time period (here, 30 minutes) between a shadowed HCW of type j and another individual of type k, for 15 job categories and 1 category comprising all patients. Note that alternative contact matrices can be constructed by considering only a subset of the observed contacts (eg, discarding all repeated contacts or only considering physical contacts).

From a contact matrix and the number of people present of each type, we compute $P_{jk}=P_{kj}$, the probability that, given a person of type j and a person of type k, there will be a contact between them during a unit of time. Note that this does not mean that a person of type j (eg, a doctor) is as likely to contact a person of type k (eg, a nurse) as a person of type k is likely to contact a person of type j, because this also depends on the relative population sizes of the 2 agent types. Because there are many more nurses than doctors, a random contact by a doctor is much more likely to be a nurse than vice versa.

### Generation of Contact Networks

To generate HCW/patient contact networks, we use hospital staffing and admission records to determine the total number of agents $N=\sum_{j}n_{j}$ with which to populate the simulation, where each $n_j$ corresponds to the count for agent type j. We then generate a contact network representative of the interactions of these simulated agents by randomly placing edges between each pair of agents with appropriate probability $P_{jk}$ and with weight reflecting the expected number of contacts between 2 such agents during the course of a typical 8-hour work day.
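The paper does not publish its simulation code, but the two steps just described (turning the contact matrix C into pairwise probabilities, then sampling a weighted random graph) are mechanical enough to sketch. The following is a minimal illustration with toy populations and contact counts invented for the example, and one plausible normalization of C; it is not the authors' implementation.

```python
import itertools
import random

# Toy inputs invented for illustration -- NOT the study's data.
n = {"floor nurse": 40, "unit clerk": 10, "patient": 100}   # agents per type
C = {("floor nurse", "patient"): 2.1,      # mean contacts per 30-min block
     ("floor nurse", "unit clerk"): 0.8,
     ("floor nurse", "floor nurse"): 0.5}

def contact_prob(j, k):
    """P_jk = P_kj: one plausible reading -- spread the average per-period
    contact count c_jk evenly over the possible partners of type k."""
    c = C.get((j, k), C.get((k, j), 0.0))
    partners = n[k] - 1 if j == k else n[k]
    return min(1.0, c / max(partners, 1))

def build_network(periods_per_day=16):
    """Place an edge between each pair with probability P_jk; weight the
    edge by the expected contacts over a 16-period (8-hour) work day."""
    agents = [(t, i) for t, count in n.items() for i in range(count)]
    edges = {}
    for a, b in itertools.combinations(agents, 2):
        p = contact_prob(a[0], b[0])
        if random.random() < p:
            edges[(a, b)] = p * periods_per_day
    return agents, edges
```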
This method is a generalization of the well-known Erdős–Rényi model for random graphs21,22 and assumes that HCWs of the same type have the same contact probabilities and that edges between different pairs of agents are placed independently. We confirmed empirically that contact networks generated in this fashion are consistent with the observed data.

### Simulations

Our simulations use a susceptible/infected/recovered model operating on the contact network described above (see Table 2 for a summary of available simulation parameters). Initially, some number of agents are set to the infected state I, with unvaccinated agents assigned to state S, and vaccinated agents assigned to state R with probability $e_{j}$, the vaccine effectiveness (the presence of a subscript on a simulation parameter indicates that this parameter can be set separately for each agent type; lack of subscript indicates that the values are set uniformly across all agent types). Infected agents recover (ie, move from state I to state R) $w+t$ simulation time periods after infection, where w represents the incubation period after which symptoms emerge and t represents the duration of the illness.

Table 2. Simulation Parameters

| Symbol | Description | Range |
| --- | --- | --- |
| $e_j$ | Vaccine effectiveness (probability) | $0 \leqslant e_j \leqslant 1$ |
| $w$ | Incubation period, days | $0 \leqslant w$ |
| $t$ | Duration of symptoms, days | $0 \leqslant t$ |
| $d$ | Average length of stay, days (0 is infinite) | $0 \leqslant d$ |
| $i_j$ | Infectivity (probability) | $0 \leqslant i_j \leqslant 1$ |
| $s_j$ | Susceptibility (probability) | $0 \leqslant s_j \leqslant 1$ |
| $p$ | Edge persistence | $0 \leqslant p \leqslant 1$ |
| $c$ | Shedding coefficient | $0 \leqslant c$ |

All simulations reported here assume that patient beds in the hospital are always filled; patients may remain hospitalized for the duration of the simulation, or the simulation can be configured to discharge (and immediately replace) patients each day with probability 1/d, where d represents the average length of stay. The simulation terminates once no agents remain in state I.

For each unit time step (ie, 1 day) and each edge in the contact network that connects an agent of type j in state I with an agent of type k in state S, we change the state of the second agent from S to I with probability $i_{j}s_{k}$, where $i_j$ is the first agent's infectivity and $s_k$ is the second agent's susceptibility. This random draw is repeated edge-weight times, that is, a number of repetitions corresponding to the expected number of contacts between these agents per day. Because it has been well established that viral shedding varies during the course of an infection, we scale the product by a shedding coefficient c that ramps up exponentially to produce peak infectivity on day w and then decays exponentially through day $w+t$.23,24

Our simulations also support dynamic contact networks, in which edges change during the simulation in accordance with persistence parameter p: when $p=1$, all edges are fixed; as p approaches 0, the contact network mutates more rapidly.

We next describe a series of simulations designed to explore the hypothesis that preferential vaccination policies targeting particular types of HCWs outperform random vaccination policies. We explore this hypothesis over a broad range of simulation parameters, including differing (1) base vaccination rates, (2) effectiveness of the vaccine, (3) transmissibility, (4) infection durations, and (5) values for expected patient length of stay, and so on.
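Condensed into code, one simulated day under the update rule above looks roughly like the sketch below. This is a simplified rendering, not the authors' implementation: states are plain strings, the shedding profile is a bare exponential ramp and decay, and infectivity and susceptibility are passed as single scalars rather than per-type vectors.

```python
import math
import random

def shedding(days_since_infection, w=2, t=7):
    """Shedding coefficient c: an exponential ramp peaking on day w,
    then exponential decay through day w + t (a crude stand-in for the
    profiles the paper cites)."""
    if days_since_infection < 0 or days_since_infection > w + t:
        return 0.0
    if days_since_infection <= w:
        return math.exp(days_since_infection - w)   # rises to 1.0 at day w
    return math.exp(w - days_since_infection)       # decays afterwards

def step(agents, edges, day, infectivity, susceptibility, w=2, t=7):
    """One simulated day. Each I--S edge gets edge-weight transmission
    trials at probability i*s*c; infections are applied synchronously."""
    newly_infected = set()
    for (a, b), weight in edges.items():
        for src, dst in ((a, b), (b, a)):
            if agents[src]["state"] == "I" and agents[dst]["state"] == "S":
                c = shedding(day - agents[src]["infected_on"], w, t)
                for _ in range(round(weight)):      # weight ~ contacts/day
                    if random.random() < infectivity * susceptibility * c:
                        newly_infected.add(dst)
                        break
    for idx in newly_infected:
        agents[idx] = {"state": "I", "infected_on": day}
    for idx, rec in agents.items():                 # I -> R after w + t days
        if rec["state"] == "I" and day - rec["infected_on"] >= w + t:
            rec["state"] = "R"
    return len(newly_infected)
```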
## Results

Simulation results are reported in terms of average attack rates, the percentages of the population in the simulation who are infected during the course of the simulation; or average case counts, the numbers of people infected during the course of the simulation. Note that these values are not conditioned on an outbreak occurring but are instead averaged over all trials; thus, average attack rate values will be substantially higher than shown in the event of an outbreak.

### Baseline Simulations

Figure 1 shows the results for a series of baseline simulations, where the x-axis indicates the percentage of vaccinated HCWs and the y-axis represents the attack rate. Each data point corresponds to 5 replicates of each of 200 different (static) models (ie, contact networks with $p=1$) generated using identical simulation parameters (multiple replications are performed, because the simulations are nondeterministic and vary in their initial conditions). A single individual is drawn at random from a population of $N=1,000$ (with HCW types and patient proportions like those of UIHC) and is initially infected, HCWs continue to work without regard to infection status, and patients (who are all unvaccinated) are hospitalized for the duration of the simulation (ie, there is no patient turnover). The disease parameters used mimic generally accepted parameters for influenza—namely, $w=2$ days and $t=7$ days.

Figure 1. Attack rate as a function of vaccination rate for 3 different vaccination strategies using (left) a highly effective vaccine ($e=0.95$) and (right) a less effective vaccine ($e=0.45$). The 3 vaccination strategies are a random vaccination strategy, an omniscient vaccination strategy, and a reverse omniscient vaccination strategy. Each data point represents the average attack rate over 1,000 simulations (5 replicates for each of 200 randomly generated models); on average, about 375 infections are observed when no vaccinations are given.

We compare the performance of several vaccination strategies with differing vaccine effectiveness; results are shown here for a highly effective vaccine ($e=0.95$) as well as for a less effective vaccine ($e=0.45$). The 3 strategies are a random vaccination strategy (where the available vaccine doses are distributed to HCWs selected at random with uniform probability), an omniscient vaccination strategy, and a reverse omniscient vaccination strategy.

For the omniscient strategies, we assume that the actual number and type of contacts that an individual will have during the course of the day can be fully known a priori (in practice, this assumption is quite unreasonable). The omniscient strategy greedily vaccinates HCWs with the largest number of contacts first, ensuring that each additional dose maximally "fragments" the contact network, thereby reducing the potential for the infection to spread. In contrast, the reverse omniscient strategy vaccinates HCWs in precisely the opposite order, ensuring that each additional dose has a minimal effect on the spread of infection. These 2 strategies serve to delimit the expected performance range, with the random vaccination strategy lying squarely in the middle. A good vaccination strategy would be one that performs better than the random strategy and nearly as well as or better than the omniscient strategy.

Note that, for all strategies, the attack rate decreases steadily as a larger number of HCWs are vaccinated or as the effectiveness of the vaccine increases.
Other vaccine effectiveness rates and strategies, although not shown here, behave as expected.

### Targeted Vaccination

The omniscient vaccination strategies assume that we can have perfect advance knowledge of an individual HCW's number and type of contacts. Although we cannot possibly know these quantities in practice, it is possible to estimate them on the basis of an agent's job category and to use the estimates to construct a practical vaccination policy. Figure 2 shows the results obtained with 2 variants of such a targeted vaccination strategy. First, we rank job categories from most to least densely connected on the basis of the observational data, using $\sum_{k}n_{k}P_{jk}$ as a measure of the connectivity of group j (see Table 3). Second, we administer vaccine to workers from each category in order until the target vaccination rate is attained. When the vaccination budget does not suffice to vaccinate all workers in the next job category, agents in that category are selected for vaccination at random until the vaccination budget is exhausted.

Figure 2. Attack rate as a function of vaccination rate for all 3 vaccination strategies. Top left, Simulations with healthcare workers vaccinated ($e=0.70$) on the basis of the expected number of contacts with patients or other healthcare workers as defined by their job classification: workers belonging to densely connected job categories are preferentially selected to receive the vaccine. Top right, Similar set of simulations relying on an alternate contact model that excludes repeated contacts, in which the contact matrix C excludes all but the first contact observed between an agent of type j and an agent of type k within each observation session. Bottom, Results for analogous simulations performed on dynamic contact networks, with persistence $p=0.98$ (ie, edges are retained between unit time steps with probability 0.98). Each data point represents the average attack rate over 1,000 simulations (5 replicates for each of 200 randomly generated models).

Table 3. Healthcare Worker Job Group Ordering According to Expected Connectivity

| Rank | Including repeated contacts | Disregarding repeated contacts |
| --- | --- | --- |
| 1 | Unit clerks | Transporters |
| 2 | X-ray technicians | Unit clerks |
| 3 | Residents and fellows | X-ray technicians |
| 4 | Transporters | Physical and occupational therapists |
| 5 | Physical and occupational therapists | Social workers |
| 6 | Respiratory therapists | Respiratory therapists |
| 7 | Nurse assistants | Phlebotomists |
| 8 | Phlebotomists | Food service personnel |
| 9 | Intensive care nurses | Residents and fellows |
| 10 | Floor nurses | Nurse assistants |
| 11 | Social workers | Floor nurses |
| 12 | Food service personnel | Staff physicians |
| 13 | Pharmacists | Pharmacists |
| 14 | Staff physicians | Intensive care nurses |
| 15 | Housekeepers | Housekeepers |

The first simulation (Figure 2, top left) compares the performance of the targeted vaccination strategy with that of the random and omniscient vaccination strategies using the same simulation parameters as the baseline experiments and a moderately effective vaccine, $e=0.70$ (again, other values of e behave as expected). The second simulation (Figure 2, top right) uses an alternate contact model that excludes repeated contacts but weights edges accordingly to account for qualitative differences between, for example, intensive care nurses (lots of patient contact, but typically with only 1 or 2 patients) and social workers (fewer contacts per person but with a larger set of people).
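The allocation rule just described (whole categories in connectivity order, with random selection inside the last, partially funded category) is simple enough to sketch directly. In the following illustration, `members` maps each job category to its agent ids, and the inputs are again assumed toy data rather than the study's.

```python
import random

def connectivity(j, n, P):
    """Expected connectivity of job group j: sum over k of n_k * P_jk."""
    return sum(n[k] * P.get((j, k), P.get((k, j), 0.0)) for k in n)

def targeted_allocation(doses, job_types, members, n, P, seed=0):
    """Vaccinate whole job categories from most to least connected; if
    the budget runs out mid-category, choose the remainder at random."""
    rng = random.Random(seed)
    ranked = sorted(job_types, key=lambda j: connectivity(j, n, P),
                    reverse=True)
    vaccinated = []
    for j in ranked:
        group = list(members[j])
        if doses >= len(group):
            vaccinated.extend(group)
            doses -= len(group)
        else:
            vaccinated.extend(rng.sample(group, doses))
            break
    return vaccinated
```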
The third and fourth simulations (Figure 2, bottom) show results obtained in simulations performed on dynamic contact networks, with persistence $p=0.98$.

First, we note that, in all 4 simulation studies, the performance of the targeted vaccination strategy exceeds that of the random vaccination strategy and approaches that of the omniscient strategy while remaining practical and feasible from an implementation perspective. The relative performance ordering and shape of the attack rate curves are conserved in both contact models (ie, with and without repeated contacts) and for all vaccination effectiveness parameters, although the attack rates themselves may differ substantially. And although the rank order may change slightly depending on the contact model (because the corresponding $c_{jk}$ values will differ) and even from trial to trial (because each $n_j$ may itself vary slightly even as N, the total population, remains fixed), workers such as unit clerks and X-ray technicians are typically highly connected, whereas pharmacists and housekeepers are typically not as well connected. Note that ignoring repeated contacts in the input data increases the ranking of, for example, social workers, food service workers, transporters, and staff physicians (who tend to have diverse contacts with few repeats) and decreases the ranking of, for example, intensive care nurses and residents or fellows (who tend to have repeated contacts but with fewer people).

Although they are dependent on the model and simulation parameters used, our results are well behaved. Identical effects are observed over a broad range of parameters for all tested vaccination strategies, and, although the magnitude of these effects may change, the relative performance ordering of the strategies is conserved. For example, Figure 3 shows the results obtained for the targeted vaccination strategy ($e=0.70$) when parameters governing the ease of agent-to-agent transmission (top left; $0.01 \leqslant i_{j}s_{k} \leqslant 0.04$) and the duration of the subsequent infection (top right; $3 \leqslant t \leqslant 11$ days) are modified. As expected, increasing transmissibility and the duration of infections increases the attack rates. Figure 3 also explores the effect of dynamic populations, in which patients are discharged and replaced with average length-of-stay values in the range $d = 3$ to $7$ days. In general, the shorter the length of stay, the more effective the vaccination policy, because discharging infected patients has an attenuating effect on disease spread within the institution (note that the number of agents in these simulations will grow concomitantly, artificially reducing the attack rate, yet the total number of cases remains stable, limited in part by the structure of interactions between HCWs and patients).

Figure 3. Attack rate as a function of vaccination rate, for different values of other simulation parameters. Top left, Each curve represents a different probability of transmission from an infected agent to a susceptible agent ($0.01 \leqslant i_{j}s_{k} \leqslant 0.04$, where the $i_{j}s_{k}=0.025$ curve also corresponds to the targeted vaccination curve in Figure 2, left). Top right, Each curve represents a different duration of infection ($3 \leqslant t \leqslant 11$ days, where $t=7$ days also corresponds to the targeted vaccination curve in Figure 2, left).
Bottom, Attack rate (bottom left) and case counts (bottom right) as a function of vaccination rate for the targeted vaccination strategy and different values for patient expected length of stay (the "no discharge" curve corresponds to the targeted vaccination curve in Figure 2, left). Because patient discharge and replacement increases the size of the simulation population, attack rates will appear artificially low; case counts are given for ease of comparison with previous figures, where population sizes are fixed at $N=1,000$. Each data point represents the average attack rate over 1,000 simulations (5 replicates for each of 200 randomly generated models).

Finally, we address the question of how best to put our results into practice without collecting hundreds of hours of contact data. Figure 4 shows the results obtained when randomly selected subsets of observations are used to order job types for targeted vaccination. A number of different-sized observation sets are used, ranging from 1 hour per job type (a total of only 15 hours of observation) to the complete 606-hour observation set. The results show that even small observational data sets suffice to capture much of the requisite job-type ordering information underlying the targeted vaccination strategy; thus, small investments in data collection can yield large gains in vaccine performance.

Figure 4. Results obtained using varying-sized subsets of the original observation data for all 3 vaccination strategies with a moderately effective vaccine ($e=0.70$). Each data point represents the average attack rate over 1,000 simulations (5 replicates for each of 20 randomly generated models based on 10 randomly selected subsets of the observation data).
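An institution wishing to check Figure 4's point against its own data could subsample observation sessions and compare the resulting rankings. The rough sketch below is one way to do that (numpy only); it simplifies by ranking on raw mean contact counts rather than the full connectivity measure, and computes Spearman agreement as a Pearson correlation on ranks.

```python
import numpy as np

def rank_from_sessions(sessions):
    """sessions: (num_sessions, num_job_types) array of contact counts
    per 30-minute observation block. Returns each type's connectivity
    rank (0 = most connected)."""
    order = np.argsort(-sessions.mean(axis=0))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks

def rank_agreement(sessions, subset_size, trials=100, seed=0):
    """Mean Spearman correlation between the full-data ranking and
    rankings derived from random subsets of observation sessions."""
    rng = np.random.default_rng(seed)
    full = rank_from_sessions(sessions)
    scores = []
    for _ in range(trials):
        idx = rng.choice(len(sessions), size=subset_size, replace=False)
        subset_ranks = rank_from_sessions(sessions[idx])
        scores.append(np.corrcoef(full, subset_ranks)[0, 1])
    return float(np.mean(scores))
```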
## Discussion

Nosocomial influenza can have devastating outcomes for patients, and outbreaks in healthcare settings can cause serious staff shortages.1,25-29 But despite years of recommendations, vaccination rates among HCWs remain low. At UIHC, prior to this study, we noticed that vaccination rates for different groups of HCWs varied greatly, and the groups of HCWs with higher vaccination rates were not necessarily those with close contact with patients: in fact, the rate of influenza vaccination among maintenance and engineering staff was higher than that among internal medicine residents.30 Conversely, the vaccination rate of transporters (employees who take patients from one area of the hospital to another) was only 10%, yet transporters had not previously been considered as a target of vaccination compliance campaigns. Some hospitals have recently resorted to employer-mandated vaccination programs, but such efforts are often met with resistance from HCWs, including lawsuits.

There are several limitations to our study. First, as noted, observational data may be biased, because the observation of HCWs may affect their behavior. However, because our simulations rely on relative (and not exact) numbers of contacts observed, if we assume that any bias introduced will tend in the same direction and be of roughly the same magnitude across groups, we do not believe that it will alter the relative performance rankings of the vaccination strategies studied here. Second, we did not observe all groups of HCWs, nor was it possible to observe all workers from every group; furthermore, our institutional review board required consent from all shadowed HCWs, as well as approval from group supervisors, which led to less-than-random selection. Finally, this study was performed at only 1 institution; however, the job descriptions that we chose are similar to those of other acute care facilities, and, as is evident from Figure 4, even modest observational efforts will permit other institutions to account for local context.

Despite these limitations, we believe that the insights gained from our results can be used to aid the design of more effective influenza vaccination campaigns that target the HCWs most likely to transmit influenza virus. Moreover, these same insights can be used to help effectively allocate vaccine when it is in limited supply. In 2004, US hospitals struggled to ration influenza vaccine, and recommendations were made to prioritize workers with direct patient care responsibilities.31 Yet, as we have shown, even among HCWs who work directly with patients, some have a much greater effect on the spread of disease.

Simulations are used in fields where experiments are not possible; healthcare epidemiology is such a discipline. In this article, we address ways to optimize vaccination strategies using observational data, social network analysis, and computational modeling. Our findings also have broader implications in the application of other infection control interventions—for example, hand hygiene, isolation, and contact precautions.

## Acknowledgments

We thank Jennifer Kuntz and Shobha Kazinka for their contributions to data collection and data management, along with the hospital epidemiologists (Loreen Herwaldt and Daniel Diekema) and infection control professionals at UIHC.

Financial support. The University of Iowa College of Medicine Translational Research Pilot Grant (P.M.P., T.L.T., S.V.P., and A.M.S.), National Institutes of Health Young Investigator Award (P.P.), and National Institutes of Health grant NIAID-R21-AI081164 (A.M.S., S.P., and P.P.).

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article.

## References

1. Elder AG, O'Donnell B, McCruden EA, Symington IS, Carman WF. Incidence and recall of influenza in a cohort of Glasgow healthcare workers during the 1993-4 epidemic: results of serum testing and questionnaire. BMJ 1996;313:1241-1242.
2. Horcajada JP, Pumarola T, Martinez JA. A nosocomial outbreak of influenza during a period without influenza epidemic activity. Eur Respir J 2003;21:303-307.
3. Salgado CD, Giannetta ET, Hayden FG, Farr BM. Preventing nosocomial influenza by improving the vaccine acceptance rate of clinicians. Infect Control Hosp Epidemiol 2004;25:923-928.
4. Harrison J, Abbott P. Vaccination against influenza: UK health care workers not on-message. Occup Med (Lond) 2002;52:277-279.
5. Centers for Disease Control and Prevention. Epidemiology and Prevention of Vaccine-Preventable Diseases. Washington, DC: Public Health Foundation, 2006.
6. Weingarten S, Riedinger M, Bolton LB, Miles P, Ault M. Barriers to influenza vaccine acceptance: a survey of physicians and nurses. Am J Infect Control 1989;17:202-207.
7. Lester RT, McGeer A, Tomlinson G, Detsky AS. Use of, effectiveness of, and attitudes regarding influenza vaccine among house staff. Infect Control Hosp Epidemiol 2003;24:839-844.
8. Dash GP, Fauerbach L, Pfeiffer J, et al. Improving health care worker influenza immunization rates. Am J Infect Control 2004;32:123-125.
9. Call to action: influenza immunization among health care personnel. National Foundation for Infectious Diseases Web site. http://www.nfid.org/publications/fluhealthcarecta08.pdf. Published 2008. Accessed July 6, 2010.
10. Smith NM, Bresee JS; Centers for Disease Control and Prevention. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices. MMWR Morb Mortal Wkly Rep 2006;55:1-42.
11. Meyers LA. Contact network epidemiology: bond percolation applied to infectious disease prediction and control. Bull New Ser Am Math Soc 2007;44:63-86.
12. Newman MEJ. The spread of epidemic disease on networks. Phys Rev E Stat Nonlin Soft Matter Phys 2002;66:016128.
13. Christley RM, Pinchbeck GL, Bowers RG, et al. Infection in social networks: using network analysis to identify high-risk individuals. Am J Epidemiol 2005;162:1-8.
14. Keeling MJ. The implications of network structure for epidemic dynamics. Theor Popul Biol 2005;67:1-8.
15. Keeling MJ, Eames ETD. Networks and epidemic models. J R Soc Interface 2005;2:295-307.
16. Read JM, Keeling MJ. Disease evolution on networks: the role of contact structure. Proc R Soc Lond B Biol Sci 2003;270:699-708.
17. Meyers L, Newman M, Martin M, Schrag S. Applying network theory to epidemics: control measures for Mycoplasma pneumoniae outbreaks. Emerg Infect Dis 2003;9:204-210.
18. Ueno T, Masuda N. Controlling nosocomial infection based on the structure of hospital social networks. J Theor Biol 2008;254:655-666.
19. van den Dool C, Bonten MJM, Hak E, Wallinga J. Modeling the effects of influenza vaccination of health care workers in hospital departments. Vaccine 2009;27:6261-6267.
20. Nuño M, Reichert TA, Chowell G, Gumel AB. Protecting residential care facilities from pandemic influenza. Proc Natl Acad Sci U S A 2008;105:10625-10630.
21. Erdős P, Rényi A. On random graphs I. Publ Math 1959;6:290-297.
22. Erdős P, Rényi A. On the evolution of random graphs. Publ Math Inst Hung Acad Sci 1960;5:17-61.
23. Carrat F, Vergu E, Ferguson NM, et al. Time lines of infection and disease in human influenza: a review of volunteer challenge studies. Am J Epidemiol 2008;167:775-785.
24. Bridges CB, Kuehnert MJ, Hall CB. Transmission of influenza: implications for control in health care settings. Clin Infect Dis 2003;37:1094-1101.
25. Ferguson NM, Mallett S, Jackson H, Roberts N, Ward P. A population-dynamic model for evaluating the potential spread of drug-resistant influenza virus infections during community-based use of antivirals. J Antimicrob Chemother 2003;51:977-990.
26. Everts R, Hanger H, Jennings L, Hawkins A, Sainsbury R. Outbreaks of influenza A among elderly hospital inpatients. N Z Med J 1996;109:272-274.
27. Evans M, Hall K, Berry S. Influenza control in acute care hospitals. Am J Infect Control 1997;25:357-362.
28. Serwint J, Miller R. Why diagnose influenza infections in hospitalized pediatric patients? Pediatr Infect Dis J 1993;12:200-204.
29. Saxen H, Virtanen M. Randomized, placebo-controlled double blind study on the efficacy of influenza immunization on absenteeism of health care workers. Pediatr Infect Dis J 1999;18:779-783.
30. Polgreen PM, Pottinger J, Polgreen LA, Diekema DF, Herwaldt LA. Influenza vaccination rates, feedback, and the Hawthorne effect. Infect Control Hosp Epidemiol 2006;27:98-99.
31. Talbot TR, Bradley SF, Cosgrove SE, Ruef C, Siegel JD, Weber DJ. Influenza vaccination of healthcare workers and vaccine allocation for healthcare workers during vaccine shortages. Infect Control Hosp Epidemiol 2005;26:882-890.
2017-01-19 11:41:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6001930236816406, "perplexity": 6274.549115736429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00328-ip-10-171-10-70.ec2.internal.warc.gz"}
https://worldbuilding.stackexchange.com/questions/158035/moonlight-bright-enough-to-see-by/158159
# Moonlight bright enough to see by

Is it possible for an Earth-like planet to exist where the light reflected by the moon is, not as powerful, but close in intensity to daylight? And if so, what would be necessary for it? A moon much closer to the planet? Different chemical composition? Different moon cycle? Multiple moons?? What would be the effect of these on the projected moonlight? Would its spectrum shift towards blue or towards red?

Hesitant to tag this as hard-science as I'm not necessarily looking for exact equations for light propagation, etc...

• I could have sworn that I answered a question on this very subject quite recently, but can't for the life of me find the relevant post. I suspect this may be a duplicate. – Starfish Prime Oct 9 '19 at 15:43
• It can be much brighter if your Earth is a moon of a gas giant, but still there won't be as much light as during the day. – Alexander Oct 9 '19 at 16:16
• During full moons where I live (US East Coast), sometimes the moonlight is so bright you can read by it. Not as bright as daylight, but I'm sure you could stretch it for your story. – weakdna Oct 10 '19 at 1:50
• Where I live you can clearly have a shadow from the moon if the sky is clear and the moon full enough. – Mixxiphoid Oct 10 '19 at 4:09
• One could turn it around and make the planet of the setting itself be a moon. Its primary can then be a gas giant, which would be brighter by virtue of size, or even a brown dwarf, which might generate some light through gravitational contraction. Some larger brown dwarfs may even fuse deuterium. However, that would then likely mean the setting would be tidally locked. – Matthew Daly Oct 10 '19 at 16:39

I am not sure you understand the vast difference between the brightness of sunlight and moonlight when you ask for moonlight "almost" as bright as sunlight. And in fact the moonlight on Earth is quite adequate for many purposes, so it is possible that your story might work with moonlight no brighter than that on Earth.

The magnitude scale for apparent brightness is a reverse logarithmic scale. The higher the magnitude number, the lower the apparent brightness of a light source; the lower the magnitude, the higher the brightness. An object one magnitude lower is 2.512 times brighter; an object five magnitudes lower is 100 times brighter.

The new moon, the Moon at its minimum brightness, has an apparent magnitude of -2.50, while the full moon, the Moon at its maximum brightness, has an apparent magnitude of -12.90, a difference of 10.4 magnitudes. A difference of only 10.00 magnitudes corresponds to a difference of 10,000 times in brightness.

The Sun, as seen in a clear sky on Earth, has an apparent magnitude of -26.74. That is a difference of 13.84 magnitudes from the full moon. A difference of 13.00 magnitudes is a difference of about 158,000 times the brightness, and a difference of 14 magnitudes is about 398,000 times the brightness. So as seen from Earth the Sun has a few hundred thousand times the brightness of the full Moon.

Apparent magnitude

You might want to ask yourself exactly what you want the extra brightness of your planet's moon for in your story, and then do research to find out how much light is needed for that, and then figure out if it is possible to increase the brightness of moonlight on your planet that much.

On a clear night, you can see fairly well by starlight if you are far from man-made light sources and the resulting light pollution.
I used to go out at night and walk up a hill to a grassy field and look at stars and astronomical bodies with binoculars. I didn't take a flashlight with me to light my way because I wanted my eyes to become dark adapted to see in the darkness better. Human eyes adapt to see better after a few minutes in darkness. So amateur astronomers don't use flashlights or lanterns, or use only red artificial light, when setting up their equipment to observe the skies, because they don't want to interfere with their eyes adapting to see better in the dark.

On a cloudy night close to a big city, you can see fairly well by city lights reflected from the clouds due to man-made light pollution. On a clear moonlit night you can see fairly well without any artificial light sources. Both history and fiction have many examples of single persons or entire armies sneaking around in the dark. Of course, someone who travels by night without artificial light sources would have a higher than usual probability of tripping over something they don't notice or stepping into an unseen hole than if they traveled during the day. But if someone doesn't watch where they are going they could trip during broad daylight also.

The light of stars, planets, and even the full Moon is not intense enough for most people to read by. Even the light of the full Moon is not intense enough to see colors, except that objects may look faintly blueish.

If you really want the moonlight on your planet to be "almost" as bright as daylight, then you do have a problem designing a different astronomical setup allowing the moonlight to be almost as bright as daylight, because on Earth daylight is hundreds of thousands of times as bright as moonlight.

To get moonlight a thousand times more intense than moonlight on Earth, you might have a moon that occupies a thousand times the area of the sky as seen from the planet as the Moon does seen from the Earth. The square root of 1,000 is 31.622776. The Moon has an angular diameter in Earth's sky of about 29 to 34 arc minutes, so if your fictional planet's moon has an angular diameter of about 916.4 to 1,074 arc minutes, or 15.273 to 17.9 arc degrees, it will have 1,000 times the angular area of the Moon.

If your fictional moon is at the same distance as Earth's Moon, it can have 31.622 times the angular diameter of the Moon if it has 31.622 times the actual physical diameter of the Moon. That would make the fictional moon several times the diameter of any Earth-like planet, so if the planet is supposed to be Earth-like and thus have an Earth-like size, the "moon" in your story will actually be a large planet orbited by an Earth-like moon.

Or the moon in your story could be the same size as the Moon but orbit the planet 31.622 times as close as the Moon orbits the Earth. Other things being equal, that will make it appear to be 1,000 times as bright as the Moon seems in Earth's sky. It should actually be more than 1,000 times as bright, since the moon will be closer to the planet and its reflected light will be more concentrated when it hits the planet.

The Moon has an average distance of about 384,402 kilometers or 238,856 miles from Earth. Divided by 31.622, that makes about 12,156 kilometers or 7,553 miles, which would be really close to Earth. I believe that a moon that orbits an Earth-like planet that closely would actually be slowly spiraling in toward the planet and would break up into rubble or collide with the planet within a few million more years.
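For readers who want to reproduce the arithmetic in the preceding paragraphs, here is a short Python sketch of the magnitude-to-flux conversion and the "bigger or closer" scaling. The formulas are standard, but treat the output as back-of-the-envelope.

```python
import math

def brightness_ratio(m_bright, m_faint):
    """Flux ratio for a magnitude difference: 100 ** (delta_m / 5).
    Sun (-26.74) vs full moon (-12.90) gives roughly 3.4e5."""
    return 100 ** ((m_faint - m_bright) / 5)

def thousand_times_brighter(gain=1000, moon_distance_km=384_402):
    """A moon looks `gain` times brighter if its angular *area* grows by
    that factor, so its angular diameter must grow by sqrt(gain) --
    via a bigger moon, a closer orbit, or any mix of the two."""
    diameter_factor = math.sqrt(gain)        # ~31.62 for gain = 1000
    return diameter_factor, moon_distance_km / diameter_factor

print(round(brightness_ratio(-26.74, -12.90)))   # ~343,600
print(thousand_times_brighter())                 # (31.62..., ~12,156 km)
```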
Or you could make the moon both larger than the Moon and also much closer to the planet than the Moon, so that the two factors combined give the moon an angular diameter 31.622 times that of the Moon to make it 1,000 times brighter than the Moon, while still being far enough away to not be spiraling in to its doom.

You could also make the surface material of the moon in your story more reflective than the surface of the Moon. The Moon has a rather dark, dull surface and only reflects a small percentage of the light that hits it. So your fictional moon could be more reflective than the Moon.

Maybe your Earth-like planet has several large and close moons orbiting it in different orbits. And maybe your planet could have a ring of large moons around it at a fairly close distance. Recent calculations indicate that it is possible for many equally spaced objects of equal mass to share the same orbit, so a few dozen large moons sharing the same orbit around a planet would not be physically impossible, though such an arrangement would be extremely improbable.

The Ultimate Engineered Solar System

So you could make an astronomical arrangement where your planet has moonlight a few thousand times as bright as moonlight on Earth. But sunlight on Earth might still be tens or hundreds of times as bright as the moonlight on your planet, even if you make the moonlight on your planet a few thousand times as bright as moonlight on Earth.

In my opinion, making your "planet" actually a giant, Earth-sized moon of a giant planet may be the way to get the other astronomical body as large as possible in the sky of your world, and thus reflect as much light as possible onto that world. And if you decide that is the case you should look up other questions and answers on this site about stories set on the moons of giant planets.

But of course the astronomical setup necessary for your story depends on exactly what you want more moonlight for in your story, and thus how much brighter the moonlight needs to be.

• Would rings work like the many moons (and be lots more probable)? – JollyJoker Oct 10 '19 at 9:06
• @Jolly Joker As you may remember from science class, "the angle of incidence equals the angle of reflection", and you could draw a diagram of a planet with rings. Rings are made of many small particles and are sort of flat in the plane of a planet's equator. I think that sunlight that hits the rings on the day side of the planet will be reflected toward the planet and maybe make the day brighter, while much of the sunlight hitting the rings on the night side of the planet will be reflected away from the planet into space and won't make the night brighter. Continued. – M. A. Golding Oct 10 '19 at 15:29
• @Jolly Joker Continued. I think that asking how much a ring system could illuminate the night side of its planet might be a good separate question. In addition to other limiting factors, part of the rings on the night side of the planet should be in the planet's shadow and thus will not reflect any light onto the planet to make the night brighter. – M. A. Golding Oct 10 '19 at 15:33
• A giant ball of anodized aluminum that has the ability to reflect 99% of light may do it. – Magic Octopus Urn Oct 11 '19 at 4:49

The sun is about 400,000 times brighter than the full moon. That's quite a lot. The moon, despite looking quite white, is actually a surprisingly dingy grey with an average albedo of about 0.12 (equivalent to damp soil).
If you painted the moon a brilliant glossy white and raised its albedo to 1, it would be a little over 8 times brighter, which still leaves it 1/48,000 as bright as the sun. (Incidentally, an ideal material for the surface of your super-white moon would be ice, which is a little implausible close in to the parent star but not entirely impossible.)

> A moon much closer to the planet? Different chemical composition? Different moon cycle? Multiple moons??

None of the above. The apparent brightness of our super-high-albedo moon is related to its size and its distance from the sun. Even if you had ten moons, and each one had four times the apparent size of the moon (so about twice the apparent angular diameter), you'd still be at 1/1,200th of the sun's brightness, and that's such an astonishingly unlikely and gravitationally unstable arrangement that it isn't really worth thinking about.

You'd either need to move much closer to the parent star, or to substantially increase the brightness of the parent star. In either case, the apparent brightness of the sun during the daytime would be correspondingly higher, and that means that your planet is going to be roasted and won't be likely to support life (or even an atmosphere, to be honest).

If you want something almost as bright as the sun without incinerating the world, you should see about building giant orbital mirrors and have them oriented such that they reflect sunlight onto the planet.

> What would be the effect of these on the projected moonlight? Would its spectrum shift towards blue or towards red?

Moonlight actually has a slightly warmer colour temperature than sunlight (about 4,100 K vs about 5,500-5,800 K for sunlight). A perfectly colour-balanced brilliant white moon should therefore have a slightly cooler colour temperature than the moon (e.g.: more blue), which isn't what I would have expected at all.

• note that "to see by" you don't have to be anywhere near the brightness of the sun. The real full moon is good enough to see at night. – Aequitas Oct 9 '19 at 23:55
• @Aequitas from the first line of the body of the question: "light reflected by the moon is, not as powerful, but close in intensity to daylight?". – Starfish Prime Oct 10 '19 at 7:17
• Could the mirror thing happen naturally? For example if that moon had been hit or hollowed out and was concave? – Cristol.GdM Oct 10 '19 at 9:02
• @Cristol.GdM probably not; you'd want a really big flat mirror (approaching the diameter of the moon itself) and such things are gravitationally unstable. Natural structures that big would likely collapse down on themselves under their own gravity, or drift apart. You probably also wouldn't get a nice smooth surface finish either, so it wouldn't be very mirror-like. – Starfish Prime Oct 10 '19 at 9:16
• What about a moon covered in a layer of highly metallic sand? This could have retro-reflective properties similar to day-glow paint, and with the sun at certain angles could appear considerably brighter than just painting it glossy white. (It's also more likely to occur in nature.) You're still not going to get anywhere near daylight brightness of course. Even if you turned the moon into a perfectly mirrored disco ball you're still only getting a fraction of the light from the sun. – Darrel Hoffman Oct 10 '19 at 14:14
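The multipliers in the answer above stack linearly, which makes them easy to tabulate. A small sketch using that answer's own figures (sun roughly 400,000 times the full moon, lunar albedo about 0.12):

```python
SUN_OVER_FULL_MOON = 400_000   # approximate flux ratio, clear sky
MOON_ALBEDO = 0.12

def fraction_of_daylight(albedo=MOON_ALBEDO, angular_area_x=1.0,
                         num_moons=1):
    """Moonlight as a fraction of sunlight after stacking the available
    multipliers: surface albedo, apparent disc area, and moon count."""
    gain = (albedo / MOON_ALBEDO) * angular_area_x * num_moons
    return gain / SUN_OVER_FULL_MOON

# A glossy-white moon (albedo 1): ~8x brighter, still ~1/48,000 of sunlight.
print(1 / fraction_of_daylight(albedo=1.0))
# Ten moons, each with 4x the apparent area, all painted white: ~1/1,200.
print(1 / fraction_of_daylight(albedo=1.0, angular_area_x=4, num_moons=10))
```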
Salt deposits are more stable, and they are white when powdered. The bright spots on Ceres are hydrated magnesium salts and brine deposits. I don't know if hydrated salts can retain their water content on our moon, but salts like sea-water salt are white by nature and have an albedo much higher than that of regolith. https://en.m.wikipedia.org/wiki/Bright_spots_on_Ceres

Your setup may involve a shower of meteors made up of frozen brine hitting the moon and changing its color.

Moonlight has the (lack of) intensity it does because the Moon's surface (bright as the full moon looks at night, against the black of space) is quite dark -- about like worn asphalt pavement, gravel with tar between the pebbles. To make it brighter, it would need to be covered with brighter material.

One fine candidate is ice; a fresh ice surface, if it's finely divided, like snow, could reflect about 5 times as much light as the regolith we see. Unfortunately, ice doesn't stay white like snow over geological time when exposed to space; it darkens and turns red. The undisturbed ice surfaces of Kuiper Belt objects and first-time comets can be darker than the rock dust that covers most of the Moon's face, and redder than a building brick, so if it's to stay bright, it'll need some mechanism to replace the surface every few thousand years or so.

Most ices will reflect nearly white -- that is, they won't change the color of the light that strikes them much. Spectrography can tell what the ice is made of, to some extent, by what light it absorbs, but the color of the reflected light will read as white to the eye.
Bright enough to see by can mean many things. And the human eye can adapt to a very wide range of brightness values. For example, the brightest noon sun can be 120,000 lux, but a very cloudy overcast day can be as low as 200 lux. Most people barely notice the difference because our pupils expand and contract to keep the perceived brightness roughly the same. A full moon is about .25 lux, but when fully adapted it can be possible to read by moonlight. Interior light generally is between 100 and 250 lux (the latter for a bright office, for example). So if you can deal with the amount of light in a classroom or office, you only need to improve your moonlight by a factor of 400-1000. If your people are aliens, you could also leave the moonlight about the same and simply give them slightly bigger eyes, or their eyes could have a tapetum lucidum structure at the back (that reflective surface that makes dogs and cats eyes glow when you shine a light in them), which greatly enhances their low light performance. I did a bunch of math related to this concept on the Gearbox Borderlands (video game) forums. The original post is here (go to the bottom of the linked post and expand the "details" section by clicking the arrow). Summary: An Earth-like setup (1) with a moon close enough to be $$\frac{1}{6}$$ the Sun's brightness would have civilization-destroying tidal forces unless it's tidally locked to the planet. We could handwave the density by calling it a giant comet (3) which makes the tides tolerable, or making it out of fictional, synthetic materials (4) so it has practically no tides. Alternately, we could get a much more plausible setup (4) at 71 times our moon's brightness that still has massive tidal forces, but would only render the outer portions of continents unlivable. By reducing the water mass, you could get more livable area. This is nowhere near as bright as the Sun, but is still bright enough to see by. Using the handwavium from (3), you could reduce the tides to Earth-like levels. Here are the setups I came up with: ### First Setup Moon is 14,430 miles above the planet, subtends 23.3°, and has a solid angle of 0.512 steradians (sr), which is 8000 times higher than our moon. Moon's surface is like Saturn’s moon Enceladus, with a 99% albedo, or 8.25 times brighter than Earth’s moon. This brings the total brightness to 66,000 times brighter than our moon. The Sun is about 400,000 times brighter than our moon, so this moon is about $$\frac{1}{6}$$ the Sun's brightness, or about 16000 lux. Problems: 1. Tidal forces are 4538 times higher, meaning the tides would literally be miles high, which would destroy everything on the surface unless the planet is tidally locked to its moon. This means the orbital period of the moon is exactly equal to the rotational period of the planet, and also means the moon would never appear to move in the sky unless you moved to a different part of the planet. So some parts of the planet always see their moon, while others never see it. 2. You're quite close to the Roche limit of 6000 miles, which may cause other major effects I'm not seeing. 3. Your day length is about 9 hours, which may or may not be acceptable. ### Second Setup Moon is 41,178 miles above the planet, subtends 0.75°, has a solid angle of 0.00054 sr, which is 8.4 times that of our moon. Again, it has 99% albedo, for an 8.5 multiplier. Total brightness is 71 times our moon, or 1.8% of 1% of the Sun (0.00018). 
Additionally, the moon is 41% larger, and twice as massive to keep its surface gravity the same. The planet is a quarter the mass of Earth and half the size. Problems: 1. Tidal forces are still 391 times higher. We can cut this down to about 200 by leaving the moon's mass alone, but its still untenable for normal life in non-locked orbits, with waves over 1000 feet tall. 2. That said, you could probably have reasonable amounts of life nearer the center of continents (away from the reach of thousand-foot tidal waves), though your world would be very active with volcanoes and such. I think that once the waves start covering the continents, the increased surface area means the water is being spread into a shorter wave, so it would likely be "only" several hundred feet of elevation that gets submerged twice a day. 3. It's not really "near daylight" anymore, but it's certainly enough to see by. 4. The day length is 90 hours. This was specifically done for Pandora, because the first Borderlands game claims the day there is 90 hours. You could play with the numbers to get something closer to your world's parameters, but you're not going to get much better tides without losing a lot of light. ### Third Setup If we stop restricting ourselves to the Borderlands setup, we can try weird things. Copy the first setup, but make the moon far less dense. Say, the density of a comet (0.6 $$\frac{g}{cm^3}$$). It's still about $$\frac{1}{6}$$ our moon's mass, and probably not physically realizable anyways (that much mass would condense into a much denser ball with gravity). But it brings the tides down to something like 100-200 feet, which would allow life to reasonably exist. And if you handwave the size, you have the advantage that it's made of lots of ice, which is highly reflective, making our 99% albedo more believable. Of course, then you have to handwave the ice seeing as our Moon has surface temperatures well over boiling water. So I'm not sure this is a great solution. ### Fourth Setup Let's use some spacewavium. A synthetic moon made of an extremely lightweight shell might work. It would have negligible mass, and therefore no discernable tides, but would still have the broad surface area needed to reflect lots of sunlight down onto the planet. There are no known methods of constructing such a shell, but it's in the realm of "probably possible". ### Notes on Lux This Wikipedia article lists lux values for different conditions. Our moonlight is 16,000 lux for the first, third, and fourth setups, and 17.5 lux for the second. In comparison, sunlight is about 100,000 lux on a typical day, 20,000 lux in shade that's just illuminated by diffuse light from the sky, 1500 lux on an overcast day, and 40 lux when overcast at sunrise or sunset. Nighttime house lighting is about 30 lux, 50-100 lux is acceptable for safely moving around a strange area without tripping over anything, and 200-500 lux is comfortable for reading. I can navigate in full moonlight (0.1 to 0.25 lux) across unknown terrain without too much difficulty (fences and potholes can be troublesome though), so those "safe movement" thresholds are way above "required to see anything" limits. The first setup would be very bright, much brighter than an overcast day and almost as bright as standing in the shade on a clear day. The second setup would be nowhere near daylight, but still easily bright enough to navigate on foot and so forth. I don't think you could safely drive at highway speeds though. 
### Tidal Issues

The tide difference in Cape Disappointment, Washington is about 6 feet on average, and seems pretty typical for Earth. The difference at the Bermuda Biological Station is only 2.5 feet, but the top of the island is only 249 feet above sea level. This height is pretty typical for areas around the Gulf of Mexico as well. Places like Anchorage, Alaska get really high tides averaging 26 feet of difference. Canada supposedly has places with 40 feet of average difference.

[Elevation map of the United States.] Taken from Pinterest, but the map says it's Rand McNally, so I'm calling "fair use" given that maps at these resolutions are available straight off their online store.

If we presume 5-foot average tides, then our first setup would have tides about 4.3 miles tall. That would submerge most of Earth's land. The second setup would have 1,955-foot tides, which would cover everything in the green and green-yellow zones, or basically the eastern half of the United States. Modifying the second setup to have a lower lunar mass brings us down to 977 feet, or about half of the green-yellow zone and all of the green zone. As I said above, I don't think you'd actually get these numbers though, since the water's height is proportional to how much area it covers. Also, if your planet has much less water covering the surface, the relative land area destroyed by tides goes way down.

• Thank you for all the input, very interesting details! – Whitehot Oct 11 '19 at 10:26
• Your answer is very good. I need to ask: can we go really wild and try a setup where the "moon" is a recently captured (in geological terms) comet made mostly of ice, still with an odd orbit and a tail, increasing its reflection area? – jean Oct 11 '19 at 20:02

This is a brief answer meant to complement others. Some years back I designed portable solar-powered lights. I did substantial testing of what could be achieved with various light levels. There was much available information about what was "needed" for, e.g., colour vision, fine work such as embroidery, general hobbies, day-to-day activities, finding your way around, not being in quite absolute pitch blackness, and so on. In many cases the published "required" levels were well above what was adequate for the tasks.

Cutting to the chase, for now. More later maybe. Light levels are measured in lux = lumens per square meter. Don't let it worry you.

Lux:

- 0.01-0.05: stumble around in the not-quite-dark
- 0.1-0.25: bright moonlight
- 5: read normal print with difficulty
- 10-20: read "OK" but poor colour rendition
- 25-50: colours reasonably discernible but brighter would be nicer
- 50-100+: colours good
- 300: LCD screen surface, full white
- 100,000: noonday sunlight

So, at 5 lux = 50 × good moonlight, with long-term familiarity and/or biological adaptation, you may be able to do OK. Chart from here - not otherwise worth looking at.

• This is actually surprisingly helpful - thank you for that! – Whitehot Oct 11 '19 at 10:24

Don't overlook the composition of the planet's atmosphere. One with lots of moisture droplets floating around would diffuse the moonlight and increase the apparent brightness at the surface.

• "A lot of moisture droplets floating around" is something we see a lot. We call it "clouds" (mist, fog, whatever). These are notorious for reducing the available light from both the Sun and the Moon. – fraxinus Oct 10 '19 at 14:30
• This astronomy answer shows just how much of the absorption spectrum is due to water vapor in our atmosphere (and the Martian atmosphere for bonus points).
– MichaelS Oct 10 '19 at 21:37

In short, no. And for the TL;DR version, the long answer is also no. M. A. Golding has given a lot of very good details about why in his answer. What it boils down to is that the moon receives about the same amount of light per square foot as the Earth does (at the upper atmosphere at least), only a small proportion of that light is reflected from the surface, and only a fraction of what is reflected ends up on Earth. It doesn't matter how much you raise the albedo; the resultant midnight illumination from a full moon is still going to be drastically reduced from any direct light from the sun. See M. A. Golding's answer for some math around this.

Since reflection won't cut the mustard, the only option is to have the 'moon' be a light source, and a fairly damned bright one at that. As far as I can tell there's nothing in current physics, astronomy, or cosmology that fits the bill. It can't be a small star; it would have to be far more massive than any conventional planet to even get started. Fission is either too slow or way too fast. If the moon were white-hot it would work for a while, but the surface would cool rapidly due to the escaping radiant energy, and you'd still have to figure out what caused the temperature to be that high in the first place.

The only other options I can see are artificial, and those are still big problems. A flat perfect mirror the same diameter as your planet would do it. A smaller convex perfect mirror would look like a smaller sun and deliver a smaller amount of light. You could have an array of mirrors directing extra light towards the moon, heating it and raising its surface temperature. Don't ask me how you'd keep them aligned; the torque would probably shred your mirror. And that's without trying to figure out an orbit that keeps it out of the planet's shadow.

The total energy we receive from the Sun is around 175 petawatts at the 'surface' of the atmosphere. If you cut that down to just visible light - about 43% - you'll need 75 petawatts of visible light to match daylight. You could probably get away with maybe a quarter of that to maintain reasonable light levels on a cloudless night, so let's settle on about 20 petawatts of visible light to satisfy your needs. That's on the order of 1,000 times the total global energy production right now.

So natural sources are, I'm sorry to say, almost completely out of the question, and artificial sources are going to be hugely problematic. You could throw some junk science at the problem or just go full fantasy. White holes? Crazy minerals that tap the zero-point energy field or convert dark energy? Spontaneous antimatter conversion? A Dyson swarm transmitting power to light the dark side for the ignorant savages on the surface?

It needs mentioning that 'moonlight' isn't the same as 'nightlight'. Earth's moon reflects light during the day too, but isn't generally significant compared to sunlight. A bright moon would add to 'daylight' as well as 'nightlight'. Generally 'daylight' will be 'sunlight' + 'moonlight' + 'starlight' etc., and 'nightlight' will be 'daylight' - 'sunlight'. So if you want sunlight to be negligible, so that night and day are pretty close, you want a dim sun and a LOT of stars.

Consider a world that is face-locked to its "sun" - as Mercury was once thought to be, and as the Moon is to Earth. Make the 'sun' side inhospitably bright and hot and unlivable. Provide some form of thermal circulation system that allows the outward side not to freeze.
E.g.:

- Handwavium
- Hot rivers
- Thermal cycling of water or magma or ...
- Maybe "a hot rain's gonna fall"? :-)

Then - the pièce de résistance - a series of 'moons' that orbit sequentially (or maybe relatively chaotically, or ...) and which provide light from the star on the far side. Extra points: the moons are, for whatever reason, in elliptical orbits with a fast transit across the bright side and high elliptical loops over the dark side. Add (implausible) titanium dioxide surfaces if you want really high albedo. Maybe the long-passed-on forefathers set it up? :-)

An alternative way to think about this, perhaps, is the terrain of the Earth. If you're in a valley surrounded by forest or bush (dark green, no reflections), then a full moon seems very dim. If you're on top of a rolling hill surrounded by a snow-covered landscape, then a full moon is almost dazzling. Apparent brightness is about more than just the source of the light.
2021-03-08 06:04:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39735350012779236, "perplexity": 1103.2139320207777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00021.warc.gz"}
https://www.physicsforums.com/threads/calc-speed-30-year-journey.732459/
# Calc speed 30 year journey

1. Jan 12, 2014

### texasman1979

Here's the hypothetical scenario: a trip from Earth to Alpha Centauri in 30 years, steadily accelerating to the halfway point, then turning around and steadily decelerating. I'm trying to figure out a way to work out speeds along the timeline. I don't want the answer, but the math formula to figure it out. Thx.

30 years -- timeline
4.2 light-years -- distance to the stars

Last edited: Jan 12, 2014

2. Jan 12, 2014

### texasman1979

calc thrust non rocket engine

I'm writing a sci-fi novel. The spacecraft will have a magnetic-based propulsion system. In the book the spacecraft weighs 130 metric tons. I would like some educated numbers to go by for figuring thrust and speed, from 0 thrust up to 1 g. Obviously, with a non-rocket type of propulsion there would be no mass loss as fuel is burned, because there wouldn't be any fuel. So I don't divulge too much of the story line: pretend it is solar-based energy making electricity making a magnetic field. This is a book I'm writing, but there is some real science behind it and I want it to be as real as it can be. Thx.

3. Jan 12, 2014

### Staff: Mentor

I merged your threads, as the questions are closely related. This is not really astrophysics, so the thread could get moved.

For a uniform acceleration "a", starting at rest, and neglecting relativistic effects, the distance travelled after time t is given by $s=\frac{1}{2} a t^2$. You know t and s (half the distance and time), so you can calculate a. With that, the velocity is just acceleration × time. The deceleration part works the same way, just backwards.

F = m*a. Calculate a, and you can get F.

What is a "magnetic based propulsion system"? You have to accelerate something backwards: mass from the rocket, mass or light incoming from some external source, or the interstellar medium. You do need fuel to power your ship - even if you have a system that does not need reaction mass.

Last edited: Jan 12, 2014

4. Jan 12, 2014

### texasman1979

Two parts to this. In space, once you are going 1,000,000 miles per hour you are going to continue going 1,000,000 miles per hour until some force acts to modify that. In my story, an inventor makes a new type of ship; the story is set in the next few decades. As far as the details of that, you'll have to read the book. :)

So, a 30-year trip to Alpha Centauri, a binary star system. The character and his craft will begin in Earth orbit and slingshot around a couple of planets to achieve system exit velocity. Then he will continue to accelerate toward Alpha Centauri for 15 years and then turn around and decelerate for the next 15 years. Meanwhile, the ship weighs 130 metric tons. That's 286,600.6 pounds. How many pounds of thrust would be needed to get 1 g of acceleration?

I am having a bit of trouble relating your math to the principles I have stated. It would help me a great deal if you related the math to my particular problem so I can relate the two better. Thx.

5. Jan 13, 2014

### Bandersnatch

The 130 metric tons is the m in F = ma. Convert it to kilograms. The 1 g is the acceleration a (use the value in m/s^2). Plug these in and you get the force (thrust) in newtons.

Same with $s=\frac{1}{2}at^2$: s is half the distance to αCen in metres; a is the acceleration you're looking for; t is the time to get halfway there, in seconds. Just plug the values in.

By the way, at 1 g constant acceleration you'll get there in 3.5 years of on-board time and 5.8 years of Earth time, as relativistic effects start to play a role at those speeds.
To get there in 30 years, you need about 0.02 g of acceleration (here, relativistic effects are negligible). You may find these resources useful: http://www.projectrho.com/rocket/calculators.php and in particular this one: http://mysite.verizon.net/res148h4j/javascript/script_starship.html

6. Jan 13, 2014

### texasman1979

How did you come up with those times and g-forces? That web page? How do I figure that manually?

7. Jan 13, 2014

### Staff: Mentor

That is not necessary: your ship is so powerful that it would be a waste of time (several years) to use gravitational slingshots within our solar system for a tiny velocity gain.

I considered all those principles for the formulas and explanations I posted. Metric units are much more convenient.
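For anyone wanting to reproduce these numbers manually, here is a sketch (mine, not from the posts) that plugs the thread's figures into the formulas above. The 1 g check at the end uses the standard constant-proper-acceleration formulas for a flip-at-the-midpoint trip:

```python
import math

# Thread numbers: 4.2 light-years total, 30 years total, accelerate to the
# midpoint and decelerate after, 130-metric-ton ship. Newtonian mechanics
# is fine for the main calculation because the answer is only ~0.02 g.
C = 299_792_458.0               # speed of light, m/s
LY = C * 365.25 * 86_400        # metres in one (Julian) light-year
G = 9.80665                     # standard gravity, m/s^2

half_s = 2.1 * LY               # distance to the turnaround point
half_t = 15 * 365.25 * 86_400   # seconds to the turnaround point
m = 130_000.0                   # ship mass, kg

a = 2 * half_s / half_t**2      # from s = (1/2) a t^2
v_peak = a * half_t             # speed at turnaround
F = m * a                       # thrust, from F = m a

print(f"a = {a:.3f} m/s^2 = {a / G:.3f} g")         # ~0.018 g
print(f"v_peak = {v_peak / C:.2f} c")               # ~0.28 c
print(f"F = {F:,.0f} N (~{F / 4.448:,.0f} lbf)")    # ~23,000 N

# Relativistic check of the 1 g trip quoted above (accelerate, flip, brake):
a1 = G
x = half_s
tau = 2 * (C / a1) * math.acosh(1 + a1 * x / C**2)        # ship (proper) time
t = 2 * (C / a1) * math.sqrt((1 + a1 * x / C**2)**2 - 1)  # Earth time
yr = 365.25 * 86_400
print(f"1 g trip: {tau / yr:.1f} yr ship time, {t / yr:.1f} yr Earth time")
```

Running this gives about 0.018 g and roughly 23,000 N (about 5,200 lbf) of thrust for the 30-year profile, and the 1 g check reproduces the 3.5-year/5.8-year figures quoted above.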
2018-05-26 04:55:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6382626295089722, "perplexity": 1580.3198554921903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867309.73/warc/CC-MAIN-20180526033945-20180526053945-00561.warc.gz"}
https://infoscience.epfl.ch/record/82505
### Abstract

This work is concerned with the computational complexity of the recognition of $\mathcal{P}_2$, the class of regions of Euclidean space that can be classified exactly by a two-layered perceptron. Some subclasses of $\mathcal{P}_2$ of particular interest are also studied, such as the class of iterated differences of polyhedra, or the class of regions $V$ that can be classified by a two-layered perceptron whose only hidden units are those associated with the $(d-1)$-dimensional facets of $V$. In this paper, we show that the recognition problem for $\mathcal{P}_2$, as well as for most of the other subclasses considered here, is NP-hard in the most general case. We then identify special cases that admit polynomial-time algorithms.
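To make the object of study concrete, here is a small illustrative sketch (mine, not from the paper): a two-layered threshold perceptron that exactly classifies a convex polyhedral region, with one hidden half-space unit per facet and an AND gate as the output unit. This is the simplest kind of member of the facet-based subclass mentioned above; general members of the class can also be non-convex.

```python
import numpy as np

def step(z):
    """Heaviside threshold unit: 1 if z >= 0, else 0."""
    return (z >= 0).astype(float)

# Triangle with vertices (0,0), (1,0), (0,1), i.e. the intersection of
# the half-planes x >= 0, y >= 0, and x + y <= 1. One hidden unit tests
# each facet's half-space.
W_hidden = np.array([[ 1.0,  0.0],    # x >= 0
                     [ 0.0,  1.0],    # y >= 0
                     [-1.0, -1.0]])   # -x - y >= -1  <=>  x + y <= 1
b_hidden = np.array([0.0, 0.0, 1.0])

def in_region(p):
    h = step(W_hidden @ p + b_hidden)        # one firing per satisfied facet
    return step(np.array([h.sum() - 3.0]))   # output fires only if all 3 fire

print(in_region(np.array([0.2, 0.2])))  # [1.] -> inside the triangle
print(in_region(np.array([0.9, 0.9])))  # [0.] -> outside the triangle
```

The recognition problem studied in the abstract asks the converse, and much harder, question: given a region, decide whether any such two-layered network classifies it exactly.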
2021-04-16 16:02:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4956412613391876, "perplexity": 225.79194927770777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00631.warc.gz"}
https://direct.mit.edu/coli/article/40/4/921/1485/Reflections-on-the-Penn-Discourse-TreeBank
## Abstract

The Penn Discourse Treebank (PDTB) was released to the public in 2008. It remains the largest manually annotated corpus of discourse relations to date. Its focus on discourse relations that are either lexically-grounded in explicit discourse connectives or associated with sentential adjacency has not only facilitated its use in language technology and psycholinguistics but also has spawned the annotation of comparable corpora in other languages and genres. Given this situation, this paper has four aims: (1) to provide a comprehensive introduction to the PDTB for those who are unfamiliar with it; (2) to correct some wrong (or perhaps inadvertent) assumptions about the PDTB and its annotation that may have weakened previous results or the performance of decision procedures induced from the data; (3) to explain variations seen in the annotation of comparable resources in other languages and genres, which should allow developers of future comparable resources to recognize whether the variations are relevant to them; and (4) to enumerate and explain relationships between PDTB annotation and complementary annotation of other linguistic phenomena. The paper draws on work done by ourselves and others since the corpus was released.

## 1. Introduction

The Penn Discourse TreeBank, or PDTB (Prasad et al. 2008; PDTB-Group 2008), is the largest manually annotated resource of discourse relations. This annotation has been added to the million-word Wall Street Journal portion of the Penn Treebank (PTB) corpus (Marcus, Santorini, and Marcinkiewicz 1993), indicating relations between the events, facts, states, and propositions conveyed in the text—relations that are essential to its understanding. Some relations are signalled explicitly, as in Example (1), where the underlined phrase as a result signals a causal relation between the situation described in the first two sentences (called Arg1 in the PDTB, formatted here in italics) and a situation described in the third sentence (called Arg2, formatted here in bold). Other relations lack an explicit signal, as in Example (2), where there is no explicit signal of the causal relation between the situation described in the first sentence and that described in the second. Nevertheless, there is no change in meaning if the relation is made explicit—for example, using the same phrase as a result (Martin 1992).

• (1) Jewelry displays in department stores were often cluttered and uninspired. And the merchandise was, well, fake. As a result, marketers of faux gems steadily lost space in department stores to more fashionable rivals—cosmetics makers. [wsj_0280]

• (2) In July, the Environmental Protection Agency imposed a gradual ban on virtually all uses of asbestos. (implicit=as a result) By 1997, almost all remaining uses of cancer-causing asbestos will be outlawed. [wsj_0003]

Over 18K explicitly signalled relations and over 16K implicit forms have been annotated in the PDTB 2.0 (cf. Section 3.2, Table 1), which was released in February 2008 through the Linguistic Data Consortium (LDC).1 Researchers since then, in both language technology and psycholinguistics, have begun to use the PDTB in their research, developing methods and tools for automatically annotating discourse relations (Wellner and Pustejovsky 2007; Elwell and Baldridge 2008; Pitler et al. 2008; Pitler and Nenkova 2009; Wellner 2009; Prasad et al. 2010a, 2011; Zhou et al. 2010; Ghosh et al. 2011a, 2011b, 2012; Lin, Ng, and Kan 2012; Ramesh et al.
2012), generating questions (Prasad and Joshi 2008; Agarwal, Shah, and Mannem 2011), ensuring an appropriate realization of discourse relations in the output of statistical machine translation (Meyer 2011; Meyer and Popescu-Belis 2012; Meyer and Webber 2013), and testing hypotheses about human discourse processing (Asr and Demberg 2012a, 2012b, 2013; Jiang 2013; Patterson and Kehler 2013). Other researchers have adapted the PDTB style of annotation to create comparable resources in other languages and genres (Section 4).

Table 1: Total relations annotated in the PDTB.

| PDTB Relations | No. of tokens |
| --- | --- |
| Explicit | 18,459 |
| Implicit | 16,224 |
| AltLex | 624 |
| EntRel | 5,210 |
| NoRel | 254 |
| Total | 40,600 |

What then are the aims of this paper? First, for those researchers who are unaware of the PDTB, Section 2 of the paper lays out the key ideas behind the PDTB annotation methodology, and Section 3 describes the corpus in more detail than previous papers (Prasad et al. 2008; PDTB-Group 2008) and presents what we have learned since release of the corpus in 2008. Secondly, for those researchers who have used the PDTB, Section 3 aims to point out significant features of its annotation that have either been ignored or taken to be intrinsic when they are simply accidental. We hope that this will enable researchers to derive more from the corpus in the future and recognize the value of having it more completely annotated. Thirdly, annotation of comparable resources in other languages and genres has turned out to vary from PDTB annotation in ways that may be of interest to people contemplating the development of comparable resources in other languages and genres. Section 4 summarizes and explains the sources of this variation. Fourthly, Section 5 aims to show how PDTB annotation complements TimeBank (Pustejovsky et al. 2003a) and PropBank (Palmer, Gildea, and Kingsbury 2005) annotation over the same Penn TreeBank corpus. Section 6 closes with a summary of the key points.

Although extensive documentation can be found on the PDTB Web site, along with discussions of various aspects of PDTB annotation, there has not as yet been as comprehensive and quantified a discussion of issues as presented here, especially concerning comparable corpora and complementary annotation. Providing this discussion is a major goal of this paper. We ourselves will be addressing many of these issues in the next few years.

## 2. Key Ideas Underlying PDTB Annotation

Two key ideas underlie the methodology used in annotating the PDTB, setting it apart from other efforts to annotate discourse relations (e.g., Carlson, Marcu, and Okurowski 2001; Polanyi et al. 2004; Baldridge, Asher, and Hunter 2007). First, it makes no commitment to any kind of higher-level discourse structure over the discourse relations annotated between individual text spans. Thus, while theory-neutral itself with respect to higher-level discourse structure, the PDTB invites experimentation with approaches to high-level topic and functional structuring (Stede 2012; Webber, Egg, and Kordoni 2012) or to hierarchical structuring (Mann and Thompson 1988; Asher and Lascarides 2003), as a resource for research aimed at a "data-driven and emergent theory of discourse structure" (Bunt, Prasad, and Joshi 2012, page 61). Secondly, the annotation of discourse relations is lexically grounded.
Rather than asking annotators to directly classify the sense of relations, which is a difficult task (Stede 2008), annotators were asked to look at lexical items that can signal discourse relations, such as the expression As a result in Example (1). When they did signal discourse relations, their arguments and senses were then annotated. Annotators were also asked to look at adjacent sentences that lacked one of these explicit signals. Where they inferred a discourse relation, they first labeled it with a lexical item that could serve as its explicit signal (such as As a result in Example (2)), before going on to classify its sense. In both cases, this lexical grounding was aimed at making the annotation more reliable, but it can also serve as a feature in the automated identification of discourse relations mentioned in Section 1.

A more detailed introduction to the PDTB can be found in the PDTB-2.0 overview paper (Prasad et al. 2008) and the PDTB-2.0 annotation manual (PDTB-Group 2008). Other papers describe specific aspects of the annotation such as the senses used in annotating relations (Miltsakaki et al. 2008), alternative lexicalizations (Prasad, Joshi, and Webber 2010b), and attribution (Prasad et al. 2007).

## 3. Key Features of PDTB Annotation

Here we discuss four key aspects of PDTB annotation that have been partially ignored or misunderstood: explicitly signalled discourse relations; implicit discourse relations; properties of the arguments to discourse relations; and several issues concerning the senses of discourse relations and their annotation. These discussions extend the description of these features in the original PDTB overview paper and annotation manual.

### 3.1 Explicitly Signaled Discourse Relations

As Patterson and Kehler (2013) note, the inference of discourse relations may draw heavily upon world knowledge, but may also be facilitated by specific linguistic signals. It is these signals that we discuss here, distinguishing between (1) the linguistic expressions that can explicitly signal a discourse relation; (2) the resource-limited subset of these expressions that were annotated as such in the PDTB; and (3) the consequences of this resource limit for users of the PDTB.

We have taken the view that discourse relations hold between two and only two (possibly discontinuous) spans of text that can be interpreted as propositions, eventualities, beliefs, etc. (what Asher [1993] has called abstract objects). As such, the spans are primarily one or more sentences or clauses, and the expressions that can signal relations between them come from four well-defined syntactic classes:

• Subordinating conjunctions: because, although, when, if, as, etc.
• Coordinating conjunctions: and, but, so, nor, or (and paired versions of the latter — neither…nor, either…or)
• Prepositional phrases: as a result, in comparison, on the one hand…on the other hand, etc.
• Adverbs: then, however, instead, nevertheless, etc.

These we have called discourse connectives, or explicit connectives. During the pilot phase of PDTB annotation, we took as explicit signals of discourse relations linguistic expressions suggested by previous researchers (Halliday and Hasan 1976; Martin 1992; Knott 1996; Forbes-Riley, Webber, and Joshi 2006). This set was then enlarged as new connectives were found in the WSJ corpus itself.
Also identified during this phase were productive modifiers of explicit connectives such as apparently, at least partly, in large part, even, only, and so on, which were then annotated as connective modifiers.2

What were not taken to be discourse connectives were adverbial cue phrases, including sentence-initial Now (Example (3)), Well (Example (4)), So (Example (5)), and OK (Example (6)), because they signal topic changes such as the beginning of a subtopic or a return to a previous topic (Hirschberg and Litman 1993), rather than relating particular discourse elements.

• (3) Now why, you have to ask yourself, would intelligent beings haul a bunch of rocks around the universe? [wsj_0550]

• (4) Well, mankind can rest easier for now. [wsj_1272]

• (5) So, OK kids, everybody on stage for "Carry On Trading." [wsj_2402]

• (6) When Mr. Jacobson walked into the office at 7:30 a.m. EDT, he announced: "OK, buckle up." [wsj_1171]

We did not intend to annotate as discourse connectives pragmatic markers such as actually and in fact, which serve to signal the conversational role of the speaker's matrix utterance—specifically, that it is "either aligned with or contrary to something previously said by another speaker, by the speaker on a previous occasion or to what people in general say" (Aijmer and Simon-Vandenbergen 2004). But in fact was annotated in the PDTB as a discourse connective, whereas actually was not. Nevertheless, this accidental annotation provides interesting information on what discourse relations pragmatic markers are associated with, which seems worth further study.

Resources then limited which types of explicit linguistic signals of discourse relations were actually annotated as such. In particular, sentence-initial prepositional phrases with an overt deictic argument (e.g., for that reason, by then) were not included in the set of explicit discourse connectives and hence not systematically annotated, because it was felt this could be put off until deictic coreference was annotated more generally. The consequence of limiting a priori what were taken as possible signals for a discourse relation3 was that adjacent sentences lacking one of these expressions might contain a different sort of evidence for a discourse relation between them. The consequence for annotating implicit discourse relations is described in the next section.

### 3.2 Implicit Relations

The PDTB calls discourse relations that lack an explicit discourse connective between their arguments implicit discourse relations. Users of the PDTB thus need to understand (1) where and how implicit relations were annotated and (2) what was done in their absence.

As to the first point, the PDTB did not mandate an unconstrained search for implicit discourse relations. Rather, annotators were asked to consider implicit discourse relations only between adjacent sentences within a paragraph, in the absence of an explicit connective relating them. The procedure involved (1) identifying one or more connectives that could be inserted between the two sentences without changing the discourse relation(s) between them, and then (2) specifying the sense of those relations. This had several consequences, each of which is discussed further in this section:

1. A sentence might bear no relation to its left-adjacent neighbor, even though a wider search might find some earlier text to which it was related.
2. Paragraph-initial sentences were taken to have no left-adjacent neighbor and were thus not examined as an argument to a discourse relation unless they contained an explicit discourse connective.
3. Implicit discourse relations were not annotated within a sentence except between clauses connected by a semicolon.
4. There were cases where annotators could not insert a connective between sentences because to do so appeared redundant.
5. There were cases where annotators could not insert a connective between sentences because they did not infer a discourse relation between them. Rather, the later sentence simply provided more information about an entity mentioned in the previous one.
6. Despite there being common patterns of multiple explicit connectives, annotators were not asked whether an implicit discourse relation might hold concurrently with a relation signalled with an explicit connective.

Point (1) has been addressed in the BioDRB (Prasad et al. 2011), which adheres to most of the PDTB annotation conventions but allows an implicit discourse relation to hold between non-adjacent sentences within the same paragraph (cf. Section 4.5). This has reduced the proportion of potential implicit relations that were marked NoRel from 1.15% in the PDTB (254/22,141; cf. Table 1) to 0.9% (29/3,223) in the BioDRB (Prasad et al. 2011). The same choice was made in the Hindi DRB (Kolachina et al. 2012).

Points (2) and (3) remain gaps in PDTB annotation that we plan to address in the future. Example (7) illustrates point (3): one could insert a connective such as afterwards or thereafter before the free adjunct (i.e., afterwards returning …), making explicit the relation of temporal precedence between the event expressed in the main clause and that in the free adjunct. (Arg1 and Arg2 are not indicated in italics and bold in Example (7) because free adjuncts have not yet been annotated in the PDTB.)

• (7) He flew to Fort Bragg, N.C., in September of that year for a course in psychological operations, returning to the School of the Americas in Panama for a two-month course called "military intelligence for officers." [wsj_2013]

In Section 5.2, we discuss how PropBank ArgM annotation can be used in addressing gaps in the annotation of sentence-internal implicit relations in the PDTB.

With respect to point (4), cases where inserting connectives seemed redundant were taken to arise from the relation being signalled by an expression from outside the set of explicit connectives. These expressions were annotated as Alternative Lexicalizations of evidence for discourse relations, and their Arg1 and Arg2 were annotated accordingly. We have counted them under AltLex relations in Table 1 rather than as Implicit discourse relations (also shown there). For example, in Example (8), inserting a connective like because between the sentences was felt to be redundant. Here, One reason is was annotated as an alternative lexicalization of the causal relation between them (indicated in small capitals adjacent to Arg2).

• (8) Now, GM appears to be stepping up the pace of its factory consolidation to get in shape for the 1990s. (Contingency.Cause.Reason) One reason is mounting competition from new Japanese car plants in the U.S. that are pouring out more than one million vehicles a year at costs lower than GM can match. [wsj_2338]

Some AltLex expressions are the deictic PPs that were not annotated as explicit discourse connectives due to resource limitations (cf. Section 3.1).
Other expressions such as quite the contrary, eventually, and thereafter (nearly 15% of alternative lexicalizations) meet all the criteria for explicit connectives, even though they had not been included earlier. Another 9% of expressions were found to be phrases such as What's more (Example (9)), which suggests that the range of discourse connectives should be widened to include other syntactic classes.

• (9) Marketers themselves are partly to blame: They've increased spending for coupons and other short-term promotions at the expense of image-building advertising. (Expansion.Conjunction) What's more, a flood of new products has given consumers a dizzying choice of brands, many of which are virtually carbon copies of one another. [wsj_1856]

AltLex expressions are under-annotated in the PDTB because they were only annotated when annotators found it redundant to insert an implicit connective between adjacent sentences. For example, whereas 15 tokens of that means were noticed and annotated as AltLex, another 18 in the corpus were not examined, such as the one following and in Example (10).

• (10) "I see a lot of evidence indicating a slower economy, and that means my interest-rate outlook has a downward tilt," said Garnett L. Keith Jr. … [wsj_1694]

As a result, AltLex expressions cannot be exploited in machine learning—for example, for inducing a model of discourse relation annotation—because no individual AltLex expression can be guaranteed to be fully annotated in the corpus.

Everyone who has attempted to annotate or catalogue discourse connectives has commented on the lack of a complete list of words and phrases serving this role (Versley 2010; Rysová 2012; Meyer and Webber 2013). Rather than providing annotators with an incomplete list of connectives and allowing them to identify alternative lexicalizations during annotation, one might consider giving them complete freedom as to what to annotate as grounding for discourse relations. Although such a process has its own problems (Section 4.1), expert annotators and good training may make it a plausible basis for effective discourse annotation.

As for point (5), if annotators were unable to insert a connective between sentences because they were not able to infer a discourse relation between them, they were asked to check whether the second sentence provided more information about one or more entities mentioned in the previous sentence, as in

• (11) Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29. Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group. [wsj_0001]

If it did, annotators were told to annotate the relation type as EntRel. EntRel captures entity-based coherence between sentences (Knott et al. 2001), realized either directly (i.e., via an anaphoric pronoun or NP in the second sentence) or indirectly (i.e., through a bridging inference). Annotators were not asked to annotate their evidence for EntRel. Although it might be possible to use a resource such as the coreference annotation in OntoNotes (Weischedel et al. 2012) to recover what the annotators had in mind, in cases such as Example (12) the entity-based coherence is less obvious.

• (12) This financing system was created in the new law in order to keep the bailout spending from swelling the budget deficit. Another $20 billion would be raised through Treasury bonds, which pay lower interest rates.
[wsj_2200]

Such cases would benefit from the entity or entities that ground this relation being annotated explicitly, which would also provide valuable data for studies of entity-based coherence.

Table 1 also notes the 254 cases where the annotators did not see either an Alternative Lexicalization or an Entity relation between adjacent sentences (within the same paragraph). These they annotated as NoRel.

Point (6) involves the possibility of an explicit connective or AltLex expression occurring concurrently with an implicit connective. That is, a recent unpublished pilot study carried out at the University of Edinburgh (Jiang 2013) used MTurk to show that readers presented with text containing a discourse adverbial also infer the sense associated with a conjunction (coordinating or subordinating), even when no conjunction is present in the text. The study involved 80 items taken from the freely available Corpus of American English, each consisting of a clause, followed by a gap, followed by a clause containing one of four discourse adverbials (after all, in fact, in general, instead). Each HIT (Human Intelligence Task) consisted of one item and six conjunctions (and, because, before, but, or, so, or none). The Turker was asked to insert into the gap the conjunction (or none) that seemed most natural between the clauses. For example, in Example (13), 50/52 Turkers inserted but into the gap, showing that they interpreted the relation between the clauses as being the same as if what was explicit was but instead:

• (13) Logically, she should be dead. Instead, she feels fine, caring for her daughters and walking a pedometer-measured two miles a day.

In Example (14), 49/52 Turkers inserted so into the gap, showing that they interpreted the relation as being the same as if what was explicit was so instead:

• (14) He suspected he shouldn't say that. Instead he lied.

Notice how different Example (13) would be if it were realized with so instead, or Example (14) with but instead: neither is what the writer intended. The effect was less strong in Example (15), where 33/52 Turkers chose because, showing that they interpreted the relation as being the same as because instead, whereas the other Turkers chose differently among the other options:

• (15) If he'd expected her to be upset, he was disappointed. Instead, she laughed, clapping her hands.

Although it is not yet clear which discourse adverbials are compatible with one or more concurrent implicit relations, it is nevertheless likely that such discourse relations are under-annotated in the PDTB and should be addressed.

### 3.3 Arguments

The two arguments to discourse relations contribute to the senses of the relations taken to hold between them. The PDTB gives annotators two ways to restrict these arguments to only the material needed for these relations.

As already noted, it is the events, states, propositions, claims, etc., in a text that participate in discourse relations. In English, such abstract entities tend to be conveyed through sentences, clauses, nominalizations, and verb phrases. Hence, these are what can be annotated in the PDTB as arguments.4 Also, because discourse deictics (e.g., this, that, so) can refer back to such interpretations (Example (16)), as can particles such as yes and no that function as responses to questions (Example (17)), these can also be annotated as arguments.

• (16) Evaluations suggest that good ones are—especially so if the effects on participants are counted. [wsj_2412]

• (17) Underclass youth are a special concern.
[Sup1 Are such expenditures worthwhile, then]? Yes, if targeted. [wsj_2412]

One way to limit arguments to only the minimal text needed for a given discourse relation (a minimality principle) was to allow annotators to specify that other text appeared relevant but not necessary to that interpretation. Specifically, they could annotate as Sup1 material supplementary to Arg1, as in Example (17), where the preceding question was annotated as relevant to interpreting the question-response particle, and as Sup2 material supplementary to Arg2, as in Example (18), where the material enclosed in square brackets was annotated as relevant but not necessary to interpreting the temporal relation expressed with then.

• (18) It acquired Thomas Edison's microphone patent and then immediately sued the Bell Co. [Sup2 claiming that the microphone invented by my grandfather, Emile Berliner, which had been sold to Bell for a princely $50,000, infringed upon Western Union's Edison patent.] [wsj_0091]

Supplementary information (both Sup1 and Sup2) appears to have been under-annotated in the PDTB, mainly because annotators were only invited, and not required, to check whether any text should be so annotated. This shows when one compares the number of Sup1 or Sup2 annotations on explicit discourse relations, which were annotated first, with the number of such annotations on implicit discourse relations, which were annotated on a subsequent pass: 1,571 explicit relations were annotated with supplementary information, whereas only 126 implicit relations were, despite nearly equal numbers of both. Before considering the existence of Sup1 or Sup2 as a feature indicative of the likely use of an explicit connective to signal a discourse relation (Patterson and Kehler 2013), one must assess whether this is an accidental feature of the PDTB's annotation or an intrinsic feature of the discourse relations themselves.

A second way of limiting arguments to only the minimal text needed to complete a given discourse relation involves the separate annotation of attribution (Prasad et al. 2007). This allows the attribution holding between an agent and an abstract object to be included in or excluded from the discourse relation as appropriate. For example, in Example (19), annotators could exclude the attribution phrase "said Howard Rubel, an analyst with C.J. Lawrence, Morgan Grenfell Inc. in New York" from Arg1, as unnecessary for the discourse relation, while including the attribution phrase "Mr. Asman is also annoyed" as necessary for the discourse relation in Example (20).5

• (19) Defense contractors "cannot continue to get contracts on that basis," said Howard Rubel, an analyst with C.J. Lawrence, Morgan Grenfell Inc. in New York. (implicit=because) "The pain is too great." [wsj_0673]

• (20) Mr. Asman is also annoyed that Mr. Castro has resisted collaboration with U.S. officials, even though by his own account that collaboration has been devised essentially as a mechanism for acts directly hostile to the Cuban regime, such as facilitating defections. [wsj_1416]

Attribution differs from supplementary information in that, when its polarity is negative, it can interact with discourse relations. (Sup has no such interaction.) This can be seen by contrasting Example (21), where (negative) denying is part of Arg2, and Example (22), where (negative) denying is not part of Arg1, but is rather its attribution.

• (21) The U.S.
wants the removal of what it perceives as barriers to investment; (Comparison.Contrast) Japan denies there are real barriers. [wsj_0082]

• (22) Viacom denies it's using pressure tactics. (Expansion.Restatement.Specification) "We're willing to negotiate," says Dennis Gillespie, executive vice president of marketing. [wsj_0060]

In Example (21), the wanting in Arg1 is taken to contrast with the denying in Arg2. But in Example (22), the negative polarity of denying as the attribution of Arg1 means that being willing to negotiate is taken to further specify not using pressure tactics.

These techniques are concerned with excluding material unnecessary to concluding the existence of a particular discourse relation. There is no comparable attempt to ensure that spans annotated as arguments to discourse relations include all the features that motivate a given relation (Section 3.4). This can be seen with discourse relations associated with the connective instead. Its Arg1 must convey an alternative that does not hold (Webber 2013): in Example (23), Arg1 conveys that "a price for the new shares has been set" is an alternative that does not hold.

• (23) No price for the new shares has been set. Instead, the companies will leave it up to the marketplace to decide. [wsj_0018]

But the features that allow an argument to convey an alternative that does not hold may not be present in the argument itself. For example, the PDTB annotators agreed that the clause "to be any silver lining" was Arg1 of instead in Example (24), based on the minimality principle mentioned at the start of this section. But there is nothing in this argument that conveys that this alternative does not hold. That would require Arg1 to be "there isn't likely to be any silver lining." However, the annotators did not take such an argument to be minimal.

• (24) In China, however, there isn't likely to be any silver lining because the economy remains guided primarily by the state. Instead, China is likely to shell out ever-greater subsidies to its coddled state-run enterprises, which ate up $18 billion in bailouts last year. [wsj_1646]

Although in the majority of cases minimal argument spans do contain all the features needed to license the annotated sense, this was not required by the PDTB guidelines. This point has been missed in efforts to use the PDTB in training automated sense recognition.

### 3.4 Senses and their Annotation

A well-known feature of the PDTB is its three-level hierarchy of senses (Figure 1). The express purpose of this hierarchy was to allow back-off to a more general sense if (1) an individual annotator could not decide among its more specific senses or if (2) pairs of annotators disagreed as to a more specific sense.

Figure 1: PDTB sense hierarchy.

Nevertheless, many researchers interested in inducing automated classifiers for explicit and/or implicit discourse relations have used the four top-level (level-1) sense classes for their research (e.g., Pitler et al. 2008; Pitler and Nenkova 2009; Zhou et al. 2010) because of the relatively large number of tokens in each class at this level of specificity (Table 2).

Table 2: Total explicit and implicit relations that fall under each level-1 sense.

| Level-1 PDTB Senses | No. of explicits | No. of implicits |
| --- | --- | --- |
| Contingency | 3,741 | 4,255 |
| Comparison | 5,589 | 2,503 |
| Expansion | 6,423 | 8,861 |
| Temporal | 3,696 | 950 |
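For readers who work with these labels programmatically, the following is a minimal sketch (not part of the PDTB distribution) of the truncation that yields the four-way level-1 setting described above; the function name and error handling are our own illustrative choices:

```python
# Map a full PDTB 2.0 sense tag to its level-1 class, as is commonly done
# when training four-way classifiers over the top of the sense hierarchy.
# The hierarchy uses dot-separated levels, e.g. "Contingency.Cause.Reason".

LEVEL1 = {"Temporal", "Contingency", "Comparison", "Expansion"}

def level1(sense_tag: str) -> str:
    """'Contingency.Cause.Reason' -> 'Contingency'."""
    top = sense_tag.split(".")[0]
    if top not in LEVEL1:
        raise ValueError(f"not a PDTB sense tag: {sense_tag!r}")
    return top

assert level1("Contingency.Cause.Reason") == "Contingency"
assert level1("Expansion.Restatement.Specification") == "Expansion"
```

The same split-on-dot view also makes the back-off described in Section 3.4.2 concrete: dropping the last level of a three-level tag yields the level-2 parent on which disagreeing annotators are merged.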
Our goal here is not to defend the hierarchy or its sense inventory, but rather to discuss three issues in sense labeling, which should help readers to better understand both the PDTB and the comparable corpora described in Section 4. The issues are: (1) senses found to be missing from the sense inventory; (2) disagreements between annotators; and (3) annotation of multiple concurrent discourse relations.

#### 3.4.1 Missing Senses

We have long realized that there are gaps in the set of available senses. Some of these gaps are noted in the PDTB Annotation Manual (PDTB-Group 2008), such as the absence of a Similarity sense for labeling explicit relations headed by as if and the absence of a Purpose sense for labeling explicit relations headed by so that. Cases of the latter, as well as relations conveying the sense that one argument was the Goal of the other, were simply annotated with the overloaded label Contingency.Cause.Result. The 34 cases of the subordinating conjunction just as, which can also signal Similarity, were found on subsequent analysis of the corpus to be annotated as either Temporal.Synchrony (13 tokens) or Expansion.Conjunction (1 token), or to have been left unannotated (20 tokens).

Some of these and other missing senses have been added to the sense inventories used in annotating corpora comparable to the PDTB (Section 4.2). They should also be added to the PDTB, and senses known to be overloaded should be split appropriately. Although this will eliminate already-noted sense gaps, the potential remains for additional senses to be identified, and hence this remains a problem.

#### 3.4.2 Disagreements Between Annotators

Sense annotation was done by two annotators. Disagreement at level-1 occurred when the two annotators picked senses that belonged at or under different level-1 classes. Disagreement at level-2 occurred when the annotators picked senses within the same level-1 class but different level-2 classes (e.g., Contrast versus Concession). Similarly, disagreement at level-3 occurred when the annotators picked different senses under the same level-2 sense class (e.g., juxtaposition versus opposition). Adjudication for disagreement at level-1 was done manually, by a team of experts, and disagreements at level-2 or level-3 were handled through automatic back-off to the next higher level. For example, a juxtaposition versus opposition disagreement would lead to a relation being automatically assigned their level-2 parent, namely, Contrast.

Annotation associated with automated back-off has contributed to there being only a level-1 sense annotation for 444 explicit and 257 implicit relations in the PDTB, almost all of which are either Comparison or Expansion. Although neither label is very informative, one might interpret such a label simply as under-specified with respect to its more specific level-2 daughters.

#### 3.4.3 Multiple Concurrent Discourse Relations

Researchers using the PDTB for automated sense labeling of discourse relations have, by and large, assumed that its four level-1 senses (Figure 1) are disjoint. That is incorrect. Particular level-3 senses may be disjoint because they are defined as each other's inverse and hence can't both hold—for example, Reason and Result, Precedence and Succession, Expectation and Contra-expectation.
Other senses may be disjoint because their defining inferences contradict one another: for example, Reason requires Arg2 to precede or coincide with Arg1, whereas Precedence requires that Arg1 precede Arg2. They cannot both hold. However, most senses are compatible. This is evident in the fact that annotators were allowed to assign up to two sense labels to each explicit or implicit connective, representing concurrent discourse relations. With explicit connectives, 999 of the 18,459 tokens (5.4%) were assigned two concurrent sense labels, with the most common pairs shown in Table 3.

Table 3: Most common (≥10) pairs of level-2 sense labels on the 999 multiply-labeled explicit discourse relations. In the first 12 rows, one of the two senses is temporal.

| Count | Connective | Senses |
| --- | --- | --- |
| 50 | after | Contingency.Cause + Temporal.Asynchronous |
| 30 | and | Expansion.Conjunction + Temporal.Asynchronous |
| 145 | as | Contingency.Cause + Temporal.Synchrony |
| 30 | meanwhile | Comparison.Contrast + Temporal.Synchrony |
| 92 | meanwhile | Expansion.Conjunction + Temporal.Synchrony |
| 10 | since | Contingency.Cause + Temporal.Asynchronous |
| 66 | when | Contingency.Cause + Temporal.Asynchronous |
| 41 | when | Contingency.Cause + Temporal.Synchrony |
| 65 | when | Contingency.Condition + Temporal.Synchrony |
| 12 | when | Contingency.Condition + Temporal.Asynchronous |
| 59 | while | Comparison.Contrast + Temporal.Synchrony |
| 21 | while | Expansion.Conjunction + Temporal.Synchrony |
| 138 | and | Contingency.Cause + Expansion.Conjunction |
| 13 | but | Expansion.Conjunction + Comparison.Pragmatic contrast |
| 10 | if | Comparison.Concession + Contingency.Condition |
| 11 | while | Comparison.Contrast + Expansion.List |

Is 5.4% an accurate indicator of the frequency of multiple concurrent discourse relations between two arguments when they are linked by an explicit discourse connective? Evidence for a higher figure comes from an early experiment with two connectives, since and when (Miltsakaki et al. 2005). There, two annotators were given the option of labeling relations linked by one of these connectives in the WSJ corpus as either temporal or causal or temporal/causal, to indicate that both senses were conveyed. (The experiment was done on the 184 relations in the corpus headed by since and the first 100 relations headed by when, out of a total of 989.6) Those headed by since were annotated temporal/causal 21 times by one of the annotators (11.3%) and 16 times by the other (8.6%). Those headed by when were annotated temporal/causal even more frequently: 22% by one annotator and 28% by the other.
Compare this with the counts for multiply-labeled since and when in Table 3: only 10/184 tokens of since (5.4%) were annotated with both a temporal and a causal sense, and only 184/989 tokens of when (18.6%) were annotated with both some kind of temporal and some kind of causal sense. In both cases, this is significantly less frequent than in the earlier experiment, suggesting that if annotators are not given explicit joint-sense options (such as temporal/causal or more specific pairs) and are only invited to use multiple concurrent sense labels if they take multiple discourse relations to hold, their use of multiple labels may be intermittent at best. This is a loss both to language technology and to the theoretical and psycholinguistic understanding of discourse relations, and a situation that deserves to be fixed.

In the case of implicit discourse relations, annotators could assign more than one sense label to a single implicit connective, or they could insert more than one implicit connective, which were then individually sense-labeled. Both options indicated that concurrent discourse relations could be taken to hold between the specified arguments. Of the 16,224 implicit relations, 359 (2.2%) were annotated with a single implicit connective with multiple senses, and 171 (1.1%) were annotated with two implicit connectives, each taken to have a single sense. Both of these are very small numbers, so no hard conclusions can be drawn. Nevertheless, one might sample whether more of the implicit relations annotated with some causal sense might be more accurately annotated with some temporal sense as well.

We close by noting that of the 171 cases annotated with two implicit connectives, with each assigned a single sense, over half (93/171 = 54.4%) involved a connective paired with for example, for instance, or for one thing (e.g., since, for example; as, for instance; because, for one thing); 13 more (7.6%) were paired with in particular or specifically (e.g., in particular, because; specifically, because), and another 13 (7.6%) were paired with in fact (e.g., although, in fact; so, in fact). All but in fact are really connective modifiers (Section 3.1), even though they can also appear separately as connectives in their own right. Such cases deserve further analysis, in connection with getting a better understanding of modified connectives, their prevalence, and their semantics.

## 4. Annotated Corpora Comparable to the PDTB

We noted in Section 1 that release of the PDTB has spawned similar efforts to annotate resources in other languages and genres following a lexically grounded approach to discourse relations. We also noted that these efforts vary in interesting ways from that of the PDTB. Here we describe both the nature and the sources of this variation, so that people contemplating development of comparable resources in additional languages and/or genres will recognize variation that is appropriate to their situation, while avoiding unnecessary variation that prevents inter-operability of these comparable corpora (Bunt, Prasad, and Joshi 2012).

Table 4 identifies the corpora we will discuss and the extent of their current annotation: the BioDRB (Prasad et al. 2011), the Leeds Arabic Discourse TreeBank, or LADTB (Al-Saif and Markert 2010, 2011; Al-Saif 2012), the Chinese Discourse TreeBank (Xue 2005; Zhou and Xue 2012; Zhou and Xue (in press)), the Turkish Discourse Bank, or TDB (Zeyrek et al. 2008, 2009; Aktaş, Bozşahin, and Zeyrek 2010; Zeyrek et al. 2010; Demirsahin et al. 2013; Zeyrek et al.
2013), the Hindi Discourse Relation Bank (Oza et al. 2009; Kolachina et al. 2012; Sharma et al. 2013), and the Prague Discourse TreeBank, or PDiT (Mladová, Zikánová, and Hajičová 2008; Jínová, Mírovský, and Poláková 2012; Rysová 2012; Poláková et al. 2013), now part of the Prague Dependency TreeBank, version 3.0, PDT 3.0 (Bejček et al. 2013). (A comparable discourse treebank is being developed for French (Danlos et al. 2012), but it has not yet been released and the information needed to compare it to the other corpora in Table 4 is not available.)

Table 4: Comparison of the PDTB and comparably annotated corpora. Count is the number of annotated relations; Coverage is the text genre(s) in the corpus; Mods=Y if connective modifiers are annotated; Impl=Y if implicit connectives are annotated; EntR=Y if Entity Relations are annotated; AltL=Y if Alternative Lexicalizations are annotated; Attr=Y if attribution is annotated; Supp=Y if arguments can have supplementary text; Sens=Y if senses have been annotated; Mult=Y if multiple sense relations can be annotated for a single connective.

| Name | Coverage | Count | Notes |
|---|---|---|---|
| PDTB | WSJ news, essays | 40,600 | |
| BioDRB | Biomed papers | 5,859 | |
| LADTB | Arabic news | 6,328 | Impl=N [1] |
| Chinese DTB | Xinhua news | 3,951 | Impl=Y [2] |
| Turkish DB | novels, news, etc. | 8,484 | |
| Hindi DRB | news | ∼5K | |
| PDT 3.0 (PDiT 1.0) | news | 20,542 | Attr=Y [3] |

[1] ∼70% of adjacent sentences in the LADTB are linked by an explicit connective, compared with ∼12% in the PDTB.
[2] In 20 randomly selected files, over 80% of DRels were found to be implicit, compared with around 54.5% in the PDTB (Zhou and Xue 2012).
[3] Included in coreference annotation.

Although these comparable corpora differ in ways to be discussed subsequently, they all adhere to the key ideas of PDTB annotation (Section 2) in being neutral to any discourse structure beyond the argument structure of individual discourse relations and in grounding discourse relations in lexical expressions. Where they annotate implicit discourse relations (Table 4), these comparable corpora follow the PDTB in annotating an inferred lexical grounding. All of the corpora also follow the PDTB in taking discourse relations to hold between two and only two abstract objects, called Arg1 and Arg2, each associated with a possibly discontinuous text span. Although not every corpus annotates attribution, where it is annotated, it is separate from the annotation of discourse relations.

### 4.1 Annotation Workflow

Because one purpose of this section is to inform people considering the development of similar resources in other languages and genres, we briefly mention here how workflow has varied in the development of comparable corpora and how it has affected annotator effort and inter-annotator agreement. Workflow on the PDTB itself was based on the idea of using easier tasks to pave the way for more difficult ones. In practice, this meant separating the annotation of explicit and implicit relations, as explicits were perceived as easier to annotate. Explicit discourse relations were annotated one connective at a time throughout the corpus, before moving on to the next one on the list. The rationale for this was to improve the annotators' ability to annotate a particular connective by focusing their attention on that connective.
Subsequently, implicit discourse relations were annotated document by document, analyzing each pair of adjacent sentences within each paragraph, as described in Section 3.2. Later, senses were annotated for explicit and implicit discourse relations: explicits by connective, and implicits by document.

Even for annotating explicit relations, this is not the only possible workflow. In annotating the LADTB (Al-Saif 2012; Al-Saif and Markert 2011), the nature of Modern Standard Arabic (MSA) demanded a different workflow. In MSA, as in English, words that can function as discourse connectives also have non-discourse functions. As such, confirming that a potential connective has a discourse function is directly related to identifying its arguments. One common form of argument to discourse relations in Arabic news texts is an al-maSdar noun, which is a tense-less expression of an event.[7] Their frequency affected annotation workflow in the LADTB. The LADTB used a workflow for annotating explicit discourse relations that involved highlighting for the annotators all potential discourse connectives (including word-initial clitics), based on a pre-compiled list. An annotator first read through the entire text to achieve an overall understanding, before stepping through the highlighted items one by one. To tell whether a potential connective had a discourse function, the annotator would check whether it had arguments, including strings interpretable as al-maSdar nouns. Workflow thus involved simultaneous confirmation of potential discourse connectives and identification of their arguments. After that, the annotator would add the one or more senses that a relation expresses. If a potential connective did not express a discourse function, the annotator would note it and go on to the next highlighted item.

Workflow for the BioDRB (Prasad et al. 2011) was designed to address the difficulty perceived in annotating inter-sentential relations in scientific text. On encountering a new sentence, the annotator had to first mark its inter-sentential relation(s) with the prior discourse, and only then annotate any intra-sentential relations within it. In this way, annotators were made to first attend to relations that were harder to pin down, as they progressed in their sequential reading and annotation of the text.

Ongoing annotation of the Chinese Discourse TreeBank (Zhou and Xue 2012) follows a fully sequential annotation strategy, largely for a language-specific reason: the customary writing style of Chinese, which often does not distinguish the end of a sentence (marked with a full stop) from the end of a clause (marked with a comma). This has two major consequences: No rigid distinction can be made between inter- and intra-sentential connectives, and annotators must consider implicit relations both between full-stop-delimited sentences and between comma-delimited clauses. (The latter have not been annotated in the current PDTB.) Annotating implicit relations between comma-delimited clauses results in many more implicit relations. Zhou and Xue (2012) report an 18–82% split in their data between explicit and implicit discourse relations, compared with a 46–54% split in the PDTB. As such, having a separate task to cover 18% of the data was disfavored.

Although the particular style of annotation should have no effect on the content of annotation, it can affect inter-annotator agreement. To this end, researchers developing the Hindi DRB (Oza et al. 2009; Kolachina et al.
2012) carried out a systematic study of three workflow strategies (Sharma et al. 2013). The first strategy modeled the task exactly as in the PDTB. In the second, explicits and implicits were annotated in exactly the order in which they were encountered on a sequential reading of the text. The third strategy operated per text, with annotators first marking all of its explicit connectives, and then its implicit relations, before moving on to another text. The latter two strategies were designed to ensure that annotators were aware of the coherence and flow of the discourse when carrying out the task. Sharma's findings show that better agreement is obtained when the annotators' attention is held to the text, but with no clear preference for a fully sequential approach (as in the second strategy) over an approach that separated the tasks on a text-by-text basis (as in the third).

Although the choice of workflow may ultimately be language- or genre-specific, as noted for Chinese, the final choice should be driven by considerations of annotation reliability, which seems to be enhanced by the annotators attending to the coherence and flow of the discourse. Interoperability among these resources is not an issue here, as long as whatever strategy is used yields highly consistent annotation.

### 4.2 Inventory and Organization of Senses

The senses of discourse relations used in the PDTB and the hierarchy in which they are organized (Miltsakaki et al. 2008) drew on both in-house experiments (Miltsakaki et al. 2004, 2005) and previous work on the semantics of discourse relations (Lakoff 1971; Moens and Steedman 1988; Sweetser 1990; Jayez and Rossari 1998; Kehler 2002), among others. Neither has been adopted without some change in the comparable corpora: Additional senses have been introduced, while other senses have been eliminated or modified; the sense hierarchy has been modified, and in one case, abandoned.

For example, Oza et al. (2009) propose a more general and uniform treatment of those discourse relations that are pragmatic, relating the speech act of one argument to either the content or speech act of the other. Al-Saif and Markert (2011) do the same. Almost every corpus includes at least one additional sense class, including similarity, purpose, background, and gradation, among others, motivated more by the genre of the texts being annotated than by their language.

Changes in the PDTB sense hierarchy have been made either at its root or at its leaves: The BioDRB has eliminated the four top-level classes, adopting a two-level hierarchy. The Prague Discourse TreeBank (Poláková et al. 2012), now part of the PDT 3.0, and the LADTB (Al-Saif and Markert 2011) have also adopted a two-level hierarchy, but they preserve the top-level classes, collapsing the second and third levels of annotation. The Chinese Discourse TreeBank (Zhou and Xue 2012) has eliminated the hierarchy entirely, using a flat classification of just twelve sense categories. Sense annotation has not yet begun on the Turkish Discourse Bank.

From the standpoint of interoperability, a shared assumption about the meaning and classification of discourse relation senses is of utmost importance, because conflicts in the assumed meaning of labels would preclude any kind of comparative study of the annotated resources, both within and across languages and domains.
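To make the structural changes concrete, here is a small Python sketch of a fragment of the PDTB 2.0 sense hierarchy and of the kind of level-collapsing adopted in the two-level schemes just described. Only a few branches are shown; this is illustrative, not the full inventory:

```python
# Sketch: a fragment of the PDTB 2.0 sense hierarchy as nested dicts.
# Top-level classes and the level-2/level-3 labels shown are real PDTB
# labels, but most branches are omitted here for brevity.
PDTB_SENSES = {
    "Temporal": {"Asynchronous": ["precedence", "succession"],
                 "Synchrony": []},
    "Contingency": {"Cause": ["reason", "result"],
                    "Condition": []},
    "Comparison": {"Contrast": [], "Concession": []},
    "Expansion": {"Conjunction": [], "Instantiation": []},
}

def two_level(hierarchy):
    """Drop level 3, keeping class.type pairs (cf. the collapsing adopted
    in the PDT 3.0 and LADTB adaptations)."""
    return [f"{cls}.{typ}" for cls, types in hierarchy.items() for typ in types]

print(two_level(PDTB_SENSES))
# ['Temporal.Asynchronous', 'Temporal.Synchrony', 'Contingency.Cause', ...]
```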
With the growing number of variations in sense annotation schemes, we believe it is critical to collect the insights and findings from these studies and to find common threads, since there is much that is common between them. Indeed, some recent work has usefully provided a mapping between their classification schemes and the PDTB classification (Prasad et al. 2011; Zhou and Xue 2012).

### 4.3 Annotation of Explicit Connectives

An obvious way in which the corpora vary is in the choice of explicit connectives to be annotated. Because of the rich morphology of Turkish, explicit connectives in Turkish include morphological suffixes attached to verb roots and complex subordinators consisting of a connective and a nominalizing suffix. The former have not yet been annotated in the Turkish Discourse Bank (TDB), although the latter have been (Zeyrek et al. 2013). Counterparts of the latter, called phrasal expressions in the TDB, appear as AltLex in the PDTB. As in the LADTB (Al-Saif 2012), nominalizations are commonly annotated as arguments in the TDB.

Arabic is also morphologically rich, with prefix clitics (as well as separate words and phrases) serving as explicit connectives; in addition, Arabic writing tends towards long sentences conjoined with coordinating conjunctions (Ostler 1987), with the equivalent of and commonly found at the beginning of sentences and paragraphs (Al-Saif 2012). It was so common at the beginning of paragraphs in the newswire text annotated in the LADTB that all such tokens were simply assigned a Conjunction relation to the closest proposition, unless a clearer discourse relation was explicitly indicated.

### 4.4 Lexical Grounding for Implicit Discourse Relations

The approach used in lexically grounding implicit discourse relations seems to be language-specific. For English, the PDTB's lexically grounded approach led to guidelines for annotating implicit relations in which annotators were asked to identify one or more connectives that could be inserted between proposed arguments to express the discourse relation(s) they took to hold between them (cf. Section 3.2). This was meant to serve as explicit evidence for their decisions. For the Chinese Discourse TreeBank, Zhou and Xue (2012) adopt a different strategy, effectively using paraphrase rather than insertion. This is because,

in a majority of cases, the wording rejects insertion of a connective even if it expresses the underlying discourse relation exactly (or sometimes, maybe the wording itself is the reason for not having a connective). (Zhou and Xue 2012, page 73)

This suggests that Chinese may use particular syntactic constructions to indicate intra-sentential discourse relations even more than English and German do (Meyer and Webber 2013). Thus, instead of having their annotators insert explicit connectives, Zhou and Xue (2012) have them paraphrase the relation between proposed arguments in terms of explicit connectives that typically express each discourse relation. These prototypical connectives then serve as the lexical grounding for the relation. Although this is the only case we are aware of that has used a different approach to lexically grounding implicit relations, it is something that corpus developers should keep in mind, especially when considering the annotation of discourse relations within sentences.

### 4.5 Locus of Implicit Relations

The corpora differ in where they look for implicit discourse relations.
As noted in Section 3.2, implicit relations were only considered in the PDTB between adjacent sentences within the same paragraph. Although a sentence might have an implicit relation to a sentence further afield, we decided that it would add too much to an already costly effort to have annotators seek them out. With respect to implicit discourse relations within a single sentence, we are aware of having deliberately ignored (for lack of resources) discourse relations that we know are there.

Where to look for implicit discourse relations is, in part, language-specific. We have already noted (Section 4.1) that the structure of Chinese sentences is such that a much larger proportion of discourse relations in Chinese occur intra-sententially; hence the greater need to look for them there. As for looking for discourse relations further afield, the comparable corpora vary, but not for language-specific reasons. Rather, the variation follows from the cost–coverage decision that all annotation efforts face. In both the Hindi Discourse Relation Bank and the BioDRB, implicit discourse relations have been sought more widely, allowing a sentence to be related to a non-adjacent sentence within the same paragraph.

### 4.6 Naming Convention for Arg1 and Arg2

One final way in which corpora comparable to the PDTB vary is with respect to the naming convention for Arg1 and Arg2. In the PDTB, the choice follows syntactic criteria: With explicit discourse connectives, Arg2 is the argument syntactically bound to the connective, and Arg1 is the other argument. With implicit connectives, Arg1 is the left-adjacent sentence, and Arg2, the right-adjacent one. Although the same convention is followed in the BioDRB and the Turkish Discourse Bank, the Chinese Discourse TreeBank, the Prague Discourse TreeBank, and the Hindi Discourse Relation Bank have followed a semantically driven convention, in which arguments that play the same semantic role have the same label. This eliminates the need for level-3 senses in the PDTB sense hierarchy (e.g., reason/result, expectation/contra-expectation, precedence/succession) whose only purpose is to reflect the different linear order of the arguments. Again, while these differences are admissible without impacting the annotation scheme in any major way, comparative studies using these corpora need to be sensitive to them. We note, however, that in an experiment using this strategy for the Hindi annotation, Kolachina et al. (2012) report poor agreement for arguments of relations, and speculate that it was harder for annotators to use the semantic labeling convention.

### 4.7 Summary

Discourse annotation efforts that have followed the PDTB in adopting a lexically grounded (or adjacency-based) approach to annotation nevertheless differ from the PDTB in the ways discussed above. Still, it appears that none of these differences is so great as to affect their interoperability with the PDTB or each other, or their use in multi-lingual language technology or machine translation (Meyer 2011; Meyer and Popescu-Belis 2012; Meyer and Webber 2013).

## 5. Complementary Annotations

Some of the linguistic phenomena annotated in the PDTB have also been annotated in connection with other levels of linguistic annotation, in particular, the temporal annotation of the Wall Street Journal portion of the Penn Treebank corpus found in TimeBank 1.2 (Pustejovsky et al. 2003a) and the verb-argument annotation found in PropBank (Palmer, Gildea, and Kingsbury 2005). Here we describe how these annotations are related.
We had both practical and theoretical motivations for carrying out the work described here. From a practical perspective, it might allow for future merging of annotation layers (Pustejovsky et al. 2005), future seeding of one annotation layer with another, and/or future consistency checking based on constraints between annotation at different levels. From a deeper, theoretical perspective, the work has the potential to lower, or even remove, barriers that have long existed between linguistic research at the sentence level and at the discourse level, barriers that have been equally obstructive to research in computational linguistics. This work can thus be seen as a small step towards "the transition from sentence to discourse."

### 5.1 TimeML and TimeBank

The TimeML temporal/event annotation (Pustejovsky et al. 2003b) of texts from the Penn TreeBank forms part of the TimeBank 1.2 corpus (Pustejovsky et al. 2003a). TimeML supports the annotation of events, time periods, and temporal relations, expressed either explicitly or implicitly in a text. The information that TimeML makes explicit includes temporal expressions such as the date 10/26/1989 (tagged as timex3), temporal events such as Nigel Lawson resigning as Chancellor of the Exchequer (tagged as event), temporal signals such as after, during, and in (tagged as signal), and temporal relations between pairs of temporal expressions or event instances, or between a temporal expression and an event instance (tagged as tlink). The set of temporal relations comes from Allen (1984). When a temporal relation is explicitly indicated by a temporal signal, the signal is included in the tlink. This enables a clear correspondence with the PDTB.

A temporal event is annotated on the head of the syntactic construction that expresses it, which is the verb in the case of a clause, as in the annotation of resume and warrant in Example (25), where the signal until is asserted to signal the temporal relation between resume and warrant.

(25) He <event>said</event> <event>construction</event> wouldn't <event>resume</event> <signal>until</signal> market conditions <event>warrant</event> it. [wsj_0610]

This corresponds to the PDTB annotation:

(26) He said construction wouldn't resume until (Temporal.Asynchronous.Precedence) market conditions warrant it.

But TimeML also annotates events expressed as nominalizations (e.g., construction in Example (25)) and simple nouns (e.g., tax in Example (27)).

(27) And while there was no profit this year from discontinued operations, last year they <event>contributed</event> $34 million, <signal>before</signal> <event>tax</event>. [wsj_0127]

Because temporal events are not limited to clauses, signals of temporal relations are not limited to clausal coordinators or subordinators or discourse adverbials, but also include prepositions such as before in Example (27). This is not annotated in the PDTB.

TimeML also allows for the annotation of certain non-temporal relations between events, including conditional, evidential, negative evidential, and factive relations. These are tagged slink (for Subordination Link). As with temporal relations, when these non-temporal relations are indicated with a signal (such as if for a conditional relation), the signal is included in the slink.

Although many of the same linguistic elements have been annotated in both the PDTB and TimeBank, the annotation itself can be quite different.
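For readers unfamiliar with TimeML markup, the following Python sketch shows a hand-made, simplified TimeML-style fragment for Example (25), and how the signal-mediated link can be read off with a standard XML parser. Real TimeBank files are more elaborate (event instances are introduced via MAKEINSTANCE elements, with their own identifiers), and the relType shown here is chosen purely for illustration:

```python
import xml.etree.ElementTree as ET

# Hand-made, simplified TimeML-style fragment for Example (25).
# Real TimeBank files map EVENT eids to instance ids via MAKEINSTANCE;
# that indirection is skipped here for readability.
fragment = """
<TimeML>
  construction wouldn't <EVENT eid="e1">resume</EVENT>
  <SIGNAL sid="s1">until</SIGNAL>
  market conditions <EVENT eid="e2">warrant</EVENT> it.
  <TLINK lid="l1" relType="AFTER" eventInstanceID="e1"
         relatedToEventInstance="e2" signalID="s1"/>
</TimeML>
"""

root = ET.fromstring(fragment)
signals = {s.get("sid"): s.text for s in root.iter("SIGNAL")}
events = {e.get("eid"): e.text for e in root.iter("EVENT")}
for link in root.iter("TLINK"):
    print(events[link.get("eventInstanceID")], link.get("relType"),
          events[link.get("relatedToEventInstance")],
          "| signal:", signals[link.get("signalID")])
# prints: resume AFTER warrant | signal: until
```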
The simplest difference relates to the sense of temporal relations: TimeML allows more specific relations between events than the PDTB's three broad relations Temporal.Asynchronous.precedence (before), Temporal.Asynchronous.succession (after), and Temporal.Synchrony (same time). For example, TimeML annotators can indicate that one event is immediately before or immediately after another, although TimeBank annotators have not used this when annotating relations signalled by before or after.

A more significant difference lies in where temporal relations are inferred in the PDTB and TimeBank. As noted earlier, the PDTB aims to annotate every discourse relation, including temporal relations, that holds between abstract objects (mainly clausal or sentential interpretations) that are signaled by an explicit discourse connective (or some alternative lexicalization of a connective) or by the fact of sentence adjacency. In the latter case, either an implicit discourse relation will be inferred between them, or a relation involving some entity mentioned in the first sentence (EntRel), or no relation at all (NoRel). In all cases, something will be annotated. In contrast, TimeML guidelines specify that if a temporal relation is explicitly signaled in the text, then events and/or time periods specified in different sentences may be linked through signals such as previously, earlier, at the same time, then, or meanwhile. If no temporal relation is explicitly signaled, then temporal elements in different sentences are not linked, so there are no tlinks in TimeBank corresponding to the PDTB's 950 implicit temporal relations between adjacent sentences.

On the other hand, the TimeML guidelines allow a temporal relation to be inferred from discourse relations that are not primarily temporal. For example, some discourse relations annotated in the PDTB as causal (i.e., Contingency.Cause.reason or Contingency.Cause.result) are annotated as temporal relations in TimeBank because both arguments express temporal events and because a cause event starts before its result. This is the case in Example (28), where TimeBank annotates the holding event as occurring before the adjusting event, which has a negative polarity attribute.[8] In the PDTB, only the explicitly signaled causal relation is annotated.

(28) Previously, Columbia didn't have to adjust the book value of its junk-bond holdings to reflect declines in market prices, because (Contingency.Cause.reason) it held the bonds as long-term investments. [wsj_1013]

However, not all relations annotated in the PDTB as Contingency.Cause.reason have a corresponding temporal annotation in TimeBank: Those corresponding to generic statements (e.g., Example (29)) do not, because generic statements are not taken to express temporal relations.

(29) It's harder to sell stocks when the sell programs come in because (Contingency.Cause.reason) some market makers don't want to {take the orders}. [wsj_0585]

Finally, there is some correspondence between PDTB annotation and the non-temporal relations that TimeBank annotates as slink. What TimeBank annotates as a conditional slink overlaps with the explicit discourse relations annotated in the PDTB as Contingency.Condition. Other types of slink (modal, factive, evidential, and negative evidential) are related to the properties of Attribution in the PDTB (cf. Section 3.3; Prasad et al. 2007).
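The cross-layer correspondences just described suggest simple machine-checkable constraints of the kind mentioned earlier in this section. The sketch below illustrates one such consistency probe in Python; the records, span identifiers, and field names are invented stand-ins for the two annotation layers, not their actual file formats:

```python
# Sketch: using TimeBank-style tlinks as a consistency probe on PDTB-style
# causal relations. All data structures here are hypothetical stand-ins.
pdtb = [
    {"id": "r1", "sense": "Contingency.Cause.reason",
     "arg1_event": "ei1", "arg2_event": "ei2", "generic": False},
    {"id": "r2", "sense": "Contingency.Cause.reason",
     "arg1_event": "ei3", "arg2_event": "ei4", "generic": False},
    {"id": "r3", "sense": "Contingency.Cause.reason",
     "arg1_event": "ei5", "arg2_event": "ei6", "generic": True},
]
# For the 'reason' sense, Arg2 expresses the cause, so the cause event should
# start before the result event: expect a BEFORE tlink from arg2 to arg1.
tlinks = {("ei2", "ei1"): "BEFORE"}

for rel in pdtb:
    if rel["generic"]:  # generic statements carry no temporal relation
        continue
    if rel["sense"].startswith("Contingency.Cause.reason"):
        if (rel["arg2_event"], rel["arg1_event"]) not in tlinks:
            print(f"{rel['id']}: causal in PDTB but no matching TimeBank tlink")
# prints: r2: causal in PDTB but no matching TimeBank tlink
```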
Enriching TimeBank relations on the basis of annotation in the PDTB, and vice versa, would require a more detailed study of both the annotation frameworks and annotation practice in the two corpora. The same goes for using the annotation in one as a consistency check on the other. Nevertheless, both would potentially be of great value in delivering more usefully annotated resources.

### 5.2 PropBank

More interesting is the relation between PDTB annotation of discourse relations and PropBank annotation (Palmer, Gildea, and Kingsbury 2005) of sentence-internal argument structure. PropBank provides, for each verb predicate in the Penn TreeBank, its sense and the semantic role of each of its arguments. An argument can be either required by the verb through its valency and assigned an index such as Arg0, Arg1, and so on, or accepted as a modifier (annotated with an ArgM label). ArgM arguments are further assigned functional tags such as MNR (manner), MOD (modal), TMP (temporal), CAU (causal), DIS (discourse), and so forth. For example, the PropBank annotation of one instance of the verb rent is shown in Figure 2a. Besides its subject and object (Arg0 and Arg1), the modal auxiliary will is annotated as ArgM-MOD, and the subordinate clause headed by until is annotated as ArgM-TMP.

Figure 2: (a) PropBank annotation of the verb rent; (b) PDTB annotation of the sentence that rent heads.

Many of the ArgM arguments in PropBank are either clauses or nominalizations that denote events, and many of these align with discourse relations in the PDTB. For example, the ArgM annotation of the subordinate clause in Figure 2a corresponds to Arg2 of the PDTB annotation of the discourse relation associated with until (Figure 2b). We can quantify the extent and nature of this correspondence between the two annotation layers, and in doing so consider two related questions: (1) How many of the intra-sentential relations in the PDTB are also accounted for by the dependencies annotated in PropBank, and to what extent? (2) Are there gaps in the discourse-level annotation that can be identified from the PropBank layer? Our analysis here is in terms of the annotation in PropBank-1.[9] Although the scope of this annotation has been extended within the OntoNotes project (Weischedel et al. 2012), this does not affect our general points.

#### 5.2.1 Correspondence of PDTB Intra-sentential Relations with PropBank

We first assess whether the intra-sentential relations in the PDTB can be fully accounted for by the verb–ArgM dependencies in PropBank in terms of quantity, content, and consistency. If so, annotating them again at the discourse level would have involved needless repetition. There are 11,830 intra-sentential relations annotated in the PDTB, accounting for 28% of the discourse relations annotated in the corpus. Of these 11,830 relations (all of which are candidates for overlap with PropBank), 11,236 involve explicit connectives and 594 do not. The latter primarily hold between independent clauses separated by a semicolon. As the clauses so linked are independent, neither being a modifier of the other, these relations are not covered in PropBank.

The other 11,236 intra-sentential relations include relations between clauses linked by an explicit coordinating conjunction (such as and and but). Like the semicolon-separated clauses, these are independent and so also outside the scope of PropBank.
The set also includes relations between clauses in the same sentence signaled by a discourse adverbial. In PropBank, discourse adverbials are generally annotated as discourse-linking modifiers (ArgM-DIS), leaving unspecified what they link to. For example, in Example (30), the discourse adverbial instead conveys a relation between the two "throwing" propositions. While PropBank annotates instead as ArgM-DIS of the main clause verb throw, it does not explicitly link instead to its other argument.

(30) When the champ has lost his stuff, the great mystery novelist wrote, when he can no longer throw the high hard one, he throws his heart instead. [wsj_1649]

PropBank-1 did not annotate arguments to copula verbs, so subordinate clauses attached to these verbs were not covered. However, copula verbs have subsequently been included in extensions covering over 75% of PropBank-1 and released as part of OntoNotes 5.0, so these subordinate clauses are now marked as arguments.

Besides differing in terms of intra-sentential coverage, PropBank and the PDTB also differ in their semantics. Specifically, even those PropBank ArgM roles that are closest to discourse relations, namely ArgM-CAU (causal), ArgM-TMP (temporal), ArgM-PNC (purpose), ArgM-MNR (manner), and ArgM-ADV (adverbial), differ from the semantics of PDTB senses in several ways.

• Specificity: ArgM-TMP is annotated where the PDTB annotates a more specific sense (Synchrony, Precedence, or Succession) of its top-level class Temporal.
• Heterogeneity: Subordinate clauses that are labeled ArgM-ADV correspond to the full range of PDTB senses.
• Multiplicity: The PDTB allows more than one sense label to be associated with a single discourse connective, to indicate that multiple sense relations hold concurrently (e.g., a token of since may be labeled with both a temporal and a causal sense). In contrast, PropBank only permits a constituent to fill a single functional role. Nevertheless, the seven cases where subordinate clauses are annotated as causal (ArgM-CAU) in PropBank and with some form of Temporal sense in the PDTB reveal additional cases of the under-annotation of multiple concurrent senses in the PDTB noted earlier (Section 3.4.3).
• Coverage: The PDTB's sense inventory currently lacks senses corresponding to PropBank's ArgM-MNR and ArgM-PNC roles. This will be discussed in Section 5.2.2.

In terms of consistency, there are also some mismatches in alignment between PDTB arguments and PropBank's semantic role structure, due to the fact that PropBank annotation is tied directly to the syntactic trees in the PTB. Figure 3(b) shows the PropBank annotation of the verb say over the PTB parse tree shown in Figure 3(a), with its initial when-clause parsed as a temporal modifier (ArgM-TMP) of say. In contrast, PDTB annotation has been done over the raw text, with discontinuous spans permitted as arguments. This allows attribution to be included in or excluded from a discourse relation (Section 3.3). In this case, Figure 3(c) shows attribution excluded: The temporal relation (succession) is annotated between winning and awarding, implying that Mr. Green's winning of the verdict was followed by the judge giving him the additional award. Given the difference in annotation practice, the extent of such mismatches between the PDTB and PropBank is expected to be the same as that between the PDTB and the PTB (Dinesh et al. 2005).

Figure 3: Comparison of (a) Penn TreeBank, (b) PropBank, and (c) PDTB annotation.
In this section, we have considered whether the intra-sentential relations in the PDTB (i.e., those with both arguments in the same sentence) can be fully accounted for by the verb–ArgM dependencies in PropBank. We have shown that the account is only partial, due in part to the significantly different goals of the two annotation projects and in part to differences in methodological choices. As such, even intra-sententially, a separate layer of discourse relation annotation is motivated.

#### 5.2.2 Potential of Seeding New Discourse Relations from PropBank

Next, we assess whether any of the verb–ArgM dependencies in PropBank could potentially correspond to discourse relations that are not yet annotated in the PDTB. If the type and number of such relations is significant, then PropBank annotations could be used to seed the PDTB with new relations, which could then be corrected and/or annotated manually. To do this, we aligned the PropBank annotations with the PTB and the PDTB. We started by considering only the five ArgM types mentioned earlier, which gave us a total of 43,432 verb–ArgM dependencies. For ease of analysis, we ignored the tokens of split ArgMs (i.e., ArgMs that are not spanned by a single node). We also ignored tokens from WSJ texts that were not included in the PDTB distribution because of problems with conversion of the parsed files to standoff annotation format (PDTB-Group 2008).

From the total of 43,432 ArgMs, we identified 11,538 ArgMs as clausal, using the PropBank alignment with the PTB. Of these, 4,116 are free adjuncts (Example (31)), all of which are new potential discourse relations to consider for the PDTB (cf. Section 3.2). The remaining 7,422 clauses either start with a subordinator or subordinating conjunction, including both finite (Example (32)) and non-finite (Example (33)) adverbial clauses, or are reduced clauses (Example (34)). For these explicitly subordinated clauses, PropBank alignment with the PDTB shows that explicit subordinators/subordinating conjunctions were annotated as connectives in 5,471 of the 7,422 ArgMs, leaving the remaining 1,951 ArgMs as new potential discourse relations for the PDTB. Each of these 6,067 new potential relations identified from PropBank (i.e., the 4,116 free adjuncts and the 1,951 subordinated clauses) would still have to be reviewed manually to determine whether or not it does in fact fulfill a discourse function.

(31) They say greedy market manipulators have made a shambles of the nation's free-enterprise system [ArgM-ADV turning the stock market into a big gambling den, with the odds heavily stacked against the small investor].

(32) That $130 million, Mr. Sherwood said, "gives us some flexibility [ArgM-CAU in case Temple raises its bid]."

(33) Those dividend bulls argue that corporations are in the unusual position of having plenty of cash left over [ArgM-TMP after paying dividends and making capital expenditures].

(34) [ArgM-ADV If not for a 59.6% surge in orders for capital goods by defense contractors], factory orders would have fallen 2.1%.

New potential relations identified through PropBank would allow for expanding not just the number of PDTB relations, but also the repertoire of connectives (such as in case in Example (32)) and of sense categories, in particular a manner relation, corresponding to ArgM-MNR, and a purpose relation, corresponding to ArgM-PNC, which are not currently covered in the PDTB.
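The filtering pipeline behind these counts can be pictured as a short cascade of filters. The Python sketch below mirrors it over a few hypothetical ArgM records (the field names are invented, not PropBank's actual file format); the comments carry the corpus-level counts reported above:

```python
# Sketch: the filtering pipeline of Section 5.2.2 over hypothetical
# ArgM records. Field names are invented; counts in comments are the
# corpus-level figures from the text.
argms = [
    {"type": "ADV", "clausal": True,  "subordinator": None},   # free adjunct
    {"type": "TMP", "clausal": True,  "subordinator": "after",
     "in_pdtb": True},                                          # already annotated
    {"type": "CAU", "clausal": True,  "subordinator": "in case",
     "in_pdtb": False},                                         # new candidate
    {"type": "MNR", "clausal": False, "subordinator": None},    # non-clausal
]

clausal = [a for a in argms if a["clausal"]]                    # 11,538 of 43,432
free_adjuncts = [a for a in clausal if a["subordinator"] is None]       # 4,116
subordinated = [a for a in clausal if a["subordinator"] is not None]    # 7,422
unannotated = [a for a in subordinated if not a["in_pdtb"]]             # 1,951
candidates = free_adjuncts + unannotated            # 4,116 + 1,951 = 6,067
print(len(candidates), "candidate relations to review manually")
```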
It is important to note here that manner and purpose arguments annotated in PropBank will only be considered arguments to discourse relations when they denote events, facts, states, or propositions, since these are what are taken to be arguments to discourse relations in the PDTB.

### 5.3 Summary

Linguistic annotation invariably involves a considerable amount of time and effort. When linguistic analysis at multiple levels is encoded on the same source corpus as different layers of annotation, there is potential value in assessing how the annotation content of the different layers differs, and in exploring how annotations from one layer can be usefully exploited for annotation in other layers. This section has compared the annotation content of the PDTB with that of TimeBank and PropBank, showing that while some of the linguistic phenomena annotated in the PDTB have also been annotated in TimeBank and PropBank, there are significant differences in both the extent and the content of the annotation. This section has also discussed some of the ways in which annotations from one layer can enrich and/or improve the consistency of annotation in other layers.

## 6. Conclusion

Our goals in this paper have been to

• give a thoughtful description of the PDTB that reflects what we have learned since release of the corpus in 2008;
• correct some assumptions about the PDTB which suggest that researchers may either be ignoring significant features of its annotation of discourse relations or taking accidental properties of its annotation to be intrinsic properties of the discourse relations themselves;
• describe and place in context the ways in which annotation of comparable resources in other languages and genres has varied from that of the PDTB; and
• provide an analysis of the relation of PDTB annotation to that of TimeBank and PropBank over the same Penn TreeBank corpus and show how they are, in large part, complementary.

We hope that the number of researchers able to make use of the PDTB will continue to grow, as will the number of similarly annotated corpora. We ourselves hope to be able to enrich the PDTB in the future: widening the scope of discourse relations that are annotated, improving the recording of evidence for annotation decisions, and expanding the annotation to include additional textual genres, especially ones that are less formal than news texts, such as public talks and consumer health advice. The results should be of further benefit to a growing community of scholars and developers considering the challenges of extended text.

## Acknowledgments

This work was partially supported by NSF grants EIA-02024417, RI-0705671, and CNS-1059353. We would like to thank other members of the team involved in developing the PDTB: Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, and Geraud Campion. We are also grateful to Katja Markert, Jiří Mírovský, Dipti Misra Sharma, Himanshu Sharma, Nianwen Xue, and Deniz Zeyrek for providing valuable clarification and information about their discourse annotation projects, and to Sameer Pradhan for clarification of OntoNotes and PropBank. We would also like to thank our three anonymous reviewers for helping us deliver as clear and informative a paper as possible.

## Notes

1. LDC Catalog ID LDC2008T05. http://www.seas.upenn.edu/~pdtb provides more information on the PDTB, including a complete list of publications.
2. We have not yet seen much use made of these modifiers, even though it is clear that they can, for example, be used to disambiguate connectives. (E.g., all tokens of modified ever since convey a temporal sense, while only 51% of unmodified since do. Similarly, 76% of modified even though convey the sense Concession, whereas only 37.3% of unmodified though do.)
3. "Possible" because many expressions on the list have non-discourse functions as well; e.g., in addition to functioning as a discourse connective expressing a "result" relation, so can also function as an intensifier (so short) or as part of the verb phrase anaphor do so. Part of the annotation process involved excluding tokens that did not function as discourse connectives.
4. Because neither "and" nor "or" was annotated as a discourse connective when it conjoined VPs, so-conjoined VPs were not annotated as arguments.
5. The PDTB also annotates attribution relations, capturing their textual signal and semantic features over each discourse relation and each of its arguments. For a full description of attribution and its annotation, the reader is referred to Prasad et al. (2007). Attribution is now being annotated as a separate layer over the WSJ (Pareti 2012), building on the PDTB attribution scheme, but aiming to capture the phenomena more comprehensively than in the PDTB.
6. Miltsakaki et al. (2005) reported 186 tokens of since as discourse connectives; PDTB-Group (2008) subsequently reported 184 tokens. Most likely, two were later found not to be connectives.
7. Although the PDTB admits nominalizations as arguments to explicit discourse connectives, they constitute only a small fraction of its arguments.

## References

Agarwal, M., R. Shah, and P. Mannem. 2011. Automatic question generation using discourse cues. In Proceedings of the ACL HLT 2011 Workshop on Innovative Use of NLP for Building Educational Applications, pages 1–9, Portland, OR.
Aijmer, K. and A.-M. Simon-Vandenbergen. 2004. A model and a methodology for the study of pragmatic markers. Journal of Pragmatics, 36:1781–1805.
Aktaş, B., C. Bozşahin, and D. Zeyrek. 2010. Discourse relation configurations in Turkish and an annotation environment. In Proceedings of the 4th Linguistic Annotation Workshop, pages 202–206, Uppsala.
Al-Saif, A. 2012. Human and automatic annotation of discourse relations for Arabic. Ph.D. thesis, University of Leeds.
Al-Saif, A. and K. Markert. 2010. The Leeds Arabic Discourse Treebank: Annotating discourse connectives for Arabic. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC-2010), pages 2046–2053, Valletta.
Al-Saif, A. and K. Markert. 2011. Modelling discourse relations for Arabic. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 736–747, Edinburgh.
Allen, J. 1984. Towards a general theory of action and time. Artificial Intelligence, 23:123–154.
Asher, N. 1993. Reference to Abstract Objects. Kluwer, Dordrecht.
Asher, N. and A. Lascarides. 2003. Logics of Conversation. Cambridge University Press.
Asr, F. T. and V. Demberg. 2012a. Implicitness of discourse relations. In Proceedings of COLING, pages 2669–2684, Mumbai.
Asr, F. T. and V. Demberg. 2012b. Measuring the strength of linguistic cues for discourse relations. In Proceedings of the Workshop on Advances in Discourse Analysis and its Computational Aspects (ADACA), pages 33–42, Mumbai.
Asr, F. T. and V. Demberg. 2013.
On the information conveyed by discourse markers. In Proceedings of the 4th Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL), pages 84–93, Sofia.
Baldridge, J., N. Asher, and J. Hunter. 2007. Annotation for and robust parsing of discourse structure on unrestricted texts. Zeitschrift für Sprachwissenschaft, 26:213–239.
Bejček, E., E. Hajičová, J. Hajič, P. Jínová, V. Kettnerová, V. Kolářová, M. Mikulová, J. Mírovský, A. Nedoluzhko, J. Panevová, L. Poláková, M. Ševčíková, J. Štěpánek, and Šárka Zikánová. 2013. Prague Dependency Treebank 3.0 data/software. Technical report, Univerzita Karlova v Praze, MFF, ÚFAL, Prague. http://ufal.mff.cuni.cz/pdt3.0/.
Bunt, H., R. Prasad, and A. Joshi. 2012. First steps towards an ISO standard for annotating discourse relations. In Proceedings of the Joint ISA-7, SRSL-3, and I2MRT Workshop on Semantic Annotation and the Integration and Interoperability of Multimodal Resources and Tools, pages 60–69, Istanbul.
Carlson, L., D. Marcu, and M. E. Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Proceedings of the 2nd SIGDIAL Workshop on Discourse and Dialogue, Eurospeech 2001, pages 1–10, Aalborg.
Danlos, L., D. Antolinos-Basso, C. Braud, and C. Roze. 2012. Vers le FDTB: French Discourse Tree Bank. In Proceedings of the Joint Conference JEP-TALN-RECITAL, pages 471–479, Grenoble.
Demirsahin, I., A. Ozturel, C. Bozsahin, and D. Zeyrek. 2013. Applicative structures and immediate discourse in the Turkish Discourse Bank. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 122–130, Sofia.
Dinesh, N., A. Lee, E. Miltsakaki, R. Prasad, A. Joshi, and B. Webber. 2005. Attribution and the (non)-alignment of syntactic and discourse arguments of connectives. In Proceedings of the ACL Workshop on Frontiers in Corpus Annotation II: Pie in the Sky, pages 29–36, Ann Arbor, MI.
Elwell, R. and J. Baldridge. 2008. Discourse connective argument identification with connective specific rankers. In Proceedings of ICSC-2008, pages 198–205, Santa Clara, CA.
Forbes-Riley, K., B. Webber, and A. Joshi. 2006. Computing discourse semantics: The predicate-argument semantics of discourse connectives in D-LTAG. Journal of Semantics, 23:55–106.
Ghosh, S., R. Johansson, G. Riccardi, and S. Tonelli. 2011a. Shallow discourse parsing with conditional random fields. In Proceedings of the International Joint Conference on Natural Language Processing, pages 1071–1079, Chiang Mai.
Ghosh, S., R. Johansson, G. Riccardi, and S. Tonelli. 2012. Improving the recall of a discourse parser by constraint-based postprocessing. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 2791–2794, Istanbul.
Ghosh, S., S. Tonelli, G. Riccardi, and R. Johansson. 2011b. End-to-end discourse parser evaluation. In Proceedings of the Fifth IEEE International Conference on Semantic Computing, pages 169–172, Palo Alto, CA.
Halliday, M. A. K. and R. Hasan. 1976. Cohesion in English. Longman, London.
Hirschberg, J. and D. Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501–530.
Jayez, J. and C. Rossari. 1998. Pragmatic connectives as predicates: The case of inferential connectives. In P.
St Dizier, editor, Predicative Forms in Natural Language and in Lexical Knowledge Bases, pages 285–319. Springer, Dordrecht.
Jiang, X. 2013. Predicting the use and interpretation of implicit and explicit discourse connectives. M.Sc. thesis, School of Psychology, Philosophy and Language Sciences (PPLS), University of Edinburgh.
Jínová, P., J. Mírovský, and L. Poláková. 2012. Semi-automatic annotation of intra-sentential discourse relations in PDT. In Proceedings of the Workshop on Advances in Discourse Analysis and its Computational Aspects (ADACA), pages 43–58, Mumbai.
Kehler, A. 2002. Coherence, Reference, and the Theory of Grammar. CSLI Publications, Palo Alto, CA.
Knott, A. 1996. A Data-Driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, University of Edinburgh.
Knott, A., J. Oberlander, M. O'Donnell, and C. Mellish. 2001. Beyond elaboration: The interaction of relations and focus in coherent text. In T. Sanders, J. Schilperoord, and W. Spooren, editors, Text Representation: Linguistic and Psycholinguistic Aspects, pages 181–196. John Benjamins Publishing.
Kolachina, S., R. Prasad, D. M. Sharma, and A. Joshi. 2012. Evaluation of discourse relation annotation in the Hindi Discourse Relation Bank. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 823–828, Istanbul.
Lakoff, R. 1971. Ifs, ands and buts about conjunction. Studies in Linguistic Semantics, 3:114–149.
Lin, Z., H. T. Ng, and M.-Y. Kan. 2012. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20:151–184.
Mann, W. C. and S. A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243–281.
Marcus, M. P., B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Martin, J. R. 1992. English Text: System and Structure. Benjamins, Amsterdam.
Meyer, T. 2011. Disambiguating temporal-contrastive connectives for machine translation. In Proceedings of the ACL 2011 Student Session, pages 46–51, Portland, OR.
Meyer, T. and A. Popescu-Belis. 2012. Using sense-labeled discourse connectives for statistical machine translation. In Proceedings of the Workshop on Hybrid Approaches to Machine Translation (HyTra), pages 129–138, Avignon.
Meyer, T. and B. Webber. 2013. Implicitation of discourse connectives in (machine) translation. In Proceedings of the ACL Workshop on Discourse in Machine Translation, pages 19–26, Sofia.
Miltsakaki, E., N. Dinesh, R. Prasad, A. Joshi, and B. Webber. 2005. Experiments on sense annotation and sense disambiguation of discourse connectives. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories (TLT), pages 1–12, Barcelona.
Miltsakaki, E., R. Prasad, A. Joshi, and B. Webber. 2004. Annotating discourse connectives and their arguments. In Proceedings of the Workshop on Frontiers in Corpus Annotation (Human Language Technology Conference and the Conference of the North American Association of Computational Linguistics), pages 9–16, Boston, MA.
Miltsakaki, E., L. Robaldo, A. Lee, and A. Joshi. 2008. Sense annotation in the Penn Discourse Treebank. Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science, 4919:275–286.
Mladová, L., Šárka Zikánová, and E. Hajičová. 2008.
From sentence to discourse: Building an annotation scheme for discourse based on Prague Dependency Treebank. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), pages 2564–2570, Marrakech.
Moens, M. and M. Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15–28.
Ostler, S. 1987. Academic and ethnic background as factors affecting writing performance. In A. Purves, editor, Writing across Languages and Cultures: Issues in Contrastive Rhetoric, pages 261–272. Sage.
Oza, U., R. Prasad, S. Kolachina, S. Meena, D. M. Sharma, and A. Joshi. 2009. Experiments with annotating discourse relations in the Hindi Discourse Relation Bank. In Proceedings of the 7th International Conference on Natural Language Processing (ICON), pages 1–10, Hyderabad.
Palmer, M., D. Gildea, and P. Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106.
Pareti, S. 2012. A database of attribution relations. In Proceedings of the 8th Conference on International Language Resources and Evaluation (LREC12), pages 3213–3217, Istanbul.
Patterson, G. and A. Kehler. 2013. Predicting the presence of discourse connectives. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 914–923, Seattle, WA.
PDTB-Group. 2008. The Penn Discourse TreeBank 2.0 Annotation Manual. Technical report IRCS-08-01, Institute for Research in Cognitive Science, University of Pennsylvania.
Pitler, E. and A. Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the Joint Conference of the 47th Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing, pages 13–16, Singapore.
Pitler, E., M. Raghupathy, H. Mehta, A. Nenkova, A. Lee, and A. Joshi. 2008. Easily identifiable discourse relations. In Proceedings of COLING, pages 87–90, Manchester.
Poláková, L., P. Jínová, Šárka Zikánová, Z. Bedřichová, J. Mírovský, M. Rysová, J. Zdeňková, V. Pavlíková, and E. Hajičová. 2012. Manual for annotation of discourse relations in the Prague Dependency Treebank. Technical report TR-2012/47, Institute of Formal and Applied Linguistics, Charles University in Prague, Prague, Czech Republic.
Poláková, L., J. Mírovský, A. Nedoluzhko, P. Jínová, V. Zikánová, and E. Hajičová. 2013. Introducing the Prague Discourse Treebank 1.0. In Proceedings of the 6th International Joint Conference on Natural Language Processing, pages 91–99, Nagoya.
Polanyi, L., C. Culy, M. van den Berg, G. L. Thione, and D. Ahn. 2004. Sentential structure and discourse parsing. In ACL 2004 Workshop on Discourse Annotation, pages 80–87, Barcelona.
Prasad, R., N. Dinesh, A. Lee, A. Joshi, and B. Webber. 2007. Attribution and its annotation in the Penn Discourse TreeBank. Traitement Automatique des Langues, Special Issue on Computational Approaches to Document and Discourse, 47(2):43–64.
Prasad, R., N. Dinesh, A. Lee, E. Miltsakaki, L. Robaldo, A. Joshi, and B. Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of LREC, pages 2961–2968, Marrakesh.
Prasad, R. and A. Joshi. 2008. A discourse-based approach to generating why-questions from texts. In Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge, pages 1–3, Arlington, VA.
Prasad, R., A. Joshi, and B. Webber. 2010a.
Exploiting scope for shallow discourse parsing. In Proceedings of the Seventh International Conference on Language Resources and their Evaluation, pages 2076–2083, Valletta.
Prasad, R., A. Joshi, and B. Webber. 2010b. Realization of discourse relations by other means: Alternative lexicalizations. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1023–1031, Beijing.
Prasad, R., S. McRoy, N. Frid, A. Joshi, and H. Yu. 2011. The Biomedical Discourse Relation Bank. BMC Bioinformatics, 12(188):1–18.
Pustejovsky, J., P. Hanks, R. Sauri, A. See, R. Gaizauskas, A. Setzer, and D. Radev. 2003a. The Timebank corpus. In Proceedings of the Corpus Linguistics Meeting, pages 647–656, Lancaster.
Pustejovsky, J., A. Meyers, M. Palmer, and M. Poesio. 2005. Merging PropBank, NomBank, TimeBank, Penn Discourse Treebank and Coreference. In Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky, pages 5–12, Ann Arbor, MI.
Pustejovsky, J., J. Castaño, R. Ingria, R. Sauri, R. Gaizauskas, A. Setzer, and G. Katz. 2003b. TimeML: Robust specification of event and temporal expressions in text. New Directions in Question Answering, 3:28–34.
Ramesh, B., R. Prasad, T. Miller, B. Harrington, and H. Yu. 2012. Automatic discourse connective detection in biomedical text. Journal of the American Medical Informatics Association, 19(5):800–808.
Rysová, M. 2012. Alternative lexicalizations of discourse connectives in Czech. In Proceedings of the 8th International Conference on Language Resources and Evaluation, pages 2800–2807, Istanbul.
Sharma, H., P. Dakwale, D. Sharma, R. Prasad, and A. Joshi. 2013. Assessment of different workflow strategies for annotating discourse relations: A case study with HDRB. In A. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, LNCS 7816, pages 523–532. Springer.
Stede, M. 2008. RST revisited: Disentangling nuclearity. In C. Fabricius-Hansen and W. Ramm, editors, 'Subordination' versus 'Coordination' in Sentence and Text: From a Cross-linguistic Perspective, pages 33–58. John Benjamins, Amsterdam.
Stede, M. 2012. Discourse Processing. Morgan & Claypool Publishers.
Sweetser, E. 1990. From Etymology to Pragmatics: Metaphorical and Cultural Aspects of Semantics. Cambridge University Press.
Versley, Y. 2010. Discovery of ambiguous and unambiguous discourse connectives via annotation projection. In Proceedings of the Workshop on the Annotation and Exploitation of Parallel Corpora (AEPC), pages 83–92, Tartu.
Webber, B. 2013. What excludes an alternative in coherence relations? In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013), pages 276–287, Potsdam.
Webber, B., M. Egg, and V. Kordoni. 2012. Discourse structure and language technology. Natural Language Engineering, 18(4):437–490.
Weischedel, R., M. Palmer, M. Marcus, E. Hovy, S. Pradhan, L. Ramshaw, N. Xue, A. Taylor, J. Kaufman, M. Franchini, M. El-Bachouti, R. Belvin, and A. Houston. 2012. OntoNotes release 5.0. Technical report, Linguistic Data Consortium.
Wellner, B. 2009. Sequence Models and Re-ranking Methods for Discourse Parsing. Ph.D. thesis, Brandeis University, Boston, MA.
Wellner, B. and J. Pustejovsky. 2007. Automatically identifying the arguments of discourse connectives. In Proceedings of EMNLP-CoNLL, pages 92–101.
Xue, N. 2005. Annotating discourse connectives in the Chinese Treebank.
In Proceedings of the ACL Workshop on Frontiers in Corpus Annotation II: Pie in the Sky, pages 84–91, Ann Arbor, MI.
Zeyrek, D., Ümit Deniz Turan, and I. Demirşahin. 2008. Structural and presuppositional connectives in Turkish. In A. Benz, P. Kühnlein, and M. Stede, editors, Constraints in Discourse 3, pages 131–137. University of Potsdam, Germany.
Zeyrek, D., Ümit Deniz Turan, C. Bozşahin, R. Çakıcı, A. Sevdik-Çallı, I. Demirşahin, B. Aktaş, İhsan Yalçınkaya, and H. Ögel. 2009. Annotating subordinators in the Turkish Discourse Bank. In Proceedings of the Third Linguistic Annotation Workshop (LAW III), ACL-IJCNLP-2009, pages 44–48, Singapore.
Zeyrek, D., I. Demirşahin, A. Sevdik-Çallı, H. Ögel, İhsan Yalçınkaya, and Ümit Deniz Turan. 2010. The annotation scheme of the Turkish Discourse Bank and an evaluation of inconsistent annotations. In Proceedings of the Fourth Linguistic Annotation Workshop (LAW-IV), ACL 2010, pages 282–289, Uppsala.
Zeyrek, D., I. Demirşahin, A. Sevdik-Çallı, and R. Çakıcı. 2013. Turkish Discourse Bank: Porting a discourse annotation style to a morphologically rich language. Dialogue and Discourse, 4(2):174–184.
Zhou, Y. and N. Xue. (in press). The Chinese Discourse TreeBank: A Chinese corpus annotated with discourse relations. Journal of Language Resources and Evaluation.
Zhou, Y. and N. Xue. 2012. PDTB-style discourse annotation of Chinese text. In Proceedings of the 50th Annual Meeting of the ACL, pages 69–77, Jeju Island.
Zhou, Z.-M., M. Lan, Y. Xu, Z.-Y. Niu, J. Su, and C. L. Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 1507–1514, Beijing.

## Author notes

* Department of Health Informatics and Administration, University of Wisconsin-Milwaukee, 2025 E. Newport Ave (NWQB), Milwaukee WI 53211. E-mail: prasadr@uwm.edu.
** School of Informatics, University of Edinburgh, 10 Crichton Street (IF4.29), Edinburgh UK EH8 9AB. E-mail: bonnie.webber@ed.ac.uk.
Institute for Research in Cognitive Science, University of Pennsylvania, 3401 Walnut Street (Suite 400A), Philadelphia PA 19104-6228. E-mail: joshi@seas.upenn.edu.
2021-04-11 06:27:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45017170906066895, "perplexity": 6026.6469470542925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00118.warc.gz"}
https://imathworks.com/tex/tex-latex-pgfplots-axis-xmode-ymode-in-user-defined-style/
# [Tex/LaTex] pgfplots – axis xmode/ymode in user-defined style

pgfplots, tikz-styles

I would like to specify xmode and ymode for the axis with \pgfplotsset (or any other way, so that the mode can be changed by an if statement). The error message which I get is:

! Package pgfplots Error: Sorry, you can't change `/pgfplots/xmode' in this context. Maybe you need to provide it as \begin{axis}[/pgfplots/xmode=...] ?.

Here is a minimal working example:

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5}
\def\someoption{1} % change this to zero to see the error
\ifcase\someoption % this is the reason why I want to define xmode with styles
\pgfplotsset{/pgfplots/mystyle/.style={
/pgfplots/xmode=log, % this is not working
/pgfplots/ymode=log % this too
}}
\else
\pgfplotsset{/pgfplots/mystyle/.style={}}
\fi
\begin{document}
\section*{with style: (not working)}
\begin{tikzpicture}
\begin{axis}[/pgfplots/mystyle]
\addplot coordinates {(10, 100) (100, 15) (2000, 200)};
\end{axis}
\end{tikzpicture}
% how it should look like
\section*{with hard-coded axis properties:}
\begin{tikzpicture}
\begin{axis}[xmode=log,ymode=log]
\addplot coordinates {(10, 100) (100, 15) (2000, 200)};
\end{axis}
\end{tikzpicture}
\end{document}

This is related to Axis limits in user-defined style, but this doesn't help in this case.

You are encountering an internal limitation of pgfplots - that's why it is claiming that you cannot change the key in your context (it's not a bug, it's a feature). The reason is key filtering: pgfplots extracts the xmode and ymode keys from the input argument list first, to decide which default configuration set (a linear, semilog, or loglog axis) should be activated. To this end, it sets only options which belong to the key family /pgfplots/scale. Without this restriction, keys might be set in wrong contexts.

If you can ensure that your style mystyle contains nothing more than xmode and/or ymode, you can solve the problem using

\ifcase\someoption % this is the reason why I want to define xmode with styles
\pgfplotsset{/pgfplots/mystyle/.style={
/pgfplots/xmode=log,
/pgfplots/ymode=log
},
mystyle/.belongs to family=/pgfplots/scale,% <----
}
\else
\pgfplotsset{/pgfplots/mystyle/.style={}}
\fi

If your style sets more than that, you should at least ensure that it contains only changes to other styles (like ticklabel style={...}).

This is not documented in the manual. I am not sure if it should be.
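For reference, here is the accepted fix slotted back into the question's example as one compilable file (an untested sketch; the only changes are \someoption set to 0 so that the log branch is active, plus the added .belongs to family line):

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5}
\def\someoption{0} % log branch active
\ifcase\someoption
  \pgfplotsset{
    /pgfplots/mystyle/.style={/pgfplots/xmode=log, /pgfplots/ymode=log},
    % let pgfplots' key filtering pick the style up early:
    mystyle/.belongs to family=/pgfplots/scale,
  }
\else
  \pgfplotsset{/pgfplots/mystyle/.style={}}
\fi
\begin{document}
\begin{tikzpicture}
  \begin{axis}[/pgfplots/mystyle]
    \addplot coordinates {(10, 100) (100, 15) (2000, 200)};
  \end{axis}
\end{tikzpicture}
\end{document}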
2022-11-28 19:14:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7386626601219177, "perplexity": 2597.9857084392766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00165.warc.gz"}
http://www.dsplog.com/2008/05/24/article-in-dspdesignlinecom-m-qam-symbol-error/print/
- DSP log - http://www.dsplog.com - Article in DSPDesignLine.com: M-QAM symbol error Posted By Krishna Sankar On May 24, 2008 @ 6:56 am In Modulation | 21 Comments

It's been a nice week for me, wherein I guest posted an article in DSPDesignLine.com [1]. The article derives the theoretical symbol error rate for M-QAM modulation. The theoretical results are further supplemented by Matlab/Octave simulation scripts. Those who are familiar with the derivation of the symbol error rate for 16-QAM modulation [2] will find the equations easy to interpret. As we did for 16-QAM, (a) we identify the three different types of symbols – corner, inside, neither inside nor corner; (b) then we find the symbol error probability for each of the three types of symbols; (c) the total error probability is found assuming that all the symbols are equally likely. The same article is also cross-posted in Embedded.com [4].

For those who are not interested in the full article, the probability of error for M-QAM modulation is

$P(e|MQAM) =2\left(1-\frac{1}{\sqrt{M}}\right)erfc\left(k\sqrt{\frac{E_s}{N_0}}\right)-\left(1-\frac{2}{\sqrt{M}}+\frac{1}{M}\right)erfc^2\left(k\sqrt{\frac{E_s}{N_0}}\right)$.

URL to article: http://www.dsplog.com/2008/05/24/article-in-dspdesignlinecom-m-qam-symbol-error/

URLs in this post:
[1] DSPDesignLine.com: http://www.dspdeisgnline.com
[2] symbol error rate for 16-QAM modulation: http://www.dsplog.com/2007/12/09/symbol-error-rate-for-16-qam/
[3] published in DSPDesignLine.com: http://www.dspdesignline.com/howto/207601769;jsessionid=BNSZSDNOJRYKEQSNDLPSKH0CJUNN2JVN
[4] Embedded.com: http://www.embedded.com/columns/technicalinsights/207801597?_requestid=88879
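As a cross-check of the closed-form expression, here is a small Python sketch. Note that this excerpt does not define k; the value used below, k = sqrt(3/(2(M-1))), is the scaling constant from the standard square-M-QAM derivation and is an assumption here:

import numpy as np
from scipy.special import erfc

def mqam_ser(es_n0_db, M):
    """Theoretical symbol error rate for square M-QAM at a given Es/N0 (dB)."""
    es_n0 = 10.0 ** (np.asarray(es_n0_db, dtype=float) / 10.0)
    k = np.sqrt(3.0 / (2.0 * (M - 1)))          # assumed scaling constant
    x = erfc(k * np.sqrt(es_n0))
    return 2 * (1 - 1 / np.sqrt(M)) * x - (1 - 2 / np.sqrt(M) + 1 / M) * x**2

print(mqam_ser(15, 16))   # e.g. SER of 16-QAM at Es/N0 = 15 dB, about 1.8e-2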
2018-07-22 16:09:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8286070823669434, "perplexity": 2996.2221445499604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593378.85/warc/CC-MAIN-20180722155052-20180722175052-00325.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-5-linear-functions-5-3-slope-intercept-form-practice-and-problem-solving-exercises-page-310/30
Algebra 1 y=-$\frac{1}{4}$x+$\frac{9}{4}$ Slope-intercept form is written as y=mx+b. m=$\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$, and it's given that $x_{1}$=-3, $x_{2}$=1, $y_{1}$=3, $y_{2}$=2, we can solve for m by plugging in the givens into the slope equation. Therefore, m=$\frac{2-3}{1-(-3)}$=$\frac{-1}{4}$=-$\frac{1}{4}$. Now we know the y=(-$\frac{1}{4}$)x+b=-$\frac{1}{4}$x+b, and we can solve for b by plugging in one of the two points, in this case we'll plug in the point (1,2) 2=(-$\frac{1}{4}$)(1)+b, and when we add $\frac{1}{4}$ to each side, we get b=$\frac{9}{4}$ Therefore, y=-$\frac{1}{4}$x+$\frac{9}{4}$
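The two-point computation above is mechanical enough to script as a cross-check; a small Python sketch (the function and variable names are mine, not from the solution):

from fractions import Fraction

def line_through(p1, p2):
    """Return (m, b) of y = mx + b through two points with distinct x."""
    (x1, y1), (x2, y2) = p1, p2
    m = Fraction(y2 - y1, x2 - x1)   # slope formula m = (y2 - y1)/(x2 - x1)
    b = y1 - m * x1                  # solve y1 = m*x1 + b for b
    return m, b

print(line_through((-3, 3), (1, 2)))   # -> (Fraction(-1, 4), Fraction(9, 4))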
2021-05-10 04:34:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900740921497345, "perplexity": 622.9496935709741}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00079.warc.gz"}
https://chemistry.stackexchange.com/questions/15623/how-can-i-work-out-what-reaction-will-happen
# How can I work out what reaction will happen?

I have recently started learning chemistry, and I've been reading a GCSE (UK high school level) textbook on the stuff :D Much of it has been fairly straightforward, though there has been a recurring theme that has been bothering me. I've encountered many instances where the author simply presents a reaction with little explanation, as if to suggest that the reason why it would occur is obvious. Let me give you an example I came across just today. Ethene in bromine water: I think to myself, 'why would this happen?'. There are no 'free electrons' that I can see in ethene. OK, bromine is highly reactive, but what will it do? Will it react with H, or C, and how? What about the bromine bond? It's all very uncertain in my mind. I always try to reduce it to an energy problem. We just want the most stable state, right? But there are many possible outcomes. Am I supposed to apply some logic, follow some flowchart, or work out all possible outcomes and calculate which has the lowest energy? Do chemists just remember all simple reactions, and reduce problems to those? Or am I just overthinking it? A side problem to this is why, if we place an alkane in bromine water, we don't get the same thing happening with ethane.

There are no 'free electrons' that I can see in ethene

That's correct, but... In ethene there are two bonds between the two carbons that hold the two carbons together. A chemist calls this a "double" bond. The important thing about a double bond (that's not obvious from the drawing) is that the two bonds are not the same. One is a strong sigma bond, just like in ethane; the other is a weaker pi bond. While the electrons in the pi bond are not free, they are not held as tightly as the electrons in the sigma bond and are, therefore, more available for reaction. This is why ethene is so much more reactive than ethane.

Am I supposed to apply some logic... Or am I just overthinking it?

Yes, there is logic, and that body of logic is what organic chemistry is all about. In the case of bromine reacting with ethene, the reaction belongs to a class of reactions known as electrophilic additions. Most reactions in organic chemistry involve a molecule that has electrons available to share (a nucleophile) and a molecule that would like more electrons (an electrophile). These two molecules react (form a chemical bond) by the one donating/sharing its electrons and the other molecule accepting/sharing those electrons. Here, the pi bond in ethene has electrons that it is willing to share (they're not free, but they're not tightly held either), while the bromine molecule is highly polarizable, much like $\ce{Br^{\delta +}-Br^{\delta -}}$, with the positively polarized bromine atom wanting electrons. The reaction proceeds with the pi electrons attacking the positively polarized bromine (a mechanism diagram accompanied the original answer here), to yield 1,2-dibromoethane.

A side problem to this is why, if we place an alkane in bromine water, we don't get the same thing happening with ethane.

I bet you can answer that one now. Ethene has a pi bond with electrons loosely held and available for reaction. Ethane, on the other hand, only has a sigma bond where the electrons are more tightly held and less available for reaction.

You must learn some basic things in organic chemistry to understand mechanisms for such simple reactions, and these are: functional groups, nucleophiles, electrophiles, leaving groups, curly arrows. You should easily find explanations of these in any organic chemistry textbook.
Then you could use this link http://www.chemguide.co.uk/mechanisms/eladd/symbr2.html#top to understand what is happening in this reaction. Here are the other basic types of reaction in organic chemistry: http://www.chemguide.co.uk/mechmenu.html#top
2021-10-20 11:01:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4718608856201172, "perplexity": 1176.9830649996563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00654.warc.gz"}
https://groups.google.com/forum/?_escaped_fragment_=topic/sci.math.research/dt3GA2H4S_w
## What are identities in elementary mathematics?

What are identities in elementary mathematics? Sergei Akbarov 10/22/10 2:00 PM

Dear colleagues, who can explain to me what people in Calculus (or, I do not know, maybe I should say "in elementary mathematics") mean by equality of elementary functions? I mean, everybody knows, of course, that equality of functions f(x)=g(x) on, say, an interval I means that they coincide at each point x in I. This is the definition for all, not necessarily elementary, functions. But one can expect that elementary functions (I mean, x^a, a^x, sin, cos, etc.) can be defined independently in a purely algebraic way, say, like an algebra generated by a list of identities. Of course, the definition must be such that all the other identities could be deduced from this prearranged list as corollaries, and this will be equivalent to the definition of this algebra as a subalgebra of all functions on an interval I. People in computer algebra are discussing different ways to teach a computer to recognize identities in elementary mathematics, see, e.g., the book "A=B" http://www.math.upenn.edu/~wilf/AeqB.html and the identities they consider are "analytical identities", i.e. to study them we should consider our functional algebra as a subalgebra of the algebra of all functions on I (or of all continuous, smooth, analytical, etc., functions on I). But perhaps there is a way to separate elementary functions from analysis? I mean, to axiomatize them in such a way that the question of whether f(x)=g(x) is true or not will be just a question of whether this identity can be deduced in an algebraic way from the axioms of the theory? Were there any investigations in this field? Does such an approach indeed have a chance to exist, or maybe there are some negative results? I would greatly appreciate any references, suggestions, etc. Sergei Akbarov

Re: What are identities in elementary mathematics? Norbert Marrek 10/23/10 9:00 AM

In Z/2Z the polynomial functions f(X)=X+1 and g(X)=X^2+1 have the same values f(0)=g(0)=1 and f(1)=g(1)=0. Would you consider X+1 the same as X^2+1? Aloha, Norbert

Re: What are identities in elementary mathematics? tc...@lsa.umich.edu 10/23/10 9:00 AM

In article, Sergei Akbarov wrote:
> But perhaps there is a way to separate elementary functions from analysis? I mean to axiomatize them in such a way that the question of whether f(x)=g(x) is true or not will be just a question, whether this identity can be deduced in algebraic way from the axioms of the theory?

I think the top two Google Scholar hits on "recognizing zero" should point you in the right direction.
-- Tim Chow, tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will never exceed four of those miles of which as many thousand separate us from the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences

Re: What are identities in elementary mathematics? David Hobby 10/24/10 2:00 PM

As an aside, I'd expect axiomatizing the theory of elementary functions to be tricky. Compare this to Tarski's High School Algebra Problem, which considered a small subset of all elementary functions and was still quite difficult. http://en.wikipedia.org/wiki/Tarski%27s_high_school_algebra_problem --- David Hobby

Re: What are identities in elementary mathematics? Dan Luecking 10/25/10 5:00 PM

I certainly would, because surely x^2 = x is one of the identities one would include as an axiom for elementary functions over Z/2Z. Similarly, I would consider e^x e^y the same as e^{x+y} over the reals (though not over the ring of n x n matrices). Dan
To reply by email, change LookInSig to luecking

Re: What are identities in elementary mathematics? mjc 10/28/10 1:00 AM

> I certainly would, because surely x^2 = x is one of the identities one would include as an axiom for elementary functions over Z/2Z.
> Similarly, I would consider e^x e^y the same as e^{x+y} over the reals (though not over the ring of n x n matrices).

I would consider both of these to be theorems, not axioms (the first from 1+1=0, the second from the definition of e^x and properties of exponentiation and limits).

Re: What are identities in elementary mathematics? Ilya Zakharevich 10/28/10 12:00 PM

On 2010-10-28, mjc wrote:
> I would consider both of these to be theorems, not axioms (the first from 1+1=0,

I doubt it. In F_4, 1+1=0, but x^2 is not equal to x.

> the second from the definition of e^x and properties of exponentiation and limits).

I do not know what is a "limit". And I wonder which "property of exponentiation" would imply "e^x e^y is the same as e^{x+y}"... Ilya
[P.S. Is it my imagination, or had the quality of moderation slipped a bit during the last year?]

Re: What are identities in elementary mathematics? Dan Luecking 10/29/10 4:30 AM

I was referring to the original poster's discussion: axioms for a system used to determine equality of elementary functions in a purely algebraic way. This system needs a list of rules or identities for transforming one function into another. These would be the _axioms_ of such a system, and they could certainly be required to depend on what the variables represent. For example, the algebra of elementary functions (over C) is defined to be the smallest set of formulas that contains all constants, z and e^z, and is closed under the usual arithmetic operations plus also functional composition and inversion. The usual field axioms would have to be part of the set of rules (or axioms), and some of the laws of exponents (since not all follow algebraically from the field axioms). Over C, sin z can be defined in terms of e^z, and various trig identities follow from identities for e^z. But over R, one would have to add at least sin x to the starting list. And some more identities for things like sin(x+y). Finally, if our elementary functions are going to be operating over Z/2Z, we need identities or axioms appropriate for that system. Surely we would want our system to include x^2 = x as this is both useful and, in the form (x-1)x = 0, concisely expresses the single defining property of Z/2Z among fields: every element is either 0 or 1. If we did not include it we would have to include something else that expresses the difference between Z/2Z and R or C. My main point was that the set of identities (or axioms) differs depending on what the variables in our functions represent. So x^2 is the same as x in Z/2Z (and not in R) because some axioms make it so. In my previous post, I did not say that e^{x+y} = e^x e^y was an axiom, but that it is an identity valid for real variables and not for functions of matrix variables. (Still, it doesn't follow from the field axioms unless x and y are rational, so _some_ axiom would still have to be added, and the rules of the game were that it has to be algebraic.) Dan
To reply by email, change LookInSig to luecking
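Norbert's Z/2Z example and Ilya's F_4 counterexample can both be checked by brute force. A small Python sketch (the GF(4) encoding below, pairs (c0, c1) meaning c0 + c1*a with a^2 = a + 1, is my own choice of representation):

# Pointwise equality of polynomial functions over small finite fields.

def equal_as_functions(f, g, elements):
    return all(f(x) == g(x) for x in elements)

# Z/2Z: X+1 and X^2+1 induce the same function...
Z2 = [0, 1]
f = lambda x: (x + 1) % 2
g = lambda x: (x * x + 1) % 2
print(equal_as_functions(f, g, Z2))   # True

# ...but in F_4 = {0, 1, a, a+1} (with a^2 = a + 1), x^2 and x differ.
def mul4(u, v):
    # (c0 + c1 a)(d0 + d1 a) = c0 d0 + (c0 d1 + c1 d0) a + c1 d1 a^2, a^2 = a+1
    c0, c1 = u
    d0, d1 = v
    lo = (c0 * d0 + c1 * d1) % 2
    hi = (c0 * d1 + c1 * d0 + c1 * d1) % 2
    return (lo, hi)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(equal_as_functions(lambda x: mul4(x, x), lambda x: x, F4))  # False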
2016-08-25 00:04:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8430504202842712, "perplexity": 1375.513457600213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292675.36/warc/CC-MAIN-20160823195812-00174-ip-10-153-172-175.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-factor-b-2-p-2
# How do you factor b^2 - p^2?

Jul 16, 2016

$\left(b + p\right) \left(b - p\right)$

#### Explanation:

This is a difference of two squares, so it factors as $\left(b + p\right) \left(b - p\right)$.

The clues to look out for are: two terms, each a perfect square, separated by a minus sign.
2019-09-16 10:09:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9167574048042297, "perplexity": 10919.469078889644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00017.warc.gz"}
https://math.stackexchange.com/questions/3121690/partial-differential-equation-heat-equation
# partial differential equation heat equation

I received this homework problem from my professor, but I am unsure of how to get it going. It doesn't correlate exactly with anything in our book. Consider the heat equation for $$u(x,t)$$ $$u_t = ku_{xx}, -\infty < x < \infty, t > 0 \tag{1}$$ $$u(x,0) = f(x) \tag{2}$$ assume that a network of sensors is uniformly distributed in space, at a spatial increment of $$\Delta x$$, and that the smallest absolute amount of change (variation) in $$u$$ that can be detected by a sensor (sensor's activity) is a constant $$|\delta u|_{min} = \epsilon > 0 \tag{3}$$ consider the problem of detecting the presence of an impulse variation in the initial condition $$\delta f(x) = a \delta(x - x_{0}) \tag{4}$$ where $$a > 0$$ is a constant coefficient and the location $$x_0$$ of the impulse may be unknown. 1. If $$\Delta x = 1$$, find the maximum value of $$\epsilon$$ s.t. any impulse $$(4)$$ in the initial condition with $$a \geq 1$$ will be detected by at least one sensor. 2. Given $$\epsilon > 0$$ find the maximum value of $$\Delta x$$ to guarantee that any impulse $$(4)$$ in the initial condition with $$a \geq 1$$ will be detected by at least one sensor. 3. Given the network of parameters $$\epsilon > 0$$, $$\Delta x > 0$$, find the minimum value $$A > 0$$ s.t. any impulse $$(4)$$ in the initial condition with $$a \geq A$$ will be detected by at least one sensor.

As the problem is linear, by denoting $$u$$ the solution with $$f$$ as initial condition and $$u + \delta u$$ the solution with $$f + \delta f$$ as initial condition, you have that $$\delta u$$ is a solution of $$(\delta u)_t = k(\delta u)_{xx}$$ $$\delta u(x,0)=\delta f(x) = a \delta(x-x_0).$$ In this case there is a well-known explicit solution, the heat kernel, so you obtain $$\delta u(x,t)=\frac{a}{\sqrt{4 \pi k t}}e^{-\frac{(x-x_0)^2}{4kt}}.$$ When $$x$$ is fixed, the study of the function $$t \mapsto \delta u(x,t)$$ shows that the maximum, attained at $$t = \frac{(x-x_0)^2}{2k}$$, is $$a \frac{1}{\sqrt{2 \pi e}} \frac{1}{|x-x_0|}$$ (note that $$k$$ drops out of this value), so since in the worst case the distance between $$x_0$$ and the closest sensor is $$\frac{ \Delta x}{2},$$ the impulse in the initial condition will be detected as long as $$a \sqrt{\frac{2}{ \pi e}} \frac{1}{\Delta x} \geq \epsilon.$$
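A quick numerical sanity check of the closed-form maximum in the answer; the diffusivity, amplitude, and sensor distance below are arbitrary illustrative values:

import numpy as np
from scipy.optimize import minimize_scalar

k, a, dx = 0.7, 1.0, 1.3      # arbitrary diffusivity, amplitude, |x - x0|

def delta_u(t):
    # 1-D heat kernel response to an impulse of size a at distance dx
    return a / np.sqrt(4 * np.pi * k * t) * np.exp(-dx**2 / (4 * k * t))

res = minimize_scalar(lambda t: -delta_u(t), bounds=(1e-6, 50.0), method="bounded")
print(res.x, dx**2 / (2 * k))                           # argmax ~ (x-x0)^2/(2k)
print(-res.fun, a / (np.sqrt(2 * np.pi * np.e) * dx))   # max matches closed form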
2019-05-25 13:34:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 36, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953049182891846, "perplexity": 60.270800279431874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00134.warc.gz"}
https://www.physicsforums.com/threads/area-under-an-inverse-trigonometric-function.953365/
# Area under an inverse trigonometric function

Gold Member

## Homework Statement

Find the area bounded by arcsin x, arccos x and the x axis. Hint: you don't need to integrate arcsin x and arccos x.

## Homework Equations

All pertaining to calculus

## The Attempt at a Solution

I drew the correct graph and marked their intersection at $(1/\sqrt{2}, \pi/4)$ and painstakingly found the answer by integrating the inverse functions as $(\sqrt{2} - 1)$. Any insight on how to use the hint and make this easier?

## Answers and Replies

Stephen Tashi
One thought is to consider the rectangle with vertices $(0,0),(1,0),(1,\frac{\pi}{2}),(0,\frac{\pi}{2})$ and find other areas within that rectangle by integrating $\sin(y)$ or $\cos(y)$.

LCKurtz, Homework Helper, Gold Member
Or draw the graphs of $y=\sin x$ and $y = \cos x$ and look for a congruent area. It will give you a trivial integration.

Gold Member
Elegant! Yet I had to use a grapher to see the congruency clearly. Is there another way I can see this easily, e.g. during an exam (perhaps an algebraic method)?

LCKurtz, Homework Helper, Gold Member
I suggested that because most people find it easy to think of $y$ in terms of $x$. But if you take your equations $y = \arccos x$ and $y = \arcsin x$ and think of them as $x$ as a function of $y$ you have $x = \cos y$ and $x = \sin y$ and the calculation of the area is the same simple integral as a $dy$ integration.

Ray Vickson, Homework Helper, Dearly Missed
Think of it this way: draw the graphs of $y = \sin x$ and $y = \cos x$ for $0 \leq x \leq \pi/2$. Here, the x-axis is horizontal and the y-axis is vertical. Now rotate the graph paper through 90 degrees, so the old x-axis is now vertical and the old y-axis is now horizontal. You would now be looking at the plots of $\arcsin y$ and $\arccos y$, $0 \leq y \leq 1.$ On the rotated graph, shade in the required area between the plots of $\arcsin y, \arccos y$ and $y = 0$. Now rotate the graph paper back to its original orientation, with the x-axis horizontal and the y-axis vertical again. All you will have done is rotated an area through 90 degrees, without changing its numerical value. That means that you can evaluate the area by looking at plots of $\sin x$ and $\cos x,$ which might---and in this case, does---lead to an easier problem. Here comes the final trick: you can do all that in your head, without ever drawing a single actual graph!

Gold Member
Thank you very much everyone! All the inversion and the change in function (in terms of y instead of x) seemed a bit strange, but I finally wrapped my head around it after solving this multiple times in my head; it's quite elegant in fact.
Thank you for your help :D
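For anyone who wants to confirm the number, the area can be computed symbolically both ways; a sympy sketch (my own, not from the thread):

import sympy as sp

x, y = sp.symbols("x y")

# dx version: top boundary is arcsin on [0, 1/sqrt(2)], arccos on [1/sqrt(2), 1]
area_dx = (sp.integrate(sp.asin(x), (x, 0, 1 / sp.sqrt(2)))
           + sp.integrate(sp.acos(x), (x, 1 / sp.sqrt(2), 1)))

# dy version: for y in [0, pi/4] the region runs from x = sin y to x = cos y
area_dy = sp.integrate(sp.cos(y) - sp.sin(y), (y, 0, sp.pi / 4))

print(sp.simplify(area_dx), sp.simplify(area_dy))   # both equal sqrt(2) - 1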
2020-04-08 13:28:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7713397145271301, "perplexity": 551.3368728300052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00463.warc.gz"}
https://www.thestudentroom.co.uk/showthread.php?t=4323522
1. question: if $a > 0$, prove that the quadratic expression $ax^2 + bx + c$ is positive for all real values of x when $b^2 < 4ac$.

let $f(x) = ax^2 + bx + c$
if f(x) > 0 for all real values of x, it has no real roots
$b^2 - 4ac$ will produce a negative number; let's call it $-g = d$
since $f(x) = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}$
then when $b^2 < 4ac$, f(x) > 0 for all real values of x
Please, I want to know if my proof correctly answers the question. Please tell me where I went wrong. Thank you

2. (Original post by bigmansouf) [...]
Firstly, you should have more working when you complete the square. You are asked to prove that if $b^2 < 4ac$ then $f(x) > 0$. What you have attempted to do instead is prove that if $f(x) > 0$ then $b^2 - 4ac < 0$ and then use this to prove $f(x) > 0$. But you don't need the first part since you're told already that $b^2 < 4ac$. The major problem is that nowhere in your working have you actually proved why $b^2 < 4ac$ implies that $f(x) > 0$. You could talk about the discriminant/no real roots to show this, but I assume from your working that you need to prove it by completing the square? You correctly completed the square but didn't use it for anything. $a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}$ must always be positive. One more thing: writing $\sqrt{b^2 - 4ac} \in \mathbb{C}$ doesn't mean much. I assume you're trying to say that $\sqrt{b^2 - 4ac}$ is a complex number, but the set of complex numbers includes the reals and the imaginary numbers. What you mean to say is that $\sqrt{b^2 - 4ac}$ is imaginary. There is a mathematical way to write this but you may as well just write it in words, and I recommend this unless you're confident with using the formal notation.

3. (Original post by notnek) [...]
thank you very much for helping me. here goes my second attempt:
let $f(x) = ax^2 + bx + c = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}$
since $\left(x + \frac{b}{2a}\right)^2 \geq 0$, $f(x)$ has a least value of $\frac{4ac - b^2}{4a}$ when $x = -\frac{b}{2a}$
since $b^2 < 4ac$ and $a > 0$, $\frac{4ac - b^2}{4a} > 0$
if $\frac{4ac - b^2}{4a}$ is the least value, then taking input values of x that are symmetrical about $x = -\frac{b}{2a}$ gives equal outputs, and since the least value is positive, any inputs of x that are symmetrical about it will also give a positive output ( f(x) for points symmetrical about x = -b/2a will be > 0 and > (4ac-b^2) / (4a) )
Please, I understand you said to stick to using words; I want to do better, and writing mathematical notation was advised by my teacher. If I am wrong with the notation please show me the correct way. I know what I was saying in the last point ( f(x) for points symmetrical about x = -b/2a will be > 0 and > (4ac-b^2) / (4a) ). I want to improve my mathematical notation as I am very slow in writing. Thank you

4. (Original post by bigmansouf) [...]
I'm going to try explaining this to you because I don't feel like subtle hints are working very well. (Sorry for butting in, notnek!) You're over-complicating it. From $f(x) = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}$ you know that (i) the squared part is always greater than or equal to 0, because that's a basic property of a square, and (ii) you're multiplying the square by $a$, which you know is $> 0$. So $a\left(x + \frac{b}{2a}\right)^2 \geq 0$. Now, it would be really nice if we could prove that $\frac{4ac - b^2}{4a}$ was positive, in which case we would know that $f(x)$ was a sum of two positive terms and hence would be positive itself. But, how to do that... well, for a fraction to be positive, we want both its numerator and denominator to be positive. You can see trivially that the denominator is always positive here (because you're given that $a > 0$, so $4a$ must also be $> 0$). You've reduced the entire problem to proving that the numerator is also positive. So, how can we prove that $4ac - b^2 > 0$? Oh... what a coincidence! This is precisely what we were assuming! In writing the actual proof up, it would go something like this: $f(x) = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a} \geq \frac{4ac - b^2}{4a}$, since the first term is non-negative. Since $b^2 < 4ac$ we can then conclude that $4ac - b^2 > 0$ and so $\frac{4ac - b^2}{4a} > 0$ also. So $f(x)$ is the sum of two positive terms and is hence positive itself. NB: When I say "two positive terms" I really mean "sum of one non-negative term and one positive term", but the latter is a bit of a mouthful... and the distinction hardly matters there anyway; I've explicitly used non-negative where the distinction does matter.

5. (Original post by Zacken) [...]
thank you. wow, I don't know what to say, but thanks. Can I conclude from what you said that if a quadratic function is a sum of two squares, then f(x) > 0 for all real values of x and thus it will have no real roots? I am just trying to make notes on this question.

6. (Original post by bigmansouf) [...]
I think you are over-complicating a quite simple and straightforward proof. It doesn't flow nicely at all past the completed-square form and your notation gets way out of hand. Once you complete the square, you're left with $f(x) = a\left(x + \frac{b}{2a}\right)^2 + \frac{4ac - b^2}{4a}$. From here you know that $\left(x + \frac{b}{2a}\right)^2 \geq 0$ (ALWAYS, due to squaring). One of the conditions you are given is $a > 0$, therefore $a\left(x + \frac{b}{2a}\right)^2 \geq 0$. The other condition is $b^2 < 4ac$, therefore $\frac{4ac - b^2}{4a} > 0$. Summing the two terms will obviously give you something greater than 0, therefore $f(x) > 0$. Why are you talking about the roots??? EDIT: Never mind, Zacken beat me to it.

7. (Original post by bigmansouf) [...]
Notnek deserves it more than I do. Indeed! (although this is not the case in this particular question) - here it's "if a function is a sum of two positive things for all real values of x, then the function is positive for all real values of x". In another question, where you could have just a single squared term, then you can't quite say that $f(x) > 0$; the best you can say is that $f(x) \geq 0$, because a square is $\geq 0$, not $> 0$. However, if you had something like a sum of two squares that vanish at distinct points, then you can definitely say that $f(x) > 0$ and so it has no real roots, since both brackets can't be simultaneously 0 for distinct roots. In fact, you use this technique a lot when you're asked to do stuff like "prove that a given quadratic has no real roots". You do: complete the square to write the quadratic as a non-negative square plus a strictly positive constant, so that means that it has no real roots.

8. (Original post by RDKGames) [...]
Might be worth clearing up your inequalities as well, to prevent further confusion; it's $\left(x + \frac{b}{2a}\right)^2 \geq 0$, not $> 0$.

9. (Original post by Zacken) [...]
thank you, I gave him a rep, and so did you and RDKGames. I am going to work harder. I feel I got played around by Bostock and Chandler (making easy stuff look difficult)

10. (Original post by bigmansouf) [...]
Good man. That's a good idea! You should probably start off with easier resources and then use B&C to supplement your learning instead of learning solely from it - some of the stuff is a bit archaic.

Updated: September 19, 2016
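The completed-square identity that the whole thread leans on is easy to verify symbolically; a short sympy sketch (the sample coefficients are mine):

import sympy as sp

a, b, c, x = sp.symbols("a b c x", real=True)

completed = a * (x + b / (2 * a))**2 + (4 * a * c - b**2) / (4 * a)
print(sp.simplify(completed - (a * x**2 + b * x + c)))   # 0: the two forms agree

# With a > 0 and b^2 < 4ac, f(x) >= (4ac - b^2)/(4a) > 0 for every real x.
f = sp.Lambda(x, 2 * x**2 + 3 * x + 4)       # sample: a=2, b=3, c=4, b^2 < 4ac
print(sp.solveset(sp.Eq(f(x), 0), x, sp.S.Reals))   # EmptySet: no real roots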
2017-06-26 05:24:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002157807350159, "perplexity": 467.76239890266305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320679.64/warc/CC-MAIN-20170626050425-20170626070425-00359.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/yes-use-equation-inverse-fourier-transforms-spectra-figure-p71-6-attached-q3122306
## Fourier Transform yes, use the equation above to find the inverse Fourier transforms of the spectra in figure P7.1-6 attached.
2013-05-24 08:09:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117332696914673, "perplexity": 982.6723866635153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704368465/warc/CC-MAIN-20130516113928-00001-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/218439-find-all-zeros-function-algebraically-print.html
# Find all the zeros of the function algebraically.

• May 1st 2013, 09:37 AM captaincrunch383
Find all the zeros of the function algebraically.
Find all real zeros of the function algebraically.
f(x) = x^4 - x^3 - 15x^2
help anyone?
• May 1st 2013, 10:20 AM Shakarri
Re: Find all the zeros of the function algebraically.
$x^4-x^3-15x^2=x^2(x^2-x-15)$
Can you find all the zeros in that?
• May 1st 2013, 10:35 AM captaincrunch383
Re: Find all the zeros of the function algebraically.
Quote: Originally Posted by Shakarri
$x^4-x^3-15x^2=x^2(x^2-x-15)$
Can you find all the zeros in that?
no i can not =/ can you help me, i'm getting desperate.
• May 1st 2013, 10:40 AM sciencepal
Re: Find all the zeros of the function algebraically.
$0, 0, \frac{1\pm\sqrt{61}}{2}$
• May 1st 2013, 10:41 AM captaincrunch383
Re: Find all the zeros of the function algebraically.
Quote: Originally Posted by sciencepal
$0, 0, \frac{1\pm\sqrt{61}}{2}$
can you please show how you got that? this is just a sample problem and i'm going to have a similar problem on my test, i'd appreciate it.
• May 1st 2013, 10:42 AM dokrbb
Re: Find all the zeros of the function algebraically.
Shakarri meant that one, $x^2(x^2-x-15)$. now you have in the brackets a second degree equation; it's impossible that you can not find the zeros from that one. try it.
ps: oops, we might have answered at the same time
• May 1st 2013, 10:42 AM captaincrunch383
Re: Find all the zeros of the function algebraically.
Quote: Originally Posted by dokrbb
Shakarri meant that one, $x^2(x^2-x-15)$ [...]
dokrbb, so you factored out an x squared, i see that, but i still don't know how to find the zeros from that one. =/
• May 1st 2013, 10:52 AM Shakarri
Re: Find all the zeros of the function algebraically.
$x^2(x^2-x-15)$ is zero when either $x^2=0$ or $x^2-x-15=0$. Find the values of x which make either of those zero.
• May 1st 2013, 10:58 AM captaincrunch383
Re: Find all the zeros of the function algebraically.
Quote: Originally Posted by Shakarri
$x^2(x^2-x-15)$ is zero when either $x^2=0$ or $x^2-x-15=0$ [...]
that sounds completely foreign to me =/ there is a similar problem to this on my test, and i'm trying to figure this out. i was hoping you or someone would briefly post the steps used to solve it. sorry if i sound rude
• May 1st 2013, 11:23 AM captaincrunch383
Re: Find all the zeros of the function algebraically.
Never mind, i got it.
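A quick numeric cross-check of sciencepal's answer (the script is mine, not from the thread):

import numpy as np

# f(x) = x^4 - x^3 - 15x^2 = x^2 (x^2 - x - 15)
roots = np.roots([1, -1, -15, 0, 0])
print(np.sort(roots))     # 0 (double root), (1 - sqrt(61))/2, (1 + sqrt(61))/2
print((1 - np.sqrt(61)) / 2, (1 + np.sqrt(61)) / 2)   # about -3.405 and 4.405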
2014-03-16 10:05:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129937052726746, "perplexity": 814.10960367627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702159/warc/CC-MAIN-20140313024502-00009-ip-10-183-142-35.ec2.internal.warc.gz"}
https://solvedlib.com/thats-the-answer-i-got-also-but-its-marked-wrong,326301
# Thats the answer i got also but its marked wrong on my project, i really thought...

###### Question:

Thats the answer i got also but its marked wrong on my project, i really thought i was losing my mind.

If your salary is $39,000 and you get paid biweekly, would your pay period amount be $1,500.00? 39,000/26??

Expert Answer (Anonymous, 8 minutes later)

Solution: Annual salary = $39,000. Total biweekly pay periods in a year = 26. Pay period amount = $39,000 / 26 = $1,500.
##### Part 2 Random Variables Explain whelher Ihe given funciion can be cumulalive distribulion (uncllon ol some conlinuous random variable'F()eG0t5+30/2, 1 >"/2 Part 2 Random Variables Explain whelher Ihe given funciion can be cumulalive distribulion (uncllon ol some conlinuous random variable' F()e G0t 5+30/2, 1 >"/2... ##### CChapter 19-Homework Problem 19.42 v Part A The end of one bar is wolded to the... CChapter 19-Homework Problem 19.42 v Part A The end of one bar is wolded to the end of the other bar (Figure 1) In terms of R what is the resistance of Express your answer in terms of R his combination? Each of two identical uniform metal bars has a resistance R Submit Part 8 What is the reoitne ars... ##### Problem: company interviewed nine people for anentry-level position Two resources director ofa small The human qualified , and two were rated as unqualified In how many were rated as highly qualified; five were rated different - wayscould the ralings be assigned? problem: company interviewed nine people for anentry-level position Two resources director ofa small The human qualified , and two were rated as unqualified In how many were rated as highly qualified; five were rated different - wayscould the ralings be assigned?... ##### Some measurements of the initial rate of a certain reaction are given in the table below... some measurements of the initial rate of a certain reaction are given in the table below CO ADVANCED MATERIAL Deducing a rate law from Initial reaction rate data Some measurements of the initial rate of a certain reaction are given in the table below. N2] [H] initial rate of reaction 1.61 M 0.71... ##### 5.R.85Questlon HelpSuppose that you @nfow Find Ine prcbabillty thal You get = IcaS] Iw0 SucportInal You Iov dcamns Fmdinu probabiily InalYou94 acalsi Gi (2,26"hard 'ourThe probab iity getling Icaeieo 46 MRouric lour decimill pIcu Mrnusedud 5.R.85 Questlon Help Suppose that you @nfow Find Ine prcbabillty thal You get = IcaS] Iw0 SucportInal You Iov dcamns Fmdinu probabiily InalYou94 acalsi Gi (2,26 "hard 'our The probab iity getling Icaeieo 46 MRouric lour decimill pIcu Mrnusedud... ##### 26. A company's inventory records indicate the following data for the month of April. April 1... 26. A company's inventory records indicate the following data for the month of April. April 1 April 5 April 9 April 14 April 20 April 30 Beginning Purchase Sale Purchase Sale Purchase 100 units at $10 each 100 units at$ 11 each 150 units at $10 each 50 units at$ 12 each 60 units at \$ 10 each... ##### "TDS" (total dissolvcd solids} Is Masung vulet pullty represeni7g the numbl Olous P- millon (pprn) ot rriinetal content (colc Um , Macncsum cic conlaincd water Ihe lower the TDS voluc tht puer Ihe Watct Accoraingly Ine Indina Dcpument natcuenled 90 , conlidcnce Irtetyal cOMnd [hc avclege IDS valic belwcch two towns Sankmnc Heath (5}und Danville (Di: The resulting Interval Wa5" -52 1 pprn PpI Wlech 0l Ihe lollating intcrprclatorts) Istarc corrcc?You Must Make sclcction Ior Ihc Nton "TDS" (total dissolvcd solids} Is Masung vulet pullty represeni7g the numbl Olous P- millon (pprn) ot rriinetal content (colc Um , Macncsum cic conlaincd water Ihe lower the TDS voluc tht puer Ihe Watct Accoraingly Ine Indina Dcpument natcuenled 90 , conlidcnce Irtetyal cOMnd [hc avclege I... ##### How do you write 67.2 million in scientific notation? How do you write 67.2 million in scientific notation?... ##### HOCHO HO HOCH HO HO OH The structure of a common carbohydrate is shown. What is... 
[structure diagram] The structure of a common carbohydrate is shown. What is the name of this carbohydrate? cellulose / D-lactose / sucrose / D-maltose / D-cellobiose / starch / the correct name is not listed here.
##### Suppose a student carried out the reduction of methyl linoleate starting with 3.028 grams of methyl linoleate. What is the theoretical number of grams of H2 necessary to complete this reaction? Methyl linoleate: CH3(CH2)4(CH=CH-CH2)2(CH2)6COOCH3. [handwritten working illegible]
##### The automatic opening device of a military cargo parachute has been designed to open when the parachute is 190 m above the ground. Suppose opening altitude actually has a normal distribution with mean value 190 m and standard deviation 33 m. Equipment damage will occur if the parachute opens at an alt...
##### Determine a suitable form for the particular solution of the ODE if the method of undetermined coefficients is to be used. (You do not need to solve for the coefficients.) y'' + 4y = 1 + sin(2t)
##### [circuit diagram] 1) Find V2. 2) Find I. (Sources shown: 3 V, 5 V, -6 A, 2 A.)
##### Question 5 (7/100): You are taking a multiple-choice test for which you have mastered 70% of the material. Assume this means that you have a 70% chance of knowing the answer to a random test question, and that if you don't know the answer to a question then you randomly select among the four answer choices. Finally, assume that this holds for each question, independent of the others. What is your expected score (as a percent) on the exam?
##### A thin glass rod is bent into a semicircle of radius R (see Figure 1), carrying charge with linear density λ, where λ is a positive constant; point P is at the center of the semicircle. Part B: Determine the acceleration (magnitude and direction) of an electron placed at point P, assuming R = 1.2 cm and λ = [value illegible] pC/m.
##### BUILDING SKILLS: Reasoning. Decide whether each statement is true or false. Explain your answer. Every number has an additive inverse. The product of a number and its multiplicative inverse is 1. The multiplicative inverse of x is -x. The identity element for multiplication is 1. Every number has a multiplicative inverse. The sum of a number and its additive inverse is 0. The additive inverse of a is... The identity ele...
##### The surface area of the portion of the paraboloid z = x^2 + y^2 below the plane z = 1 is: (A) (π/6)(5√5 - 1); (B) (π/6)(5√5 + 1); (C) (π/3)(5√5 - 1); (D) (π/3)(5√5 + 1)
##### Iodine and bromine react to give iodine monobromide: I2(g) + Br2(g) ⇌ 2 IBr(g). What is the equilibrium composition of a mixture at 145 °C? The equilibrium constant K for this reaction at 145 °C is [value illegible].
##### Lean Accounting (Requirement 3 at the bottom!) Com-Tel Inc. manufactures and assembles two models of smartphones—the Tiger Model and the Lion Model. The process consists of a lean cell for each product. The data that follow concern only the Lion Model lean cell. For the year, Com-Tel Inc. budg...
##### PN 105 Fundamentals of Nursing I. Diuretics/Hypokalemia. JS, 72 years of age, has had vomiting and diarrhea for two days. He is on digoxin 0.25 mg per day and hydrochlorothiazide 50 mg per day. He complains of being dizzy. His blood pressure is slightly lower than usual. His serum potassium level is 3.2 ...
##### Current is applied to a molten mixture of CuF, NiCl2, and CaS. A. What is produced at the cathode? B. What is produced at the anode?
##### Please answer and I will rate! 4. a) Define the concept of NP-completeness. b) If A is NP-complete, and A has a polynomial time algorithm, then give a polynomial time algorithm to find a longest path in a directed graph.
##### Tech Solutions is a consulting firm that uses a job-order costing system. Its direct materials consist of hardware and software that it purchases and installs on behalf of its clients. The firm's direct labor includes salaries of consultants that work at the client's job site, and its overhe...
##### For f(t) = (t/e^(1-3t), 2t^2 - t), what is the distance between f(2) and f(5)?
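For the multiple-choice mastery question above, the expected score follows directly from the law of total probability; this worked line is an added illustration, not part of the original page:

$E[\mathrm{score}] = 0.7 \times 1 + 0.3 \times \tfrac{1}{4} = 0.775,$

i.e. an expected score of 77.5%.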
2022-07-07 11:25:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4579443633556366, "perplexity": 12433.48676719952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00369.warc.gz"}
https://mizugadro.mydns.jp/t/index.php?title=SuZex_approximation&oldid=939
# SuZex approximation

Fig.1. $y\!=\!\mathrm{SuZex}(x)~$, thick blue line, and $y\!=\!\mathrm{zex}(x)\!=\!x\mathrm e^x$, thin line

Fig.2. Complex map of function SuZex: $~u\!+\!\mathrm i v= \mathrm{SuZex}(x\!+\!\mathrm i y)$

This article collects approximations of the function SuZex, which is a superfunction of zex$\,(z)=z\exp(z)$. The complex map of SuZex is shown in the figure at right. Below, it is compared to similar maps for various approximations of SuZex by elementary functions. All the maps are supposed to be displayed at the same scale.

## Background

SuZex is a superfunction for the transfer function $T=\,$zex;

(1) $~ ~ ~ T(z)=\mathrm{zex}(z) = z\,\exp(z)~$

The superfunction $F=\mathrm{SuZex}$ satisfies the transfer equation

(2) $~ ~ ~ T(F(z))=F(z\!+\!1)$

(3) $~ ~ ~ F(0)=1$

Also, it is assumed that the solution $F=\mathrm{SuZex}$ decays to the stationary point 0 of the transfer function $T$ by (1) at infinity, except in some strip along the positive part of the real axis. The superfunction $F=\mathrm{SuZex}$ is real–holomorphic: $F(z^*)=F(z)^*$.

For real values of the argument, the explicit plot $y=\mathrm{SuZex}(x)$ is shown in figure 1. The function is positive and increasing along the whole real axis; all its derivatives are also positive. The function rises slowly from zero at minus infinity, passes through the point (0,1) and then shows fast growth, similar to that of the SuperFactorial and that of tetration (to base $b\!>\! \exp^2(-1)$).

For the efficient (id est, fast and precise) evaluation of SuZex, various approximations are described below. They are used in the C++ implementation of function SuZex, which is called by the generators of the figures, in particular figures 1 and 2.

## Taylor expansion at zero

$\begin{array}{rl} n&c_n\\ 0&1.\\ 1& 0.7136859972397819\\ 2& 0.4476015977075872\\ 3& 0.2601727742107340\\ 4& 0.1435143556173126\\ 5& 0.0761226590985602\\ 6& 0.0391492705396854\\ 7& 0.0196328627708593\\ 8& 0.0096397364554956\\ 9& 0.0046483274109356\\ 10&0.0022065106994065\\ 11&0.0010330188519910\\ 12&0.0004777086233718\\ 13&0.0002184810190843\\ 14&0.0000989267835284\\ 15&0.0000443861459533\\ 16&0.0000197488866685 \end{array}$

Fig.3. Taylor approximation (4) with 48 terms: $u\!+\!\mathrm i v= P_{48}(x\!+\!\mathrm i y)$, left; the same overlapped with the map of $u\!+\!\mathrm i v= \mathrm{SuZex}(x\!+\!\mathrm i y)$, center; and the agreement $A_{48}(x\!+\!\mathrm i y)$, right.

Fig.4. Map of the asymptotic approximation $Q_{20}$ by equation (9); $~u\!+\!\mathrm i v= Q_{20}(x_1+x\!+\!\mathrm i y)$

The simplest approximation of any function is, perhaps, the truncated Taylor series, which is, actually, a polynomial. The complex map of such a polynomial of power $N\!=\!48$ is shown in figure 3,

(4) $~ ~ ~ ~\displaystyle \mathrm{SuZex}(z) \approx P_{N}(z)=\sum_{n=0}^{N} \,c_n\, z^n$

Approximations for the first 17 coefficients $c_n$ of the expansion are shown in the table at left. More coefficients are available at SuZexTay0co.cin. The series converges, and increasing the number of terms taken into account extends the range of the approximation. However, due to the fast growth of the function at real values of the argument, in practice the application of the approximation is limited to a circle $|z|\!<\!2$; for larger values, an enormous number of coefficients would have to be taken into account, and the rounding errors destroy the precision of the approximation.
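To make (4) concrete, here is a minimal sketch of the polynomial evaluation (in Python rather than the article's C++; it uses only the 17 tabulated coefficients, so it falls a few digits short of the $P_{48}$ and $P_{96}$ discussed in the text):

```python
# Truncated Taylor series P(z) of SuZex at zero, eq. (4),
# with the 17 tabulated coefficients c_0..c_16.
C = [1.0,
     0.7136859972397819, 0.4476015977075872, 0.2601727742107340,
     0.1435143556173126, 0.0761226590985602, 0.0391492705396854,
     0.0196328627708593, 0.0096397364554956, 0.0046483274109356,
     0.0022065106994065, 0.0010330188519910, 0.0004777086233718,
     0.0002184810190843, 0.0000989267835284, 0.0000443861459533,
     0.0000197488866685]

def P(z):
    """Horner evaluation of the truncated series; usable for |z| < 2."""
    s = 0.0
    for c in reversed(C):
        s = s * z + c
    return s

print(P(0.0))  # 1.0, matching condition (3): F(0) = 1
print(P(1.0))  # about 2.71827, close to e, since F(1) = zex(F(0)) = e by (1)-(3)
```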
For evaluation of SuZex at real argument, the polynomial approximation is sufficient: the values of the function can be reconstructed by applying iteratively the transfer equation

(5) $~ ~ ~ ~ \mathrm{SuZex}(z\!+\!1)=\mathrm{zex}\Big(\mathrm{SuZex}(z)\Big)$

or its modification

(6) $~ ~ ~ ~ \mathrm{SuZex}(z\!-\!1)=\mathrm{LambertW}\Big(\mathrm{SuZex}(z)\Big)$

For the function zex$(z)=z \exp(z)$ and its inverse LambertW$=\mathrm{zex}^{-1}$, efficient complex(double) numerical implementations are available.

The figures at right show the complex map of the approximation $P_{48}$, its overlap with the map of SuZex, and the agreement $A_{48}$ with SuZex defined by

(7) $~ ~ ~ ~\displaystyle A_{48}(z)= -\lg \left( \frac{|P_{48}(z)-\mathrm{SuZex}(z)|}{|P_{48}(z)|+|\mathrm{SuZex}(z)|} \right)$

This agreement indicates how many decimal digits the approximation provides at a given $z=x\!+\!\mathrm i y$; the levels $A=1,2,\ldots,15$ are drawn. In the central part, the approximation provides of order of 15 decimal digits. Outside the outer loop, the agreement is smaller than unity, id est, even the first digit of the estimate of SuZex by the approximation $P_{48}$ is doubtful. This loop has a slight excess at the positive part of the real axis because of the huge denominator in the definition of $A_{48}$. The inner part is still sufficient for the implementation of SuZex along the real axis, using the transfer equation; the error due to the approximation is smaller than the rounding error of double precision arithmetic. Perhaps even fewer terms would be sufficient for a professional implementation of this superfunction.

## Expansion at infinity

Fig.5. Map of the agreement $B_{20}$ by equation (12) for the asymptotic approximation $Q_{20}$ by equations (9), (10):

(8) $~ ~ ~ u\!+\!\mathrm i v\!=\!B_{20}(x\!+\!\mathrm i y)$

For large values of the argument, SuZex can be approximated using the asymptotic expansion. Let

(9) $~ ~ ~ \displaystyle Q_N(z)=\frac{1}{z}\, \sum_{n=0}^{N} \, z^{-n}\, \sum_{m=0}^n\, a_{n,m} \ell^m$

where $\ell=\ln(-z)$. Then

(10) $~ ~ ~\mathrm{SuZex}(z) \approx Q_N(x_1\!+\!z)~$

where $x_1\!\approx\! -1.1259817765745028~$ is the solution of the equation

(11) $~ ~ ~\displaystyle \lim_{k \rightarrow \infty} \mathrm{zex}^k\Big(Q_N(x_1-k)\Big) = 1$

For $N\!=\!20$, the complex map of approximation (10) is shown in figure 4. Practically, with $k\!=\! 20$, the error of evaluation of $x_1$ becomes of order of the rounding errors at "double" precision; at $\Re(z)\!<\!-20$, the last term in the asymptotic expansion is smaller than $10^{-16}$ and does not contribute to the estimated value.

The complex map of $Q_{20}(x_1\!+\!z)$ for $z\!=\!x\!+\!\mathrm i y$ is shown in the figure at right. In the region $~x\!<\!0~$, $~x^2\!+\!y^2\!<\!4~$, the approximation through $Q_{20}$ shows reasonable agreement with the polynomial expansion $P_{48}$ above. In particular, the same level $v\!=\!0.4$ looks similar in both figures. However, the corresponding values of the argument are out of the range of high precision of each of the approximations.

Coefficients $a_{n,m}$ in the expansion:

$\begin{array}{ccccc} n ~\backslash~ m & \bf 0 & \bf 1 & \bf 2 & \bf 3 \\ \bf 0&-1& 0 & 0 &0\\ \bf 1& 0&1/2&0 &0\\ \bf 2& -1/6& 1/4& -1/4 &0\\ \bf 3& {-7}/{48} & {3}/{8} & -{5}/{16} & {1}/{8} \end{array}$

Below, the same table appears as the Mathematica output produced after calculation of the coefficients with the command

TeXForm[Table[Table[If[n >= m, a[n, m], 0], {m, 0, 7}], {n, 0, 7}]]
$\displaystyle \begin{array}{cccccccc} -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{1}{6} & \frac{1}{4} & -\frac{1}{4} & 0 & 0 & 0 & 0 & 0 \\ -\frac{7}{48} & \frac{3}{8} & -\frac{5}{16} & \frac{1}{8} & 0 & 0 & 0 & 0 \\ -\frac{707}{4320} & \frac{23}{48} & -\frac{17}{32} & \frac{13}{48} & -\frac{1}{16} & 0 & 0 & 0 \\ -\frac{1637}{8640} & \frac{1121}{1728} & -\frac{83}{96} & \frac{37}{64} & -\frac{77}{384} & \frac{1}{32} & 0 & 0 \\ -\frac{274133}{1209600} & \frac{15427}{17280} & -\frac{1619}{1152} & \frac{443}{384} & -\frac{205}{384} & \frac{87}{640} & -\frac{1}{64} & 0 \\ -\frac{4024763}{14515200} & \frac{142801}{115200} & -\frac{156559}{69120} & \frac{1915}{864} & -\frac{1307}{1024} & \frac{53}{120} & -\frac{223}{2560} & \frac{1}{128} \end{array}$

The precision of the approximation through the asymptotic expansion (9) can be characterized with the agreement

(12) $~ ~ ~ \displaystyle B_{N}(z)=- \lg\left( \frac{|Q_N(x_1\!+\!z)-\mathrm{SuZex}(z)|}{|Q_N(x_1\!+\!z)|+|\mathrm{SuZex}(z)|} \right)$

For $N\!=\!20$, this function is shown in figure 5. Outside the loops, the approximation provides at least 15 correct decimal digits. For $\Re(z)<5$ and $|z\!+\!2|>8$, the error of approximation of $\mathrm{SuZex}(z)$ with $Q_N(x_1\!+\!z)$ is smaller than the rounding errors at the use of complex double arithmetic.

## Taylor expansion at $z=-12+x_1$

Fig.6. Approximation of SuZex with the Taylor expansion $\Phi_{12,80}$ at $-12+x_1$; here $u\!+\!\mathrm i v=\Phi(x_1\!+\!12+ x\!+\!\mathrm i y)$

Fig.7. Agreement $C_{80}$ by (15) of the approximation of SuZex with the Taylor expansion $\Phi_{12,80}$ at $-12+x_1$

Along the positive part of the real axis, the asymptotic approximation above has a cut line; for moderate values of the real part of the argument, a Taylor expansion behaves better. In addition, precise evaluation by direct application of the asymptotic expansion requires $N$ of order of 20, and needs approximately $N^2$ (id est, of order of 400) operations. The speed of evaluation can be boosted by an order of magnitude with a Taylor expansion at some point far from the region of fast growth of the function SuZex. Such a point is chosen to be $z_{12}=-12\!+\!x_1\approx -13.1$.

The Taylor approximation $\Phi$ of SuZex at the point $z_{12}$ can be written as the truncated Taylor series

(13) $~ ~ ~ \Phi_{12,N}(z)=$ $\displaystyle \sum_{n=0}^N f_n (z-z_{12})^n=$ $\displaystyle \sum_{n=0}^N f_n (z+12-x_{1})^n$

The coefficients $f$ in (13) were evaluated from the asymptotic approximation, expanding the expression

(14) $~ ~ ~ \mathrm{zex}^8\Big( Q_{20}(-20\!+\!z) \Big)$

at small values of $z$. The complex map of the expansion (13) for $N\!=\!80$ is shown in figure 6. The point of expansion (approximately $-13.1259817765745028$) is a little bit outside the field covered by the map. In order to simplify the comparison with the other pictures, the same scale is used.

The constant 20 in expression (14) is chosen for the following reason. In numerical calculation with complex double variables, with 21 terms of the expansion (9), the last term does not affect the value $Q_{20}(z)$ for $\Re(z)<-20$. Moving the expansion point further to the left would not improve the precision of the evaluation. The precision could be improved with some patience (press a key, have a tea) and long double variables; then it would make sense to keep more terms in the asymptotic expansion (9) and use more iterations of the function zex in (14).
However, 15 digits seem to be more than sufficient to plot all the figures. In particular, the expansion (13) reproduces such properties of the function SuZex as its value unity at zero and even its value 2.7.. (approximately $\mathrm e$) at unity, although $\Phi(z)$ is designed to approximate $\mathrm{SuZex}(z)$ in the vicinity of $z\!=\!-12$, id est, mainly outside the field of the map shown.

The precision of the approximation of SuZex with the function $\Phi$ by (13) can be characterized with the agreement

(15) $~ ~ ~ \displaystyle C_{N}(z)=- \lg\left( \frac{|\Phi_N(z)-\mathrm{SuZex}(z)|}{|\Phi_N(z)|+|\mathrm{SuZex}(z)|} \right)$

For $N=80$, the map of the agreement by (15) is shown in figure 7. Inside the loop, the approximation (13) gives at least 15 decimal digits, and the deviation in the 16th digit (not plotted) should be attributed to rounding errors (in particular in the evaluation of the Taylor coefficients $f$), not to the lack of terms in the Taylor polynomial (13).

## Numerical implementation

The numerical implementation SuZex.cin in C++ of the function SuZex is constructed on the basis of the approximations described above.

For $|\Im(z)|\!>\!8$, the expansion $Q$ is used.

For small values $|z|<1.6$, the Taylor expansion (4) at zero is used, truncated after the 96th power of $z$; id est, $P_{96}(z)$ is used as the approximation of $\mathrm{SuZex}(z)$. For positive $\Re(z)$ with $|\Im(z)|\!<\!1.5$, the iteration $\mathrm{zex}^n\!\big( P_{96}(z\!-\!n)\big)$ is used as the approximation of $\mathrm{SuZex}(z)$, with integer $n \!\approx\! \Re(z)$.

If neither is the case, then for $|z\!+\!12\!-\!x_1|<8.1$ the approximation $\Phi$ is used; and, for positive $\Re(z)$ with $|\Im(z)|\!<\!1.5$, the iteration $\mathrm{zex}^n\!\Big( \Phi(z\!-\!n)\Big)$ is used as the approximation of $\mathrm{SuZex}(z)$, with integer $n \!\approx\! \Re(z)$.

For the remaining cases, id est $\Re(z)<-14$, again the asymptotic expansion $Q$ is used.

The approximations described above have wide areas of overlap, where at least two or three of them provide at least 15 correct decimal digits. This allows the implementation to be used without any further worries about the error of the approximation. SuZex can then be used like any other special function, with known behavior, known asymptotics and an efficient algorithm for its evaluation available.
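The branching above is easy to mimic for real arguments. A minimal sketch under stated assumptions (Python instead of the article's C++ SuZex.cin; it builds on the 17-coefficient Horner evaluator P from the earlier sketch, so its accuracy is a few digits below the quoted 15):

```python
import math

# P(z) is the Horner evaluator of eq. (4) defined in the earlier sketch.

def zex(z):
    """Transfer function (1): zex(z) = z * exp(z)."""
    return z * math.exp(z)

def suzex_real(x):
    """SuZex(x) for real x: step back by an integer n so that x - n lands
    where the Taylor polynomial is usable, evaluate P there, then return
    with n applications of the transfer equation (5).
    Note: SuZex itself exceeds the double range already near x = 4."""
    n = max(0, math.ceil(x))
    y = P(x - n)
    for _ in range(n):
        y = zex(y)
    return y

print(suzex_real(0.0))   # 1.0, condition (3)
print(suzex_real(1.0))   # e, since F(1) = zex(F(0)) = zex(1)
print(suzex_real(-1.0))  # about 0.56714 = LambertW(1), by (6)
```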
2019-11-19 15:24:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8613914847373962, "perplexity": 518.9155024728268}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00228.warc.gz"}
http://thegrantlab.org/bio3d/reference/core.find.html
Perform iterated rounds of structural superposition to identify the most invariant region in an aligned set of protein structures.

core.find(...)

# S3 method for pdbs
core.find(pdbs, shortcut = FALSE, rm.island = FALSE, verbose = TRUE, stop.at = 15, stop.vol = 0.5, write.pdbs = FALSE, outpath = "core_pruned", ncore = 1, nseg.scale = 1, progress = NULL, ...)

# S3 method for default
core.find(xyz, ...)

# S3 method for pdb
core.find(pdb, verbose = TRUE, ...)

## Arguments

pdbs: a numeric matrix of aligned C-alpha xyz Cartesian coordinates, for example an alignment data structure obtained with read.fasta.pdb or pdbaln.

shortcut: if TRUE, remove more than one position at a time.

rm.island: remove isolated fragments of less than three residues.

verbose: logical, if TRUE a "core_pruned" directory containing 'core structures' for each iteration is written to the current directory.

stop.at: minimal core size at which iterations should be stopped.

stop.vol: minimal core volume at which iterations should be stopped.

write.pdbs: logical, if TRUE core coordinate files, containing only core positions for each iteration, are written to a location specified by outpath.

outpath: character string specifying the output directory when write.pdbs is TRUE.

ncore: number of CPU cores used to do the calculation. ncore > 1 requires the package 'parallel' to be installed.

nseg.scale: split input data into the specified number of segments prior to running the multiple-core calculation. See fit.xyz.

progress: progress bar for use with the shiny web app.

xyz: a numeric matrix of xyz Cartesian coordinates, e.g. obtained from read.dcd or read.ncdf.

pdb: an object of type pdb as obtained from function read.pdb, with multiple frames (>= 4) stored in its xyz component. Note that the function will attempt to identify C-alpha and phosphate atoms (for protein and nucleic acids, respectively) on which the calculation should be based.

...: arguments passed to and from functions.

## Details

This function attempts to iteratively refine an initial structural superposition determined from a multiple alignment. This involves iterated rounds of superposition, where at each round the position(s) displaying the largest differences is (are) excluded from the dataset. The spatial variation at each aligned position is determined from the eigenvalues of their Cartesian coordinates (i.e. the variance of the distribution along its three principal directions). Inspired by the work of Gerstein et al. (1991, 1995), an ellipsoid of variance is determined from the eigenvalues, and its volume is taken as a measure of structural variation at a given position.

Optional "core PDB files" containing core positions, upon which superposition is based, can be written to a location specified by outpath by setting write.pdbs = TRUE. These files are useful for examining the core filtering process by visualising them in a graphics program.

## Value

Returns a list of class "core" with the following components:

volume: total core volume at each fitting iteration/round.
length: core length at each round.
resno: residue numbers of core residues at each round (taken from the first aligned structure) or, alternatively, the numeric indices of core residues at each round.
step.inds: atom indices of core atoms at each round.
atom: atom indices of core positions in the last round.
xyz: xyz indices of core positions in the last round.
c1A.atom: atom indices of core positions with a total volume under 1 Angstrom^3.
c1A.xyz: xyz indices of core positions with a total volume under 1 Angstrom^3.
c1A.resno: residue numbers of core positions with a total volume under 1 Angstrom^3.
c0.5A.atom: atom indices of core positions with a total volume under 0.5 Angstrom^3.
c0.5A.xyz: xyz indices of core positions with a total volume under 0.5 Angstrom^3.
c0.5A.resno: residue numbers of core positions with a total volume under 0.5 Angstrom^3.

## References

Grant, B.J. et al. (2006) Bioinformatics 22, 2695–2696.
Gerstein and Altman (1995) J. Mol. Biol. 251, 161–175.
Gerstein and Chothia (1991) J. Mol. Biol. 220, 133–149.

## Note

The relevance of the 'core positions' identified by this procedure is dependent upon the number of input structures and their diversity.

## Author

Barry Grant

## See also

read.fasta.pdb, plot.core, fit.xyz

## Examples

if (FALSE) {
##-- Generate a small kinesin alignment and read corresponding structures
pdbfiles <- get.pdb(c("1bg2","2ncd","1i6i","1i5s"), URLonly=TRUE)
pdbs <- pdbaln(pdbfiles)

##-- Find 'core' positions
core <- core.find(pdbs)
plot(core)

##-- Fit on this relatively invariant subset of positions
#core.inds <- print(core, vol=1)
core.inds <- print(core, vol=0.5)
xyz <- pdbfit(pdbs, core.inds, outpath="corefit_structures")

##-- Compare to fitting on all equivalent positions
xyz2 <- pdbfit(pdbs)

## Note that overall RMSD will be higher but RMSF will
## be lower in core regions, which may equate to a
## 'better fit' for certain applications
gaps <- gap.inspect(pdbs$xyz)
rmsd(xyz[,gaps$f.inds])
rmsd(xyz2[,gaps$f.inds])

plot(rmsf(xyz[,gaps$f.inds]), typ="l", col="blue", ylim=c(0,9))
points(rmsf(xyz2[,gaps$f.inds]), typ="l", col="red")
}

if (FALSE) {
##-- Run core.find() on a multimodel PDB file
pdb <- read.pdb('1d1d', multi=TRUE)
core <- core.find(pdb)

##-- Run core.find() on a trajectory
trtfile <- system.file("examples/hivp.dcd", package="bio3d")
trj <- read.dcd(trtfile)

## Read the starting PDB file to determine atom correspondence
pdbfile <- system.file("examples/hivp.pdb", package="bio3d")
pdb <- read.pdb(pdbfile)

## Select calpha coords from a manageable number of frames
ca.ind <- atom.select(pdb, "calpha")$xyz
frames <- seq(1, nrow(trj), by=10)
core <- core.find( trj[frames, ca.ind], write.pdbs=TRUE )

## Have a look at the various cores: "vmd -m core_pruned/*.pdb"

## Let's use a 6A^3 core cutoff
inds <- print(core, vol=6)
write.pdb(xyz=pdb$xyz[inds$xyz], resno=pdb$atom[inds$atom,"resno"], file="core.pdb")

##- Fit trj onto starting structure based on core indices
xyz <- fit.xyz( fixed = pdb$xyz, mobile = trj,
                fixed.inds = inds$xyz, mobile.inds = inds$xyz )

##write.pdb(pdb=pdb, xyz=xyz, file="new_trj.pdb")
##write.ncdf(xyz, "new_trj.nc")
}
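The Details section above describes the iteration only in prose; the sketch below spells that loop out (written from scratch in Python for illustration; it is not bio3d's R source, and both the Kabsch-style superpose helper and the ellipsoid-volume formula $\frac{4}{3}\pi\sqrt{\lambda_1\lambda_2\lambda_3}$ are assumptions of this sketch):

```python
import numpy as np

def superpose(xyz, core):
    """Least-squares fit of every structure onto the first, using only
    the current core positions (Kabsch algorithm)."""
    ref, out = xyz[0], xyz.copy()
    for k in range(1, xyz.shape[0]):
        mob = xyz[k]
        mc, rc = mob[core].mean(0), ref[core].mean(0)
        H = (mob[core] - mc).T @ (ref[core] - rc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        out[k] = (mob - mc) @ R + rc
    return out

def core_find_sketch(xyz, stop_at=15):
    """xyz: array of shape (n_structures, n_positions, 3). Each round:
    superpose on the current core, score each position by the volume of
    its 'ellipsoid of variance', and exclude the most variable one."""
    core = list(range(xyz.shape[1]))
    while len(core) > stop_at:
        fitted = superpose(xyz, core)
        vols = [4.0 / 3.0 * np.pi * np.sqrt(max(
                    np.prod(np.linalg.eigvalsh(np.cov(fitted[:, i, :].T))), 0.0))
                for i in core]
        del core[int(np.argmax(vols))]
    return core
```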
2020-10-25 07:28:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3138388693332672, "perplexity": 7686.885178279563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888402.81/warc/CC-MAIN-20201025070924-20201025100924-00547.warc.gz"}
https://dmoj.ca/problem/dqu
View as PDF Points: 20 (partial) Time limit: 1.5s Memory limit: 256M Author: Problem types You are given an array of size , with all elements initially equal to . Support the following operations: • Type 1: Given and , increment all with by . • Type 2: Given and , return the sum of all for . #### Input Specification The first line contains integers and , the size of the array and the number of operations to be performed. The next lines each contain integers , the type number of the operation and the parameters and for that operation. #### Output Specification For each operation of type output an integer on its own line, the return value of the operation. #### Sample Input 8 8 2 1 8 1 1 4 2 1 2 2 2 8 1 2 3 1 2 7 1 2 8 2 1 8 #### Sample Output 0 1 0 4 #### Explanation Right before the last operation, . The sum of all for is .
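The inline math in this statement was stripped during extraction, so the exact semantics of the two operation types cannot be recovered from the text; the explanation does indicate that type 2 asks for a sum over a range of the array. Purely as an illustration of the standard tooling for this class of problem (an assumption, not the problem's editorial), a Fenwick tree gives point updates and range sums in O(log n) each:

```python
class Fenwick:
    """1-indexed binary indexed tree: point add, prefix sum."""
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def add(self, i, v):            # a[i] += v
        while i < len(self.t):
            self.t[i] += v
            i += i & (-i)
    def prefix(self, i):            # sum of a[1..i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s
    def range_sum(self, l, r):      # sum of a[l..r]
        return self.prefix(r) - self.prefix(l - 1)

f = Fenwick(8)
f.add(3, 5)                 # a[3] += 5
print(f.range_sum(1, 8))    # 5
```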
2022-12-06 00:50:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3527447581291199, "perplexity": 1092.7683859300919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711064.71/warc/CC-MAIN-20221205232822-20221206022822-00017.warc.gz"}
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Shore_durometer
# Shore durometer

The Shore durometer is a device for measuring the hardness of a material, typically of polymers, elastomers, and rubbers.[1] Higher numbers on the scale indicate a greater resistance to indentation and thus harder materials. Lower numbers indicate less resistance and softer materials. The term is also used to describe a material's rating on the scale, as in an object having a "Shore durometer of 90".

The scale was defined by Albert Ferdinand Shore, who developed a suitable device to measure hardness in the 1920s. It was neither the first hardness tester nor the first to be called a durometer (ISV duro- and -meter; attested since the 19th century), but today that name usually refers to Shore hardness; other devices use other measures, which return corresponding results, such as for Rockwell hardness.

## Durometer scales

There are several scales of durometer, used for materials with different properties. The two most common scales, using slightly different measurement systems, are the ASTM D2240 type A and type D scales. The A scale is for softer materials, while the D scale is for harder ones. However, the ASTM D2240-00 testing standard calls for a total of 12 scales, depending on the intended use: types A, B, C, D, DO, E, M, O, OO, OOO, OOO-S, and R. Each scale results in a value between 0 and 100, with higher values indicating a harder material.[2]

## Method of measurement

Durometer, like many other hardness tests, measures the depth of an indentation in the material created by a given force on a standardized presser foot. This depth is dependent on the hardness of the material, its viscoelastic properties, the shape of the presser foot, and the duration of the test. ASTM D2240 durometers allow for a measurement of the initial hardness, or the indentation hardness after a given period of time. The basic test requires applying the force in a consistent manner, without shock, and measuring the hardness (depth of the indentation). If a timed hardness is desired, force is applied for the required time and then read. The material under test should be a minimum of 6 mm (0.25 inches) thick.[3]

Test setup for type A & D:[3]

| Durometer | Indenting foot | Applied mass (kg) | Resulting force (N) |
| --- | --- | --- | --- |
| Type A | Hardened steel rod 1.1 mm – 1.4 mm diameter, with a truncated 35° cone, 0.79 mm diameter | 0.822 | 8.064 |
| Type D | Hardened steel rod 1.1 mm – 1.4 mm diameter, with a 30° conical point, 0.1 mm radius tip | 4.550 | 44.64 |

The ASTM D2240 standard recognizes twelve different durometer scales using combinations of specific spring forces and indentor configurations. These scales are properly referred to as durometer types; i.e., a durometer type is specifically designed to determine a specific scale, and the scale does not exist separately from the durometer.
The table below provides details for each of these types, with the exception of Type R.[4]

| Durometer type | Configuration | Diameter | Extension | Spring force[5] |
| --- | --- | --- | --- | --- |
| A | 35° truncated cone (frustum) | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf) |
| B | 30° cone | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf) |
| C | 35° truncated cone (frustum) | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf) |
| D | 30° cone | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf) |
| E | 2.5 mm (0.098 in) spherical radius | 4.50 mm (0.177 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf) |
| M | 30° cone | 0.79 mm (0.031 in) | 1.25 mm (0.049 in) | 0.765 N (78.0 gf) |
| O | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf) |
| OO | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 1.111 N (113.3 gf) |
| DO | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf) |
| OOO | 6.35 mm (0.250 in) spherical radius | 10.7–11.6 mm (0.42–0.46 in) | 2.54 mm (0.100 in) | 1.111 N (113.3 gf) |
| OOO-S | 10.7 mm (0.42 in) radius disk | 11.9 mm (0.47 in) | 5.0 mm (0.20 in) | 1.932 N (197.0 gf) |

Note: Type R is a designation, rather than a true "type". The R designation specifies a presser foot of 18 ± 0.5 mm (0.71 ± 0.02 in) in diameter (hence the R, for radius; obviously D could not be used), while the spring forces and indenter configurations remain unchanged. The R designation is applicable to any D2240 type, with the exception of Type M; it is expressed as Type xR, where x is the D2240 type, e.g., aR, dR, etc.; the R designation also mandates the employment of an operating stand.[4]

Some conditions and procedures that have to be met, according to the DIN ISO 7619-1 standard, are:

• For measuring Shore A the foot indents the material, while for Shore D the foot penetrates the surface of the material.
• Material for testing needs to be in laboratory climate storage at least one hour before testing.
• Measuring time is 15 s.
• Force is 1 kg + 0.1 kg for Shore A, and 5 kg + 0.5 kg for Shore D.
• Five measurements need to be taken.
• Calibration of the durometer is done once per week with elastomer blocks of different hardness.

The final value of the hardness depends on the depth of the indenter after it has been applied for 15 seconds on the material. If the indenter penetrates 2.54 mm (0.100 inch) or more into the material, the durometer reading is 0 for that scale. If it does not penetrate at all, then the reading is 100 for that scale. It is for this reason that multiple scales exist. If the hardness is < 10 °Sh or > 90 °Sh, the results are not to be trusted, and the measurement must be redone with the adjacent scale type.
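The two endpoints just quoted (2.54 mm of travel reads 0; no penetration reads 100) suggest a reading that is linear in indentation depth. A small sketch under that linearity assumption (an illustration only, not a quote from the standard):

```python
def shore_reading(depth_mm: float) -> float:
    """Convert indentation depth to a Shore reading, assuming the scale
    is linear between the stated endpoints: 0 mm -> 100, 2.54 mm -> 0."""
    depth_mm = min(max(depth_mm, 0.0), 2.54)   # clamp to the physical range
    return 100.0 * (1.0 - depth_mm / 2.54)

print(shore_reading(0.0))    # 100.0 (no penetration)
print(shore_reading(2.54))   # 0.0 (full travel)
print(shore_reading(1.27))   # 50.0 (half travel)
```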
Durometer is a dimensionless quantity, and there is no simple relationship between a material's durometer on one scale and its durometer on any other scale, or by any other hardness test.[1]

Shore durometers of common materials:

| Material | Durometer | Scale |
| --- | --- | --- |
| Bicycle gel seat | 15–30 | OO |
| Chewing gum | 20 | OO |
| Sorbothane | 30–70 | OO |
| Rubber band | 25 | A |
| Door seal | 55 | A |
| Soft wheels of roller skates and skateboard | 78 | A |
| Hydraulic O-ring | 70–90 | A |
| Hard wheels of roller skates and skateboard | 98 | A |
| Ebonite rubber | 100 | A |
| Solid truck tires | 50 | D |
| Hard hat (typically HDPE) | 75 | D |
| Cast urethane plastic | 80 | D |

## ASTM D2240 hardness and elastic modulus

Using linear elastic indentation hardness, a relation between the ASTM D2240 hardness and the Young's modulus for elastomers has been derived by Gent[6] and by Mix and Giacomin.[7] Gent's relation has the form

${\displaystyle E={\frac {0.0981(56+7.62336S)}{0.137505(254-2.54S)}},}$

where ${\displaystyle E}$ is the Young's modulus in MPa and ${\displaystyle S}$ is the ASTM D2240 type A hardness. This relation gives a value of ${\displaystyle E=\infty }$ at ${\displaystyle S=100}$ but departs from experimental data for ${\displaystyle S<40}$. Mix and Giacomin derive comparable equations for all 12 scales that are standardized by ASTM D2240.[7]

Another relation that fits the experimental data slightly better is[8]

${\displaystyle S=100\operatorname {erf} (3.186\times 10^{-4}~E^{1/2}),}$

where ${\displaystyle \operatorname {erf} }$ is the error function, and ${\displaystyle E}$ is in units of Pa.

A first-order estimate of the relation between ASTM D2240 type D hardness (for a conical indenter with a 15° half-cone angle) and the elastic modulus of the material being tested is[9]

${\displaystyle S_{\text{D}}=100-{\frac {20(-78.188+{\sqrt {6113.36+781.88E}})}{E}},}$

where ${\displaystyle S_{\text{D}}}$ is the ASTM D2240 type D hardness, and ${\displaystyle E}$ is in MPa.

Another neo-Hookean linear relation between the ASTM D2240 hardness value and the material elastic modulus has the form[9]

${\displaystyle \log _{10}E=0.0235S-0.6403,\quad S={\begin{cases}S_{\text{A}}&{\text{for}}~20<S_{\text{A}}<80\\S_{\text{D}}+50&{\text{for}}~30<S_{\text{D}}<85\end{cases}}}$

where ${\displaystyle S_{\text{A}}}$ is the ASTM D2240 type A hardness, ${\displaystyle S_{\text{D}}}$ is the ASTM D2240 type D hardness, and ${\displaystyle E}$ is the Young's modulus in MPa.

## Patents

• US patent 1770045, A. F. Shore, "Apparatus for Measuring the Hardness of Materials", issued 1930-07-08

## References

1. "Shore (Durometer) Hardness Testing of Plastics". Retrieved 2006-07-22.
2. "Material Hardness". CALCE and the University of Maryland. 2001. Archived from the original on 2007-07-07. Retrieved 2006-07-22.
3. "Rubber Hardness". National Physical Laboratory, UK. 2006. Retrieved 2006-07-22.
4. "DuroMatters! Basic Durometer Testing Information" (PDF). CCSi, Inc. Retrieved 29 May 2011.
5. "Standard Test Method for Rubber Property—Durometer Hardness". ASTM International. November 2017. p. 5. doi:10.1520/D2240-15E01.
6. A. N. Gent (1958), On the relation between indentation hardness and Young's modulus, Institution of Rubber Industry, Transactions, 34, pp. 46–57. doi:10.5254/1.3542351
7. A. W. Mix and A. J. Giacomin (2011), Standardized Polymer Durometry, Journal of Testing and Evaluation, 39(4), pp. 1–10. doi:10.1520/JTE103205
8. British Standard 903 (1950, 1957), Methods of testing vulcanised rubber, Part 19 (1950) and Part A7 (1957).
9. Qi, H. J., Joyce, K., Boyce, M. C. (2003), Durometer hardness and the stress-strain behavior of elastomeric materials, Rubber Chemistry and Technology, 76(2), pp. 419–435. doi:10.5254/1.3547752
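As a numeric illustration of the Gent relation from the hardness–modulus section above (the formula is exactly the one quoted there; the sample hardness values are arbitrary):

```python
def gent_modulus(shore_a: float) -> float:
    """Young's modulus E in MPa from ASTM D2240 type A hardness S,
    via Gent's relation quoted above; it diverges as S -> 100 and is
    stated to depart from experiment for S < 40."""
    return 0.0981 * (56.0 + 7.62336 * shore_a) / (0.137505 * (254.0 - 2.54 * shore_a))

for s in (40, 60, 80, 95):
    print(s, round(gent_modulus(s), 2))  # roughly 1.7, 3.6, 9.4, 44 MPa
```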
2021-10-24 14:04:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003653287887573, "perplexity": 3595.7119548015353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00098.warc.gz"}
https://www.jkong.co.uk/category/finance/
Browse Category # Different World (Thoughts on COVID-19) I remember a recent trip where my mum came up to London to visit. It was bright, and we had a relaxed lunch in a Chinese restaurant with a glass ceiling. I thought of how my routine had significantly changed – I would naturally prioritise spending time with her, instead of working an extra hour, studying another hour of German, practicing another set of logic puzzles or just scrolling through eBay. Similarly, the recent COVID-19 situation has changed things quite a fair bit for me, and I imagine the changes I’m exposed to are still relatively small compared to what many other people face. The obvious change is working from home, which I’m not particularly a fan of (human contact is important, and pair programming remotely turns out to be really hard – especially for me if I’m doing this more in an instructional capacity, as I like to have newer team members code while I jump in as needed), but I understand it’s necessary given the circumstances. Being able to work from home is certainly a privilege of software engineering, among other jobs. In hindsight, having the ability to work at all is also a privilege, in light of the widespread shop closures that came into effect. The UK government’s proposals should help, but it’s still a 20 percent pay cut (more, if one is earning more than 30,000 a year) – and I don’t know how exactly things are going to work for people on zero-hours contracts or the self-employed. There is then social distancing: I’d lump the aforementioned shop closures into it, as if it is decided that civil liberties should not be restricted, one can still give strong pushes by decreasing the desirability of engaging in the behaviours that one doesn’t want. I do have a few groups of friends that I meet outside of work, but so far at least the impact hasn’t been too bad – we’ve found alternatives such as doing it digitally which are to me inferior but are still better than nothing. The shop closures are annoying, but in any case I do most of my shopping, apart from groceries and food, online. It hasn’t massively changed a majority of my weekend programmes, because of what activities are involved (usually reading, puzzles, computer games or walks – note that the last of these can still go ahead subject to social distancing). The relative absence of people (which is good!) is highly noticeable. Travel restrictions are another factor. I had to cancel my Singapore trip over Easter, because the government requires returning Singapore citizens to spend 14 days at home. This is reasonable, but is of course problematic if one’s intended stay is less than 14 days. I didn’t have any other immediate plans to travel, though would likely have drafted some for the bank holidays, perhaps to Germany or Zurich again. In a sense the timing of my Zurich trip was good: I returned just before the number of cases spiked. That’s probably going to be on the backburner – and I wonder if (assuming the airlines survive) prices will be reasonable as we get past the peak. Grocery shopping has changed as well: a year ago hand soap would have been a mundane entry on a list while I now actively look out for it. I also tried to execute a Sainsbury’s delivery order earlier this week, and saw no availability for three weeks. It’s unclear if the worst has already passed – I’m seeing slightly more stock, maybe because most of the stockpiling has already happened, and supermarkets are actively ramping up the capacity of their food-stocking chains. 
Incidentally this makes me think of some variant of the Prisoner’s Dilemma from game theory, where cooperating is not stockpiling and defecting is stockpiling. It’s best if no one stockpiles (since stockpiling has costs: the cost of carry – organising one’s supplies, finding space for them, making sure use-by dates are observed safely – and the opportunity cost of the money going into the supplies, assuming that investment performance outpaces inflation). Normally, this is the case. However, as more people stockpile, seeking to stockpile suddenly becomes rational: not stockpiling may mean that one is unable to obtain important provisions. The game is iterated, as well: one presumably visits supermarkets multiple times, and can see the state of supermarket shelves and derive an estimate of what people are buying. There is also the recent fall of and heightened volatility on the stock market. I use a broker called De Giro that has a feature to send an email every time a position goes down by 10% (I think additively). I’ve seen a couple of cheerful 40% depreciation emails for some of the riskier assets there. It could be worse: IAG (owning BA, among others) stock is down 72 percent, and Lufthansa is down 65 percent. Interestingly Singapore Airlines is “only” down 40 percent. The numerical magnitude and speed of the damage here, especially if considered in US dollars (which strengthened aggressively) is probably a factor of 10 larger than what I’ve dealt with in the past. In a sense, the speed highlights the perception of loss, because a more drawn-out fall tends to be partially insulated by fresh contributions and pound-cost averaging. However, despite the scale of the numbers involved, they seem a bit further removed (an image of a bonfire of notes equal to the five figure sum would be more concerning). We’ll see how things go. I guess I was too young to remember and/or make any significant decisions when dealing with SARS in 2002-2003: I was in Primary 6 then, and remember schools being closed briefly. Even when schools reopened there was twice-daily temperature taking and values had to be documented. The numbers of cases were much smaller (I think four-figures globally), though I remember it being somewhat more dangerous (contrast with UK government advice – and this is indeed true – that for most people COVID-19 will be mild). One risk of FIRE pursuits and/or aggressive savings strategies is for some reason not being able to spend the money one has saved up later on. I’d say the financial strategy I’ve been using from 2016 to early 2020, though not at popular FIRE blogger levels, involves an aggressive savings strategy, and to some extent that risk has manifested. I certainly wouldn’t say it has caused a significant shift in my thinking, but it’s a good reminder that stockpiling of wealth as an end to itself tends to be unhelpful. # Risks in Playing the Long Game Monevator is one of the personal finance sites that I read fairly frequently. They have a fairly good weekly column that includes a number of interesting links, both about personal finance as well as on other topics like psychology or recent news events. The article I read yesterday begins as follows: Unlike every other personal finance blogger around, I’ve started 2020 wondering whether it’s time I spent more money. I would not call myself a personal finance blogger, but I had thoughts along similar lines when planning my budget for 2020. 
Eventually, I decided yes, and expanded the budget at the end of the planning session (and this is already after a 50% increase over 2018). My reasoning was not too different from the author’s (and the logic behind having a need for such an argument was similar as well – saving, which is a form of delaying gratification, seems to come naturally as a default to me). In theory at least, I can work out how much £1 spent today would cost me in retirement if I was to do that at 65 if I make some assumptions. There are 37 years to go there, and if we use a 4.5% real (that is, accounting for inflation) return that becomes about £5.10. In other words, I could buy roughly five times the number of goods I could today, if I was willing to wait 37 years. However, a large problem with excessively delaying gratification (both for money and in general) is that the expected benefit may not materialise – this is known as an interruption risk. These risks can typically take two forms: 1. Collection risk, where the projected reward doesn’t actually become available. In context this means that the £1 I invested doesn’t become the inflation-adjusted equivalent of £5.10 37 years down the road. 2. Termination risk, where the person delaying gratification does not get to benefit from the projected reward, even though it did become available. In context this means that although the £1 I invested does become £5.10 (or more), I don’t survive 37 more years or I do but my ability to use it is limited or hampered for some reason. The former could arise for various reasons, as my projection was dependent on a 4.5% real return assumption. This is in line with the past 119 years – in fact that was slightly higher at 5%, though I would use 4.5% because of platform and fund fees (see Credit Suisse’s 2019 Global Investment Returns Yearbook). This pattern may not continue going forward if there are large, value-destroying events (e.g. faster than expected climate change effects, nuclear war). Even if the pattern continues, I might not be able to get this return for more local concerns (e.g. aggressive investment taxes as part of left-wing policy, if my fund of choice turns out to be poorly managed). The latter is also very relevant. The author mentions that he’s “within striking distance of 47” and is concerned about planning 40 years ahead when using a compound interest calculator; I’m probably between fifteen or twenty years younger, and the pessimist (or realist?) in me also gets uneasy about putting 40 in that box, let alone the analogous 55-60 (e.g. see James 4:14 – whether you’re a Christian or not I think the sentiment is relatable). Even if I do make it there, it’s possible that I’d be in less of a position to enjoy the money, for time or health reasons. If I had my current financial state at the end of National Service in Singapore, for example, that 9-month period before starting university would have looked quite different and I would have done more things, some of which I’m not in a position to do right now because of work (e.g. I’d probably have done a two-month round-the-world trip). I am still intending to be mostly frugal and saving a fair amount. Travel will likely be the largest source of expenses for me – flights and hotels are not cheap. I also spent quite a lot of money on clothing last year, though I’m trying to avoid that as I’m running up against (reasonable) storage limitations and there are quite a number of clothes that I haven’t worn for a while. 
Thus my plans are nowhere near as far as the author's claim that he'll make little effort to grow his wealth. I don't think I'm in a position where that would be responsible, in that my current savings/portfolio are nowhere near enough to be sufficient for retirement.

# Reactions to the Red Box (UK Budget 2018)

I've noticed that there has ended up being a string of primarily financial posts. This was not intentional, but there happened to be lots of interesting material related to finance that has come up over the past few weeks. Also, a disclaimer: I am not a financial advisor and am not offering professional financial advice. Conduct your own due diligence or seek an advisor if necessary.

The UK government announced its Budget for 2018/2019, though with the caveat that it might be subject to change in the event of a no-deal Brexit (which looks a bit more of a risk each day). I'm a UK taxpayer, and thus changes in relation to tax and relevant allowances interest me; tax is my largest expense every year. As an expatriate I don't have recourse to public funds, so I'm less aware of the changes pertaining to benefits and universal credit. The list below is far from exhaustive; readers should consult a more detailed guide like MoneySavingExpert or even the Budget itself if they want more details, or this Which? article if they're considering optimising their tax affairs.

#### Income and Take-Home Pay

• Personal allowance has increased from £11,850 to £12,500 and the higher rate (40%) band has increased from £46,350 to £50,000.
• National insurance thresholds have risen; the start of the 12% band has increased from £8,424 to £8,632 (by £208), while its end has increased from £46,834 to £50,024 (by £3,190).

I was aware that the Conservative manifesto had pledged to bump the personal allowance and higher rate thresholds to these levels by 2020, so this is one year early. These increases seemed fairly generous – although I will be paying more in NI (notice that £2,982 is now taxed an extra 10 percent and £208 is now taxed 2 fewer percent – in other words I'm paying £294 more) the income tax saving looks pretty good, especially in nominal terms. (Whether Brexit wrecks the value of the pound further is another separate concern.)

A separate consequence of this is that the 62% marginal band has extended to £125,000 as well, as there's more personal allowance to lose. Regardless, the changes are certainly positive.

In general, with inflation being positive, the value of a pound decreases over time. Thus, thresholds should be increased nominally at least in line with inflation if the goal is to implement a 'neutral' policy. I haven't been keeping track of the history of these levels, but the roughly 5.5% and 7.9% bumps in the thresholds are certainly well above inflation (and hopefully above two years of inflation – since the thresholds will be frozen next year).

#### Other Income Sources

• The Personal Savings Allowance remains at £1,000 for basic rate taxpayers, £500 for higher rate and £0 for additional rate.
• The Dividend Allowance remains at £2,000 – note that dividends in a pension or ISA do not count against this limit. Dividend tax rates are unchanged.
• The Trading and Property Allowances remain at £1,000.
• The Rent-a-Room scheme threshold remains at £7,500.

Most things held pretty steady in nominal terms (which means they've gone down slightly in real terms).
That said, if interest rates continue to go up the savings allowance thresholds might quickly become relevant to me (obligatory hello to Marcus). I’m considerably further from the dividend allowance threshold, though again that’s something to watch out for if the markets decide to be exuberant. I haven’t been using the other allowances, though if an opportunity comes up that could be a possibility. #### Securities and Assets • The ISA (tax-advantaged savings account) limit remains at £20,000. • The Capital Gains Allowance was increased slightly, from £11,700 to £12,000. Tax on capital gains is unchanged. • The Lifetime Allowance for pensions was increased in line with inflation, from £1.03M to £1.055M. Not too much to say here. The markets haven’t been so friendly so there’s certainly no extravaganza of crystallising significant capital gains this time around. To be fair, much of my gains in the 2016/17 tax year centered around massive pound weakness. ISA allowance being held where it is is a bit of a damp squib as I max it, but to be fair it is still very generous. #### Consumption Taxes • VAT standard rate remains at 20%. • Air Passenger Duty (APD) remains flat for short-haul flights, but rises by RPI for long-haul flights (roughly £2 for Y, £4 for premium cabins). • Fuel duty is frozen. • Alcohol duty is frozen for beer, “most cider” and spirits. Duty on wine and higher-strength cider increases by RPI. • Tobacco duty increases by RPI + 2%. I don’t smoke, so don’t know too much. Owing to how it’s calculated RPI tends to be higher – and it’s a little frustrating / naughty that most payments coming out of the Treasury (e.g. planned bumps in income tax/pension allowances) seem to be CPI indexed while those going in tend to be RPI indexed. The main reason suggested for the double standards is legacy technical debt, but that doesn’t seem to explain why the inconsistencies seem to consistently resolve in favour of the exchequer. The APD change garners revenue for the Treasury, though it’s a little sad as APD is already incredibly high compared to aviation duties in other countries. That might actually increase the attractiveness of routing through Zurich or one of the German hubs when I fly the London-Singapore route. For comparison, long-haul premium cabin APD ex-UK will be £172. Singapore’s duty is about S$47 (about £27); Zurich weighs in at CHF 35 (also about £27) and Munich at EUR 42 (about £37). I guess I’m mildly affected by the alcohol duty change as I do drink wine, but I don’t drink a lot so the impact is very minimal. I’m not currently affected by the fuel or tobacco duty changes. #### 50p Coin • The Royal Mint will produce a 50p coin to mark Brexit. I’m somewhat of a Brexit bear and remain one (pun not intended). Taken at face value, the inscription (“peace, prosperity and friendship with all nations”) is at least one way in which I could see it possibly having a shot at working out – and assuming Brexit does go ahead is what I hope the government will do. However, the quote is adapted from Thomas Jefferson’s inauguration speech as US President. The continuation is “entangling alliances with none”, which is in some ways apt if the UK is concerned with the EU’s ever closer union – though I’d think the US and UK in a modern-day context certainly are allies! #### Conclusion In terms of personal impact, the Budget felt very much like a constructive or at least benign continuation of previous Budgets. 
That said, I realise I say that from a point of privilege in that I view the status quo as manageable. I’m aware that there have been changes to benefits (notably, universal credit) which have made some worse off. # The Problem of the Golden Basket Somewhat related to the previous post on paydays, I had lunch and then coffee with another friend, and for some reason our discussion turned to personal finance as well. Unlike the last time, I was the one starting with the view that deviates from conventional theory this time. Suppose you have a choice between two savings accounts. One account pays 2% AER and the other pays 1% AER – so £10000 invested for a year would earn £200 in the first account but only £100 in the second. Should you allocate 100% of your savings to the account that pays 2% AER? In general, if all other things were held equal, there probably isn’t much of a reason not to allocate everything to the 2% account. However, the standard for all other things has to be high. In practice, I can see quite a few scenarios where there may be legitimate reasons to allocate some money to the 1% account. One possible reason could be terms of access. Clearly, if the 2% account is a fixed rate bond only allowing access to the money after some amount of time while the 1% is an easy-access account, that provides a clear reason. If one wishes to maintain an emergency fund, for example, the fixed rate bond is probably best avoided even if it pays higher interest. Some savings accounts, while not having a fixed term, place restrictions on withdrawals in terms of frequency or advance notice in exchange for higher rates – again, be careful about using these accounts to park an emergency fund. Another reason could involve how the accounts compound. The annual equivalent rate (AER) refers to how much money one will have after a year. However, if one wants the funds together with some interest before one full year, then precisely how the accounts pay interest becomes significant. If the 1% account compounds monthly while the 2% account compounds annually only, then between one month and one year after the start date the 1% account has more withdrawable interest. This is a variant of the access problem, though this focuses on access to interest as opposed to the principal. This may seem a little short-term minded, but could be interesting if one is engaging in stoozing or has other pressing financial commitments. The amount of money one has is also relevant. Financial institutions can fail; in this case, the UK’s Financial Services Compensation Scheme (FSCS) guarantees up to £85,000 per depositor per authorised bank/building society. There is thus certainly a case for keeping not more than that amount with each bank; if one was fortunate enough for one’s savings to exceed £170,000, finding a third bank seems reasonable. I’ve never seen cash as doing the heavy lifting as far as growing my portfolio was concerned. I’d collect interest as available, but would prioritise safety. If the accounts were held with the same provider, of course, then this argument falls down – even if one has multiple accounts, the FSCS limit is on a per-bank basis. In fact, one has to be careful as some bank brands do share authorisations – meaning that an individual will only get the protection once even if several of these banks fail. In general, the inconvenience that might be caused by failures is something worth considering as well. 
The amount of money one has is also relevant. Financial institutions can fail; in this case, the UK's Financial Services Compensation Scheme (FSCS) guarantees up to £85,000 per depositor per authorised bank/building society. There is thus certainly a case for keeping no more than that amount with each bank; if one were fortunate enough for one's savings to exceed £170,000, finding a third bank seems reasonable. I've never seen cash as doing the heavy lifting as far as growing my portfolio was concerned: I'd collect interest as available, but would prioritise safety. If the accounts were held with the same provider, of course, then this argument falls down – even if one has multiple accounts, the FSCS limit is on a per-bank basis. In fact, one has to be careful, as some bank brands share authorisations – meaning that an individual will only get the protection once even if several of these banks fail.

In general, the inconvenience that might be caused by failures is worth considering as well. The FSCS compensation only applies if the bank is suitably authorised, and even if one's balance is fully covered by the FSCS, claims can take one to four weeks to process. I think I'd be much more comfortable having at least a nontrivial amount in a separate account (two to three months' expenses, ideally) if possible.

Customer service is another factor. I'd probably weight it more heavily for more complex products – however, for a savings account it would still be useful.

Furthermore, there are other principles which individuals might find important. MoneySavingExpert, on its savings account best-buy tables, has a section for highly-rated 'ethical savings accounts'. The criteria used by Ethical Consumer (which MSE works with) include concerns like tax avoidance and funding of climate change, though I don't necessarily agree with all of them ("excessive director's remuneration", in particular – if someone is that valuable, it seems unethical to me to artificially depress their salary). Similarly, Islamic banking is necessary for adherents, since interest is forbidden in Islam.

To conclude, there are quite a number of reasons why one might not actually want to put 100% in the higher interest-bearing account. Of course it makes sense ceteris paribus (if all other things were held equal), but that seems unlikely in practice. The standard for all other things being held equal here is high – the access conditions, compounding conventions and account provider all need to match (and that isn't even an exhaustive list).

# Moving Cash Flows

I met a friend for a meal on the weekend, and among other things my friend mentioned that at his company, there was an internal debate over whether payday should be moved forward. This wasn't a debate on the financial ability or willingness of the company to do this, but was instead focused on individual workers' preferences.

My initial reaction was a bit of surprise. I wondered why this was even a debate, as I believed the answer should almost always be yes. This reminded me of the standard time-value-of-money question that I wrote about just over a year ago; being paid the same amount of money slightly earlier, mathematically, seems like an outright win. The UK hasn't had negative interest rates yet – and even in a place like Switzerland, where the bank rate is negative, this isn't typically passed on to most depositors. Cash flow might be a problem if one considers the dollar-today-or-more-tomorrow question; however, this shouldn't be an issue in this set-up. Valid cash-flow scenarios remain valid even if payday is brought forward. In a sense, an early payday creates additional options; it shouldn't invalidate any existing ones.

With a bit more discussion and thought, though, we found that there were indeed valid reasons why one might not want payday to be shifted forward. First, although the mathematical argument makes sense, there are some edge cases around tax liability. If one's salary is close to a marginal rate change and a payment is pushed across a tax year boundary, the amount of tax one pays might change (and can increase).

Also, although we speak of interest as an upside, how much benefit an individual can actually realise may be significantly limited. Some bank accounts pay interest based on the lowest balance on any day in the month, meaning that being paid a few days early yields no benefit. Even if interest is based on the average daily balance, the upside is in most cases small. For a concrete example, 2017 median UK post-tax earnings would be about £1,884.60 per month. If one were paid three days early and stored that in a high-interest current account yielding 5% APR, the additional interest would still be well under a pound (roughly 77 pence at those rates, before any tax on the interest).
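As a sanity check on that figure, a quick sketch (my own, assuming simple daily interest on the early-arriving pay):

```python
# Extra interest from one paycheck arriving three days early.
salary = 1884.60      # median UK post-tax monthly earnings, 2017 (from above)
apr = 0.05            # high-interest current account
days_early = 3
extra = salary * apr * days_early / 365
print(f"extra interest: £{extra:.2f}")   # ≈ £0.77, before any tax on interest
```

Even repeated every month for a year, that's under £10 – hardly a decisive argument on its own.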
Moving away from purely numerical considerations, there are many other plausible reasons too. Clearly, departing from an existing routine may affect one's own financial tracking. I find this alone a little flimsy (surely one's tracker should be adaptable to variations arising from December and/or weekends?). That said, if one is unlikely to derive much benefit from the money coming early (and it seems like in most cases there indeed wouldn't be much benefit), the change would likely seem unnecessary.

Another scenario could be if one has many bills or other payments paid by direct debit, and cannot or does not want to pay all of them. In that case, deciding precisely where the money goes could be significant – for example, if one is faced with a decision to lose fuel or premium TV in winter. This is probably not the right system to handle a situation like that, but if one wishes to only make some payments, then an unexpected early payday could mess the schedule up. Somewhat related might be joint accounts in households where there are financial disputes.

Taking advantage of an early payday also requires self-control. Consider that if one is living paycheck-to-paycheck, while an early payday might ease financial pressure, it also means that the time to the next paycheck is longer than normal (unless that is also shifted forward). This needs to be dealt with accordingly.

If you asked me whether I'd like payday to be shifted forward, I'd almost certainly say yes. Our discussion went to a further hypothetical – would you take a 1% pay cut to have your entire salary for the year paid on January 1st? From a mathematical point of view, you would be comparing a lump sum of $0.99N$ dollars paid now against (for simplicity) twelve payments of $N/12$ dollars paid $1/12, 2/12, \ldots, 12/12 = 1$ years from now. Assuming that you can earn a monthly interest rate of $r$ (as a fraction) and using monthly compounding, after one year we have

$Value_{\text{LumpSum}} = 0.99N (1+r)^{12}$

$Value_{\text{Normal}} = \sum_{i=1}^{12} \left( \frac{N}{12} (1+r)^{12 - i} \right)$

If we set these two to be equal and solve for $r$ (numerically – a sketch follows below), we get a break-even point of $r \approx 0.00155$. This works out to an annual rate of about $1.87\%$. This is higher than best-buy easy-access accounts at the time of writing (MoneySavingExpert identifies Marcus at 1.5%). You can beat this with fixed-rate deposits, and probably beat it through P2P loans, REITs and equities – though more risk is involved. I think I could see that being a yes for me, though I'm not entirely sure I'd have the self-control required to manage it properly!
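Here's a minimal numerical sketch of that break-even computation – plain bisection, with $N$ set to 1 since it cancels out:

```python
# Break-even monthly rate r: a 1% pay cut taken as a Jan-1 lump sum vs twelve
# normal monthly paychecks, both valued at the end of the year (formulas above).
def gap(r, n=1.0):
    lump = 0.99 * n * (1 + r) ** 12
    normal = sum((n / 12) * (1 + r) ** (12 - i) for i in range(1, 13))
    return lump - normal

lo, hi = 1e-9, 0.05          # gap(lo) < 0 < gap(hi), so bisection applies
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)

print(f"break-even monthly rate ≈ {lo:.5f}")              # ≈ 0.00155
print(f"annualised ≈ {(1 + lo) ** 12 - 1:.3%}")           # ≈ 1.876%, the 'about 1.87%' above
```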
# Retirement by 40?

I overheard a conversation a few days ago on the near-impossibility of retiring by forty. It is understandable (consider 40 in relation to standard benchmarks for retirement – a private pension can be withdrawn at 55 and the state pension is given at 65, and these numbers are trending upwards). I'm not sure I agree, though; there exist quite a number of examples of people who have done this [1, 2]. Meta-analyses disagree (against: [3], in favour: [4]). It's true that internet and media sources may not be reliable, but in any case one can perform the relevant analysis. Even given the option, I'm not certain I would take it. It's arguable that I had a taste of retirement during the two-odd months between submitting my Masters thesis at Imperial and starting full-time at Palantir, and I didn't find it that easy to fill my time after a while. I'd probably stand a better chance now, having reignited a couple of interests in things apart from computer science.

Anyway, let's get to the analysis. One simple way of decomposing this problem involves:

1. Figuring out the amount required before retirement is feasible, and
2. Figuring out the required savings rate to reach the result obtained in step 1.

For the first point, there is the well-known 4% rule. Suppose you withdraw 4% of your portfolio in the year when you retire, and then adjust your withdrawals for inflation every year. Then, your portfolio will last for at least 30 years in 96% of historical scenarios. I have some issues with this, biasing in both directions. I think 4% seems lower than is reasonable, for several reasons:

• The idea that one blindly draws the stipulated salary even when the market is down seems absurd. Really, one should factor in market returns when making one's decisions about income, as in [5 – note, technical!].
• The study assumes that the portfolio alone supports the retiree; yet, a side hustle of some kind may be relevant, and at least in the UK the state pension could kick in once the retiree reaches 65 (or whatever the age is at that time).

Yet, there are certain reasons to adjust the figure downwards too:

• The life expectancy of someone who's 40 is probably higher than 30 more years (especially someone who's considering retirement at 40!), so 30 years isn't enough as a baseline.
• The study used the US markets, which have been performing very well. Of course, one can decide to invest exclusively in the US markets, but that tends to introduce currency risks too.
• The study in question does not account for fund and platform fees. These can be kept quite low (I estimate my own portfolio operates at about 0.22%, and some of this is by choice because I hold some active and smart-beta funds, along with indices that are a bit more exotic) but they invariably chew into returns.

It seems like a reasonable rule of thumb, though I wouldn't treat the figure as authoritative. One needs to estimate asset returns and inflation for both steps 1 and 2; this can be partially simplified by working everything in real terms, though there is a risk that, owing to high inflation, increasing one's savings in line with inflation proves untenable. Typically, one relies on historical data; for instance, UK stocks have returned a real CAGR of about 5.2% from 1900 to 2011 [6]. Post-retirement sequence-of-returns risk can prove troublesome, although the variance might be smoothed out over a longer period.

Notice that, assuming one starts from zero and uses the 4% (or any constant-percentage) model, the only remaining input once asset returns and inflation have been factored in is the savings rate – the level of expenditure needed follows from it. I guess one could introduce another factor for decreased spending upon retirement! An example of a concrete calculation (inclusive of a link to a spreadsheet) can be found in [7]. Using the examples there (5% real returns, 4% withdrawal rate) and bearing in mind that I have 14 years till I'm 40, that clocks in at a smidge above 55 percent – a sketch of the arithmetic follows below.
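For concreteness, here's my own sketch of that standard savings-rate calculation (assuming end-of-year contributions from a standing start, with everything in real terms):

```python
# Required savings rate: save a fraction s of income/year for 14 years at 5%
# real, targeting a pot of 25x annual spending (the 4% rule); spending = 1 - s.
real_return, years, withdrawal_rate = 0.05, 14, 0.04

# Future value of saving 1/year for `years` at `real_return` (end-of-year):
fv_factor = ((1 + real_return) ** years - 1) / real_return

# Solve s * fv_factor = (1 - s) / withdrawal_rate for s:
s = 1 / (withdrawal_rate * fv_factor + 1)
print(f"required savings rate ≈ {s:.1%}")   # ≈ 56%
```

Strictly this comes out at about 56% under these conventions; the exact figure moves a little with assumptions about contribution timing.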
For an individual, though, there's probably a somewhat easier method to determine whether retirement by 40 (or early retirement generally) is feasible:

1. As above – figure out the amount required.
2. Given one's past saving and investment patterns, estimate the amount one is likely to have at 40 if one continues to behave in the same way.

Of course, we need to make the same assumptions involved in figuring out the value computed in step 1. We also still can't get away from estimating inflation or market returns in step 2. However, the previous calculation for step 2 assumes a constant savings rate; with this method it is a lot simpler to adjust the model to account for events peculiar to one's own situation (such as long CDs maturing, stock options, vesting of bonuses, known large expenses, etc.). We then compare the figures in steps 1 and 2; there is of course some wiggle room. I think there's a distinction to be drawn between deciding that one is financially independent and pulling the retirement trigger, though that's perhaps a separate discussion topic [8]. I certainly would be interested in the first, but not the second at this time (and, hopefully, even at 40).

One can even switch the method up a bit further:

1. Figure out the amount one is likely to have at 40 (step 2 above).
2. Figure out the withdrawal amount one can derive from that, and decide if that's feasible.

Again, we don't get away from assumptions concerning inflation or market returns. Deriving values in step 2 gets tricky; one can always use the 4% rule or assume some other constant factor. It's worth saying (for all of the methods) that building in fudge factors to leave some leeway for underwhelming market returns is probably a good idea, since getting back into the workforce after a long spell of early retirement might prove difficult.

Personally, I'm very fortunate in that retiring by 40 should be possible if I decide to push hard on it. It's difficult to say, though – past performance is not an indicator of future performance, and I'm at a point where I'd say my past spending is also not an indicator of future spending. I think I'm pretty frugal, but I don't entirely fancy maintaining, throughout, the same degree of strictness I've been running with in my university years. It's certainly possible, but also definitely not easy, and I'm not sure it's what I'd want to do.

# Inflation and the Substitution Effect

I sometimes pick up my groceries at Iceland (the supermarket, not the country). When I was there today, I noticed a bunch of frozen pizzas, typically on sale at £1 each, now on sale at 2 for £2.50. I thus bought zero of them, opting for a substitute (that turned out to be pretty good!). This is one example of a shortcoming of the generally accepted way of measuring inflation (via a consumer price index).

Inflation refers to an increase in the general prices of goods and services (in nominal terms). With the devaluation of the pound after the Brexit vote, I've experienced imported inflation (because a pound buys fewer dollars/yen etc., and overseas suppliers of goods want to be paid in their currency). Of course, one problem is how one measures "general prices of goods and services". Typically, inflation is measured via changes in a consumer price index. This is an aggregate of the prices of a supposedly representative basket of goods and services. The basket is reviewed periodically, as some goods become more (or less) frequently consumed. For example, in the UK, this is managed by the Office for National Statistics. The 2016 basket is listed here, and changes included the addition of video game downloads and the removal of nightclub entry.
Of course, individuals' spending patterns can be very different from the "average". One can imagine a personal inflation rate that could be very different from CPI. For example, I don't drive, so a significant increase in the price of cars might not affect me very much. (There could be some knock-on effects, e.g. if the cost of minicab rides increases.) Similarly, an increase in the cost of recreational boats (which are part of the basket) would have little impact on me. Conversely, I consume a fair amount of potatoes, so a cost increase there could have an outsize impact on me…

It probably won't, actually. If potatoes become expensive, I will eat fewer potatoes and more rice or noodles. Something similar actually happened after the Brexit vote. I used to enjoy an occasional instant ramen treat from the Japan Centre in London. However, with the pound falling almost 20 percent against the yen in the wake of Brexit (from 160 to about 130), prices were increased significantly. In many cases, the increases exceeded the roughly 23% implied by the currency move, perhaps because they wanted to avoid further increases if the pound fell further. GBPJPY is now about 149, but the GBP prices haven't moved, possibly for the same reason. In any case, I've switched to eating more of other forms of carbohydrates.

Note that the above doesn't always work, as goods actually need to have a reasonable substitute. Medical services, (possibly imputed) rent and education come to mind. There are opportunities for geographic arbitrage here (e.g. travelling to Brazil as a medical tourist, living in an RV, or attending university in continental Europe, respectively), but these tend to have higher barriers to entry.

This is the key intuition behind chained CPI; it aims to account for people consuming different goods and services as prices change. This is currently being considered in the US as part of Trump's tax reform (and would, in the future, save the government money by reducing tax bracket adjustments – chained CPI would be lower than CPI). Of course, computing chained CPI is a hard problem. For example, there was a parasite problem in salmon farms in early 2017, causing supply to fall. (Again, this was another instance where my consumption habits changed.) Is trout an acceptable substitute? Cod? Chicken? Tofu? Any kind of food? I'd answer yes to the first and kind-of to the second. For the last three, that would probably only be the case in emergency circumstances. There were similar problems with an iceberg lettuce shortage in early 2017, and yet people continued to buy lettuce in spite of prices almost tripling. Furthermore, one typically only gets to witness substitution effects after the fact. If the price of lettuce triples, it's difficult to predict how many people would switch or partially switch to substitutes. In practice, statistical offices usually come up with a preliminary estimate and then revise it when more data become available.

As an individual who has responded to the substitution effect quite a few times, especially since I moved to the UK, I think chained CPI makes sense. For me at least, the substitution effect is real and powerful. It hasn't received the friendliest of receptions in the US, at least in part for political reasons (it hits the poor and the elderly hard). It does make sense to have a different metric to use for benefits or the state pension, perhaps more tailored to the relevant populations (should video game downloads, generally speaking, really go into an elderly CPI?).
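To see why a chained index typically reads lower than a fixed-basket one under substitution, here's a toy computation – all prices and quantities are made up, and I'm using a Fisher-style average merely as a stand-in for proper chaining:

```python
# Toy example: fixed-basket vs chained-style index when shoppers substitute
# away from the item that got dearer (illustrative numbers only).
base_prices = {"salmon": 10.0, "trout": 8.0}
base_qty    = {"salmon": 3, "trout": 1}
new_prices  = {"salmon": 14.0, "trout": 8.0}   # salmon supply shock
new_qty     = {"salmon": 1, "trout": 3}        # shoppers substitute to trout

def cost(prices, qty):
    return sum(prices[g] * qty[g] for g in prices)

laspeyres = cost(new_prices, base_qty) / cost(base_prices, base_qty)  # fixed basket
paasche   = cost(new_prices, new_qty) / cost(base_prices, new_qty)    # new basket
fisher    = (laspeyres * paasche) ** 0.5                              # chained-style

print(f"fixed basket: {laspeyres:.3f}, chained-style: {fisher:.3f}")
# fixed basket: 1.316, chained-style: 1.213 -- substitution lowers measured inflation
```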
However, in general the chained CPI seems more accurate, and I see no reason not to use it.

# The Time Value of Money

There is an apocryphal interview question I've come across several times:

Would you prefer to have a dollar today or a dollar one year from now?

I do technical interviews for software engineers, hence this isn't a question I would typically ask candidates (even if at times I wish it were – it could be interesting to see how candidates react!). Although it naturally seems like it would fit in a financial context, it seems too easy to be used as a serious question. Anyway, the answer I'd go for is "today", because I could take the dollar and put it in the bank to earn interest. In practice, I'd invest the dollar. Furthermore, inflation is more likely to be positive than not, and this eats away at the value of the dollar. The idea that getting the dollar now is better is known as the time value of money. That said, I can also see legitimate cases why one might argue for "one year from now" – mainly centering around the idea that custody of the dollar is taken care of (assuming we allow this to be assumed!).

Conversely, if you asked me a slightly different question:

Would you prefer to have a dollar today or $100 one year from now?

I would probably go for the hundred dollars, because my investments are very unlikely to increase a hundredfold (unless we have hyperinflation) in a year. As before, there are legitimate cases why one might go against the grain of financial theory – cash-flow issues, in particular. If the amount is reduced a fair bit, such as to $1.09 (for me at least), then the decision gets more difficult. By some kind of intermediate value theorem, there should be some value of $r$ for which I'm indifferent to this question:

Would you prefer to have a dollar today or $(1 + r)$ of today's dollars one year from now?

The conventional theory here is that if I got the dollar today, invested it for a year, and ended up with $(1 + r)$ of today's dollars, then I should be indifferent. I'm not sure I agree in practice, mainly because of the aforementioned cash-flow issues. If six months on I find that I need the dollar, I can take it out and still keep the partial returns. You would need to give me an illiquidity premium. (Of course, I've assumed here that I invest in liquid securities.)

There is also another shortcoming of this question: the size of the capital relative to my net worth would also affect my answer. Rather interestingly, I think I would take the money early for small or large amounts, but consider waiting for medium-sized ones. For small amounts, I would need to remember that a capital inflow is coming in a year's time; the cost of tracking this could exceed the "premium" I derive from waiting. Conversely, for massive amounts, we start delving into the realm of diminishing marginal utility – if I could pick between $1 trillion today and $1.1 trillion this time next year, I'm pretty sure I'd pick the former.

Up to this point, we've also avoided what's known as counterparty risk. The person offering the money might become insolvent within the year. This would bias people towards taking the money now, and is reminiscent of a well-known proverb ("a bird in the hand is worth two in the bush").

Nonetheless, this practice of time discounting is useful when trying to assess the value of investments or securities, such as annuities or structured products. It is also frequently used in discounted cash-flow analyses, which are useful for determining whether business ventures are likely to be profitable.
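A minimal sketch of that discounting machinery, assuming a single constant yearly discount rate (the 3% and 5% rates and the five-year venture below are invented for illustration):

```python
# Present value of a cash-flow stream; cashflows[t] arrives t years from now.
def present_value(cashflows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# The indifference question: at a 3% personal discount rate, $1.03 in a year
# is worth exactly $1 today (before liquidity/counterparty considerations).
print(present_value([0, 1.03], 0.03))                     # 1.0

# A toy DCF: invest 100 now, receive 25/year for five years, discount at 5%.
print(present_value([-100, 25, 25, 25, 25, 25], 0.05))    # ≈ +8.24, NPV positive
```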
I have not had to put this skill into practice yet, though (well, apart from the Computational Finance exam I did at Imperial). In theory, these concepts should be applicable to other resources or assets which (1) appreciate over time, and (2) can accumulate in value without substantial effort. That said, I've struggled to think of assets outside of the standard "investment" universe (stocks, bonds, real estate, commodities, private equity, collectibles?) that satisfy both criteria. I did think of social capital (i.e. friendships, reputations) and human capital (for me, software development and other skills). They don't seem to satisfy (2), though it could be argued that (2) is too strict. For example, by going about my daily routine, I already (hopefully) absorb more and better dev practices. Similarly, one needs to (well, should) do one's homework regarding asset allocation and understanding one's investments. Also, in practice, maintaining one's asset allocation requires rebalancing the portfolio.

# On Paying Oneself First

The expression "pay yourself first" is one very frequently put forth in personal finance circles. In practice, this often involves automatically rerouting some amount of money into a separate savings or investment account whenever a paycheck comes in. Besides the mathematical benefits (compounding; to some extent, risk mitigation via cost averaging if one's investing), I can see how the approach could be useful psychologically, in that it reflects a shift in one's mindset concerning money, as well as a conscious prioritisation of savings and capital growth. I've personally been following this, investing the money into a portfolio of equity trusts and index funds (though I'd recommend building an emergency pot to sustain a few months' expenses first).

I see no reason why this can't be generalised beyond money, though, especially since we do have to manage far more important scarce resources. In particular, I'm looking at time. I've received feedback that I tend to slant towards being outcome-oriented, and this does mean that if an important project is lagging but I see a way to recover, I'll tend to vigorously pursue it – it's thus not too difficult for me to end up spending 70 or even more hours a week on said project (or even 90-plus, as I did in second year at Imperial). I've learned that this is unsustainable, but for me it's still not the easiest decision to drop something (to the point where a friend commended me for actually using some of my leave during the industrial placement!).

If we look at time in the same way, we get 24 hours a day, or 168 a week; things like sleep are of course important, but I tend to see them more as bills or taxes that need to be paid (not paying them tends to lead to interest!). So paying myself first would involve reserving some time for something else; I'd propose personal learning and development as a good candidate for this. This is perhaps unsurprising; I suspect that if I polled people as to what "investing in oneself" entails, many answers concerning education would be forthcoming. Like bonds and equities (hopefully), developing one's skills can lead to future payoffs. I do tend to partition this time into roughly three different domains:
1. Technical development – e.g. paper reading, programming contests, code katas. These are likely to feed back into my software engineering work. I'd probably consider these similar to equity income mutual funds; they (ideally) grow in value reasonably steadily and generate nice payoffs along the way too.

2. General professional development – e.g. writing, finance, tax. These are useful both for software engineering work (from what I can recall, I've written a lot of docs) and for managing my own professional matters. Again, these are generally useful; perhaps they have smaller immediate payoffs than technical development, though I tend to think of them as also very important in the long run. Perhaps these would be more similar to a growth-focused fund, then? Or even BRK-B (or BRK-A; we'll get to that, but I don't have a quarter of a million quite yet!).

3. Random development – singing, photography, etc. These are generally quite fun (I do also enjoy software engineering, writing and finance, but they, like all other domains, tend to have diminishing marginal returns), and might suddenly explode into usefulness or value given the right conditions. Perhaps these are like emerging-market equity funds focused on a specific country: there's certainly a fair bit of variance, but the returns can sometimes be impressive. (If one wishes to take the metaphor even further, deeply out-of-the-money options could be more accurate; they certainly add a bit of fun to a portfolio, too! That said, I have no direct experience with option trading.)

Of course, money and finances are an important thing to manage, and I believe that paying oneself first is a good strategy there. However, it does feel to me that time is even more important. My admittedly equity-heavy portfolio lost around 6 percent during the US election jitters – and this is a portfolio I've built up over the course of about a year and a half – yet I didn't feel much (I was expecting a further fall after the Trump win, though in the end the markets rallied). I'm the kind of person who can get annoyed if I find that a day was wasted – let's not even get started on my reaction to 6% of 18 months (just over a month; 32.87 days). Of course, we have to be careful about jumping to conclusions about what constitutes waste or loss, but I think the point that I find loss of time more painful than loss of money (at least at a small-percentage scale) still holds.

# An Unexpected Economic Computation

Here's a calculation that I found pretty surprising. Have a guess as to its meaning:

$\dfrac{\left( \dfrac{60}{(24-15)} \times 2.40 \right)}{1 - (0.4 + 0.02)} \approx 27.59$

If you guessed something relating to savings, you'd be on the right track; the denominator would probably have clued you in to something along those lines, since $0.4 + 0.02$ is the marginal tax rate that higher-rate earners in the UK face (as of the 2016/17 tax year). You might have figured out, then, that the fraction on top probably refers to some form of time savings expressed in minutes. The $2.40$ is probably a bit more puzzling, especially if you're not from London; from the previous observations, it's the cost of some convenience that saves nine minutes. That's right – you might recognise it as the price of a single-trip Tube fare.

Putting that together, the computation is actually the effective gross hourly rate I'm being paid to walk to my office as opposed to taking the Tube. Specifically, I'm comparing (a) taking the Tube, which takes about 15 minutes plus a single-trip fare of £2.40, and (b) walking, which takes 24 minutes.
We're thus looking at a net rate of £2.40 per 9 minutes. Of course, I pay for the Tube using post-tax money, so we need to factor that in, and to get an hourly rate we scale linearly. Now of course this doesn't mean I'll never take the Tube to work again, or even that I'll take it much less often – the nine-minute difference can be important (say there's a high-priority issue going on, or I'm about to be late for a meeting), and walking can be pretty unpleasant (if I'm very tired, it's late, or it's too cold; that said, I've done this in the current ~0 degree weather and found it fine, at least). Doing this in the mornings would probably be more reasonable and sustainable, as I'm pretty tired by the end of the day. Saving one trip even occasionally is enough of a win, I'd say, and getting some time to clear my head in the mornings is probably not too bad either.

A question could be why I don't use a Travelcard for the computations instead. The weekly one for zones 1–2 costs $33$, and since I largely stay within zones 1 and 2 we can assume that that's all I'll need (I'll ignore the odd trip to Heathrow). I think $16$ or $17$ is probably about the number of rides I'd take in a week if I used it aggressively (two a day, maybe three or four on the weekends). We can re-run the numbers with $\frac{33}{16.5}$, which means a trip costs $2.00$. Our final answer is $22.99$, which is still pretty solid (if you work 42 hours a week, that's a gross annual income of just over $50,000$). Anyway, as it turns out, because I use contactless, if I happen to have a week where I do travel a lot I'll effectively be using one.

Now let's have a look at the monthly and annual tickets. These cost $126.80$ for monthly, or $1,320$ for annual. Assuming that we make $2.357$ trips a day on average (taking the middle of the estimates above), and taking a month as $365.25/12$ days, with the monthly card the cost of a trip falls to $1.767$ and with the annual card it falls to $1.533$. The "gross wage" calculations become $20.31$ and $17.62$ respectively (a sketch reproducing all four figures appears at the end of this post). While not as high, these are still solid amounts: $17.62$ an hour corresponds to just over $38,000$ in gross income assuming a 42-hour week, which would put you at about the 78th percentile of earners according to the ONS data for 2013–2014. I guess this will probably be a bit lower now, with inflation/wage growth, but it's still decent.

Assuming usage is high, it seems that the annual card might be the way to go. However, an issue is that I sometimes need to travel overseas, potentially for a fair chunk of the year (let's say personal plus work travel adds up to two months a year), so the annual one is quite definitely a non-starter: multiply the price of a trip by $6/5$ to factor that in, and you end up with a per-trip figure that's higher than the monthly card's. I have been getting the monthly card in the past, especially when I was a student and it only cost $86.50$, partly because walking to Imperial was going to take a very long time (that route yields savings more on the order of $30$ minutes). Note that while losing a card is a legitimate concern, you can usually recover travel passes using the TfL website.

(N.B. This idea was conceived independently of the currently ongoing Tube strike, though in a sense it's a bit of a bonus that things aren't affected.)
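For completeness, here's a short sketch reproducing the four figures above (the 42% is the 40% higher-rate income tax plus 2% National Insurance from the post; differences in the last decimal place are just rounding):

```python
# Effective gross hourly "wage" for walking instead of taking the Tube:
# post-tax money saved per hour of extra walking, grossed up at marginal rates.
def gross_hourly_wage(fare, minutes_saved=9, marginal_rate=0.40 + 0.02):
    net_per_hour = (60 / minutes_saved) * fare
    return net_per_hour / (1 - marginal_rate)

trips_per_day = 2.357
print(f"single fare:  £{gross_hourly_wage(2.40):.2f}/h")                                    # ≈ 27.59
print(f"weekly card:  £{gross_hourly_wage(33 / 16.5):.2f}/h")                               # ≈ 22.99
print(f"monthly card: £{gross_hourly_wage(126.80 / (trips_per_day * 365.25 / 12)):.2f}/h")  # ≈ 20.3
print(f"annual card:  £{gross_hourly_wage(1320 / (trips_per_day * 365.25)):.2f}/h")         # ≈ 17.62
```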
https://bioinformatics.stackexchange.com/questions/8546/repeatscout-freq-table
# repeatscout freq table

I'm running RepeatScout. I built the l-mer table myfile.freq from myfile.fa. Can anyone tell me what the second and third columns of the output mean? Here's an example:

AAAAAAAAGCGGGA 3 107776875
AAAAAAACTGTATG 10 83440519
AAAAAAAAGGCGTA 3 41037187
AAAAAAACTTGAAT 7 94493612
CATACATGCATGCA 1065 125671338
CATACATGCTTGAA 7 121799834
AAAAAAATCATGCA 10 95493021
AAAAAAAGTCCAGT 3 125127980
AATTCACATGTATG 7 102505668

I read the tutorial that explains how to run the program: https://bix.ucsd.edu/repeatscout/readme.1.0.3.txt. It says: "First, build_lmer_table creates a file that tabulates the frequency of all l-mers in the sequence to be analyzed."

• Welcome to the site Jonny. Did you read the documentation of the software you are using, or any tutorial where it is used? Did you try searching on the internet for this format/output? – llrs Apr 29, 2019 at 14:48
• Yes, I searched and I read some manuals, but I really don't understand what they mean. I know that it builds this table of l-mers from the original genome fasta file. Apr 29, 2019 at 15:13
• If you could link and cite the parts you don't understand, we could help you better. – llrs Apr 29, 2019 at 20:11
• "First, build_lmer_table creates a file that tabulates the frequency of all l-mers in the sequence to be analyzed." I supposed that the second column could indicate the count of each l-mer, but I don't understand what the third column is showing. Apr 29, 2019 at 21:15
• Where are you taking this from? Could you also provide the link? Do not forget to edit the question to include this relevant information. – llrs Apr 29, 2019 at 21:32