https://www.physicsforums.com/threads/differential-equations-problem-2.466293/
# Differential Equations problem 2 ## Homework Statement $2(2x^2+y^2)dx-xydy=0$ ## Homework Equations let $x = yv$, $dx = ydv+vdy$ ## The Attempt at a Solution $(4v^2y^2+2y^2)(ydv+vdy)-vy^2dy=0$ $4v^2y^3dv+4v^3y^2dy+2y^3dv+2vy^2dy-vy^2dy=0$ $4v^2y^3dv+2y^3dv+4v^3y^2dy+vy^2dy=0$ $2y^3(2v^2+1)dv+y^2(4v^3+v)dy=0$ dividing through by $y^2$: $2y(2v^2+1)dv+(4v^3+v)dy=0$ separating the $dv$ and $dy$ terms: $\frac{(2v^2+1)dv}{(4v^3+v)}+\frac{dy}{2y}=0$ $\int{\frac{(2v^2dv)}{4v^3+v}}+\int{\frac{dv}{4v^3+v}}+\int{\frac{dy}{2y}}=0$ Err.. I don't know what to do next.. I can only integrate the 3rd integral.. ## Answers and Replies Dick (Science Advisor, Homework Helper): Two words. Partial fractions. Nah, my memory.. I always forget those! Thank you very much for reminding me! I'll try to review partial fractions first; it seems I have forgotten the process already.
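A worked sketch of the partial-fraction step the reply points to (not part of the original thread), picking up from the separated form above:
$$\frac{2v^2+1}{4v^3+v}=\frac{2v^2+1}{v(4v^2+1)}=\frac{A}{v}+\frac{Bv+C}{4v^2+1},\qquad 2v^2+1=A(4v^2+1)+(Bv+C)v,$$
which gives $A=1$, $C=0$, $B=2-4A=-2$. Integrating term by term,
$$\int\frac{dv}{v}-\int\frac{2v\,dv}{4v^2+1}+\int\frac{dy}{2y}=\ln|v|-\tfrac{1}{4}\ln\!\left(4v^2+1\right)+\tfrac{1}{2}\ln|y|=\text{const},$$
and substituting back $v=x/y$ and exponentiating collapses this to the implicit solution $x^4=C\left(4x^2+y^2\right)$.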
2020-05-27 23:03:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5827592611312866, "perplexity": 5126.279971269681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396163.18/warc/CC-MAIN-20200527204212-20200527234212-00069.warc.gz"}
https://chat.stackexchange.com/transcript/36/2020/9/22
1:29 AM @BalarkaSen I don't know if that is necessarily true. German wikipedia says that that the random variable that determines the number of dirac measures in the simple representation of a point process can take infinity as a value, contradicting english wikipedia 1:51 AM Yup, we need the index-variable to be possibly infinite, see Corollary 6.5 in "Lectures on the Poisson Process" by Last & Penrose 3 hours later… 5:16 AM @orientablesurface the complement of the Cantor Set is open, and, in $\mathbb{R}$, an open set is the union of countably many open intervals. The Cantor Set is not the union of countably many intervals. Which is pretty much what you've said. @user2103480 I'm reading Last & Penrose. Note carefully the measure I started with is a probability measure. The index variable you can take to be just Poisson 5:29 AM When proving theorems about linear codes, can we always assume that the generator matrix is in standard form? 2 hours later… 7:11 AM 0 This question is regarding a step in the proof of Theorem 12.13 as stated in Joy of Cats. The theorem says, Every co-wellpowered cocomplete category with a separator is wellpowered (see this) and complete. In the course of the proof, it is shown that if $\bf{A}$ be a category which is both co-w... Any help is appreciated. 1 hour later… 8:18 AM In the context of the extended real number line, does it make sense to say that the limit of the sequence $\infty, \infty, \infty, \ldots$ is $\infty$? 8:32 AM yes @LeakyNun Oh, but how does the $\epsilon-\delta$ definition work? $\infty - \infty$ is undefined I guess Things like $|\infty - \infty| < \epsilon$ for all $n > N$ won't work @S.D. if you want to put a metric on the extended real number line, you can firstly contract the original metric on the real number line so that $d(x,y) < 1$ for all real $x$ and $y$ and then declare $d(x, \infty) = 1$ and then you can use the epsilon-delta definition also, the extended real number line (if you add in both infinities) is homeomorphic to $[0,1]$ @LeakyNun Ummm, actually in the context in which I'm working, I need to preserve the standard metric on $\mathbb R$ then you can't use the epsilon-delta definition I guess I guess I need to approach the problem differently. A sequence of $\infty$'s won't work 8:46 AM just use the open sets neighbourhoods of $x \in \Bbb R$ are $(x-\varepsilon, x+\varepsilon)$, hence the epsilon part but neighbourhoods of $\infty$ are $(M, \infty]$, so you need to use the $M$-$N$ definition of limit Yeah, I will think about it. Thanks a lot for the help! i.e. $(\lim_{i \to \infty} x_i = \infty) \equiv (\forall M \exists N \forall i : i \ge N \implies x_i \ge M)$ @Alessandro I found a post-rock band called "Lakes of Wada". First album is "Star-finite Complexes" Has to be a troll Seems like great music @LeakyNun Oh, I see your point. Yes, if we use the M-N definition of limit, it makes sense that $\infty$ is the limit of $\infty, \infty, \infty, \ldots$. :) just use filters 8:54 AM @LeakyNun Filters? 9:12 AM @BalarkaSen lol I'll listen to them while doing some topology later Actually they don't have an album just some tracks I tried to spook them in their youtube channel New EP coming in Oct 9 tho 1 hour later… 10:33 AM Why are the $p\Bbb Z\subset \Bbb Z$'s for prime $p$ not irreducible components ? 
what do you mean irreducible components what's the topology Zariski so you mean Spec Z Yes irreducible components correspond to minimal primes right 10:34 AM The definition I had is maximal closed subsets that cannot be the union of two proper closed subsets yes, and what I said is a theorem anyway if you want to use the definition directly, consider the bigger closed subset Spec Z So I can't write $\operatorname{Spec} \Bbb Z = \{p\Bbb Z\} \cup \{q\Bbb Z | q\ne p\}$ ? the second one isn't closed I mean, why is$\{q\Bbb Z | q\ne p\}$ not closed ? because the generic point (0Z) is dense (closure is the whole space, i.e. every nonempty open contains it) 10:41 AM Oh ok Thank you ! 11:08 AM There was once upon a time back in high school I did made that $\sin (x+y) = \sin x + \sin y$ mistake. Good times this one? @Balarka 11 The $n^\text{th}$ root of a real number $x$ is $$x^{1/n}$$ If $n=0$ then $1/0$ is undefined, so there is no such thing as the $0^{\text{th}}$ root. not defined for the same reason why you cannot divide by zero in any semirings The map $f : x \to y$ is not injective I think this undefineness holds for any noninjective maps $f$ whose kernel is the same as its domain The function $y=x^0$ is also less behaved than $y=0x$ since the later it is continuous for all $x$ while $y=x^0$ is discontinuous at $x=0$ 11:58 AM @BalarkaSen Ah yes true. But I'm not sure whether your example of throwing countably many darts is an accurate description. For the whole space, the probability that n darts are in the space is given by the poisson distribution, so almost surely we only throw finitely many darts onto X, or am I missing something 12:16 PM @user2103480 Valid point. But I think this is OK if the measure on X is $\sigma$-finite, no? Let ( E , A , μ ) {\displaystyle (E,{\mathcal {A}},\mu )} be some measure space with σ {\displaystyle \sigma } -finite measure μ {\displaystyle \mu } . The Poisson random measure with intensity measure μ {\displaystyle \mu } is a family of random variables { N A... hmm interesting so the measure used in the sum is of a special form so they are not things with compact support as I originally thought But the one Balarka mentioned seemed to be a sum of dirac deltas, which has compact support 1 As far as I can tell it is bounded, as it's within $[0, \sqrt 2]$, and is closed as there cannot be an open neighbourhood about 0, and as it's closed and bounded it is therefore compact. However I'm not sure if closed and bounded imply compact in this situation, as I've only ever used this proper... Regarding the second comment on the accepted answer by Stephan... Shouldn't $I$ be $[0,\sqrt2]$ instead of $[0,\sqrt2)$? It makes no difference when you intersect it with $\Bbb Q$ But see this line - "But then q∈I as I is closed".. how do we directly say $I$ is closed? 12:33 PM yeah, should be $[0,\sqrt{2}]$ the entire point of the example, though, is that $[0,\sqrt{2}]\cap\mathbb{Q}=[0,\sqrt{2})\cap\mathbb{Q}$, which ensures on one hand that the set is closed (and obviously bounded), but on the other that it is not compact @MikeMiller The Poisson question was a mistatement. @Thorgott When he says, I is closed, he means in R, right? 
This basically elaborate what Balarka said about the counting measure relationship with the poisson measure It holds within some compact set of positive measures and is only locally finite That means, most of the stuff probably happened in the tail of the poisson variable so the summands isn't a finite object even though you only need a finite number of them @SayanChattopadhyay I assume they meant what we proved yes 12:47 PM No they actually changed the question. Now by an infinitesimal symmetry $X$ of the triple $(M, \{ \cdot, \cdot \}, H)$ where $H \in C^{infty}(M)$ now they mean to say a vector field $X$ such that $L_X(H) = 0$ and $L_X(\Lambda) = 0$, where $\Lambda$ is the bivectorfield defined by the poisson bracket @BalarkaSen I'd think so, for example if we take an infinite strip in R^2 meh Yeah, I don't understand what's happening anymore You're presumably supposed to calculate $L_X(\Lambda)$ explicitly I dont even understand how I should think about a bivectorfield 12:49 PM I'm just assuming that an infinite strip actually works without any probability glitches, btw Sums of tensor products of vector fields. You give it two functions it gives an output function @MikeMiller Yeah, that's what I thought "Differential operators" which take 2 inputs and spit out 1 output function, which satisfy Leibniz in each coordinate Why do I care about them? Do they have anything interesting about them? Something like that It's just the geometric way of expressing what a Poisson bracket is Possion brackets are operations on pairs of functions Bivector fields give you precisely that 12:51 PM Hmm okay Nobody cares about them outside the context of Poisson I see These poisson guys seem very boring as of now, I thought they would say cool stuff about symplectic manifolds I don't like that stuff lol Maybe this is an interesting question. Does this bivectorfield have some kind of a global flow associated to it? I know too few questions about symplectic manifolds 12:57 PM Classify them in $n=4$ maybe ? What's special about "symplectic" in that question I have heard its harder. Or maybe that's just a plot to convince me to do symplectic geometry Idgi I don't know but $\Bbb{C}P^n$ is a very nice example of a symplectic manifold. Maybe one can ask how its algebro-geometric properties are influenced because of a symplectic structure The symplectic form on, Fubini-Study, CP^n can be interpreted as some average right How does that go Ah yes $\omega$ integrated over a curve is average intersection number of the curve with CP^1s in CP^n I think Can you do this in general 1:02 PM Oh hmm General as in? Interpret the symplectic form as average intersection number with some family of curves symplectic guys have lots of curves Who knows I don't know simp geo That's a nice question though i dunno lol Even the claim that there's a holomorphic sphere passing through any two given points is usually hard to prove it's some statement about GW invariants yeah lol nuts 1:07 PM @SayanChattopadhyay I mean every function f you feed it gives you a vector field which you can flow. But past that idt you can say much im convinced its impossible to learn symplectic geometry @MikeMiller Wow crazy after this gromov pseudoholomorphic crap there's garbage about moduli spaces not my cup of deal is there any reason to care about symplectic geometry if you're not a physicist symplectic structure is everywhere i dont know the right questions to ask 1:10 PM Is there any reason to care about anything? 
Lmao Those are notoriously irritating yeah Just do it in characteristic 2 :p 1:12 PM I got it tho, he was trying to compute Morse homology of Klein bottle RIP lolol Idk what Morse function I would even use off the top of my head We did it with a tilted height function like torus think of it as immersed in R^3 I guess Anyway my point is my only motivation behind this was to tell him a pun which I did, at the end Ok I guess I do 1:14 PM "the sign convention in most Morse homology books are quite badly Witten" Meh not gonna do Yeah man forget about it who remembers $TU(a) = TM(a, b) \oplus \Bbb R \oplus TS(b)$ or whatever you switch something and you'll forever be stuck They needed to come up with some mnemonic Oh @Balarka, what text are you following for Representation Theory? Steinberg I have to read rep theory, I took it as audit and then forgot about it Ok gotta run Ah we seem to doing that too, but the guy is overtly focused on Burnside stuff and is going to skip all the fourier analysis, which is annoying 1:50 PM Let $$0 \rightarrow A \rightarrow C \rightarrow B \rightarrow 0$$ be an exact sequence of modules. Call the map from $A \to B$ $f$ and from $C \to B$ $g$. Suppose there is a map $k : C \to A$ such that $fk = id_{A}$. Show that $C \cong A \oplus B$. I tried showing that the sequence is split by defining $j : B \to C$ via the following way. Let $b \in B$. By surjectivity of $g$, there is a some $c_0 \in C$ such that $g(c_0) = b$. Let $j(b) := c_0$. One obvious problem is determining whether it is well-defined. But showing $j$ is well-defined seems equivalent to showing that $g$ is invertible, in which $C \cong B$, which seems weird. I could use some help. You mean $f\colon A\rightarrow C$ and $fk=\operatorname{id}_C$? Yes, sorry. Oh wait. The problem statement says $f : A \to C$, $k : C \to A$ and $kf = id_{A}$ Do those compositions make sense? 2:07 PM yeah, if you know that the existence of a right splitting implies a direct sum decomposition, then you can just mimic the same proof in this scenario So, a right splitting is when there exists a map $j : B \to C$ such that $g \circ j = id_{B}$? yeah not sure if that's standard terminology for the record, but as long as you know what I mean the point is that these scenarios are symmetrical in a sense I do understand it, and I like the term actually. In my case I have a left splitting. 3:03 PM Taylor's theorem is usually stated for an approximation around a given/fixed point. Can I still use it for an approximation around a point that is not given/fixed? For example, say I have a function $f$ that is dependant on $(x_1,x_2) \in \mathbb{R}^2$, can I use Taylor's theorem for multivariate functions to approximate $f(x_1,x_2)$ around $(x_1,x_2 - y)$ for a $y \in \mathbb{R}$? I guess this is a pretty weird question. Not sure if I would need to give more context to properly ask. 3:15 PM I mean, you can just do the usual approximatioon for each $y$ separately, what else do you want? Hm maybe. It's a bit hard to explain what I want without giving overwhelmingly much context. Here's an attempt: What I'm trying to do is show an upper bound by using Taylor. It's about testing the null hypothesis that $\tilde{\beta}_1} = 0$ where $\tilde{\beta} \in \mathbb{R}^2$ is an unknown vector of parameters. The null hypothesis can be rewritten by saying that $\tilde{\beta} = \nu$ where $\nu_1 = 0$ and $\nu_2 \in \mathbb{R}$ is arbitrary. 
Now, I would like to upper bound something like $|\log(1+e^{\beta}) - \log(1+e^{\tilde{\beta}})|$ by approximating $\log(1+ e^{\beta})$ around $\beta = \beta - \tilde{\beta} = (\beta_1, \beta_2 - \nu_2)$. Would that be legitimate? 4:17 PM Just realised something interesting in Wheels The identity $0x+0y = 0xy$ has the same form as the logarithm identity $\ln (ab) = \ln a + \ln b$ 4:58 PM Hello! Is there anything I can do to find the error function depicted here (orange plot) given what I have? desmos.com/calculator/wbpxs2jyjp 5:09 PM continuously varying group What is that? 5:58 PM Just need a bit of a half turn to translate those peaks over a bit... I think I'll start, however, by normalizing everything to ln and exp. This'll surely incorporate metallic means in some way as the circular functions contain all unique polygons and all unique circles. 6:16 PM The residue theorem is in some sense a limit, right? Cause you can’t really define f(a) if f has a pole at a @xcodeking Where does f(a) appear in the statement of the residue theorem? I guess I should have been clear. I'm talking about $\oint f(z) dz = 2\pi i Res(f,a)$ where a is the only pole of f enclosed by the integration contour So what is the question? That doesn't require a notion of f(a) though The contour encloses a; a is not on the contour The residue theorem is not truly a limit. It is a consequence of Green's Theorem (Stokes's Theorem) and the wonderful fact that $\int_{|z|=c}\frac{dz}z = 2\pi i$ for any $c>0$. But, yeah, what Thor said. What is the question? 6:50 PM hello Nice, I shaved off some more error from my approximation: desmos.com/calculator/xhha0bse0c 7:04 PM I have a question on root of unity 1 Let m and n be positive integers have that have no common factor. Prove that the set of numbers $(z^\frac{1}{n})^m$ is the same as the set of numbers $(z^m)^\frac{1}{n}$.We denote this common set of numbers by $z^\frac{m}{n}$. Show that $$z^\frac{m}{n} = \sqrt[n]{|z|^m}\left(\cos\left(\frac{m}{n}... the book transformed 𝛽=(𝑚𝜃+2𝑘𝜋)/𝑛 into 𝛽=(𝑚𝜃+2𝑚𝑘𝜋)/𝑛 I don't know why 1 hour later… 8:20 PM Is the composition of flows (of vector fields) always a flow (of a third vector field)? How do you compose the flows? In an hour I will be an expert in character theory Going to power through Ch. 4 Steinberg I'm eagerly awaiting your proof of the odd order theorem 8:40 PM 9:30 PM @Balarka @Alessandro Rivers of Nihil - Where Owls Know My Name Very good album Gonna check it out Fanks npnp Pre-seminar talk tomorrow on Langlands correspondence for GL_2 wanna get a hard talk cuz the prof is the guy I wanna do my master thesis with rofl Cool Let us know how it goes will do :D @EdwardEvans thanks, what genre is that? The Lake of Wada EP was pretty cool @Balarka 9:33 PM errr it's like @Alessandro Yeah tech death with weird prog elements now that's interesting Subtle Change is the weirdest song on the album 10:22 PM @BalarkaSen Not only that! 
Polish spaces behave well with respect to countable products (the borel sigma algebra of the topological product is the product of the borel algebras of the spaces); for weak convergence to behave well, we need the space to be polish; kolmogorov's extension theorem works up to polish spaces (maybe standard borel spaces) and for polish spaces, regular conditional distributions exist o/ I need some help Need to take this imgur.com/a/dOjTHGg to only contain negation and conjunction by the formulas, not using a truth table am a bit swamped All I get is a permanently false result hi all 0 Consider a commutative non-associative ring A of finite dimension, that is power-associative. Now define weakly algebraicly closed in T as :$$ x^2 + a_1 x + a_2 = 0 always has at least one solution $x$ that belongs to $T$. ( the coefficients $a_1,a_2$ are in $T$ ) So I wonder : When is \$... any ideas ?? Ah nah it's not just weak convergence, it's crucial for measurability of the map d(X,Y), where d is a metric on a space X. So to handle the differen types of convergence we define, polish spaces are at the least very convenient to work in 10:41 PM Ive tried every possible combination of laws only to end up with sth like X & ~X I have found out the reason that it's so opaque to me: it is apparently a different approach to SPDE via martingale measures and rough paths. The trees and forests are there to abbreviate some kinds of iterated integrals. What I'll be learning about is the so-called variational approach via some functional analysis hokus pokus that doesn't look much less disturbing; see chapter 5 in http://page.math.tu-berlin.de/~scheutzow/SPDEmain.pdf, and especially 5.2, for the definition of a solution of an SPDE, in that approach note that polish spaces pop up in the very first theorem/remark, lol Damn, apparently classical probability theory really lives in polish spaces I'll stick with my particular area of inscrutable math I give up, this is hopeless to solve LOL, are we having an inscrutability contest? nah, everything's inscrutable haven't yet seen an area that isn't 10:53 PM I think it really depends on the person and the person's training, strengths, etc. 11:04 PM it's all hogwash @MikeMiller if I convince myself, I'll have a small taste of your inscrutable area though. I'm not yet sure if I will It's topology 2; the syllabus would approximately be: homotopy groups (homotopy groups of spheres, relative homotopy groups and the long exact sequence of a pair, hopf fibration and the long exact sequence of a fibration), then singular homology (excision, jordan-brouwer splitting theorem), CW complexes (hurewicz, whitehead), homology with coefficients, cohomology (cellular/singular, künneth's formula), poincare duality and then some I haven't done algebra in a while and also never seriously worked through a topology 1 course so yeah dunno if it's too much work. I'd just hope some intuition rubbed off from my exposure to simplicial sets and kan complexes I'm a bit lazy to seriously work through simplicial homology, is it necessary to do that? 
For computations it probably is, but I'd hope that it's not necessary to grasp singular homology and work with it @user2103480 Apart from trivial cases the countable product is Borel isomorphic to any of its factors with Polish spaces though I'm not sure what my point is @AlessandroCodenotti I'm slightly confused either way Because I don't know whether that converts to a weaking of the conditions for good behaviour™ of the probability 11:14 PM Dunno Anyway separable already implies that weak convergence is equivalent to convergence in the Prokhorov metric, so maybe you don't need Polish for weak convergence to beheave well @AlessandroCodenotti maybe I count that as good behaviour? bigthink Polish gives that the Prokhorov metric is also complete which is quite nice I suppose? Everything's nice in polish space especially the beer prices @user2103480 That's why I do descriptive set theory :P 11:21 PM Pfff merriam webster says hogwash's initial sense was "a semiliquid food for animals (such as swine) composed of edible refuse mixed with water or skimmed or sour milk" accurate description of what it feels like If that isn't probability I don't know what is amen @user2103480 So is that what you're up to in Berlin now? Hell, I don't know. Next semester will be very probability-heavy 11:32 PM I'm sorry? Stochastic processes in neuroscience and fluids (two seperate courses), SPDE and one course which I'll decide when that extremely slow TU releases their course catalogue Aren't classes starting in like 2 weeks? Haha no, that would be insane. They start 2nd november Thanks to the rona Oh ok @AlessandroCodenotti yeah I know 11:35 PM I actually have no idea when classes begin in Muenster Just realized let's see if I lose my taste after that dose of probability, but it can't be worse than higher category theory :D (which is a beautiful subject that I am just horrendously underqualified for in terms of prereqs) @AlessandroCodenotti you plan to take some? or just participate in seminars Not sure, there are a few classes that seem interesting but we'll have to see how strong my will to get out of bed is get out of bed? I literally didnt for some online classes hahaha They want to have classes in person (which I think is pretty crazy but anyway) @AlessandroCodenotti like, the whole uni, or just the math department? 11:39 PM At least that's what they told me this summer, maybe they changed idea in the meantime @user2103480 I don't know thank god my uni is not in NRW anymore, classes are almost exclusively online in berlin I think the regional rules are different They really didn't tell me anything, all they keep repeating is that I absolutely must be there before the 1st of October to sign my contract because otherwise the ground will open beneath my feet and swallow me the bureaucracy people will be mad @AlessandroCodenotti in what areas? @AlessandroCodenotti be there one time or permanently? There's a descriptive set theory class (thaught my advisor so maybe I should go lol) which is very relevant to the stuff I'll be doing, as well as a geometric group theory class that looks cool (they're doing small cancellation stuff which I never learned about properly) @user2103480 Just to sign the contract, I asked and there's no issue if I travel back to Italy in October for a while (which I'll do) I always forget. Thats the model theory GGT right? And you're trained in the analysis GGT? 
11:43 PM trained is a very big word I've seen mostly the geometric side of things fair enough Small cancellation is something that both geometers and model theorists think about for some reason. Balarka should know more than me about what that is though @user2103480 No, you'll use cellular or something for computations. Simplicial is rarely computationally tractable Sounds like a standard course, maybe a little fast My personal area is low-dim topology / gauge theory. Very different flavor The lecturer actually works in differential & geometric topology (e.g. lectures about kirby calculus & 4-manifolds in the past). But I guess that is approximately the standard syllabus that lecturers have to teach in that topology cycle @AlessandroCodenotti damn, münster has a lot of classes for one math department. that's surely comparable to bonn You comfortable giving me a name? 11:54 PM @user2103480 Probably not as many as Bonn, but still quite a few Lokale Galois-Darstellungen - Prof. Dr. Schneider oof Schneider works a lot with the prof running the Langlands seminar Schneider is a big name in his field according to an arithmetic geometry guy I met in Bonn
2020-10-30 23:25:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8151014447212219, "perplexity": 951.9127100923165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911792.65/warc/CC-MAIN-20201030212708-20201031002708-00463.warc.gz"}
https://chem.libretexts.org/Textbook_Maps/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_(McQuarrie_and_Simon)/24%3A_Solutions_I%3A_Liquid-Liquid_Solutions/24.2%3A_The_Gibbs-Duhem_Equation
24.2: The Gibbs-Duhem Equation For a system at equilibrium, the Gibbs-Duhem equation must hold: $\sum_i n_i d\mu_i = 0 \label{eq1}$ This relationship places a compositional constraint upon any changes in the chemical potential in a mixture at constant temperature and pressure for a given composition. This result is easily derived when one considers that $$\mu_i$$ represents the partial molar Gibbs function for component $$i$$. And as with other partial molar quantities, $G_{tot} = \sum_i n_i \mu_i$ Taking the derivative of both sides yields $dG_{tot} = \sum_i n_i d \mu_i + \sum_i \mu_i d n_i$ But $$dG$$ can also be expressed as $dG = Vdp - sdT + \sum_i \mu_i d n_i$ Setting these two expressions equal to one another $\sum_i n_i d \mu_i + \cancel{ \sum_i \mu_i d n_i } = Vdp - sdT + \cancel{ \sum_i \mu_i d n_i}$ And after canceling terms, one gets $\sum_i n_i d \mu_i = Vdp - sdT \label{eq41}$ For a system at constant temperature and pressure $Vdp - sdT = 0 \label{eq42}$ Substituting Equation \ref{eq42} into \ref{eq41} results in the Gibbs-Duhem equation (Equation \ref{eq1}). This expression relates how the chemical potential can change for a given composition while the system maintains equilibrium. So for a binary system, consisting of components $$A$$ and $$B$$ (the two most often studied compounds in all of chemistry) $d\mu_B = -\dfrac{n_A}{n_B} d\mu_A$ Contributors • Patrick E. Fleming (Department of Chemistry and Biochemistry; California State University, East Bay)
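A minimal numerical sketch of the binary relation $d\mu_B = -\dfrac{n_A}{n_B} d\mu_A$ derived above; the mole numbers and the change in $\mu_A$ are made-up illustrative values, not data from the text.

```python
# Gibbs-Duhem constraint for a binary mixture at constant T and p:
#   n_A dmu_A + n_B dmu_B = 0   =>   dmu_B = -(n_A / n_B) dmu_A
n_A, n_B = 0.25, 0.75   # hypothetical mole numbers (x_A = 0.25)
dmu_A = 10.0            # hypothetical change in mu_A, in J/mol

dmu_B = -(n_A / n_B) * dmu_A
print(f"dmu_B = {dmu_B:.2f} J/mol")  # about -3.33 J/mol: the two potentials cannot vary independently
```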
2018-07-21 15:36:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901857137680054, "perplexity": 936.1045621727656}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00446.warc.gz"}
https://www.yaclass.in/p/mathematics-cbse/class-9/co-ordinate-geometry-5564/re-4d276abd-0fc0-45c2-9093-83cef7210b4d
### Theory: Line parallel to the $$x$$-axis: Consider drawing a line parallel to the $$x$$-axis. Since every point of the line is at the same distance from the $$x$$-axis, the line can be represented as $$y = c$$ (where $$c$$ is a constant). Line parallel to the $$y$$-axis: Similarly, every point of a line drawn parallel to the $$y$$-axis is at the same distance from the $$y$$-axis, so the line can be represented as $$x = c$$ (where $$c$$ is a constant). Example: Consider plotting the points $$(-1,2)$$, $$(0,2)$$, $$(1,2)$$, $$(2,2)$$, and $$(3,2)$$ and joining them. We get a straight line. Since all the points are at the same distance ($$2$$ units) from the $$x$$-axis, the line is parallel to the $$x$$-axis, and its equation is $$y = 2$$.
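A small matplotlib sketch (not part of the lesson) reproducing the example: plot the five points and the horizontal line $$y = 2$$ that they lie on.

```python
import matplotlib.pyplot as plt

xs = [-1, 0, 1, 2, 3]
ys = [2, 2, 2, 2, 2]          # every point is 2 units above the x-axis

plt.scatter(xs, ys, zorder=3)
plt.axhline(y=2, color="tab:orange", label="y = 2 (parallel to the x-axis)")
plt.axhline(y=0, color="black", linewidth=0.8)  # x-axis
plt.axvline(x=0, color="black", linewidth=0.8)  # y-axis
plt.legend()
plt.show()
```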
2021-06-21 20:09:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881304919719696, "perplexity": 113.76458321233518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488289268.76/warc/CC-MAIN-20210621181810-20210621211810-00195.warc.gz"}
https://labs.tib.eu/arxiv/?author=J.%20Jonas
• ### The C-Band All-Sky Survey (C-BASS): Design and capabilities(1805.04490) May 11, 2018 astro-ph.CO, astro-ph.IM The C-Band All-Sky Survey (C-BASS) is an all-sky full-polarisation survey at a frequency of 5 GHz, designed to provide complementary data to the all-sky surveys of WMAP and Planck, and future CMB B-mode polarization imaging surveys. The observing frequency has been chosen to provide a signal that is dominated by Galactic synchrotron emission, but suffers little from Faraday rotation, so that the measured polarization directions provide a good template for higher frequency observations, and carry direct information about the Galactic magnetic field. Telescopes in both northern and southern hemispheres with matched optical performance are used to provide all-sky coverage from a ground-based experiment. A continuous-comparison radiometer and a correlation polarimeter on each telescope provide stable imaging properties such that all angular scales from the instrument resolution of 45 arcmin up to full sky are accurately measured. The northern instrument has completed its survey and the southern instrument has started observing. We expect that C-BASS data will significantly improve the component separation analysis of Planck and other CMB data, and will provide important constraints on the properties of anomalous Galactic dust and the Galactic magnetic field. • ### Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping and outer field suppression(1803.02569) March 7, 2018 astro-ph.IM Traditional radio interferometric correlators produce regular-gridded samples of the true $uv$-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging then translate to be irregular-gridded samples in the $uv$-space, and results in a baseline-length-dependent loss of amplitude and phase coherence, which is dependent on the distance from the image phase centre. The effect is often referred to as "decorrelation" in the $uv$-space, which is equivalent in the source domain to "smearing". This work discusses and implements a regular-gridded sampling scheme in the $uv$-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping and source suppression. The baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space i.e. the time-frequency interval becomes baseline-dependent. Analytic models and simulations are used to show that decorrelation remains constant across all the baselines when applying baseline-dependent sampling and windowing. Simulations using MeerKAT telescope and the European Very Long Baseline Interferometry Network show that both data compression, field-of-interest shaping and outer field-of-interest suppression are achieved. • ### Using baseline-dependent window functions for data compression and field-of-interest shaping in radio interferometry(1607.04106) July 14, 2016 astro-ph.IM In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is a simple averaging of the visibilities over coarser time and frequency bins. 
This has an undesired side effect: the resulting averaged visibilities "decorrelate", and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as "smearing", which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. In this work we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be treated as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more optimal interferometer smearing response may be induced. In particular, we show improved amplitude response over a chosen field of interest, and better attenuation of sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off, and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Karl G. Jansky Very Large Array (VLA) and the European Very-long-baseline interferometry Network (EVN). • ### Simultaneous VLBA polarimetric observations of the v=$\{$1,2$\}$ J=1-0 and v=1, J=2-1 SiO maser emission toward VY CMa II: component-level polarization analysis(1605.09572) May 31, 2016 astro-ph.GA, astro-ph.SR This paper presents a component-level comparison of the polarized v=1 J =1-0, v=2 J=1-0 and v=1 J=2-1 SiO maser emission towards the supergiant star VY CMa at milliarcsecond-scale, as observed using the VLBA at $\lambda=7$mm and $\lambda=3$mm. An earlier paper considered overall maser morphology and constraints on SiO maser excitation and pumping derived from these data. The goal of the current paper is to use the measured polarization properties of individual co-spatial components detected across multiple transitions to provide constraints on several competing theories for the transport of polarized maser emission. This approach minimizes the significant effects of spatial blending. We present several diagnostic tests designed to distinguish key features of competing theoretical models for maser polarization. The number of coincident features is limited by sensitivity however, particularly in the v=1 J=2-1 transition at 86 GHz, and deeper observations are needed. Preliminary conclusions based on the current data provide some support for: i) spin-independent solutions for linear polarization; ii) the influence of geometry on the distribution of fractional linear polarization with intensity; and, iii) $\pi/2$ rotations in linear polarization position angle arising from transitions across the Van Vleck angle ($\sin^2{\theta}=2/3$) between the maser line-of-sight and magnetic field. There is weaker evidence for several enumerated non-Zeeman explanations for circular polarization. The expected 2:1 ratio in circular polarization between J=1-0 and J=2-1 predicted by standard Zeeman theory cannot unfortunately be tested conclusively due to insufficient coincident components. • ### A return to strong radio flaring by Circinus X-1 observed with the Karoo Array Telescope test array KAT-7(1305.3399) May 15, 2013 astro-ph.HE Circinus X-1 is a bright and highly variable X-ray binary which displays strong and rapid evolution in all wavebands. 
Radio flaring, associated with the production of a relativistic jet, occurs periodically on a ~17-day timescale. A longer-term envelope modulates the peak radio fluxes in flares, ranging from peaks in excess of a Jansky in the 1970s to an historic low of milliJanskys during the years 1994 to 2007. Here we report first observations of this source with the MeerKAT test array, KAT-7, part of the pathfinder development for the African dish component of the Square Kilometre Array (SKA), demonstrating successful scientific operation for variable and transient sources with the test array. The KAT-7 observations at 1.9 GHz during the period 13 December 2011 to 16 January 2012 reveal in temporal detail the return to the Jansky-level events observed in the 1970s. We compare these data to contemporaneous single-dish measurements at 4.8 and 8.5 GHz with the HartRAO 26-m telescope and X-ray monitoring from MAXI. We discuss whether the overall modulation and recent dramatic brightening is likely to be due to an increase in the power of the jet due to changes in accretion rate or changing Doppler boosting associated with a varying angle to the line of sight. • ### Climate simulations for 1880-2003 with GISS modelE(physics/0610109) April 12, 2007 physics.ao-ph, physics.geo-ph We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. • ### Dangerous human-made interference with climate: A GISS modelE study(physics/0610115) Oct. 16, 2006 physics.ao-ph, physics.geo-ph We investigate the issue of "dangerous human-made interference with climate" using simulations with GISS modelE driven by measured or estimated forcings for 1880-2003 and extended to 2100 for IPCC greenhouse gas scenarios as well as the 'alternative' scenario of Hansen and Sato. Identification of 'dangerous' effects is partly subjective, but we find evidence that added global warming of more than 1 degree C above the level in 2000 has effects that may be highly disruptive. The alternative scenario, with peak added forcing ~1.5 W/m2 in 2100, keeps further global warming under 1 degree C if climate sensitivity is \~3 degrees C or less for doubled CO2. 
We discuss three specific sub-global topics: Arctic climate change, tropical storm intensification, and ice sheet stability. Growth of non-CO2 forcings has slowed in recent years, but CO2 emissions are now surging well above the alternative scenario. Prompt actions to slow CO2 emissions and decrease non-CO2 forcings are needed to achieve the low forcing of the alternative scenario.
2021-03-04 07:42:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5870290398597717, "perplexity": 3164.6178341620016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00464.warc.gz"}
http://wordpress.stackexchange.com/questions/102576/php-include-not-working-in-custom-page
# php include not working in custom page ### The Problem I have a db.php file which contains MySQL login details. I want it in a separate file like this so it can be linked in several different pages for easier tracing later. This all works when not on a WordPress site, but I'm trying to convert everything over to WordPress now and I'm encountering a problem. The below doesn't work (echoes nothing on the page), but if I copy the single line from inc/db.php into my page-pricechanges.php it works. What's the best way for me to link this? ### Structure \THEMEFOLDER • \page-pricechanges.php • \inc\db.php ### Page code <head> <?php include 'inc/db.php'; // I have also tried include get_bloginfo("template_url") . '/inc/db.php'; // I have tried include 'db.php' and put db.php in the root of the site and the root of the theme folder.. nothing seems to pick it up. ?> inc/db.php <?php $con = mysqli_connect("localhost", "root", "", "dbname"); ?> page-pricechanges.php $query1 = mysqli_query($con, "SELECT * FROM dbname ORDER BY balance DESC") or die(mysql_error()); while ($row = mysqli_fetch_assoc($query1)) { $id = $row['id']; $name = addslashes($row['name']); echo $id . " " . $name . "<br>"; } - ## 1 Answer WordPress has its own connection class, which can be used through the $wpdb object. Is there a particular reason to use direct queries to the db, instead of the WordPress functions (get_posts, get_post_meta, get_option)? Keep in mind that by doing this, you lose a lot of the WordPress tools, such as cache, filters, etc... - The database I listed isn't a WordPress one. It's a fantasy football stats database. All I'm trying to do is create a custom page which shows all the numbers from the MySQL database in an HTML table. I didn't know about $wpdb... Can I set this up for my own databases? I'll check it out now. – Cully Jun 11 '13 at 0:57 That can also be done with the wpdb class: wordpress.stackexchange.com/a/1618/33558 – davidmh Jun 11 '13 at 0:59 Great thanks - I wasn't aware. I'll take a look now. That looks like it will solve my problem. – Cully Jun 11 '13 at 1:01
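A sketch of the $wpdb route suggested in the answer, adapted to the asker's separate stats database. The table and column names here are assumptions for illustration, not taken from the question.

```php
<?php
// page-pricechanges.php -- connect to the non-WordPress stats database through
// WordPress's own wpdb class instead of a raw mysqli_connect() in an include.
$statsdb = new wpdb('root', '', 'dbname', 'localhost');  // user, password, database, host

// Hypothetical table/column names; replace with the real schema.
$rows = $statsdb->get_results("SELECT id, name, balance FROM players ORDER BY balance DESC");

foreach ($rows as $row) {
    echo esc_html($row->id) . ' ' . esc_html($row->name) . '<br>';
}
```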
2016-05-25 11:21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18035100400447845, "perplexity": 2495.0804540168806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274756.45/warc/CC-MAIN-20160524002114-00095-ip-10-185-217-139.ec2.internal.warc.gz"}
https://1lab-wip.amelia.how/Cat.Instances.StrictCat.Cohesive.html
open import Cat.Instances.StrictCat open import Cat.Instances.Discrete hiding (Disc) open import Cat.Instances.Functor open import Cat.Instances.Product open import Cat.Univalent open import Cat.Prelude module Cat.Instances.StrictCat.Cohesive where # Strict-Cat is “cohesive”🔗 We prove that the category $\strcat$ admits an adjoint quadruple $\Pi_0 \dashv \id{Disc} \dashv \Gamma \dashv \id{Codisc}$ where the “central” adjoint $\Gamma$ is the functor which sends a strict category to its underlying set of objects. This lets us treat categories as giving a kind of “spatial” structure over $\id{Sets}$. The left- and right- adjoints to $\id{Ob}$ equip sets with the “discrete” and “codiscrete” spatial structures, where nothing is stuck together, or everything is stuck together. The extra right adjoint to $\id{Ob}$ assigns a category to its set of connected components, which can be thought of as the “pieces” of the category. Two objects land in the same connected component if there is a path of morphisms connecting them, hence the name. Note: Generally, the term “cohesive” is applied to Grothendieck topoi, which Strict-Cat is very far from being. We’re using it here by analogy: There’s an adjoint quadruple, where the functor $\Gamma$ sends each category to its set of points: see the last section. Strictly speaking, the left adjoint to $\Gamma$ isn’t defined by tensoring with Sets, but it does have the effect of sending $S$ to the coproduct of $S$-many copies of the point category. # Disc ⊣ Γ🔗 We begin by defining the object set functor. Γ : Functor (Strict-Cat o h) (Sets o) Γ .F₀ (C , obset) = Ob C , obset Γ .F₁ = F₀ Γ .F-id = refl Γ .F-∘ _ _ = refl We must then prove that the assignment Disc′ extends to a functor from Sets, and prove that it’s left adjoint to the functor Γ we defined above. Then we define the adjunction Disc⊣Γ. Disc : Functor (Sets ℓ) (Strict-Cat ℓ ℓ) Disc .F₀ S = Disc′ S , S .is-tr Disc .F₁ = lift-disc Disc .F-id = Functor-path (λ x → refl) λ f → refl Disc .F-∘ _ _ = Functor-path (λ x → refl) λ f → refl Disc⊣Γ : Disc {ℓ} ⊣ Γ For the adjunction unit, we’re asked to provide a natural transformation from the identity functor to $\Gamma \circ \id{Disc}$; Since the object set of $\id{Disc}(X)$ is simply $X$, the identity map suffices: adj : Disc {ℓ} ⊣ Γ adj .unit = NT (λ _ x → x) λ x y f i o → f o The adjunction counit is slightly more complicated, as we have to give a functor $\id{Disc}(\Gamma(X)) \to X$, naturally in $X$. Since morphisms in discrete categories are paths, for a map $x \equiv y$ (in {- 1 -}), it suffices to assume $y$ really is $x$, and so the identity map suffices. adj .counit = NT (λ x → F x) nat where F : (x : Precategory.Ob (Strict-Cat ℓ ℓ)) → Functor (Disc′ (x .fst .Precategory.Ob , x .snd)) _ F X .F₀ x = x F X .F₁ p = subst (X .fst .Hom _) p (X .fst .id) {- 1 -} F X .F-id = transport-refl _ F X .F-∘ = lemma {A = X} Fortunately the triangle identities are straightforwardly checked. adj .zig {x} = Functor-path (λ x i → x) λ f → x .is-tr _ _ _ _ # Γ ⊣ Codisc🔗 The codiscrete category on a set $X$ is the strict category with object space $X$, and all hom-spaces contractible. The assignment of codiscrete categories extends to a functor $\sets \to \strcat$, where we lift functions to act on object parts and the action on morphisms is trivial. 
Codisc : Functor (Sets ℓ) (Strict-Cat ℓ ℓ) Codisc .F₀ S = Codisc′ ∣ S ∣ , S .is-tr Codisc .F₁ f .F₀ = f Codisc .F₁ f .F₁ = λ _ → lift tt Codisc .F₁ f .F-id = refl Codisc .F₁ f .F-∘ = λ _ _ → refl Codisc .F-id = Functor-path (λ x → refl) λ f → refl Codisc .F-∘ _ _ = Functor-path (λ x → refl) λ f → refl The codiscrete category functor is right adjoint to the object set functor $\Gamma$. The construction of the adjunction is now simple in both directions: Γ⊣Codisc : Γ ⊣ Codisc {ℓ} NT (λ x → record { F₀ = λ x → x ; F₁ = λ _ → lift tt ; F-id = refl ; F-∘ = λ _ _ → refl }) λ x y f → Functor-path (λ _ → refl) λ _ → refl adj .counit = NT (λ _ x → x) λ x y f i o → f o adj .zag = Functor-path (λ _ → refl) λ _ → refl ## Object set vs global sections🔗 Above, we defined the functor $\Gamma$ by directly projecting the underlying set of each category. Normally in the definition of a cohesion structure, we use the global sections functor which maps $x \mapsto \hom(*,x)$ (where $*$ is the terminal object). Here we prove that these functors are naturally isomorphic, so our abbreviation above is harmless. Below, we represent the terminal category $*$ as the codiscrete category on the terminal set. Using the codiscrete category here is equivalent to using the discrete category, but it is more convenient since the $\hom$-sets are definitionally contractible. module _ {ℓ} where import Cat.Morphism Cat[ Strict-Cat ℓ ℓ , Sets ℓ ] as Nt GlobalSections : Functor (Strict-Cat ℓ ℓ) (Sets ℓ) GlobalSections .F₀ C = Functor (Codisc′ (Lift _ ⊤)) (C .fst) , Functor-is-set (C .snd) GlobalSections .F₁ G F = G F∘ F GlobalSections .F-id = funext λ _ → Functor-path (λ _ → refl) λ _ → refl GlobalSections .F-∘ f g = funext λ _ → Functor-path (λ _ → refl) λ _ → refl Since GlobalSections is a section of the $\hom$ functor, it acts on maps by composition. The functor identities hold definitionally. GlobalSections≅Γ : Γ {ℓ} Nt.≅ GlobalSections GlobalSections≅Γ = Nt.make-iso f g f∘g g∘f where open Precategory We define a natural isomorphism from Γ to the GlobalSections functor by sending each object to the constant functor on that object. This assignment is natural because it is essentially independent of the coordinate. f : Γ => GlobalSections f .η x ob = record { F₀ = λ _ → ob ; F₁ = λ _ → x .fst .id ; F-id = refl ; F-∘ = λ _ _ → sym (x .fst .idl _) } f .is-natural x y f = funext λ _ → Functor-path (λ _ → refl) λ _ → sym (F-id f) In the opposite direction, the natural transformation is defined by evaluating at the point. These natural transformations compose to the identity almost definitionally, but Agda does need some convincing, using our path helpers: Nat-path, funext, and Functor-path. g : GlobalSections => Γ g .η x f = F₀ f (lift tt) g .is-natural x y f = refl f∘g : f ∘nt g ≡ idnt f∘g = Nat-path λ c → funext λ x → Functor-path (λ x → refl) λ f → sym (F-id x) g∘f : g ∘nt f ≡ idnt g∘f = Nat-path λ _ i x → x # Connected components🔗 The set of connected components of a category is the quotient of the object set by the “relation” generated by the Hom sets. This is not a relation because Hom takes values in sets, not propositions; Thus the quotient forgets precisely how objects are connected. This is intentional! π₀ : Precategory o h → Set (o ⊔ h) π₀ C = Ob C / Hom C , squash The π₀ construction extends to a functor Π₀ (capital pi for Pieces) from Strict-Cat back to Sets. We send a functor $F$ to its object part, but postcomposing with the map inc which sends an object of $\ca{D}$ to the connected component it inhabits. 
Π₀ : Functor (Strict-Cat o h) (Sets (o ⊔ h)) Π₀ .F₀ (C , _) = π₀ C Π₀ .F₁ F = Quot-elim (λ _ → squash) (λ x → inc (F₀ F x)) λ x y r → glue (_ , _ , F₁ F r) We must prove that this assignment respects the quotient, which is where the morphism part of the functor comes in: Two objects $x, y : \ca{C}$ are in the same connected component if there is a map $r : x \to y$; To show that $F_0(x)$ and $F_0(y)$ are also in the same connected component, we must give a map $F_0(x) \to F_0(y)$, but this can be canonically chosen to be $F_1(r)$. Π₀ .F-id = funext (Coeq-elim-prop (λ _ → squash _ _) λ x → refl) Π₀ .F-∘ f g = funext (Coeq-elim-prop (λ _ → squash _ _) λ x → refl) The adjunction unit is a natural assignment of functors $\ca{X} \to \id{Disc}(\Pi_0(\ca{X}))$. We send $x$ to its connected component, and we must send a map $r : x \to y$ to an equality between the connected components of $x$ and $y$; But we get this from the quotient. Π₀⊣Disc : Π₀ ⊣ Disc {ℓ} adj .unit .η x = record { F₀ = inc ; F₁ = quot ; F-id = squash _ _ _ _ ; F-∘ = λ _ _ → squash _ _ _ _ } adj .unit .is-natural x y f = Functor-path (λ x → refl) λ _ → squash _ _ _ _ The adjunction counit is an assignment of functions $\Pi_0(\id{Disc}(X)) \to X$. This is essentially a natural isomorphism: the set of connected components of a discrete category is the same set we started with. adj .counit .η X = Quot-elim (λ _ → X .is-tr) (λ x → x) λ x y r → r adj .counit .is-natural x y f = funext (Coeq-elim-prop (λ _ → y .is-tr _ _) λ _ → refl) The triangle identities are again straightforwardly checked. adj .zig {x} = funext (Coeq-elim-prop (λ _ → squash _ _) λ x → refl) adj .zag = Functor-path (λ x → refl) λ f → refl Furthermore, we can prove that the connected components of a product category are product sets of connected components. Π₀-preserve-prods : ∀ {C D : Precategory o h} → ∣ π₀ (C ×Cat D) ∣ ≡ (∣ π₀ C ∣ × ∣ π₀ D ∣) Π₀-preserve-prods {C = C} {D = D} = Iso→Path (f , isom) where open is-iso We have a map splitting $\pi_0$ of the product category onto $\pi_0$ of each factor. This maps respect the quotient because we can also split the morphisms. f : ∣ π₀ (C ×Cat D) ∣ → ∣ π₀ C ∣ × ∣ π₀ D ∣ f = Quot-elim (λ _ → ×-is-hlevel 2 squash squash) (λ (a , b) → inc a , inc b) λ (x , x') (y , y') (f , g) i → glue (x , y , f) i , glue (x' , y' , g) i This map has an inverse given by joining up the pairs: isom : is-iso f isom .inv (a , b) = Coeq-rec₂ squash (λ x y → inc (x , y)) (λ a (x , y , r) i → glue ((x , a) , (y , a) , r , Precategory.id D) i) (λ a (x , y , r) i → glue ((a , x) , (a , y) , Precategory.id C , r) i) a b isom .rinv (a , b) = Coeq-elim-prop₂ {C = λ x y → f (isom .inv (x , y)) ≡ (x , y)} (λ _ _ → ×-is-hlevel 2 squash squash _ _) (λ _ _ → refl) a b isom .linv = Coeq-elim-prop (λ _ → squash _ _) λ _ → refl ## Pieces have points🔗 An important property of the cohesive quadruple defined above is that the canonically-defined natural morphism $\Gamma(X) \to \Pi_0(X)$ is surjective, i.e. each piece has at least one point. Points→Pieces : Γ {ℓ} {ℓ} => Π₀ Points→Pieces .η _ x = inc x Points→Pieces .is-natural x y f i o = inc (F₀ f o) pieces-have-points : ∀ {x} y → ∥ fibre (Points→Pieces {ℓ} .η x) y ∥ pieces-have-points = Coeq-elim-prop (λ _ → squash) λ x → inc (x , refl)
2022-09-28 07:37:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 45, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185877442359924, "perplexity": 4789.403710731887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00497.warc.gz"}
https://www.transtutors.com/questions/lewis-companyac-cs-standard-labor-cost-of-producing-one-unit-of-product-dd-is-3-20-h-759319.htm
# Lewis Company's standard labor cost of producing one unit of Product DD is 3.20 hours at the

Lewis Company's standard labor cost of producing one unit of Product DD is 3.20 hours at the rate of $14.00 per hour. During August, 41,500 hours of labor are incurred at a cost of $14.12 per hour to produce 12,800 units of Product DD.

a) Compute the total labor variance.
b) Compute the labor price and quantity variances.
c) Repeat (b), assuming the standard is 3.40 hours of direct labor at $14.30 per hour.
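No worked solution is shown above, so here is a quick sketch of the usual standard-costing formulas (total = actual cost minus standard cost; price = (actual rate minus standard rate) times actual hours; quantity = (actual hours minus standard hours) times standard rate) applied to the figures in the question:

```python
# Sketch only: the usual labor-variance formulas with the numbers given above.
# Positive results are unfavourable (U), negative results favourable (F).
def labor_variances(actual_hours, actual_rate, units, std_hours_per_unit, std_rate):
    std_hours = units * std_hours_per_unit
    total    = actual_hours * actual_rate - std_hours * std_rate
    price    = (actual_rate - std_rate) * actual_hours
    quantity = (actual_hours - std_hours) * std_rate
    return round(total, 2), round(price, 2), round(quantity, 2)

# (a) and (b): standard of 3.20 hours at $14.00
print(labor_variances(41_500, 14.12, 12_800, 3.20, 14.00))
# (12540.0, 4980.0, 7560.0) -> total $12,540 U = price $4,980 U + quantity $7,560 U

# (c): same actual data, standard of 3.40 hours at $14.30
print(labor_variances(41_500, 14.12, 12_800, 3.40, 14.30))
# (-36356.0, -7470.0, -28886.0) -> all favourable
```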
2020-09-21 09:40:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834197998046875, "perplexity": 8571.262521038005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00636.warc.gz"}
https://chemistry.stackexchange.com/questions/101956/buffer-made-of-salts-of-polyprotic-acid-reactions
# Buffer made of salts of polyprotic acid: reactions I was trying to make a buffer with $$\ce{K2HPO4}$$ to use it in a bacterial culture. If I use $$\ce{K2HPO4}$$ I think I have to add $$\ce{KH2PO4}$$. I am trying to think it writing the equations of the process but I have troubles. I think the buffer's reactions will be: Salt dissociation in water: $$\ce{KH2PO4_{(aq)} <=> K+_{(aq)} + {H_2PO4^-}_{(aq)}}$$ $$\ce{K2HPO4_{(aq)} <=> 2K+_{(aq)} + {HPO4^{-2}}_{(aq)}}$$ Anion hydrolysis: $$\ce{HPO4^2-_{(aq)} + H2O_{(l)} <=> H2PO4^-_{(aq)} + OH-_{(aq)}}$$ $$\ce{H2PO4-_{(aq)} + H2O_{(l)} <=> HPO4^2-_{(aq)} + H3O+_{(aq)}}$$ If I add acid: $$\ce{K2HPO4_{(aq)} + H3O+_{(aq)} <=> H2O_{(l)} + H2PO4-_{(aq)}}$$ If I add base: $$\ce{H2PO4-_{(aq)} + OH-_{(aq)} <=> H2O_{(l)} + HPO4^2-_{(aq)}}$$ Is it correct? And if so, is it possible to add only $$\ce{K2HPO4}$$ and water to make the buffer? • Welcome to Chemistry.SE! Please note that formulas can be better expressed with \$\ce{}\$ for chemical formulas/equations, \$\mathrm{}\$ for math term/equations, and \$\pu{}\$ for units. More information is available in this meta post Also, take a minute to look over the help center and tour page to better understand our guidelines and question policies. – A.K. Sep 20 '18 at 19:58 For use in a bacterial culture, you no doubt want a buffer solution with a $$\mathrm{pH = 7.20}$$, which corresponds to the $$pK_{a2}$$ for phosphoric acid. At that pH $$\ce{[H2PO4-] = [HPO4^2-]}$$. The $$\ce{K+}$$ ion is basically a spectator ion and doesn't influence the pH directly. So typically you'd start with a solution which has equal molarities of $$\ce{KH2PO4}$$ and $$\ce{K2HPO4}$$ and then adjust the pH with $$\ce{H3PO4}$$ and $$\ce{KOH}$$ using a pH meter to get $$7.20$$ "exactly." • Ok. Thanks. I have understood. So... Are the reactions I wrote fine, @MaxW ? (Because I have to write it, too). @MaxW – Geraldine Sep 21 '18 at 11:41 • Well, you went a little overboard. All you need is $$\ce{K+H2PO4- <=> K+ + HPO4^2- + H+}$$ – MaxW Sep 21 '18 at 13:25
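As a rough supplement to the answers above (an ideal-solution approximation that ignores ionic strength and activity corrections), the Henderson-Hasselbalch equation $\mathrm{pH} = \mathrm{p}K_{a2} + \log_{10}(\ce{[HPO4^2-]}/\ce{[H2PO4^-]})$ gives the mole ratio of the two salts needed for a target pH:

```python
# Rough sketch: salt ratio for a phosphate buffer from Henderson-Hasselbalch.
def k2hpo4_to_kh2po4_ratio(target_ph, pka2=7.20):
    """Mole ratio [HPO4^2-]/[H2PO4^-] (i.e. K2HPO4 : KH2PO4) for the target pH."""
    return 10 ** (target_ph - pka2)

print(k2hpo4_to_kh2po4_ratio(7.20))  # 1.0 -> equal molarities, as MaxW's answer says
print(k2hpo4_to_kh2po4_ratio(7.40))  # ~1.6 times more K2HPO4 than KH2PO4
```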
2020-10-25 02:37:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6632326245307922, "perplexity": 1122.8635322118068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00165.warc.gz"}
https://www.maa.org/press/periodicals/convergence/pythagorean-cuts-semicircles-a-special-case
# Pythagorean Cuts – Semicircles: A Special Case Author(s): Martin Bonsangue (California State University, Fullerton) and Harris Shultz (California State University, Fullerton) Let triangle ABC be a 30-60 right triangle with hypotenuse AC and shortest side AB $= 1$ (see Figure 8, below). Let semicircles be constructed on each side of triangle ABC. Figure 8: A special case of semicircles. The Pythagorean cut is semicircle LMK, shown in red. The semicircle on AB has radius $\frac{1}{2}$ and so has area ${{\frac{1}{2}}\pi\left({{\frac{1}{2}}}\right)^2,}$ or $\frac{\pi}{8}.$  Similarly, the area of the semicircle on side BC is $\frac{3\pi}{8}$ and the area of the semicircle on hypotenuse AC is $\frac{\pi}{2}$.  Since $\frac{\pi}{8}+\frac{3\pi}{8}=\frac{\pi}{2},$ the Pythagorean relationship holds: Area of semicircle on AB + Area of semicircle on BC = Area of semicircle on AC. Now let LK be the segment of length 1 perpendicular to AC and let the semicircle be constructed having diameter LK as shown.  Since AB = LK, the semicircles on AB and on LK must have the same area of $\frac{\pi}{8}.$ Moreover, since quarter circle AKL has area one-half the area of semicircle AKC, the area of quarter circle AKL is $\frac{\pi}{4}.$  Thus, Area of region ALMK $=\frac{\pi}{4}-\frac{\pi}{8}=\frac{\pi}{8}$ = Area of semicircle on side AB, and so semicircle LMK is a Pythagorean cut for semicircles built on the 30-60 right triangle. A GeoGebra applet showing the Pythagorean cut for semicircles with their diameters along the sides of this special right triangle is shown in Figure 9, below. Figure 9. Semicircles with diameters along the sides of a 30-60 right triangle Martin Bonsangue (California State University, Fullerton) and Harris Shultz (California State University, Fullerton), "Pythagorean Cuts – Semicircles: A Special Case," Convergence (December 2015)
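A quick numerical sanity check of the areas above (not part of the original article), using AB $=1,$ BC $=\sqrt{3},$ and AC $=2$ for the 30-60 right triangle:

```python
# Sanity check of the area bookkeeping in the article (30-60 right triangle, AB = 1).
import math

def semicircle_area(diameter):
    return 0.5 * math.pi * (diameter / 2) ** 2

ab, bc, ac = 1.0, math.sqrt(3.0), 2.0

# Pythagorean relationship for semicircles: pi/8 + 3*pi/8 = pi/2
print(math.isclose(semicircle_area(ab) + semicircle_area(bc), semicircle_area(ac)))  # True

# The cut: region ALMK = (quarter circle AKL) - (semicircle on LK, where LK = AB = 1)
quarter_akl = 0.5 * semicircle_area(ac)            # pi/4
region_almk = quarter_akl - semicircle_area(1.0)   # pi/8
print(math.isclose(region_almk, semicircle_area(ab)))  # True: semicircle LMK is a Pythagorean cut
```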
2021-09-20 02:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.730569064617157, "perplexity": 2834.8025627876955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00324.warc.gz"}
http://crypto.stackexchange.com/questions/15458/is-chaocipher-a-secure-cipher-under-ciphertext-only-attack
Is Chaocipher a secure cipher under ciphertext-only attack? Chaocipher was invented by John F. Byrne in 1919. The algorithm was recently revealed – see Moshe Rubin's Chaocipher Revealed, the Algorithm (PDF). While a known plaintext attack successfully finds the keys, nobody has been able to put forward a general solution to this cipher. Is that possible? - Any cipher than suffers key-recovery under known plaintext is horrifically weak under modern requirements. Nowadays, we tend to require that an attacker cannot even recognise the ciphertext (compared to random data), even if they can choose the plaintext to encrypt and even ask for the decryption of some messages of their own choice –  figlesquidge Apr 8 at 9:40 Ciphertext-only attacks virtually always assume some a priori distribution of plaintexts; otherwise all keys are equally probable. –  Dmitry Khovratovich Apr 8 at 10:20 Thank you both form your responses. I have a follow-up for figlesquidge please. Given that the cipher text from Chaocipher shows some deviation from randomness in one respect (which it does!) how then does the cryptanalyst go about solving the cipher? He knows the algorithm, but has no idea of the two 26-character keys. –  user2256790 Apr 8 at 13:26 Because the key is relatively large [ $2\log_2(26!)>176\text{ bits}$ ], and since the cipher is not trivially bad, a ciphertext-only attack can only be carried with significant amount of ciphertext corresponding to redundant plaintext. If we consider there is 2 bit/letter of exploitable redundancy in English text, and IF the cipher was perfect, we would need about 90-letter ciphertext to have any hope of solving it. Are there large Chaocipher challenges around? (or course it is easy to make some). –  fgrieu Apr 9 at 8:24 There are some lengthy Chaocipher ciphertexts here: mountainvistasoft.com/chaocipher/Chaocipher-ASCII-versions.htm I (and several others) have broken some of these with a known plaintext attack. But nobody (to my knowledge) has developed a method to solve using a ciphertext attack. Such a method would be of great interest. –  user2256790 Apr 27 at 15:55 While a known plaintext attack successfully finds the keys, nobody has been able to put forward a general solution to this cipher. Is that possible? You really have to go back in time to learn that ChaoCipher has been subject to some cryptanalysis before the description of the algorithm/device was published. As an example: Here’s one of the oldest papers I found related to the cryptanalysis of the ChaoCipher which was first published in 2003… long before the algorithm was published in 2010 and long before the cipher was subject to a complete break. If you do a bit of research, you’ll find a few more papers. Most cryptanalytic looks at ChaoCipher have shown ample weaknesses in the distribution, producing anomalies which could be exploited. Having used the algorithm description to code a C version, I quickly discovered it works somewhat like a substitution cipher that incorporates a primitive version of an which scrambles the S-box. Nothing you would want to use in times of modern cryptography. It’s more like a most-primitive rotor machine. Anyway… getting back to the core of your question: is it possible that no one has created a general “this breaks it from every side” solution? Sure! One reason can be found in the fact that a counts as a complete break (and in this case, a practically feasible one too). 
Finding additional attack vectors can be fun as a hobby, or it may be something you want to check while writing a thesis about ChaoCipher… but it’s not that interesting for most cryptanalysts to dive deeper once a cryptographic algorithm has already been subject to a complete break. Instead, cryptanalysts will be more interested in finding weaknesses in more modern (currently used) ciphers. Compared to modern cryptography, ChaoCipher can’t hold up. On the other hand, there are ample websites that discuss its internals… leaving the chance that one day, someone may find ways to break it from all sides – after investing enough time and effort.
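To make the "substitution cipher that keeps scrambling its own table" description a bit more concrete, here is a toy sketch of that general idea. To be clear, the permutation step below is invented purely for illustration; it is not Byrne's actual Chaocipher stepping, which is described in Rubin's paper linked in the question.

```python
# Toy "dynamic substitution" sketch: NOT the real Chaocipher stepping rules.
# It only illustrates the structural idea the answer describes, namely a plain
# substitution lookup whose two alphabets are re-scrambled after every letter.
import string

def toy_dynamic_substitution(plaintext, left_key, right_key):
    left, right = list(left_key), list(right_key)  # left = ciphertext, right = plaintext alphabet
    out = []
    for ch in plaintext:
        i = right.index(ch)        # locate the plaintext letter in the right alphabet
        out.append(left[i])        # emit the letter at the same position in the left alphabet
        # Invented re-scrambling step (Byrne's actual rule is more intricate), so that
        # even a repeated plaintext letter is generally enciphered differently:
        left = left[i:] + left[:i]                    # used ciphertext letter to the front
        left[1], left[13] = left[13], left[1]
        right = right[i + 1:] + right[:i + 1]         # advance one past the used plaintext letter
        right[2], right[13] = right[13], right[2]
    return "".join(out)

rot13 = string.ascii_uppercase[13:] + string.ascii_uppercase[:13]
print(toy_dynamic_substitution("ATTACKATDAWN", rot13, string.ascii_uppercase))
```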
2014-10-30 12:36:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5410985350608826, "perplexity": 1492.885223276693}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637897717.20/warc/CC-MAIN-20141030025817-00230-ip-10-16-133-185.ec2.internal.warc.gz"}
https://msp.org/agt/2011/11-5/p03.xhtml
#### Volume 11, issue 5 (2011)

More on the anti-automorphism of the Steenrod algebra

### Vince Giambalvo and Haynes R Miller

Algebraic & Geometric Topology 11 (2011) 2579–2585

##### Abstract

The relations of Barratt and Miller are shown to include all relations among the elements ${P}^{i}\chi {P}^{n-i}$ in the mod $p$ Steenrod algebra, and a minimal set of relations is given.

##### Keywords

Steenrod algebra, anti-automorphism

Primary: 55S10
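For context (this is the standard Hopf-algebra antipode identity, not a result quoted from the paper): since the coproduct of $P^{n}$ is $\sum_{i+j=n} P^{i} \otimes P^{j}$, the canonical anti-automorphism $\chi$ always satisfies

$$\sum_{i=0}^{n} P^{i}\,\chi(P^{n-i}) = 0 \qquad (n \geq 1),$$

which is one example of a relation among the elements ${P}^{i}\chi {P}^{n-i}$ of the kind the abstract refers to.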
2019-03-23 09:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1799032837152481, "perplexity": 8653.13068767996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202781.33/warc/CC-MAIN-20190323080959-20190323102959-00018.warc.gz"}
http://amplebiz.com/au/Sharp+Constants+and+Optimizers+for+a+Class+of+Caffarelli%E2%80%93Kohn/
# Sharp Constants and Optimizers for a Class of Caffarelli–Kohn

## Title: Sharp constants and optimizers …

Moreover, in the particular classes $r=p\frac{q-1}{p-1}$ and $q=p\frac{r-1}{p-1}$, the forms of maximizers will also be provided in the spirit of Del Pino and Dolbeault ([12], [13]). In the case $a=1$, that is the Caffarelli-Kohn-Nirenberg inequality without the interpolation term, we will provide the exact maximizers for all the range of $\mu\geq0$. https://arxiv.org/abs/1510.01224

## CAFFARELLI-KOHN-NIRENBERG INEQUALITIES - arXiv

arxiv:1510.01224v1 [math.ap] 5 oct 2015 sharp constants and optimizers for a class of the caffarelli-kohn-nirenberg inequalities nguyen lam and guozhen lu https://arxiv.org/pdf/1510.01224.pdf

## Pacific Journal of Mathematics - aurak.ac.ae

using quasiconformal changes of variables to obtain sharp constants and optimizers in cases of the ... NORM CONSTANTS IN THE CAFFARELLI–KOHN ... http://aurak.ac.ae/publications/Norm-Constants-in-Cases-of-the-Caffarelli...

## Existence of extremal functions for a …

Existence of extremal functions for a family of Caffarelli-Kohn-Nirenberg inequalities. ... Sharp constants and optimizers for a class of the Caffarelli ... https://www.researchgate.net/publication/274384902_Existence_of...

## The John-Nirenberg inequality with …

Sharp constants in the classical weak form of the John--Nirenberg inequality: ... Sharp constants and optimizers for a class of the Caffarelli-Kohn ... http://www.oalib.com/paper/3844106

## On the Best Constant for a Weighted …

Nguyen Lam and Guozhen Lu, Sharp Constants and Optimizers for a Class of Caffarelli–Kohn–Nirenberg Inequalities, Advanced Nonlinear Studies, 10.1515/ans-2017-0012, 17, 3, (2017). https://londmathsoc.onlinelibrary.wiley.com/doi/abs/10.1112/jlms/s...

## Sharp weighted Trudinger–Moser and …

On the Caffarelli-Kohn-Nirenberg inequalities: sharp constants, ... Lu G. Sharp constants and optimizers for a class of the Caffarelli-Kohn-Nirenberg ... https://www.sciencedirect.com/science/article/pii/S0362546X18300580

## Sharp constants and optimizers for a …

Abstract In this paper, we will use a suitable transform to investigate the sharp constants and optimizers for the following Caffarelli-Kohn-Nirenberg ...
2018-09-21 05:34:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6613607406616211, "perplexity": 2834.966837478656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156857.16/warc/CC-MAIN-20180921053049-20180921073449-00531.warc.gz"}
https://pos.sissa.it/398/273/
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T05: Heavy Ion Physics

Study of final states in p-Au and p-Pb collisions

T. Obikhod* and I. Petrenko

Full text: pdf
Pre-published on: January 28, 2022

Abstract

In the framework of the PYTHIA 8.2 program we considered p-Pb and p-Au heavy-ion collisions at energies of 5.02 TeV and 8 TeV. This program combines several nucleon-nucleon collisions into one heavy-ion collision, based on a phenomenological treatment of a hadron as a vortex line in a colour-superconducting medium, and treats the central rapidity region consistently, with improvements of a Glauber-like model in which diffractive excitation processes are taken into account. We considered the influence of impact-parameter correlations on the particle production cross sections in p-Pb and p-Au collisions to estimate the influence of hard and soft subprocesses on basic hadronic final-state properties in proton-ion collisions. Using these characteristics, based on the semi-hard multiparton interaction model, we obtained the transverse momentum and rapidity distributions of the K meson and $\Lambda$ baryon at energies of 5.02 TeV and 8.14 TeV.

DOI: https://doi.org/10.22323/1.398.0273
2022-01-29 04:55:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5885197520256042, "perplexity": 4406.845620494694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00629.warc.gz"}
https://www.transtutors.com/questions/1-in-2011-raleigh-sold-1-000-units-at-500-each-and-earned-net-income-of-50-000-383772.htm
1. In 2011, Raleigh sold 1,000 units at $500 each, and earned net income of $50,000

1. In 2011, Raleigh sold 1,000 units at $500 each, and earned net income of $50,000. Variable expenses were $300 per unit, and fixed expenses were $150,000. The same selling price is expected for 2012. Raleigh's variable cost per unit will rise by 10% in 2012 due to increasing material costs, so they are tentatively planning to cut fixed costs by $15,000. How many units must Raleigh sell in 2012 to maintain the same income level as 2011?

2. Ramirez Corporation sells two types of computer chips. The sales mix is 30% (Q-Chip) and 70% (Q-Chip Plus). Q-Chip has variable costs per unit of $36 and a selling price of $60. Q-Chip Plus has variable costs per unit of $42 and a selling price of $78. The weighted-average unit contribution margin for Ramirez is

3. Ramirez Corporation sells two types of computer chips. The sales mix is 30% (Q-Chip) and 70% (Q-Chip Plus). Q-Chip has variable costs per unit of $36 and a selling price of $60. Q-Chip Plus has variable costs per unit of $42 and a selling price of $78. Ramirez's fixed costs are $540,000. How many units of Q-Chip would be sold at the break-even point?

4. Swanson Company has two divisions: Sporting Goods and Sports Gear. The sales mix is 65% for Sporting Goods and 35% for Sports Gear. Swanson incurs $3,330,000 in fixed costs. The contribution margin ratio for Sporting Goods is 30%, while for Sports Gear it is 50%. What will sales be for the Sporting Goods Division at the break-even point?

Wahid A

1: 794 units
2: 5063 units

Poster's rating is not 100%. Hence please don't answer this post, as this person has been irresponsible in rating...
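For reference, a sketch of one standard way to set these up with contribution-margin formulas, using only the figures in the question. Note that the "794 units" given above matches the plain break-even point for question 1 ($135,000 / $170), while reading the question as a target-income problem gives about 1,089 units:

```python
# Sketch of the standard CVP formulas applied to the figures in the question.

# Q1: units needed in 2012 to keep the 2011 income of 50,000
unit_cm_2012 = 500 - 300 * 1.10          # selling price unchanged, variable cost +10% -> 170
fixed_2012   = 150_000 - 15_000          # fixed costs cut by 15,000 -> 135,000
print((fixed_2012 + 50_000) / unit_cm_2012)   # ~1,088.2 -> about 1,089 units
print(fixed_2012 / unit_cm_2012)              # ~794 -> the plain break-even point

# Q2: weighted-average unit contribution margin for Ramirez
wacm = 0.30 * (60 - 36) + 0.70 * (78 - 42)    # ~32.40 per unit
print(wacm)

# Q3: break-even in total units, then the 30% Q-Chip share of that mix
total_be_units = 540_000 / wacm               # ~16,667 units in total
print(0.30 * total_be_units)                  # ~5,000 Q-Chip units at break-even

# Q4: weighted-average contribution margin RATIO, then Sporting Goods' share of sales
wacm_ratio     = 0.65 * 0.30 + 0.35 * 0.50    # 0.37
total_be_sales = 3_330_000 / wacm_ratio       # ~9,000,000 of total sales at break-even
print(0.65 * total_be_sales)                  # ~5,850,000 for Sporting Goods
```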
2019-11-20 15:21:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17352980375289917, "perplexity": 4315.05612461798}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00208.warc.gz"}
https://mathoverflow.net/questions/409898/hartshornes-proof-of-halphens-theorem
Hartshorne's proof of Halphen's theorem Apologies if this is not quite at the level of MathOverflow, but it has already been asked at MSE and gone unresolved for several years despite a bounty. Hartshorne states the theorem as follows: Proposition IV.6.1. A curve $$X$$ of genus $$g\geq 2$$ has a nonspecial very ample divisor $$D$$ of degree $$d$$ if and only if $$d\geq g+3$$. The necessity is shown, and then sufficiency. The idea is to show that the set $$S$$ of divisors $$D \in X^d$$ such that there exists $$D' \sim D$$ and points $$P,Q \in X$$ with $$E = D'-P-Q$$ an effective special divisor has dimension $$\leq g+2$$. Because $$d\geq g+3$$ this means there is some $$D\notin S$$ that is nonspecial and very ample of degree $$d$$. Hartshorne shows that the set of divisors of the form $$E+P+Q$$ in $$X^d$$ that are nonspecial with $$E$$ a special effective divisor has dimension $$\leq g+1$$. The part that confuses me comes afterwards. Namely, as $$E$$ is special the Riemann-Roch tells us that $$\dim |E| \geq d-1-g$$, and similarly that $$\dim |E+Q+P| = d-g$$. Because the difference between these two dimensions is at most 1, this somehow implies that the set of divisors $$S$$ as above has dimension $$\leq g+2$$. I don't understand this implication. We have divisors of the form $$E+Q+P$$, each of which gives a linear system of dimension $$d-g$$, and all of which form a set of divisors of dimension $$\leq g+1$$. How does the difference in dimension of $$E$$ and $$E+P+Q$$ tell us the dimension of $$S$$? Write $$\mathrm{Pic}^d(X)$$ for the scheme which parametrized all line bundles of degree $$d$$ on $$X$$, $$\mathrm{Div}^d(X)$$ for the scheme which parametrized all effective divisors of degree $$d$$ on $$X$$, and $$D_{\mathrm{univ}}\subset X\times \mathrm{Div}^d(X)$$ for the universal effective divisor of degree $$d$$. Then, by the universality of $$\mathrm{Pic}^d(X)$$, the line bundle $$\mathcal{O}_{X\times \mathrm{Div}^d(X)}(D_{\mathrm{univ}})$$ induces a morphism $$\varphi_d:\mathrm{Div}^d(X) \to \mathrm{Pic}^d(X).$$ This morphism can be written as $$\varphi_d(D) = \mathcal{O}_X(D)$$. Hence each fiber of $$\varphi_d$$ is linearly equivalent class. In particular, for any $$L\in \mathrm{Pic}^d(X)$$, it holds that $$\dim(\varphi_d^{-1}(L)) = \dim H^0(X,L) - 1$$. Write $$\mathrm{SpDiv}^d\subset \mathrm{Div}^d(X)$$ for the closed subscheme determied by all special effective divisors. Then, by Riemann-Roch, for any $$D\in \mathrm{SpDiv}^d$$, it holds that $$\dim(\varphi_d^{-1}(\mathcal{O}_X(D))) = 1+d-g+l(K-D)-1 \geq 1+d-g.$$ Since $$\dim(\mathrm{SpDiv}^d) = g-1$$, it holds that $$\dim(\varphi_d(\mathrm{SpDiv}^d)) \leq (g-1)-(1+d-g) = 2g-2-d.$$ Now, let us consider the following two morphisms: \begin{align*} f:X\times X\times \mathrm{Div}^{d-2}(X)\to \mathrm{Div}^d(X), (P,Q,D)\mapsto P+Q+D, \\ g:X\times X \times \mathrm{Pic}^{d-2}(X) \to \mathrm{Pic}^d(X), (P,Q,L)\mapsto L\otimes \mathcal{O}(P+Q). \end{align*} Then we obtain the following commutative diagram: $$\require{AMScd} \begin{CD} X\times X\times \mathrm{SpDiv}^{d-2} @>{\subset}>> X\times X\times \mathrm{Div}^{d-2}(X) @>{f}>> \mathrm{Div}^d(X) \\ @. @V{\psi}VV @VV{\varphi_d}V \\ @. X\times X\times \mathrm{Pic}^{d-2}(X) @>{g}>> \mathrm{Pic}^d(X), \end{CD}$$ where $$\psi := \mathrm{id}_X\times \mathrm{id}_X \times \varphi_{d-2}$$. Write $$T:= g(\psi(X\times X\times \mathrm{SpDiv}^{d-2}))\subset \mathrm{Pic}^d(X)$$. 
Then $$\dim(T) \leq 2g-2-(d-2)+2 = 2g-d+2.$$ Since $$\dim(\mathrm{Div}^d(X)) = d$$ and $$\dim(\mathrm{Pic}^d(X)) = g$$, the general fibers of $$\varphi_d$$ have dimension $$d-g$$. Hence $$\dim(\varphi_d^{-1}(T)) \leq (2g-d+2)+(d-g) = g+2$$ (where we note that if $$T \subset \varphi_d(\mathrm{SpDiv}^d)$$, then the general fibers of $$\varphi_d^{-1}(T) \to T$$ have dimension $$>d-g$$; however, in this case, since $$\varphi_d^{-1}(T)\subset \mathrm{SpDiv}^d$$, it holds that $$\dim(\varphi_d^{-1}(T)) \leq \dim(\mathrm{SpDiv}^d) = g-1 < g+2$$). Moreover, by our construction, the scheme $$\varphi_d^{-1}(T)$$ parametrizes all effective divisors $$D\subset X$$ which are linearly equivalent to $$E+P+Q$$, the sum of a special effective divisor $$E\subset X$$ and points $$P,Q\in X$$. This is the desired conclusion.
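To spell out, in one line, the counting that answers the original question: the set $$S$$ of divisors in question is exactly $$\varphi_d^{-1}(T)$$, and the general fibers of $$\varphi_d$$ have dimension $$d-g$$ (the special fibers all lie in $$\mathrm{SpDiv}^d$$, of dimension $$g-1$$), so $$\dim(S) \leq \dim(T) + (d-g) \leq (2g-d+2) + (d-g) = g+2.$$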
2023-03-30 05:29:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 69, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968019127845764, "perplexity": 80.33681316104945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00522.warc.gz"}
http://indico.cern.ch/event/164089/other-view?view=standard
# LHC Performance Workshop - Chamonix 2012 from to (Europe/Zurich) at Chamonix Support Email: acctdir@cern.ch Go to day • Monday, 6 February 2012 • 08:30 - 12:30 S01 - Lessons from 2011 Conveners: Mike Lamont (Chair), Chiara Bracco (Scientific Secretary) Material: • 08:30 Chamonix 2012 welcome 5' Speaker: Steve myers Material: • 08:35 Review of 2011 LHC run from the experiments perspective 10' The 2011 LHC run is reviewed from the experiments perspective. The achievements directly related to physics production are summarized. This includes high luminosity p-p and Pb-Pb running, special activities (such as intermediate energy p-p physics, 90 m optics, luminosity calibrations) and other experiments (for example satellite-main bunch collisions in IP2, 25 ns stable beams tests, etc.). Speaker: Massimiliano Ferro-Luzzi (CERN) Material: • 08:45 Input from Evian 20' The LHC Beam Operation workshop 2011 took place 12th - 14th December. The principle aims of the workshop were to review 2011 LHC beam commissioning and beam operations experience, and to look forward to the operation of the LHC in 2012. Issues covered include: availability; injection; operational performance; beam loss and machine protection;system performance; limitations; and the outlook for 2012. A concise summary of the workshop is presented and potential performance issues for 2012 are highlighted. Where material is covered in more depth here at Chamonix appropriate reference will be made. Speaker: Mike Lamont (CERN) Material: Paper • 09:05 2011 availability analysis 20' A critical evaluation of machine availability and performance analysis of the 2011 Run will be given. This will include both an analysis of the run in terms of the different operational modes, as well as a breakdown of the run in terms of downtime and faults. Detailed evaluation of systems with significant fault time will be addressed, and intended improvements discussed. In addition a survey of premature dump statistics will be presented, along with a summary of the implemented improvements and their effect on overall performance in the 2011 run. Speaker: Alick Macpherson (CERN) Material: • 09:25 Injection and lessons for 2012 20' Injection of 144 bunches into the LHC became fully operational during the 2011 run and a nominal injection of 288 bunches was accomplished during MD time. Several mitigation solutions were put in place to minimise losses from the transfer line (TL) collimators and losses from kicking debunched beam during injection. Nevertheless, shot-by-shot and bunch-by-bunch trajectory variations, as well as long terms drifts, were observed and required a regular resteering of the TL implying a non negligible amount of time spent for injection setup. Likely sources of instability have been identified (i.e. MKE and MSE ripples) and possible cures to optimise 2012 operation are presented. Well defined references for TL steering will be defined in a more rigorous way in order to allow a more straightforward and faster injection setup. Benefits from an improved transverse beam diagnostic in the injectors (emittance, beam distribution and tail population) are illustrated. Encountered and potential issues of the injection system, in particular the injection kickers MKI, are discussed also in view of injections with a higher number of bunches. Speaker: Chiara Bracco (CERN) Material: • 09:45 Coffee break 20' • 10:05 Machine Protection 20' The performance and experience with the LHC machine protection system during the 2011 run will be briefly summarized. 
Emphasis will be given to identify further potential improvements of the existing protection systems for the 2012 run alongside possibilities to allow for a more efficient strategy during the intensity ramp up whilst maintaining the required validation steps and dependability for the various machine protection elements. The role of machine protection during the 2011 MD periods and special runs will be summarized. Speaker: Markus Zerlauth (CERN) Material: • 10:25 Vacuum performance and lessons for 2012 20' During the LHC run 2011, a tremendous progress has been made towards the machine operation with design parameters. In the same time, the run confirmed the sensitivity of the beam vacuum system to the machine parameters. As expected, a successful scrubbing period allowed mitigating the effects of the electron cloud giving room to an entire filling of the ring with 50 ns beams. In parallel issues such as the impact of the beam screen regulation, pressures spikes and local outgassing were observed during the year. On-line mitigations and immediate compensatory measures implemented during the winter technical stop are reviewed together with their efficiencies. The expected limitations while waiting for LS1 consolidation or when running with 25 ns beams are addressed. Lessons for 2012 are discussed. Speaker: Vincent Baglin (CERN) Material: • 10:45 Emittance preservation 20' Emittance measurements during the LHC proton run 2011 indicated a blow-up of 20 % to 30 % from LHC injection to collisions. This presentation will show the emittance preservation throughout the different parts of the LHC cycle and discuss the current limitations on emittance determination. An overview of emittance preservation through the injector complex as function of bunch intensity will also be given. Possible sources for the observed blow-up and required tests in 2012 will be presented. Possible improvements of emittance diagnostics and analysis tools for 2012 will be shown. Speaker: Verena Kain (CERN) Material: • 16:30 - 17:00 Coffee break • 17:00 - 20:30 S02 - Machine Studies Conveners: Ralph Wolfgang Assmann (Chair), Giulia Papotti (Scientific Secretary) Material: • 17:00 LHC experience with different bunch spacings in 2011 (25 , 50 & 75 ns) 20' LHC operation in 2011 had a smooth start in March with 75ns beams and only one month later moved to 50ns beam, after a successful dedicated scrubbing run. Several observables, such as pressure rise, heat load in the arcs, beam instability, emittance growth and synchronous phase shift, clearly pointed to the presence of an electron cloud inside the machine during the first days of operation with 50ns beams. The gradual reduction of all these effects, and their eventual disappearance, over the days of the scrubbing run, indicated electron cloud mitigation and allowed physics production to shift to 50ns beams. Up to the end of the run the quality of the 50ns beams was increased by regular stages (first lower transverse emittances, then higher intensities) and they could provide steadily improving peak luminosities. Furthermore, 4 MD sessions with 25ns beams took place fin the period June-October, but the quality of these beams was always deteriorated by severe electron cloud effects. However, a clear improvement was noticed also with the 25ns runs. An estimation of the present state of conditioning of the machine and the required scrubbing time can be inferred from electron cloud simulations compared with measured data. 
Speaker: Giovanni Rumolo (CERN) Material: Slides • 17:30 Observations of beam-beam effects in MDs in 2011 20' We report the observations of beam-beam effects made in MD sessions in 2011. The results are compared with the expectations and possible implications for the operation in 2012 are discussed. Speaker: Werner Herr (CERN) Material: • 18:00 Beam-induced heating/bunch length/RF and lessons for 2012 20' The observations made in 2011 are first compared to expectations and the possible implications for the operation in 2012 are then discussed. Speaker: Elias Metral (CERN) Material: • 18:30 Lessons in beam diagnostics 20' This presentation will concentrate on the studies carried out on LHC beam instrumentation systems during the 2011 run, the improvements made and the outlook for 2012. It will include an update on the issues resolved since 2010, such as the performance of the BCT systems, and discuss advances in the bunch by bunch measurement capability of many systems. The conclusions will highlight what can be hoped for in terms of performance for 2012 and the issues which remain to be solved. Speaker: Rhodri Jones (CERN) Material: • 19:00 Quench margins 20' With thirteen beam induced quenches and numerous Machine Development tests the current knowledge of LHC magnets quench limits still contains a lot of unknowns. Various approaches to determine the quench limits are reviewed and results of the tests are presented. Attempt to reconstruct a coherent picture emerging from these reults is taken. The available methods of computation of the quench levels are presented together with dedicated particle shower simulations which are necessary to understand the tests. The future experiments, needed to reach better understanding of quench limits as well as limits for the machine operation are investigated. The possible strategies to set BLM thresholds are discussed. Speaker: Mariusz Gracjan Sapinski (CERN) Material: • 19:30 First demonstration with beam of the Achromatic Telescopic Squeeze (ATS) 20' The Achromatic Telescopic Squeezing (ATS) scheme is a novel squeezing mechanism, which is (almost fully) compatible with the existing hardware of the LHC, and enables both the production and the chromatic correction of very low beta*. The basic principles of the ATS scheme will be reminded together with its basic motivation which is to deliver a very ambitious beta* of 10-15 cm in view of the even more ambitious performance commitments taken by the HL-LHC project. In this context, a few dedicated beam experiments were meticulously prepared and took place at the LHC in 2011. The results obtained will be highlighted, demonstrating already the viability of the scheme. The plans for 2012 will be discussed, with a few optics considerations which could already justify the implementation of the ATS scheme in the nominal machine, depending on which beta* limits will be met first, and that the ATS can solve (e.g. optics matchability, chromatic aberrations) and obviously cannot: the aperture of the existing triplet. Speaker: Stephane Fartoukh (CERN) Material: • Tuesday, 7 February 2012 • 08:30 - 12:30 S03 - Strategy for 2012 (I) Conveners: Jorg Wenninger (Chair), Rogelio Tomas Garcia (Scientific Secretary) Material: • 08:30 Experiments expectations, plans and constraints 20' The talk discusses the input from the experiments that is relevant to define next year's program. 
It covers the target for integrated luminosity for 2012, for both p-p and Heavy Ion physics, the configuration for the Heavy Ion period (p-Pb, Pb-Pb or both) and the requests for special runs (high beta, VdM scan with un-squeezed beam, high or low pile-up runs…). The impact of LHC parameters and conditions on the experiments is also discussed, including the effect of pile-up (would experiment performance be limited next year with 50 ns?), beam energy, bunch length, vacuum and background, etc.. Proposals for optimizations will also be discussed, including the use of satellite-main collision to provide luminosity for ALICE and suggestions for reducing the overhead of ALICE and LHCb polarity reversals. Speaker: Benedetto Gorini (CERN) Material: • 09:00 Turn-around improvements 20' An efficient turnaround will be an important parameter for the integrated luminosity performance at LHC in 2012, when an operation with steady beam parameters will be achieved at the beginning early on in the run. Improvements of the operational cycle were already put successfully in place after the 2010 experience but additional improvements are possible. In this paper, the 2011 turnaround performance is reviewed and the benefits of the improvements from 2010 are presented. Possible further gains are proposed for 2012. Speaker: Stefano Redaelli (CERN) Material: • 09:30 Performance reach of the injector complex 20' At the start of the 2011 physics run quite some margins in the performance of the injectors were available and identified. Following the fast increase of the performance of the LHC itself during 2011, these margins have very much been exploited and some have even been pushed further. In view of further increase in the LHC luminosity, the 2012 performance reach of the injectors will be reviewed. One of the arising topics is satellite bunches from the injectors. Until now concerted effort went into supressing satellite bunches to a minimum, but a recent successful test with “controlled” satellites might make their routine production and characterisation an important topic in 2012. Speaker: Rende Steerenberg (CERN) Material: • 10:00 Coffee break 30' • 10:30 Running the RF at higher beam energy and intensity 20' The improvements done to the RF parameters and hardware in 2011 are reviewed. We then present the upgrades planned for 2012: Further reduction of capture losses with the longitudinal damper, batch per batch blow-up at injection and modification of the blow-up to preserve bunch profile. Operation at higher energy is readily possible with the present RF power, and does not degrade longitudinal stability thanks to the controlled longitudinal emittance growth during the ramp. For operation with higher beam current, the observations in 2011 indicate that there is no single bunch issue with up to 3E11 p per bunch. With the large gain of the RF feedback and One-Turn feedback, the cavity impedance at the fundamental will not be a limitation for ultimate intensity (1.7E11 p per bunch) with 25 ns spacing. The klystron power (300 kW RF at saturation) is sufficient for 25 ns operation with nominal intensity (2808 bunches per beam, 1.1E11 p per bunch). An RF roadmap for going beyond will be outlined: It calls for an upgrade of the LLRF only and should allow for operation with ultimate beam intensity (25 ns spacing, 2808 b, 1.7E11 p per bunch) after LS1. 
Speaker: Philippe Baudrenghien (CERN) Material: • 11:00 Transverse damper 20' Plans for the operation of the transverse damper in 2012 at bunch spacings of 50 ns and 25 ns and at increased collision energy will be reviewed. The increased energy and the experience that will be gained at 25 ns are very important to define any upgrades that may be necessary for the high luminosity operation at 7 TeV after LS1. This means that the available parameter space must be probed in 2012 which in particular includes a higher feedback gain in the ramp and with colliding beams. Limits for the feedback gain for the current system will be outlined. The potential benefits of running with higher feedback gain for a better emittance preservation will be stressed and weighed against the operational difficulties and the potential impact of noise in the damper system. A plan for re-commissiong at 50 ns and 25 ns for operation at 4 TeV will be outlined. Speaker: Wolfgang Hofle (CERN) Material: • 11:30 R2E failure rates expectations 20' 2011 very successful LHC operation has provided valuable input for the detailed analysis of radiation levels and radiation induced equipment failures. Radiation levels around LHC critical areas and the LHC tunnel were studied in detail and compared to available simulation results, as well as put in perspective to LHC operation parameters. Observed radiation induced failures were not only analyzed in detail, but already addressed through early relocation measures and patch-solution on the equipment level. Both improvements continue during this xMasBreak together with the installation of heavy shielding around the RBs and UJs in Point-1. Based on measured radiation levels, calculations for the shielding improvements and expected operational parameters this talk provides an update on the expected radiation levels around LHC critical areas. Briefly summarizing the already performed mitigation measures and equipment patches, an estimate is provided on expected equipment failure rates during 2012 operation. Required beam and measurement studies are highlighted in order to further improve the predictions of both radiation levels and expected equipment failures, the latter driving the chosen mitigation actions for LS1. Speaker: Markus Brugger (CERN) Material: • 16:30 - 17:00 Coffee break • 17:00 - 20:30 S04 - Strategy for 2012 (II) Conveners: Oliver Bruning (Chair), Laurette Ponce (Scientific Secretary) Material: • 17:00 Beam Energy 20' The limiting factors for operating the LHC at higher energies with defective 13 kA busbar joints will be reviewed. The experience gained during the 2011 run, including the quench statistics and dedicated quench propagation tests will be presented. The possible operational limits from the by-pass diode contact resistance issue will be addressed. Finally, a proposal for running at the highest possible safe energy compatible with the pre-defined risk level will be presented. Speaker: Andrzej Siemko (CERN) Material: • 17:30 Optics Options 20' The experience from the past LHC proton run has provided plenty of information and can be used to define possible scenarios for the 2012 physics run. The key parameters such as beta* and crossing angle will be reviewed assuming a 4 TeV beam energy and considering options for 25 ns and 50 ns. Possible scenarios for the high-beta optics configuration during the 2012 run will be presented. 
Speaker: Massimo Giovannozzi (CERN) Material: Slides • 18:00 Collimation settings and performance 20' Collimator settings and available aperture are key parameters for deciding the reach in beta*. Based on MDs and operational experience in 2011, a review is given of the measured aperture and the achievable margins and settings of the collimators. In particular, the tight collimation scheme is evaluated in terms of possible gains and necessary operational changes. Speaker: Roderik Bruce (CERN) Material: • 18:30 Performance Reach in the LHC for 2012 20' Based on the 2011 experience and Machine Development study results the performance reach of the of the LHC with 25 and 50 ns beams will be addressed for operation at 3.5 and 4 TeV. The possible scrubbing scenarios and potential intensity limitations resulting from vacuum, heating will be taken into account wherever possible. Speaker: Gianluigi Arduini (CERN) Material: Slides • 19:00 MD plans in 2012 20' Machine development sessions were performed in 2011 during dedicated slots of beam time. These MD studies were scheduled and planned in detail well before, reflecting the agreed priorities: further optimizing machine performance, exploring beam parameters beyond design targets, assessing machine limitations, testing new concepts and machine settings. The MD's in 2012 will build on the successful 2011 experience. The proposed priorities are discussed. A particular emphasis is put on how to best prepare future LHC running in view of the upcoming LHC shutdown in 2013/14 and the recommissioning of the LHC with double beam energy and nominal luminosity by 2014/15. Speaker: Ralph Wolfgang Assmann (CERN) Material: • 19:30 Ions in 2012 20' Review of the options for the heavy ion run in 2012: proton-lead or lead-lead or time-sharing between the two. The implications of running at special energies choice of bunch spacing and filling scheme. Performance expectations in the light of the first p-Pb test, further MD on p-Pb and the 2011 Pb-Pb run. Possibilities for future improvements and an outline of the programme between LS1 and LS3. Speaker: John Jowett (CERN) Material: • Wednesday, 8 February 2012 • 08:30 - 12:20 S05 - LS1 (I) Conveners: Frederick Bordry (Chair), Katy Foraz (CERN) Material: • 08:30 LS1 general planning and strategy for LHC, LHC injectors 20' The goal of Long Shutdown 1 is to perform the full maintenance of equipment, and the necessary consolidation and upgrade activities in order to ensure reliable LHC operation at nominal performance from mid 2014. Long Shutdown 1 concerns not only LHC but also its injectors. In order to ensure that resources will be available an analysis is in progress to detect conflict/overload and to decide what is compulsory, what we can afford and what we have to postpone to Long Shutdown 2. The strategy, time key drivers, constraints and draft schedule will be presented. Speaker: Katy Foraz (CERN) Material: • 09:00 Powering tests before LHC warm-up: What is new from Chamonix 2011? 20' At the end of 2012, the Large Hadron Collider will enter its first programmed long stop (LS1). The problem at the origin of 2008 incident will be definitely treated and the main circuits will then be able to run at the design current value without protection issues. At Chamonix 2011, a proposal was done for a series of powering tests to be performed just before the LS1 to investigate other potential limitations in the machine, which could be fixed during the same maintenance period. 
A review of these powering tests is presented, together with the list of investigation to be performed by the electrical quality assurance (ElQA) team. A tentative planning is as well proposed. Moreover, following complementary activities during the LS1, a huge campaign of individual system tests will have to be as well performed during the shutdown. Attention will be put on the preliminary list of needed re-qualifications. Speaker: Mirko Pojer (CERN) Material: • 09:30 LHC consolidation of the superconducting circuits 40' All the activities necessary to consolidate the LHC superconducting circuits are given, especially the consolidation of the main splices, exchange of weak cryomagnets, the consolidation of the DFBAs and the special interventions. For each of them, the baseline strategy will be presented, highlighting the reasons that lead to these choices and the remaining risk level. In particular, the progress of the work of the LHC splices task force, the recommendations of the second LHC splices review (November 2011) and their analysis are reported. Finally, the work planning, the organization chart and the associated resources will be detailed. Speaker: Jean-Philippe Tock (CERN) Material: Paper • 10:20 Coffee break 30' • 10:50 R2E strategy and activities during LS1 20' The level of flux of hadrons with energy in the multi MeV range expected from the collimation system at Point 7 and from the collisions at the interaction Points 1, 5 and 8 will induce Single Event Errors (SEE) of the standard electronics present in equipment located around these Points. Such events would perturb the LHC operation. As a consequence, the sensitive equipment will be shielded or relocated in safer areas. These mitigations activities will be performed mainly during the LS1. About 15 groups (including equipment owners) will be involved. Some of them will have to work in parallel in several Points. This talk summarizes the planning of these mitigations activities in each impacted points. It presents the R2E organization process, the priorities and possible bottlenecks as today identified. Speaker: Anne-Laure Perrot (CERN) Material: The last two years of LHC operation have highlighted concerns on the levels of the dynamic vacuum in the long straight sections (LSS) in presence of high intensity beams. The analysis of the existing data has shown relationship between pressures spikes and beam screen temperature oscillations or micro-sparking in the RF fingers of the bellows on one side and coincidence of pressure bumps with stimulated desorption by electron cloud, beam losses and/or thermal out gassing stimulated by HOM losses. The electron cloud mitigation solutions will be adapted to the different configurations: cold/warm transitions, non-coated surfaces in direct view of beams, photoelectrons, etc. All scenarios will be presented together with their efficiencies. Additional pumping and reengineering of components will reduce the sensitivity of the vacuum system to beam losses or HOM inducing out gassing. The expected margin at nominal intensity and energy resulting from these consolidations will be summarized. Finally, the challenges of the Experimental areas will be addressed, more specifically the status of the new Beryllium pipes (ATLAS and CMS) which are in the critical path and the consolidation of vacuum instrumentation, pumping and electron cloud mitigation. The risk corresponding to the proposed consolidations will be shown and the margins with respect to the schedule analyzed. 
Speaker: Jose Miguel Jimenez (CERN) Material: Slides • 11:50 Cryogenics system: strategy to achieve nominal performance and reliable operation 30' During the LHC operation in 2010 and 2011, the cryogenic system has achieved an availability level fulfilling the overall requirement. To reach this level, the cryogenic system has profited like many other beam-dependent systems from the reduced beam parameters. Therefore, impacts of some failures occurred during the LHC operation were mitigated by using the overcapacity margin, the existing built-in redundancy in between adjacent sector cryogenic plants and the “cannibalization” of spares on two idle cryogenic plants. These two first years of operation were also crucial to identify the weaknesses of the present cryogenic maintenance plan and new issues like SEUs. After the LS1, nominal beam parameters are expected and the mitigated measures will be less effective or not applicable at all. Consequently, a consolidation plan to improve the MTBF and the MTTR of the LHC cryogenic system is under definition. Concerning shutdown periods, the present cryogenic sectorization imposes some restrictions in the type of interventions (e.g. cryo-magnet removal) which can be done without affecting the operating conditions of the adjacent sector. This creates additional constrains and possible extra down-time in the schedule of the shutdowns including the hardware commissioning. This presentation focuses on the consolidation plan foreseen during the LS1 to improve the performance of the LHC cryogenic system in terms of availability and sectorization. Speaker: Laurent Tavian (CERN) Material: • 16:30 - 17:00 coffee break • 17:00 - 20:30 S06 - LSI (II) Conveners: Frederick Bordry (Chair), Katy Foraz (scientific secretary) • 17:00 LHC experiments upgrade and maintenance 15' All experiments plan an effective usage of the LS1 shutdown period. After three years of running they will go through a consolidation phase, mostly to fix problems which have emerged over time, like single points of failure in the infrastructure, failures of low voltage power supplies and optical links. Upgrade of some detector components will start, manly related to the beam pipe, the innermost tracker elements and the trigger system. Detector components, which had to be staged for cost reason in 2003, will then enter into the detector setup. The goal is to be fully ready for the new energy regime at nominal luminosity. Speaker: Marzio Nessi (CERN) Material: • 17:25 QPS upgrade and machine protection during LS1 15' The presentation will explain all the proposed changes and discuss the impact on other shutdown activities. The upgrade of the LHC Quench Protection System QPS during LS1 with respect to radiation to electronics will concern the re-location of equipment and installation of new radiation tolerant hardware. The midterm plan for further R2E upgrades will be addressed. The protection systems for insertion region magnets and inner triplets will be equipped with a dedicated bus-bar splice supervision including some additional modifications in order to improve the EMC immunity. The extension of the supervision capabilities of the QPS will concern the quench heater circuits, the earth voltage feelers and some tools to ease the system maintenance. The protection of the undulators will be revised in order to allow more transparent operation. The installation of snubber capacitors and arc chambers for the main quad circuits will be complete the upgrade of the energy extraction systems. 
Finally the re-commissioning of the protection systems prior to the powering tests will be addressed. Speaker: Reiner Denz (CERN) Material: • 17:50 EN-EL upgrade and consolidation 15' EN/EL will have a huge program of work during LS1. All our projects are being organised in order to concentrate efforts for LS1 on tasks possible only during shutdowns of the machines. Therefore the EL program for the shutdown will commence early (January 2012) and will be completed later (2015). EN/EL is now starting to increase engineering resources so that newcomers will have at least a minimum period of time for training and integration. The scale of this increase in resources will be limited by the reasonable capacity of EN/EL core staff to manage it. In other words, not all the requested activities will be possible and prioritisation will be necessary. Three different types of activities are planned and will be prioritized for the next long shut down: - accelerators infrastructure maintenance, - consolidation of aging elements of EN/EL infrastructure (part of a 15 years program to substantially increase the reliability and availability of the power distribution network). - user requests (EN/EL estimates that only 50% of the LS1 requests are currently known). The main activities will be the contributions to the R2E project, the BE/BI upgrade projects and the RF upgrade project in SPS (BA3). Speaker: Francois Duval (CERN) Material: • 18:10 EN-CV upgrade and consolidation 15' The EN/CV group will be heavily involved into several projects and activities during LS1 within a window frame limited to around twelve months. According to the requests received so far, the majority of projects are related to the upgrade of user’s equipment, consolidation work and construction of new plants; however a part of them is needed following the experience on the first years of LHC run or to adapt the installations to new operating parameters. The author shall focus his presentation on some of these projects, outlining the impact that they will have in operational working conditions or in the risk for breakdown. Speaker: Mauro Nonis (CERN) Material: Slides • 18:35 Access strategy in the accelerator complex and experimental areas 15' This paper shall review the main features of the new PS Personnel Protection System (PSPSS) as well as the main milestones for its deployment during the Long Shutdown of 2013-2014. Access conditions in the PS, SPS and LHC complexes during this period shall be described as well as the upgrades and improvements that are under preparation. Speaker: Rui Nunes (CERN) Material: • 19:00 RF upgrade program in LHC injectors and LHC machine 20' The main themes of the presently on-going RF upgrade program are: the LLRF-upgrade for PSB and PS, the study of a tuning-free wide-band system for PSB, the upgrade of the SPS 800 MHz amplifiers and beam controls and the upgrade of the transverse dampers of the LHC. Whilst LHC Splice Consolidation is certainly the top priority for LS1, some necessary RF consolidation and upgrade is necessary to assure the LHC performance for the next 3-year run period. This includes 1) necessary maintenance and consolidation work that could not fit the shorter technical stops during the last years, 2) the upgrade of the SPS 200 MHz system from presently 4 to 6 cavities, which constitutes the present bottle neck for LHC beam current and possibly 3) the replacement of one LHC cavity. 
On the longer term, the LHC luminosity upgrade requires crab cavities, for which some preparatory work both in SPS Coldex and LHC point 4 must be scheduled during LS1. Speaker: Erk Jensen (CERN) Material: • 19:30 What is the maximum reasonable energy? 20' In 2008 all the LHC main dipole circuits were trained to 5 TeV, two sectors to 6 TeV, and one sector was pushed up to 6.6 TeV. In the 5-6 TeV range, a few quenches were needed to retrain the LHC dipoles, and none for the quadrupoles. On the other hand, in the 6-7 TeV range a larger than expected number of quenches was observed in the main dipoles. Using this limited set of data, tentative estimates were given to guess the number of quenches needed to reach nominal energy. After three years, the only additional experimental data are the retraining of the magnets individually tested at SM18, either coming from the spares or from the 3-4 sector. After presenting this additional information, we will consider the different scenarios that can be envisaged to train the LHC main magnets after the Long Shut-down 1, the expected energy, the impact on the commissioning time and the associated risk. Speaker: Ezio Todesco (CERN) Material: • Thursday, 9 February 2012 • 08:30 - 12:30 S07 - After LS1 Conveners: Rudiger Schmidt (Chair), Mirko Pojer (Scientific Secretary) • 08:30 Performance potential of the injectors after LS1 20' The main upgrades of the injector chain in the framework of the LIU project will only be implemented in the second long shutdown (LS2), in particular the increase of the PSB energy to 2 GeV or the implementation of cures/solutions against instabilities/e-cloud effects etc. On the other hand, Linac4 will become available by the end of LS1. Its connection to the PSB can then take place either on short notice if Linac2 fails, taking 50 MeV protons in the PSB via the existing injection system but with reduced performance, or from the end of 2015 during a prolonged winter shutdown before LS2. The anticipated beam performance of the LHC injectors after LS1 in these different cases is presented. Space charge on the PS flat-bottom will remain a limitation because the PSB to PS transfer energy will stay at 1.4 GeV. Therefore new RF manipulations are presented which will improve brightness for 25 ns bunch spacing and should allow for more than nominal luminosity in the LHC. Speaker: Heiko Damerau (CERN) Material: • 08:55 Performance reach of LHC after LS1 20' Based on past experience (2010/2011), in particular expected limitations from beam-beam effects, and taking into account the expected beam quality from the LHC injectors, the peak and integrated luminosities at top energy are discussed for different scenarios (e.g. bunch spacing, beta*). In particular it will be shown which are the key parameters to reach the nominal luminosity and whether it is possible to exceed the nominal luminosity. Possible test in 2012 are discussed. Speaker: Werner Herr (CERN) Material: • 09:20 Magnet powering with zero downtime - a dream? 20' Despite a number of improvements already applied in the course of the year, the magnet powering system of the LHC still accounts for around 50% of the premature beam dumps. This number might even further increase when moving to higher beam energies in the next years. 
With mitigations of radiation effects and the prospects for beam-induced magnet quenches being discussed elsewhere, we aim at identifying possible mid- and long-term improvements within the various equipment systems to further reduce the number of equipment failures leading to a loss of the particle beams. Amongst others, this includes the sensitivity of equipment to external causes such as electromagnetic perturbations or perturbations on the electrical network. To conclude, the gain of the identified mitigations will have to be balanced against the potential impact on schedule and cost. Speaker: Markus Zerlauth (CERN) Material: • 09:45 Beam systems without failures – what can be done? 20' The beam dumps triggered by interlocks not related to the magnet powering are discussed. This concerns systems like the RF, the transverse feedbacks, beam instrumentation, the beam dumping system, collimators and control systems. An analysis of the reasons for these dumps is presented together with a possible strategy to mitigate the effect of these failures. Speakers: Matteo Solfaroli Camillocci (CERN), Jan Uythoven (CERN) Material: • 10:10 Coffee break 30' • 10:40 Will we still see SEEs? 20' The actions during the first years of the R2E Mitigation Project allowed a drastic reduction of the rate of Single Event Errors on radiation-sensitive electronic equipment installed in the LHC underground areas. Shielding and relocation activities during LS1 will allow the resolution of the present issues concerning the UJs of P1, 5 and 7 as well as the P8 cavern. The parallel development of radiation-tolerant power converters will address the remaining concerns in the RRs. Radiation levels in areas where luminosity is the source are under control. The remaining open questions are related to the evolution of the beam-gas source term in the arc and in the dispersion suppressor and to the evolution of losses at the betatron and momentum insertion regions. 2012 operation will allow addressing these points, which will be used for a complete forecast of radiation levels and projected failures after the resumption of operation in 2014/15. Speaker: Marco Calviani (CERN) Material: • 11:05 UFOs – will they take over? 20' UFOs are one of the major unexpected performance limitations of the LHC. With large-scale increases of the BLM thresholds, their impact on LHC availability could be mitigated in the second half of 2011. For higher beam energy and lower magnet quench limits, the problem is expected to be considerably worse, though. Therefore, in 2011, the diagnostics for UFO events were significantly improved, dedicated experiments and measurements in the LHC and in the laboratory were made, and complemented by FLUKA simulations and theoretical studies. In this talk, the state of knowledge is summarized and extrapolations for LHC operation after LS1 are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified. Speaker: Tobias Baer (CERN/Hamburg University (DE)) Material: • 11:30 Quenches: will there be any? 20' Quenches in superconducting circuits are part of the normal operation of the LHC, and can never be excluded. Operation at 3.5 TeV resulted in only a small number of quenches in 2011, especially because most magnets were operated below 20% of the critical current, resulting in a temperature margin of about 5 K.
When operating at about 6.5-7 TeV beam energy, the temperature margin will only be about 1.5 K and the magnets will be much more sensitive to beam losses, while at the same time the beam intensity will be higher. In this talk I will discuss the quench sensitivity of the magnets due to beam losses. I will also present the probability of increased quenching of the superconducting circuits due to other effects such as higher currents and ramp rates, increased EM coupling, and in some cases reduced QPS thresholds. Speaker: Arjan Verweij (CERN) Material: • 15:30 - 16:00 Coffee break • 16:00 - 17:00 Special Seminar - Mikhail Lomonosov 1h0' Speaker: Vladimir Shiltsev (Fermilab) Material: • 17:00 - 20:00 S08 -LHC-related projects and studies part (I) Conveners: Roland Garoby (Chair), Laurette Ponce (Scientific Secretary) Material: • 17:00 Will ALICE be running during the HL-LHC era? 10' We will present the perspectives for ion running in the HL-LHC era. In particular, ALICE is preparing a significant upgrade of its rate capabilities and further extending its particle identification potential. This paves the way for heavy ion physics at unprecedented luminosities, which are expected in the HL-LHC era with the heaviest ions. The potential interest of data-taking during high luminosity proton runs for ATLAS and CMS will also be commented. Speaker: Johannes Wessels (Westfaelische Wilhelms-Universitaet Muenster (DE)) Material: • 17:15 Will LHCb be running during the HL-LHC era? 10' The LHCb collaboration submitted in March 2011 a 'Letter of Intent' for upgrading the detector by 2018, in order to take data at a luminosity of 1-2x10^33/cm^2/s after LS2 with a detector read out at 40 MHz. A more flexible software-based triggering strategy will allow to increase trigger efficiencies especially in decays to hadronic final states. LHCb intends to take data for another 10 years after LS2 in parallel to ATLAS and CMS, aiming at a integrated luminosity of at least 50 fb-1. We plan to take data with 25ns bunch spacing and to use luminosity leveling, as done at present. After a short physics motivation the talk will review the desired running conditions of LHCb and other machine related requirements. Speaker: Burkhard Schmidt (CERN) Material: • 17:30 HL-LHC operation with protons and ions 20' The presentation discusses potential variations of the beam and optics parameters that are compatible with the HL-LHC design goals and attempts to discuss the parameter margins required for translating the HL-LHC performance parameters to specifications for the LHC injector complex. The variations of the parameter sets attempt to incorporate the experience from the LHC operation and MD studies in 2010 and 2011. Particular attention will be given to a comparison of the 25ns and 50ns bunch spacing scenarios for the proton beam operation. Preliminary information will be given about the modes of operation and beam parameters for ions. The impact of operation for ions during the HL-LHC exploitation period will be outlined. Speaker: Oliver Bruning (CERN) Material: Paper Slides • 18:00 Can the proton injectors meet the HL-LHC requirements after LS2? 20' The LIU project has as mandate the upgrade of the LHC injector chain to match the requirements of HL-LHC. The present planning assumes that the upgrade work will be completed in LS2, for commissioning in the following operational year. The known limitations in the different injectors are described, together with the various upgrades planned to improve the performance. 
The expected performance reach after the upgrade with 25 and 50 ns beams is examined. The project planning is discussed in view of the present LS1 and LS2 planning. The main unresolved questions and associated decision points are presented, and the key issues to be addressed by the end of 2012 are detailed in the context of the machine development programs and hardware construction activities. Speaker: Brennan Goddard (CERN) Material: • 18:30 Necessary LIU studies in the injectors during 2012 20' A significant fraction of the Machine Development (MD) time in the LHC injectors in 2011 was devoted to the study of the intensity limitations in the injectors (e.g. space charge effects in PS and SPS, electron cloud effects in the PS and SPS, single bunch and multi-bunch instabilities in PS and SPS, emittance preservation across the injector chain, etc.). The main results achieved in 2011 will be presented as well as the questions that still remain unresolved and are of relevance for the LIU project. 2012 MDs will also continue exploring the potential of scenarios that might become operational in the future, like the development of a low gamma transition optics in the SPS or alternative production schemes for the LHC beams in the PS. A tentative prioritized list of studies and their requirements in terms of machine time, resources and instrumentations will be provided. Speaker: Giovanni Rumolo (CERN) Material: • 19:00 SPS: scrubbing or coating?- 20' The operation of the SPS with high intensity bunched beams is limited by the electron cloud building-up in both the arcs and long straight sections. Two consolidation options have been considered: mitigation of the electron cloud using coatings or relying, as before, on the scrubbing runs. A status report on both options will be given with a particular emphasis on measurements plans for 2012 and pending issues. The testing needs, corresponding beam parameters and MD time in 2012 will be addressed. The criteria for the decision making and the corresponding schedule will be discussed. Speaker: Jose Miguel Jimenez (CERN) Material: Slides • 19:30 Plans for ions in the injector complex 20' The heavy ion beams required during the HL-LHC era will imply significant modifications to the existing injector chain. We review the various options, highlighting the importance of an early definition of the future needs and keeping in mind the compatibility with the rest of the future CERN physics programme. Speaker: Django Manglunki (CERN) Material: • Friday, 10 February 2012 • 08:30 - 12:30 S09 -LHC related projects and studies – Part(II) Conveners: Lucio Rossi (Chair), Riccardo De Maria (Scientific secretary) Material: • 08:30 Beam current limit for HL-LHC 20' The HL-LHC upgrades require beam currents, that are significantly increased beyond the original specifications of the LHC. The talk will explore expected limitations in hardware systems (including RF power, instrumentation, vacuum), the impact of increased beam losses on quench statistics, risks due to beam current related damage, beam dynamics issues, loss-induced activation and single event upsets Speaker: Ralph Wolfgang Assmann (CERN) Material: • 08:55 Do we really need an upgrade of the collimation system for HL-LHC? 20' Several improvements are foreseen for the collimation system during the LS1. The talk will discuss how the new performance level compares with the HL-LHC needs and if further improvements will be needed to be compatible with the high luminosity operations expected for the HL-LHC upgrade . 
Speaker: Stefano Redaelli (CERN) Material: • 09:20 New Magnets for the IR: how far are we from HL-LHC target? 20' The HL-LHC will benefit from new magnet technology (large aperture, Nb3Sn conductors) already under advanced R&D programs. The talk will review the status and roadmap of the new magnets for the high luminosity interaction regions. Speaker: Gianluca Sabbi (LBNL) Material: • 09:45 Crab Cavities: from virtual reality to real reality? 20' The talk will review the status and roadmap for the crab-cavity system that is foreseen for the HL-LHC upgrade project for luminosity enhancement and leveling. Speaker: Rama Calaga (CERN) Material: • 10:10 coffee break 30' • 10:40 LHeC and HE-LHC: Accelerator layout and challenges 20' This talk will present a concise description of the layout of the two machines, together with the main accelerator-physics and technology challenges, detail the required LHC modifications, and describe the global schedules with decision points. Speaker: Frank Zimmermann (CERN) Material: Slides • 11:05 Magnet R&D for LHeC and HE-LHC: synergy and competition? 20' The new projects heavily rely on new magnet systems. The talk will illustrate the magnet R&D program for the LHeC (accelerator, transfer lines, kickers, interaction regions) and the HE-LHC (main ring, SPS+, transfer lines, kickers). In particular, synergies or parallel roadmaps will be highlighted together with the first steps foreseen for 2012. Speaker: Luca Bottura (CERN) Material: • 11:30 SC Cavities R&D for LHeC and HE-LHC 20' The new machines HE-LHC and LHeC (whichever of the linac-ring and ring-ring options is favored) rely on new RF systems. The talk will analyze the synergies or competition between the R&D strategies. The first steps foreseen for 2012 will be highlighted. Speaker: Erk Jensen (CERN) Material: • 12:30 - 14:00 Lunch@Les Aiglons • 14:00 - 16:00 S10 - Summary session Conveners: Steve Myers (Chair), Frank Zimmermann (Scientific Secretary) Material:
2014-04-24 17:25:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35869091749191284, "perplexity": 4247.053136385288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
https://indico.hiskp.uni-bonn.de/event/40/contributions/609/
The 39th International Symposium on Lattice Field Theory (Lattice 2022) Aug 8 – 13, 2022 Hörsaalzentrum Poppelsdorf Europe/Berlin timezone Adjoint fermions at large-$N_c$ on the lattice Aug 10, 2022, 3:00 PM 20m CP1-HSZ/1st-1.001 - HS5 (CP1-HSZ) Oral Presentation Theoretical Developments and Applications beyond Particle Physics Speaker Pietro Butti (IFT UAM-CSIC Madrid) Description Lattice simulations of Yang-Mills theories coupled with $N_f$ flavours of fermions in the adjoint representation provide a way to probe the non-perturbative regime of a plethora of different physical scenarios, ranging from supersymmetric Yang-Mills theory to BSM models. Although the large-$N_c$ limit of these theories can give important insight into their strongly coupled regime, the computational cost of standard lattice simulations involving dynamical adjoint fermions restricts one to small-$N_c$ gauge groups. In this talk I am going to present how this large-$N_c$ limit is tackled on the lattice by exploiting volume reduction through twisted boundary conditions, which allows one to simulate these theories at values of $N_c$ as high as 289 or 361. I will emphasise our most recent results on Yang-Mills theory coupled with one Majorana adjoint fermion ($N_f=\frac{1}{2}$), which corresponds to $\mathcal{N}=1$ SUSY Yang-Mills. Primary authors Antonio Gonzalez-Arroyo (IFT, UAM) Margarita García Pérez (IFT UAM-CSIC, CSIC) Pietro Butti (IFT UAM-CSIC Madrid) Ken-Ichi Ishikawa (Hiroshima University) Masanori Okawa (Hiroshima University)
2023-02-02 01:27:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7670410871505737, "perplexity": 4288.918432119495}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00449.warc.gz"}
https://www.rieselprime.de/ziki/Absolute_value
# Absolute value

The absolute value of a real number is defined as:

$$|x| = x \quad \text{when } x\geq 0$$
$$|x| = -x \quad \text{when } x < 0$$

The absolute value of a complex number $z = x + iy$ is defined as:

$$|z| = \sqrt{x^2+y^2}$$

where we take the positive value of the square root. Note that when $y = 0$, both definitions are equivalent. Geometrically, the absolute value of a number represents its distance from the origin.
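As a quick illustration of the two definitions above (this snippet is my own sketch, not part of the wiki entry), here is a minimal Python example computing the absolute value of a real number and the modulus of a complex number:

```python
import math

def absolute_value_real(x: float) -> float:
    # |x| = x when x >= 0, and -x when x < 0
    return x if x >= 0 else -x

def absolute_value_complex(x: float, y: float) -> float:
    # |z| = sqrt(x^2 + y^2) for z = x + iy, taking the positive square root
    return math.sqrt(x**2 + y**2)

print(absolute_value_real(-3.5))          # 3.5
print(absolute_value_complex(3.0, 4.0))   # 5.0, the distance of 3 + 4i from the origin
print(absolute_value_complex(-2.0, 0.0))  # 2.0, agrees with the real definition when y = 0
```

The last call shows the consistency noted above: for $y = 0$ the complex modulus reduces to the real absolute value.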
2021-02-26 04:41:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839803338050842, "perplexity": 106.18272673997429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356140.5/warc/CC-MAIN-20210226030728-20210226060728-00287.warc.gz"}
https://scoop.eduncle.com/q19-two-rockets-approach-a-planet-from-opposite-directions-with-speeds-with-respect-to-the-planet-the
IIT JAM, May 5, 2020

Q19. Two rockets approach a planet from opposite directions with speeds … with respect to the planet. The proper length of each rocket is L. How long does each rocket appear to the other rocket?

• Somnath: This is a problem on the relativistic velocity-addition formula. A detailed discussion is attached. You will be able to solve not only this problem but also similar kinds of problems if ...
• Ruby negi: Option (b) is correct; each rocket sees the other rocket's length as 3/5 of its proper length.
• Dhairya sharma: Solution is attached. You have to find the velocity of one rocket with respect to the other rocket using the velocity transformation formula... I also find the length of the rocket in the pla...
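A hedged worked example (the numerical speed is cut off in the problem text above; here I assume each rocket moves at $v = c/2$ relative to the planet, which is the value consistent with the $3/5$ factor quoted in the replies). The relative speed of the rockets follows from relativistic velocity addition, and the apparent length from Lorentz contraction:

$$w = \frac{v + v}{1 + v^2/c^2} = \frac{c}{1 + \tfrac{1}{4}} = \frac{4c}{5}, \qquad L' = L\sqrt{1 - \frac{w^2}{c^2}} = L\sqrt{1 - \tfrac{16}{25}} = \frac{3}{5}\,L.$$

Under this assumption each rocket measures the other to be $3L/5$ long, matching the answer given in the replies.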
2022-06-25 11:53:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8369432091712952, "perplexity": 1925.0972057170616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00367.warc.gz"}
https://www.mathcounterexamples.net/the-set-of-all-commutators-in-a-group-need-not-be-a-subgroup/
The set of all commutators in a group need not be a subgroup I here provide a simple example of a group whose set of commutators is not a subgroup. The counterexample is due to P.J. Cassidy (1979). Description of the group $$G$$ Let $$k[x,y]$$ denote the ring of all polynomials in two variables over a field $$k$$, and let $$k[x]$$ and $$k[y]$$ denote the subrings of all polynomials in $$x$$ and in $$y$$ respectively. $$G$$ is the set of all upper unitriangular matrices of the form $A=\left(\begin{array}{ccc} 1 & f(x) & h(x,y) \\ 0 & 1 & g(y) \\ 0 & 0 & 1 \end{array}\right)$ where $$f(x) \in k[x]$$, $$g(y) \in k[y]$$, and $$h(x,y) \in k[x,y]$$. The matrix $$A$$ will also be denoted $$(f,g,h)$$. Let’s verify that $$G$$ is a group. The products of two elements $$(f,g,h)$$ and $$(f^\prime,g^\prime,h^\prime)$$ is $\left(\begin{array}{ccc} 1 & f(x) & h(x,y) \\ 0 & 1 & g(y) \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} 1 & f^\prime(x) & h^\prime(x,y) \\ 0 & 1 & g^\prime(y) \\ 0 & 0 & 1 \end{array}\right)$ $=\left(\begin{array}{ccc} 1 & f(x)+f^\prime(x) & h(x,y)+h^\prime(x,y)+f(x)g^\prime(y) \\ 0 & 1 & g(y)+g^\prime(y) \\ 0 & 0 & 1 \end{array}\right)$ which is an element of $$G$$. We also have: $\left(\begin{array}{ccc} 1 & f(x) & h(x,y) \\ 0 & 1 & g(y) \\ 0 & 0 & 1 \end{array}\right)^{-1} = \left(\begin{array}{ccc} 1 & -f(x) & f(x)g(y) – h(x,y) \\ 0 & 1 & -g(y) \\ 0 & 0 & 1 \end{array}\right)$ proving that the inverse of an element of $$G$$ is also an element of $$G$$. The commutator subgroup of $$G$$ is the set of elements $$(0,0,h)$$ where $$h \in k[x,y]$$ Let’s remind that the commutator subgroup of a group is the subgroup generated by all the commutators of the group. For elements $$a$$ and $$b$$ of a group, the commutator of $$a$$ and $$b$$ is $$[a,b]=a^{-1}b^{-1}ab$$. The commutator subgroup of $$G$$, also called the derived subgroup is denoted $$G^\prime$$. One can verify that the commutator of $$(f,g,h), \ (f^\prime,g^\prime,h^\prime) \in G$$ is $[(f,g,h),(f^\prime,g^\prime,h^\prime)]= \left(\begin{array}{ccc} 1 & 0 & f(x)g^\prime(y)-f^\prime(x)g(y) \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$ which is an element $$(0,0,h(x,y))$$ with $$h(x,y)=f(x)g^\prime(y)-f^\prime(x)g(y) \in k[x,y]$$. Conversely, we have to prove that for $$\displaystyle h(x,y) \in k[x,y]$$, $$(0,0,h(x,y))$$ belongs to $$G^\prime$$. When $$h(x,y)=x^ky^l$$ is a monomial, we derive from above the equality $$(0,0,x^ky^l)=[(x^k,0,0)(0,y^l,0)]$$ which proves that $$(0,0,x^ky^l)$$ is a commutator. The following equality $\left(\begin{array}{ccc} 1 & 0 & a \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & b \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)= \left(\begin{array}{ccc} 1 & 0 & a+b \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$ holds. Hence we get $(0,0,h)=\prod_{i,j}[(a_{ij}x^i,0,0),(0,y^j,0)]$ for any $$\displaystyle h(x,y)=\sum_{i,j} a_{ij}x^iy^j \in k[x,y]$$. 
We can finally conclude $G^\prime=\{(0,0,h(x,y)) | \ h(x,y) \in k[x,y]\}$

$$(0,0,x^2+xy+y^2)$$ is not a commutator

If $$(0,0,x^2+xy+y^2)$$ is a commutator, then there are polynomials $$f(x),f^\prime(x) \in k[x]$$ and $$g(y),g^\prime(y) \in k[y]$$ with $x^2+xy+y^2=f(x)g^\prime(y)-f^\prime(x)g(y)$ If $$f(x) = \sum b_i x^i$$ and $$f^\prime(x) = \sum b_i^\prime x^i$$, then there are equations: $\left\{\begin{array}{lll} b_0 g^\prime(y) - b_0^\prime g(y) & = & y^2 \\ b_1 g^\prime(y) - b_1^\prime g(y) & = & y \\ b_2 g^\prime(y) - b_2^\prime g(y) & = & 1 \end{array}\right.$ which cannot hold, as this would imply that the two polynomials $$g(y),g^\prime(y)$$ generate the three linearly independent polynomials $$\{1,y,y^2\}$$. Based on this result, we derive that the product of the two commutators $$[(x^2,-y,0),(x,1,0)]=(0,0,x^2+xy)$$ and $$[(1,0,0),(0,y^2,0)]=(0,0,y^2)$$, which is equal to $$(0,0,x^2+xy+y^2)$$, is not a commutator.

Case of finite groups

The group $$G$$ is infinite. It is also possible to find a finite group whose set of commutators is not a subgroup. I might come back to this topic later on. The smallest group having this feature is of order $$96$$.
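As an illustrative sanity check (my own sketch, not part of the original article), the commutator formula used above can be verified with SymPy, reproducing $[(x^2,-y,0),(x,1,0)]=(0,0,x^2+xy)$:

```python
import sympy as sp

x, y = sp.symbols('x y')

def M(f, g, h):
    # The upper unitriangular matrix denoted (f, g, h) in the article.
    return sp.Matrix([[1, f, h], [0, 1, g], [0, 0, 1]])

A = M(x**2, -y, 0)   # (x^2, -y, 0)
B = M(x, 1, 0)       # (x, 1, 0)

# Commutator [A, B] = A^-1 B^-1 A B, expanded entrywise.
C = (A.inv() * B.inv() * A * B).applyfunc(sp.expand)

# The formula predicts (0, 0, f*g' - f'*g) = (0, 0, x^2*1 - x*(-y)) = (0, 0, x^2 + x*y).
assert C[0, 1] == 0 and C[1, 2] == 0
assert sp.simplify(C[0, 2] - (x**2 + x*y)) == 0
print(C[0, 2])   # x**2 + x*y
```

The same check, run with other polynomial entries, gives a quick way to experiment with which elements of $G^\prime$ arise as single commutators.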
2020-10-01 17:23:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612155556678772, "perplexity": 54.58737768421989}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00469.warc.gz"}
http://www.math.utk.edu/colloquium/past/11_18_19.html
### Seminars and Colloquiums for the week of November 18, 2019 SPEAKERS Monday Louis Gross, UTK Larry Rolen, Vanderbilt University Tuesday Jerzy Dydak, UTK Wednesday Shuler Hopkins, UTK Tricia Phillips, UTK Thursday Timothy Robertson, UTK Robin Baidya, UTK Ephy Love, UTK Julian Scheuer, Columbia University Tea Time - 3:00 pm – 3:30 pm Monday, Tuesday, & Wednesday Room: Ayres 401 Hosted by: Delong Li Topics:    How to write a cover letter; What to include/not include on a resume or CV for an academic/industry position; Weekly check-in Monday, Nov. 11 MATH BIOLOGY Title:  More on Complexity Theory Speaker: Louis Gross Time: 10:10-11 Room: Claxton 105 ALGEBRA SEMINAR TITLE: Periodicities for Taylor coefficients of half-integral weight modular forms SPEAKER: Larry Rolen, Vanderbilt University TIME: 3:35 PM ROOM: Ayres 114 ABSTRACT: Congruences of Fourier coefficients of modular forms have long been an object of central study. By comparison, the arithmetic of other expansions of modular forms, in particular Taylor expansions around points in the upper-half plane, has been much less studied. Recently, Romik made a conjecture about the periodicity of coefficients around $\tau=i$ of the classical Jacobi theta function. Here, in joint work with Michael Mertens and Pavel Guerzhoy, we prove this conjecture and generalize the phenomenon observed by Romik to a general class of modular forms of half-integral weight. Tuesday, 11/19 TOPOLOGY/GEOMETRY SEMINAR TITLE:  Visual boundary of geodesic spaces SPEAKER: Jerzy Dydak, UTK TIME: 11:10-12:25 PM ROOM: Ayres 114 ABSTRACT: Visual boundary of CAT(0) spaces is usually defined as the space of geodesic rays with the cone topology. I will define the visual boundary of a larger class of proper geodesic spaces. It consists of equivalence classes of sequences $x_n$ diverging to infinity such that the geodesics $[p,x_n]$ converge point-wise to a geodesic ray. Since we do not want dependence on the base-point $p$, the natural axiom (which can be verified for CAT(0) spaces) is that convergence of geodesics $[p,x_n]$ implies convergence of $[q,x_n]$ for any $q$. The natural topology on such defined boundary is that of point-wise convergence. It turns out the boundary is compact metrizable and $X$ union the boundary is a compactification of $X$. Wednesday, Nov. 20 ANALYSIS SEMINAR TITLE: Murray-von Neumann dimension and Jones towers of factors. SPEAKER: Shuler Hopkins, UTK TIME: 2:30 -3:20pm ROOM: Ayres 113 ABSTRACT: In this talk, we will introduce the concept of the dimension of a Hilbert space over a von Neumann algebra in order to define the Jones index of an inclusion of finite factors $N\subset M$ (an invariant for the 'position' of N in M). We use this inclusion to perform Jones's "basic construction" to obtain a new factor containing M. Iterating this procedure yields a tower of factors satisfying remarkable properties. In Vaughn Jones's seminal paper "Index for Subfactors" these properties are used to prove the surprising result that not every real number >1 appears as a Jones Index. COMPUTATIONAL and APPLIED MATHEMATICS (CAM) SEMINAR TITLE: Modeling in Mathematical Biology SPEAKER: Speaker: Tricia Phillips, UTK TIME: 3:35 PM ROOM: Ayres 112 ABSTRACT: I will give an overview of modeling in mathematical biology and discuss specific applications. Thursday, Nov. 21 DIFFERENTIAL EQUATIONS TITLE: On Masuda's uniqueness theorem for the Navier-Stokes Equations. 
SPEAKER: Timothy Robertson, UTK TIME: 2:10- 3pm ABSTRACT:  In 1933 Leray famously proved the existence of weak solutions of the Navier-Stokes equations with $L^{2}$ initial data. However, the uniqueness of these solutions remained an open question. Here we present Masuda's proof of weak-strong uniqueness in the critical case in dimension three, and an ancillary result of Kozono and Sohr. ALGEBRA SEMINAR TITLE: Cancellation of finite-dimensional Noetherian modules SPEAKER: Dr. Robin Baidya, UTK TIME: 3:30 – 4:30pm ROOM: Ayres 113 ABSTRACT: The Module Cancellation Problem asks when isomorphic direct summands of a module have isomorphic complements.  In other words, if K, L, and M are modules over a ring S such that the direct sum of K and L is isomorphic to the direct sum of K and M, the question is when L is isomorphic to M.  In a forthcoming paper, we prove that cancellation holds if S is commutative; K and M are Noetherian; K has finite dimension; and, after localizing at any prime ideal p in the support of K, the module M admits a direct-sum decomposition in which the number of times K appears exceeds the dimension of S/p.  Our finding yields examples inaccessible by cancellation theorems of Bass, De Stefani-Polstra-Yao, Evans, and Warfield:  In the first two cases, K is required to be projective, whereas we do not impose such a condition; in the last two cases, there are constraints on the stable rank of the endomorphism ring of K over S that we have been able to obviate.  In this talk, we will present three concrete examples that satisfy the hypotheses in our cancellation theorem but fail to meet the criteria of the other theorems we have mentioned. MATHEMATICAL DATA SCIENCE SEMINAR TITLE: A Review of Contemporary Topological Analyses of Machine Learning SPEAKER: Ephy Love, UTK TIME: 2:10-3:25PM ROOM: Ayres 111 ABSTRACT: We will review four recent papers on the application of topological data analysis (TDA) to machine learning. There is tremendous interest in developing better explanatory tools for highly complex and non-linear machine learning methods. TDA is a promising toolbox for this line of work, both in post hoc interpretation and in interpretable feature construction. Carlson et al.'s paper "Topological Approaches to Deep Learning" was published less than a year ago and already has 8 citations on Google Scholar. We will examine basic model constructions, analyses of datasets, presentations of results and promising research avenues discussed in four papers published in the last year. Papers to be Covered: 1. Ayasdi. (2018). Topological Data Analysis Based Approaches to Deep Learning. 2. Brüel-Gabrielsson, R., Nelson, B. J., Dwaraknath, A., Skraba, P., Guibas, L. J., & Carlsson, G. (2019). A Topology Layer for Machine Learning. ArXiv:1905.12200 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1905.12200 3. Carlsson, G., & Gabrielsson, R. B. (2018). Topological Approaches to Deep Learning. ArXiv:1811.01122 [Cs, Math, Stat]. Retrieved from http://arxiv.org/abs/1811.01122 4. Garin, A., & Tauzin, G. (2019). A Topological “Reading” Lesson: Classification of MNIST using TDA. ArXiv:1910.08345 [Cs, Math, Stat]. 
Retrieved from http://arxiv.org/abs/1910.08345 GEOMETRIC ANALYSIS SEMINAR TITLE: Isoperimetric problems in Lorentzian manifolds SPEAKER: Julian Scheuer (Columbia University) TIME: 4:00 – 5:00pm ROOM: Ayres 113 ABSTRACT: The classical isoperimetric and Minkowski inequalities in the Euclidean space relate the enclosed volume, the surface area and the total mean curvature of certain hypersurfaces. In this talk we present a curvature flow approach to prove properly defined analogues in certain classes of Lorentzian manifolds. If you are interested in giving or arranging a talk for one of our seminars or colloquiums, please review our calendar. If you have questions, or a date you would like to confirm, please contact Dr. Christopher Strickland, cstric12@utk.edu
2021-04-14 20:36:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7028059959411621, "perplexity": 2838.853855753486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078021.18/warc/CC-MAIN-20210414185709-20210414215709-00183.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Quadratic_residue&printable=yes
# Quadratic residue

modulo $m$

An integer $a$ for which the congruence $$x^2\equiv a\pmod m$$ is solvable. If the above congruence is unsolvable, then $a$ is called a quadratic non-residue modulo $m$. Euler's criterion: Let $p>2$ be prime. Then an integer $a$ coprime with $p$ is a quadratic residue modulo $p$ if and only if $$a^{(p-1)/2}\equiv1\pmod p,$$ and is a quadratic non-residue modulo $p$ if and only if $$a^{(p-1)/2}\equiv-1\pmod p.$$

#### References

[1] I.M. Vinogradov, "Elements of number theory", Dover, reprint (1954) (Translated from Russian)

#### Comments

An amusing unsolved problem is the following: Let $p$ be a prime with $p\equiv3$ ($\bmod\,4$). Let $N$ be the sum of all quadratic non-residues between 0 and $p$, and $Q$ the sum of all quadratic residues. It is known that $N>Q$. Give an elementary proof.

#### References

[a1] G.H. Hardy, E.M. Wright, "An introduction to the theory of numbers", Oxford Univ. Press (1979) pp. Chapt. XIII

How to Cite This Entry: Quadratic residue. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Quadratic_residue&oldid=33286 This article was adapted from an original article by S.A. Stepanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
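As a small numerical illustration of Euler's criterion (my own sketch, not part of the encyclopedia entry), the criterion can be checked with Python's built-in modular exponentiation and compared against brute-force squaring:

```python
def is_quadratic_residue(a: int, p: int) -> bool:
    # Euler's criterion; assumes p is an odd prime and gcd(a, p) == 1.
    return pow(a, (p - 1) // 2, p) == 1

p = 11
by_squaring = sorted({(x * x) % p for x in range(1, p)})            # residues by brute force
by_euler = [a for a in range(1, p) if is_quadratic_residue(a, p)]   # residues by Euler's criterion

print(by_squaring)  # [1, 3, 4, 5, 9]
print(by_euler)     # [1, 3, 4, 5, 9]
```

Both computations list the same residues modulo 11, as the criterion guarantees.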
2021-09-20 17:23:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338303208351135, "perplexity": 556.941690163922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00525.warc.gz"}
https://stats.stackexchange.com/questions/108417/how-to-perform-proper-data-mining-on-time-series-data
# How to perform proper data mining on time-series data? I have some daily data from city A, B, C. Values from city A are highly correlated with values from other cities for lag -1,-2,-3 and -4. I want to use Random Forest, SVM and ANN to predict values for city A. My idea is: 1. Split data into training and testing set. 2. Use the formula: valueA ~ valueB-1 + valueB-2 + valueB-3 + valueB-4 + valueC-1 + valueC-2 + valueC-3 + valueC-4 3. Try different methods (Random Forest, SVM and ANN) on training set and use createTimeSlices to cross-validation for model training and parameter tuning, like in this example - https://stackoverflow.com/a/22338029/2602477 4. Evaluate obtained model on the testing set (using R2, RMSE, etc.) My questions are: • How to properly split data into training and testing set? I'm not sure, but boostraping or k-fold cross-validation doesn't sound right. • Are given formula is correct for this case? I'll take the alternative approach to @forecaster and suggest you have the option of not treating this as a time series problem. Instead, with A as the response, pre-compute predictor values for each of the lags. That is, before training, add a column for each lagged value to your data frame, say df$p1=B df$p2=lag(B,1) df$p3=lag(B,2) df$p4=C df$p5=lag(C,1) ... and so on. That way you don't have to worry about keeping rows together during sampling and can use bootstrap and k-fold just fine. Your formula becomes A ~.. Depending on which learning technique you use you may find that certain "highly correlated" cities and lags are in fact not good predictors for A. Moreover if your predictors are highly correlated you can get rid of some. I'd suggest using the caret package to simplify your programming task, qualify your predictors, and evaluate your modeling results. I'm not clear why you're interested in bootstrapping. You can use the partition creator of caret to make splits easily. If you have plenty of data you may not need bootstrapping, but if you want resampling you can uses caret for that too (see createResample). Here is a template for how you might use caret to perform a random forest fit with your data. The formula shown is the A~. suggested above which assumes your response is A in the training and testing data frames. SEED=800 require(caret) library('psych') library('corrplot') library('zoo') # set up predictors and responses predictors <- getPredictors(...) # your predictors from B, C, D, rows by time responses <- getResponses(...) # your responses from A, rows by time # examine describe(data.frame(predictors)) describe(data.frame(responses)) nearZeroVar(predictors) correlations <- cor(na.omit(predictors)) dim(correlations) corrplot(correlations, order="hclust", title="Predictor Correlation (HC)", addrect=5, rect.col="darkgrey", tl.col="black", tl.cex=0.5) corrplot(correlations, order="FPC", title="Predictor Correlation (First PC)", tl.col="black", tl.cex=0.5) corrplot(correlations, order="AOE", title="Predictor Correlation (Ang. Ord. 
Eigenvectors)", tl.col="black", tl.cex=0.5) # find highly-correlated predictors complete <- predictors[complete.cases(predictors),] predCor1 <- cor(complete) highlyCorPred <- findCorrelation(predCor1, cutoff = 0.75) filteredPred <- complete[, -highlyCorPred] predCor2 <- cor(filteredPred) # removed highly-correlated predictors highlyCorPred summary(predCor1[upper.tri(predCor1)]) summary(predCor2[upper.tri(predCor2)]) # find linear combinations findLinearCombos(data.frame(filteredPred)) # using the filtered predictors, append the response then build partition clean.bin <- cbind(filteredPred,responses[index(filteredPred),]) # index from time series clean.bin <- as.data.frame(clean.bin[complete.cases(clean.bin),]) # responses may have added NAs colnames(clean.bin)[15] <- "A" # build training and testing sets inTrain <- createDataPartition(y=clean.bin$A,times=1,p=0.7,list=FALSE) training <- clean.bin[inTrain,] testing <- clean.bin[-inTrain,] # setup learning method require(randomForest) require(parallel) # optional library('doParallel') # applies for each classification or regression fit fitControl <- trainControl( method = "repeatedcv", number = 10, repeats = 10, classProbs = TRUE, verboseIter = TRUE, preProcOptions=list(thresh=0.95,na.remove=TRUE,verbose=TRUE), seeds = NA, allowParallel = TRUE ) # try the random forest fit # using parallel computation if available set.seed(SEED) rfGrid = expand.grid(mtry = c(10,20,40,80)) cluster <- makeCluster(detectCores()) registerDoParallel(cluster) fit.raf <- train(A~., data=training, method="rf", preProcess=c("center","scale"), tunelength=15, tuneGrid = rfGrid, trControl=fitControl, ntree = 1000, metric="RMSE" ) stopCluster(cluster) fit.raf plot(fit.raf) predicted.raf <- predict(fit.raf,newdata=testing) The caret documentation is excellent (e.g. caret model training) so you can explore many other options there. You might also use the time series split of caret ?createTimeSlices to use the techniques suggested by @forecaster. • I'd done something like this. Firstly, I prepared my data, the same as in your example. And my problem is (was?) how to do proper model training and evaluation on this data. I wasn't sure that I can use bootstrap for data split. – Jot eN Aug 1 '14 at 18:51 • Edited post to add a template study. You can do a lot of things with time series analysis as others have posted. You can do a lot of things without time series analysis too. Caret will be useful to you and you can read a lot more about the techniques and the package in Kuhn & Johnson's "Applied Predictive Modeling". – mrbcuda Aug 1 '14 at 21:09 • Just to clarify, as noted in my post I have stressed in 3 different times that the OP question is NOT doing a time series regression. so taking the rf/nn/svm route would work here.There is not time in Y/dependent variable. So any data dredging methods would work. – forecaster Aug 1 '14 at 23:15 • In addition, just because of using a lag variable doesn't mean that we could shuffle the data, especially if you are predicting future values, t-1 has to happen before t happens, so it should be serielly ordered. I have yet to see a data mining book that covers a time series/dimension problem may be, RF/NN/SVM doesn't work well on time series problems. None of the techniques have been empirically tested on time series problems. – forecaster Aug 1 '14 at 23:16 • Function embed allows obtaining lagged values for multiple lags, keeping variable length the same for each lag - that could be useful. 
– Richard Hardy Nov 26 '14 at 9:34 Have a look at the ARIMAX model specification. It seems to be closest to what you're doing. You can use any nonparametric regression or classification model given concatenated Toeplitz design matrices. There's a slight issue, in that your models might not be stable in the sense outlined in this paper: Why Yule-Walker Should Not Be Used For Autoregressive Modelling With forecasting time-series models, you want the data to be preprocessed so that it has approximately the same mean and standard deviation everywhere. Depending on your data, you probably want to preprocess your data by differencing, in which you subtract each value from the previous. For forecasting purposes, the inverse of this operation is the cumulative sum plus your burnt in value at the beginning. This operation can be applied multiple times to make your data appear stationary. Another preprocessing option is to perform low order polynomial regression on the data, then predict the residuals. The appropriate form of cross validation for time-series to take the first 60% as your training set, the next 20% as your validation set and the final 20% as your test set. If you have enough data you can restrict yourself to online (iterative) models, you can have an efficient estimate of performance. A neat trick for with confidence bands in online time series models was presented in this paper: Online Reliability Estimates for Individual Predictions in Data Streams. If you want a stronger background in the theory of time-series, I cannot recommend Brockwell and Davis enough. Unfortunately, there's no legal free source of the PDF. At this moment, there is no 'rule' as how to divide a data set into a training, validation and testing set. However, 80/20 (Pareto) is a popular and widely accepted way of dividing your data set. This translates into 80% training(1) and 20% testing. That 80% training(1) is then divided into training(2) and validation data in the same way (80/20). That's about the same as the 60/20/20 that @Jacob Mick suggested in the other answer (and perhaps more straight forward). As you mentioned a time series, I wouldn't use k-fold, but rather a holdout sample for your validation and testing (aka 1-fold). More specifically, the last part of your data. Otherwise you'll be going/predicting back in time and forth during optimization. Something I would avoid. For your formula...I really don't have enough information about the values to tell you whether it's a good formula in this case. Below are my responses: You could do K fold cross validation/ leave one out cross validation as follows for time series regression: if you have 100 time series observation, first determine how many minimum# of observation is required to build your model. 1. Lets say you need at least observations 1 to 50 to build your model. Build a model on the first 50 observations. 2. Test your model on the next n (say 10 observations). 3. Now rebuild the model on the 50 + 10 (n) = 60 time ordered data. Test the model on 61 to 70 observations. 4. Repeat the steps 1 to 3 by increment the training dataset by 10 data points. If you did this and calculate the prediction error you would have done the k (5) fold validation of your model. It is very important to note that the data should be time ordered and should not be shuffled or selected randomly. 
In the example above, observation 1 would be your earliest time data, while your 100th observation would be your last data point, sorted in ascending order so that the lag structure in the model is not disturbed. A variation of the above approach would be: if you only want 50 observations in your training data to build the model, you drop the first 10 observations as you keep adding 10 observations at the end, for example: • Step 1: Train on observations 1 to 50, test on 51 to 60. • Step 2: Train on 11 to 60, test on 61 to 70. • .... and so on. This could be easily expanded to leave-one-out cross validation. Here is the problem you might face: while you have nicely arranged the train and test sets, machine learning methods randomly shuffle the data to build models. For example, random forest will not care whether the data is time ordered or not; it will randomly shuffle the dataset to build models. In the end it doesn't matter if you use a time series cross validation or a regular cross validation, as long as the residuals of the model are not correlated. In your case I think they will not be autocorrelated, unless you have different time periods for a single city in your dependent variable, as an example: $valueA_t = valueB_{t-1} + valueB_{t-2} + valueB_{t-3} + valueB_{t-4} + valueC_{t-1} + valueC_{t-2} + valueC_{t-3} + valueC_{t-4}$ where $t = \text{time}$. The key is the subscript $t$ in your dependent variable. If you have time dependency in your dependent variable then you have to do a time series cross validation. But based on your problem statement, you are NOT doing a time series regression, since your dependent variable doesn't have time dependency/serial dependence on previous observations, i.e. observation 2 (time 2) does not depend on observation 1 (time 1); they are independent of each other, so you should be fine using a regular cross validation. See the following link on how to do proper cross validation for time series data for forecasting. http://robjhyndman.com/hyndsight/tscvexample/ In case you are in fact doing a time series regression, then I would use traditional statistics-based time series models such as ARIMAX models or regression with ARIMA errors, because machine learning methods are notoriously known to perform extremely poorly on time series data and there is no empirical evidence that using random forests/neural networks improves prediction. As shown in the neural network forecasting competition, statistical time series methods significantly outperformed neural networks. http://www.neural-forecasting-competition.com/NN3/results.htm Again, in your case I don't think you are doing a time series regression, so you should be fine using machine learning methods. • I've had luck in the past with time-series regression. It is sensitive to preprocessing via differencing, detrending and removing seasonality. For most problems I've encountered it works better than linear models. For a primer see: ulb.ac.be/di/map/gbonte/ftp/time_ser.pdf – Jessica Collins Jul 29 '14 at 22:30
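A minimal sketch (my own illustration, not taken from any answer above) of the rolling-origin scheme described in the numbered steps: start from 50 time-ordered training observations, test on the next 10, then grow the training window by 10 and repeat:

```python
import numpy as np

def rolling_origin_splits(n_obs, initial_train=50, horizon=10):
    # Yield (train_idx, test_idx) pairs over time-ordered observations,
    # growing the training window by `horizon` points at each step.
    end = initial_train
    while end + horizon <= n_obs:
        yield np.arange(0, end), np.arange(end, end + horizon)
        end += horizon

y = np.random.randn(100)   # stand-in for 100 time-ordered observations
for train_idx, test_idx in rolling_origin_splits(len(y)):
    print(train_idx[0], train_idx[-1], "->", test_idx[0], test_idx[-1])
# 0 49 -> 50 59
# 0 59 -> 60 69
# 0 69 -> 70 79
# 0 79 -> 80 89
# 0 89 -> 90 99
```

Because each test block lies strictly after its training block, no future information leaks into model fitting, which is the point of the scheme above.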
2020-02-24 22:22:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4420284032821655, "perplexity": 1026.7621540022215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00460.warc.gz"}
http://newartisans.com/category/uncategorized/
I think one reason I’ve been avoiding posting to my blog lately is the time commitment of writing something of decent length. To get over this hump, I’m going to shift my focus to writing smaller little discoveries of things I find during my researches into Haskell and technology. Let’s see how that goes. To show just how significant parallelized algorithms can be, today I discovered pxz, a parallelized version of the xz compression utility, which I use constantly. The proof is in the numbers:

| Command | Before | After | Ratio | Time |
|---------|--------|-------|-------|-------|
| xz | 2937M | 305M | 0.104 | 32m |
| pxz -9e | 2937M | 281M | 0.096 | 4m(!) |

I put this alias in my .zshrc: alias tar='tar --use-compress-program=pxz' Note that to build pxz on the Mac, I had to comment out a reference to MAP_POPULATE, which OS X’s mmap function doesn’t support. The following is an amalgam of several letters I sent to Richard Stallman, founder of the free software movement, expressing my concern about the direction GPL licensing is taking, and why I disagree with some of the objectives of the Free Software Foundation. I’ve seen this issue mentioned in some random and hard to reach places on the Net, so I thought I’d re-express it here for those who find Google sending them this way. UPDATE: According to the discussion at https://trac.macports.org/ticket/27237, the real problem here is not fully dynamic string, but the use of _GLIBCXX_DEBUG. So I recommend ignoring what follows, as it will help you on Snow Leopard or Lion with gcc 4.6 and above. I’ve been managing my Ledger project with Git for some time now, and I’ve finally settled into a comfortable groove concerning branches and where to commit stuff.
2014-04-18 00:12:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4471642076969147, "perplexity": 2446.161571163904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2654243/an-unbiased-coin-is-tossed-six-times-in-a-row-which-statement-describing-the-la
An unbiased coin is tossed six times in a row. Which statement describing the last two coin tosses has the highest probability of being correct? An unbiased coin is tossed six times in a row and four different such trials are conducted. One trial implies six tosses of the coin. If H stands for head and T stands for tail, the following are the observations from the four trials: $$\text{(1) HTHTHT}\quad\text{(2) TTHHHT}\quad\text{(3) HTTHHT}\quad\text{(4) HHHT_ _}$$ Which statement describing the last two coin tosses of the fourth trial has the highest probability of being correct? (A) Two $\text T$ will occur. (B) One $\text H$ and one $\text T$ will occur. (C) Two $\text H$ will occur. (D) One $\text H$ will be followed by one $\text T$. I think option A is correct and the reason is statistical regularity. Am I correct? If not then please help me how to do this problem. Any help would be appreciated. Thanks in advance. • Note that statistical regularity implies that the ratio between the number of heads and the number of tails approaches 1. It does not imply that the difference between the two counts approaches 0. – Tanner Swett Feb 17 '18 at 19:27 4 Answers $B$ is correct here. It has probability $\frac12$ in contrast to the other options that all have probability $\frac14$. A) TT has probability $\frac14$ B) HT or TH has probability $\frac14+\frac14$ (summation of two probabilities of mutually exclusive events) C) HH has probability $\frac14$ D) HT has probability $\frac14$ Essential for this conclusion is the fact that the coin is unbiased. • Nice solution.thanks. – math is love Feb 17 '18 at 14:46 • You are welcome. – drhab Feb 17 '18 at 15:21 • Awesome solution! – user3767495 Dec 11 '18 at 2:33 A is not correct; B is. Statistical regularity – more often called independence – means that • the results of the three previous trials do not affect the fourth trial's outcomes • the four prior tosses of the coin in the fourth trial do not affect the last two tosses Therefore, each of $\text{HH, HT, TH, TT}$ has a $\frac14$ chance of occurring. With regards to the options, only option B has a $\frac12$ chance; the others have a $\frac14$ chance. • Wonderful solution👌the question is also nice ..thanks a lot. – math is love Feb 17 '18 at 14:42 • I think it's worth pointing out that all the irrelevant information contained in the list of what happened in the first $3$ trials and the first $2/3$ of the last trial might be the examiner's wish to find the students who still believe coins that have come up heads more often have to catch up with tails. That is of course implicit in your bullet points. – Ethan Bolker Feb 18 '18 at 1:17 B is correct. It is the probability of two outcomes out of four equally likely outcomes and equals 1/2. The others are the probability of one outcome and equal 1/4. The results from the trials are a red herring. The actual chances of 3 H and 3T out of any 6 tosses is statistically correct, but in the real world... Any toss will result in 50/50 H/T. So B is closest to what 'should' happen. One more H and one more T, but in random order. As already stated, the other options have half the chance of option B. Even though a pattern seems to be emerging that might make A the correct answer. Wrong!
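As a quick sanity check on the answers above (not part of the original thread), a short simulation of just the last two tosses reproduces the 1/2 versus 1/4 probabilities; the earlier trials and the first four tosses can be ignored because the tosses are independent.

```python
# Monte Carlo check of the four statements about the last two tosses of an
# unbiased coin; the exact values are A: 1/4, B: 1/2, C: 1/4, D: 1/4.
import random

trials = 100_000
counts = {"A (TT)": 0, "B (one H, one T)": 0, "C (HH)": 0, "D (H then T)": 0}

for _ in range(trials):
    a, b = random.choice("HT"), random.choice("HT")   # only the last two tosses matter
    if (a, b) == ("T", "T"):
        counts["A (TT)"] += 1
    if {a, b} == {"H", "T"}:
        counts["B (one H, one T)"] += 1
    if (a, b) == ("H", "H"):
        counts["C (HH)"] += 1
    if (a, b) == ("H", "T"):
        counts["D (H then T)"] += 1

for statement, c in counts.items():
    print(statement, c / trials)
```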
2020-02-22 02:09:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149196267127991, "perplexity": 497.57332582082233}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00457.warc.gz"}
https://en.wikipedia.org/wiki/Talk:Tidal_locking
# Talk:Tidal locking WikiProject Astronomy (Rated C-class, High-importance) Tidal locking is within the scope of WikiProject Astronomy, which collaborates on articles related to Astronomy on Wikipedia. C  This article has been rated as C-Class on the project's quality scale. High  This article has been rated as High-importance on the project's importance scale. ## Well done To all persons responsible for this article - it explains this phenomenon very clearly and simply. Take a bow, the lot of you. ElectricRay (talk) 13:24, 6 January 2014 (UTC) ## Final Configuration "The simple picture of the moon stabilising with its heavy side towards the Earth is incorrect, however, because the tidal locking occurred over a very short timescale of a thousand years or less, while the maria formed much later." Why does it matter when the maria formed? If the density of the moon shifts to a different side following tidal locking, wouldn't the moon re-orient itself subsequently? Afterall, it is a similar adjustment that causes the tidal locking in the first place, following perturbations in the shape of the moon. Could someone address this? —Preceding unsigned comment added by Blueil77 (talkcontribs) 00:21, 13 January 2008 (UTC) ## Misleading Sentence: Both bodies sychronize It results in the orbiting bodies synchronizing their rotation so that one side always faces its partner The above sentence implies that both bodies face their partner. But when one body is tremndously more massive than the other, isn't it only the tiny one whose rotation gets synchronized: • moon and earth - moon faces earth, but earth rotates 28 extra times per lunar rotation/revolution • mercury and sun - mercury's rotation synchronized in a 2:3 harmonic to its orbit period, while suns rotates considerably faster (?) --Ed Poor I think the fact that it says, "one side", implies that only one of the pair will show the same face at all times. Maybe this could be changed so it's more explicit? The discussion, the way it is formed, is so unclear to me. The moon does rotate once everytime it circles the earth, correct? Doesn't that mean the same side of the moon always faces the sun, not the earth? One side of the moon is always in the dark to us and one side is always in the sun? Please help me here... Cvgittings (talk) 05:03, 10 December 2010 (UTC) --Don It is mentioned later-on that Earth is still slowing down its rotation due to its interaction with the moon. —Preceding unsigned comment added by Blueil77 (talkcontribs) 00:25, 13 January 2008 (UTC) The usage of the term "synchronized" implies that the moon just happens to spin at the right rate to always face its same side to the earth, which is not the case. It is tidally locked because one side is more dense, which is the side that always faces us. Would it be correct to describe an "orbiting" ball tethered by a string to a focal point, synchronized? In my opinion it wouldn't be. All I am saying is that its misleading to what is actually* happening with tidal locking between the earth and moon. Its not a coincidence, like describing it as synchronized suggests. No, "synchronize" in this case means that a body's the rotational period becomes equal its own orbital period. It does not imply a coincidence. --JorisvS (talk) 10:05, 1 May 2013 (UTC) ## Tau Boötis is known to be locked to the close-orbiting giant planet Tau Boötis Ab This sounds like BS. 
Could someone please provide a reference to this statement, or at least describe how one could obseverve that an extra-solar planet is indeed tidally locked? Lunokhod 23:52, 24 January 2007 (UTC) Presumably a tidal locking timescale has been estimated, and it's much much less than the known age of the system? By the way, what's BS stand for (sorry)? Deuar 15:32, 25 January 2007 (UTC) Then perhaps we should move this entry from "List of known tidally locked bodies" to "Bodies likely to be locked". Either there has been some amazing advancement in imaging extra solar planets that I am ignorant of, or this is only an inference! Also, if the planet is a gas giant, then the Q and k2 will be much different than for a solid object. Perhaps this could be discussed in the article? A quick google search suggests that the Q of jupiter is 1 billion, as opposed to ~100 for the Earth. And this would increase the tidal locking timescale accordingly. Lunokhod 15:54, 25 January 2007 (UTC) Go for it, I reckon. Deuar 16:21, 25 January 2007 (UTC) ## Mercury So...Mercury isn't tidal locked to the Sun? Then why is it on the list? --MPD T / C 02:43, 1 March 2007 (UTC) I think that tidal locking is not the same as "synchronous rotation", even though the intro seems to say so. Perhaps it is better to say that tidal locking is a process where tidal torques leave one body on a spin-orbit resonance. Synchronous (1:1) is the lowest energy configuration. Lunokhod 13:04, 1 March 2007 (UTC) ## Tidal braking In the UTC article, there's a [[tidal locking|tidal braking]] link. Can someone please provide a definition for tidal braking in this article, even if they are the same? The "Locking of the larger body" paragraph in the "Mechanism" section seems an appropriate place to do this. Thanks. Xiner (talk, email) 01:22, 11 March 2007 (UTC) Actually, I think the tidal acceleration article seems to discuss this in more detail. I've linked it to there instead. Deuar 15:55, 19 March 2007 (UTC) ## Unclear Description of Orbital Resonance Under "Mechanics," the description of orbital resonance is rather unclear. This is what it says: Rotation-Orbit resonance: Finally, in some cases where the orbit is eccentric and the tidal effect is relatively weak, the smaller body may end up in an orbital resonance, rather than tidally locked. Here the ratio of rotation period to orbital period is some well-defined fraction different from 1:1. A well known case is the rotation of Mercury—locked to its orbit around the Sun in a 3:2 resonance. It does not specify why or how this happens. I don't know myself, but I'm guessing it happens because the smaller body's rotation does not change when it reaches the aphelion of its orbit, thus causing it to skip ahead. However, my guess does not explain A. why the opposite does not happen at the perihelion, thereby nullifying the effect, or B. why this would cause resonance to occur in well-defined ratios such as 3:2 (in the case of Mercury). I would find it very helpful if a little more research were done and this paragraph were revised. I myself would not know where to look, and the page on orbital resonance does not seem to describe rotational resonance at all. 
## Tidal locking and developing life I removed the following from the "planets" section: Tidally locked planets may present problems for developing life, as one side of the planet will always be facing away from the star and the other side will always face toward it; in the absence of significant heat redistribution by atmospheric winds or hydrospheric currents, this would result in constant temperature extremes. [citation needed] On the other hand, tidally locked large satellites of gas giants rotate with respect to the central star, providing places for developing life that avoid extremes in temperature.[citation needed] Besides that being mere speculation, and being without any references, it also mentions "large satellites of gas giants" which are clearly not planets. If this text should be on this page at all, then please in a separate section, and with references. Jalwikip (talk) 14:10, 19 November 2007 (UTC) ## Isn't the animation confusing? The animation shows two bodies orbiting a central body at the same rate, yet they are at different distances from the central body...what is the point of this?, It might lead some people to think that this is a real orbital configuration...Jellyboots (talk) 20:55, 29 January 2009 (UTC) ## but being precise... Pluto's not a planet. --Taraborn (talk) 20:21, 30 January 2009 (UTC) For the purposes of the article, Pluto's planetary status doesn't matter. Planet status under the new definition is not a function of size or mass...but of location. Just pretend that the Pluto-Charon system were located between Mercury and Venus. If that were the case, even Pluto's small mass would suffice to clear its orbit, and (surprise) it would be a Planet by the new definition and orbit-clearing formula. It is the bizarre insistence that location matter (in the definition of planet) that riles up so many people. If something is over 1,000 miles in diameter and suddenly appeared in the inner Solar System, we'd say "that's a planet"... so close your eyes and pretend. Chesspride 172.164.20.73 (talk) 21:46, 9 February 2016 (UTC) ## Final configuration I took this out. It has no sources and I don't think I believe it. Does anyone speak for it? There is a tendency for a moon to orient itself in the lowest energy configuration, with the heavy side facing the planet. Irregularly shaped bodies will align their long axis to point towards the planet. Both cases are analogous to how a rounded floating object will orient itself with its heavy end downwards. In many cases this planet-facing hemisphere is visibly different from the rest of the moon's surface. The orientation of the Earth's moon might be related to this process. The lunar maria are composed of basalt, which is heavier than the surrounding highland crust, and were formed on the side of the moon on which the crust is markedly thinner. The Earth-facing hemisphere contains all the large maria. The simple picture of the moon stabilising with its heavy side towards the Earth is incorrect, however, because the tidal locking occurred over a very short timescale of a thousand years or less, while the maria formed much later. The maria are instead formed from heavier lunar magma that responded to the tidal lock by gravitating towards the Earth. William M. Connolley (talk) 18:39, 9 March 2010 (UTC) I speak for it and so does Gravity-gradient stabilization. Tidal locking forces are proportional to the spin angular velocity difference between the bodies orbital velocities. 
As the locking body slows down so the locking force reduces. This continues until when you take the limit no locking force exists. So, at the end there is no phase lock information remaining. Between the moon and Earth there is a fixed phase (empirical). This phase lock is occurring and it cannot be from tidal locking. As William describes above the moon cannot have symmetrical weight. Hence the heavy side faces the gravitational attractor i.e. the Earth. We should add a section on phase locking which is as a result of Gravity-gradient stabilization. User:pcrengnr —Preceding undated comment added 16:56, 24 March 2016 (UTC) ## Side? Since when does a spherical object have "sides"? --77.109.223.37 (talk) 07:56, 15 June 2010 (UTC) Since there are defined features on that sphere, that's when. —Preceding unsigned comment added by 129.186.253.87 (talk) 19:45, 27 August 2010 (UTC) In the section "Timescale", I think the second ("simplified") formula given for tlock ${\displaystyle t_{\textrm {lock}}\quad \approx \quad 6\ {\frac {a^{6}R\mu }{m_{s}m_{p}^{2}}}\quad \times 10^{10}\ {\textrm {years}}}$ is incorrect, if the first formula (${\displaystyle t_{\textrm {lock}}\approx {\frac {wa^{6}IQ}{3Gm_{p}^{2}k_{2}R^{5}}}}$ ) is right. Namely, the following conclusion drawn from the second formula One conclusion is that other things being equal (such as Q and μ), a large moon will lock faster than a smaller moon at the same orbital radius from the planet because ${\displaystyle m_{s}\,}$ grows much faster with satellite radius than ${\displaystyle R}$. contradicts the first formula: There, the satellite mass only appears in the numerator of the formula given (via the "Inertia momentum" term). If we assume the satellite mass to roughly increase at the third power of its radius (i.e. assuming constant density, which seems a plausible assumption), we get five powers of R in both the numerator and the denominator of the fraction. Thus, tidal locking should essentially be independent of the satellite's mass, all other things (except the radius) being equal. Can someone clarify this for me? --Roentgenium111 (talk) 21:43, 23 August 2010 (UTC) ♦ I agree that there is something wrong with the ("simplified") formula. Earth's moon was tidally locked by the time of the Lunar Cataclysm, but plugging in the values for the Earth/Moon system in the ("simplified") formula gives a time to lock of 2.3 × 1031 years which we know cannot be true. The time to lock the Earth/Moon system must be less than the difference between the time of formation of the moon (~4,450 million years ago) and the time of the Lunar Cataclysm (~3,900 million years ago) which is approximately 550 million years. -az — Preceding unsigned comment added by Sciencebookworm (talkcontribs) 17:33, 9 March 2011 (UTC) I don't know how you got that high a figure! I calculated 3.8 million years for the moon tidally locking to Earth. (using the same simplified formula). If anything the simplified formula seems to underestimate the tidal locking times, by a factor of 100, particularly for planets. When I multiply the results by 100, I get - 384 millions years for the moon - 5.1 billion years for a planet in the habitable zone of Epsilon Indi - 543 billion years for the Earth to the sun. The results I get after multiplying the simplifed formula in the article by 100 are very similar to the results from other formulae for calculating tidal locking time. 
For example, 'Tidal Locking Time in years = (ρ*((a/0.0483)^6))/(M^2)' - where a = distance in AU from primary, M - Mass of primary, as fraction of the sun (eg -Epsilon Indi M=0.762), ρ (density of satellite, Earth taken for Epsilon Indi= 5512 Kg/m^3 or for the moon= 3346 Kg/m^3). Using that formula I get - 4.1 billion years for Epsilon Indi- 433 billion years for the Earth - 8.4 million years for the Moon to tidally lock to Earth, almost the same values. Or, using the formula taken from 'Peale et al - 1977'. 'Tidal Locking Time in Seconds = (a/(0.027*(M^(1/3))))^6/486' - using CGS units where M= Stellar Mass (grams) a= orbital distance (cm), gives a tidal locking time of 477 billion years for the Earth to the sun and 5 billion years for an exo-Earth around Epsilon Indi, again almost the same values, if you multiply the result by 100. I think a lot of this article is more or less photocopied from a book called "Habitability and cosmic catastrophes" By "Arnold Hanslmeier" "2009" so the formula probably must be accurate, in some way. — Preceding unsigned comment added by 86.185.215.187 (talk) 01:20, 29 November 2011 (UTC) Your comment seems correct, the formulas are not contradictory. I derived the simplified formula from the first one. One needs to use ${\displaystyle k_{2}\approx 3/2/(19\mu /(2\rho gR)),g=Gm_{s}/R^{2},\rho =m_{s}/(4/3\pi R^{3}),I=4m_{s}R^{2}/10}$. Approximation for the Love number seems OK, since it is usually much smaller than 1[1]. If I input μ for rocky planet, 3×1010 Nm−2 and original ω = 1/3600/24 rad, Q = 100, I get 3.8 million years. The prefactor ${\displaystyle 6*10^{10}\;\mathrm {kg} ^{2}/\mathrm {m} ^{6}\mathrm {s} ^{2}\mathrm {year} }$ should be probably different to give more realistic results, but from the direct calculation, I obtained ${\displaystyle 3*10^{10}\;\mathrm {kg} ^{2}/\mathrm {m} ^{6}\mathrm {s} ^{2}\mathrm {year} }$, which is very similar. My personal guess is, that Q = 100 from the referenced article, is in CGS units and it is not valid in SI units. But the problem can be elsewhere. Irigi (talk) 09:09, 8 October 2014 (UTC) Another issue with the Timescale portion as it stands, the tidal locking formula description shows a value for the mass of the satellite, but that value does not appear anywhere within the formula given. As the mass of the planet is given as a squared value, is this assuming the mass of the planet and satellite are the same? — Preceding unsigned comment added by 205.166.76.15 (talk) 18:25, 18 July 2014 (UTC) ## Kant? I'm dubious about [1]. "researched" appears a little over the top - speculated, or reasoned, might be closer. But the text too is uncertain: according to [2] Immanuel Kant, who took great interest in scientific issues, reasoned on the basis of pure theory that the action of oceanic tides must slow down the earth's rotation which isn't what the retarding forces of tides on satellite bodies says William M. Connolley (talk) 17:12, 29 September 2010 (UTC) It isn't what Immanuel Kant says, either William M. Connolley (talk) 17:14, 29 September 2010 (UTC) I agree with you and am removing the sentence. Not only is the way the claim is phrased dubious, the source does not show he was the first either. —Lowellian (reply) 03:27, 1 October 2010 (UTC) ## Orbital changes Can someone expand on this section, specifically what mechanism is responsible for an orbiting body moving farther from its parent as its rotation slows? The explanation that angular momentum is conserved just doesn't do it for me. 
An explanation of the forces that cause a body to speed up in its orbit like the preceding sections on Bulge dragging and Resulting torque would be nice. 128.32.99.173 (talk) 15:16, 5 November 2010 (UTC) Sorry, but it is *precisely* the conservation of angular momentum that explains why this happens. We have various phenomena, and we have the principle that angular momentum is conserved. We see that the phenomena are aligned with the principle. It is common -- though incorrect perhaps -- to say that "aha, that is why X occurs, it is due to the conservation principle" but as far as a mechanism that aligns the phenomenon with the principle -- there isn't one that is known. In the same way, various phenomena that align with the principle that (global) entropy for a system tends to increase (but local entropy may decrease if it contributes to global entropy) lacks a mechanism. Also, things like Gauss's law -- that the distribution of charge in a large body tends to be such that the positive charge is on the surface (but the net charge for the body is still zero) lacks a mechanism of the detailed sort that you seek. Get used to it. Chesspride 172.164.20.73 (talk) 21:55, 9 February 2016 (UTC) ## Time Scale Formula Useless The timescale formula given is useless, because without units the value is meaningless. Iæfai (talk) 03:18, 20 April 2011 (UTC) Pretty sure it is in seconds. — Preceding unsigned comment added by 149.169.221.113 (talk) 18:46, 25 September 2012 (UTC) If the dimensionality of the formulae were correct (that is, if they were in units of time), you could use whatever units you wanted. However, the "complex" formula gives an answer in units of angle * time, while the "simple" formula gives an answer in units of distance^6 / mass^2 * time. As they are, both formulae are thus completely useless. 76.255.189.2 (talk) 20:51, 24 September 2013 (UTC) ## Rotation of the locked object It can be quite easily demonstrated that the apparent rotation of the moon about it's axis is actually a necessary result of the moons orbit and not an independent source of rotation, meaning that it is technically incorrect to say that the rotational period of a tidally locked object precisely matches it's orbital period, since it has no rotational period independent of it's orbital motion. Tidal locking actually slows the orbiting object's rotation until it completely stops rotating around it's central axis. DoC352 (talk) 07:24, 9 August 2011 (UTC) Based on DoC352s' statement above, the following sentence from the first paragraph, "A tidally locked body takes just as long to rotate around its own axis as it does to revolve around its partner.", is incorrect. As a result I am editing this sentence to say, "With Tidal Locking the smaller body does not rotate around its own axis as it revolves around the larger body. This is completely different than Tidal Resonance. With Tidal Resonance the smaller body does rotate around it's own axis as it revolves around the larger body.". There are some other statements about the Moon rotating further down in the article that I am also removing. If you disagree with these edits please let me understand your reasoning. Cvgittings (talk) 20:01, 9 March 2012 (UTC) Phancy Physicist (talk) 18:56, 10 March 2012 (UTC) Unfortunately this explanation is inadequate. Consider for instance a plane flying in a complete circle around earth at a specific altitude. 
As it flies "straight ahead" much like we might assume the momentum of the moon does, the gravity of earth causes the plane to fly around the curvature of the earth with the bottom of the plane always parallel to the surface of the earth. Does the plane therefore rotate about its own axis with the exact periodicity of its revolution around the earth? Such a conclusion seems to represent a fundamental misunderstanding of the difference between a geometric rotation and translation. When an object geometrically rotates about a distant central point, it maintains its same facing towards the central point without any separate rotation about it's own axis, much as we see with the moon relative to the gravitational pull from the Earth's center of mass. Alternatively, if it could be shown that gravity results in a translation rather than a rotation (i.e. if it can be shown that some force other than gravity acts on the plane to make the nose come "down" as it attempts to fly in a straight line out of the atmosphere, and that gravity from the earth alone would allow the plane to absurdly maintain the same absolute rotation relative to its starting point) as an object maintains forward momentum through the gravitational field, then Phancy Physicist's argument would be perfectly valid. DoC352 (talk) 04:13, 3 March 2013 (UTC) First of all, the plane is not bound in it's path by gravitational forces. Or not only gravitational forces are keeping it in the sky. Second of all, rotation in mathematics has nothing to do with orbital mechanics. Rotation, as defined in the linked article, is used there as math jargon. Rotation as used here is used as astronomy jargon - i.e. there is a clear distinction between rotation and revolution. Furthermore, the geometrical rotation you're talking about requires a rigid connection between the rotating body and the rotation axis - i.e. all points of the rotating body must maintain the same distance to the point of rotation. With gravity, that's not the case. Finally, if you have a fixed (i.e. non rotating) reference frame, you can decompose the motion of the rotating body in two in two separate rotational motions - one about the central axis, and one about the body's central axis. Tidal locking means that these two motions are synchronized - that is they have a fixed ratio between the periods.95.76.220.229 (talk) 14:26, 7 January 2015 (UTC)Apass ## Effect of composition and structure In the article it says "μ can be roughly taken as 3×1010 Nm−2 for rocky objects and 4×109 Nm−2 for icy ones.". What would the effect of a more or less substantial atmosphere, of (bodies of) surface or subsurface liquid(s), or of a body being mostly metallic be on this value? --JorisvS (talk) 12:39, 19 September 2012 (UTC) ## Tidal locking in gas giants Many of the extrasolar gas giants that were detected are very close to their primaries and are assumed to be tidally locked. But how this tidal locking is defined for planets that don't have fixed surface features (i.e. they don't really have a surface)? And what would be the effects on the planet? Visibly, I guess the planet would look pretty much as any other gas giant, with no visible cue that it's tidally locked. Apass 89.137.186.101 (talk) 20:25, 9 November 2012 (UTC) Defining tidally locked rotation is, in principle, still straightforward for gas giants: Their rotation period must be the same as their orbital period. 
However, defining what is 'the' rotation period of a gaseous planet is far more tricky: The rotation period of Jupiter's cloud tops seems to be what is usually considered its 'rotation period', but this gives it an equatorial rotation period and a polar rotation period (Jupiter#Orbit_and_rotation). Tides will tend to synchronize a planet's rotational angular momentum to its orbital angular momentum, no matter its composition. Given that a gas giant's 'rotational period' is usually taken to be that of the visible clouds, other effects could affect a hot jupiter's apparent rotational period, possibly resulting in an apparently unlocked state. Strong winds resulting from high temperature differentials would likely be a factor here. --JorisvS (talk) 00:09, 13 November 2012 (UTC) ## Oberon? How come Oberon is both on "Locked by Uranus" and "Probably locked by Uranus" lists? — Preceding unsigned comment added by 90.151.131.81 (talk) 20:20, 28 March 2013 (UTC) ## Language section I've tried hard to change the Arabic corresponding language but received errors of usage by another item. The new way of modifying languages has made Wikipedia more complicated that before :(. Please the corresponding article in Arabic is ar:تقييد مدي--Almuhammedi (talk) 11:27, 22 July 2013 (UTC) Done, though I'm not sure this is the correct page to make such a request. — Reatlas (talk) 12:11, 22 July 2013 (UTC) ## length of lunar month is getting shorter? Under "Locking of the larger body", it says "Given enough time, this would create a mutual tidal locking between Earth and the Moon, where the length of a day has increased and the length of a lunar month has shortened until the two are the same.". Isn't the length of a lunar month actually getting longer? For example: [2] RDV74 (talk) 04:26, 7 February 2016 (UTC) ## Definition of tidal locking According to: Heller, R.; et al. (April 2011), "Tidal obliquity evolution of potentially habitable planets", Astronomy & Astrophysics, 528: 16, Bibcode:2011A&A...528A..27H, arXiv:, doi:10.1051/0004-6361/201015809, A27. "A widely spread misapprehension is that a tidally locked body permanently turns one side to its host." Further, "As long as 'tidal locking' denotes only the state of dωp/dt [rotation rate change] = 0, the actual equilibrium rotation period ... may differ from the orbital period, namely when e [eccentricity] ≠ 0 and/or ψp [obliquity] ≠ 0." This statement differs from the definition in the lead of this article. Praemonitus (talk) 19:51, 7 April 2016 (UTC) ## Regarding bodies A and B in the mechanism section The mechanism section employs bodies A and B to explain the phenomenon and does a good job at it. The explanation can be improved if there was a figure depicting the two bodies A and B (or marked in the figure on the right - I hope that the person who updated this had such a figure in mind?). Also, what does red line depict in the figure? — Preceding unsigned comment added by Blackholebounce (talkcontribs) 19:06, 3 March 2017 (UTC) 1. ^ B. Gladman; et al. (1996). "Synchronous Locking of Tidally Evolving Satellites". Icarus. 122: 166. Bibcode:1996Icar..122..166G. doi:10.1006/icar.1996.0117. (See pages 169-170 of this article. Formula (9) is quoted here, which comes from S.J. Peale, Rotation histories of the natural satellites, in J.A. Burns, ed. (1977). Planetary Satellites. Tucson: University of Arizona Press. pp. 87–112.) 2. ^ https://en.wikipedia.org/wiki/Lunar_month#Cycle_lengths
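To make the numbers in the Timescale discussion above easier to reproduce, here is a small sketch that evaluates the "simplified" formula in SI units, the way the commenter who obtained 3.8 million years appears to have done. The Earth–Moon constants are standard round values supplied here for illustration, not taken from the article, and the result says nothing about whether the formula's prefactor or units are actually correct.

```python
# Back-of-the-envelope check of the simplified locking-time formula quoted above:
# t_lock ≈ 6 * a**6 * R * mu / (m_s * m_p**2) * 1e10 years, evaluated naively in SI.
a   = 3.84e8    # semi-major axis of the Moon's orbit [m]
R   = 1.74e6    # Moon radius [m]
mu  = 3e10      # rigidity assumed for rocky bodies [N/m^2], as quoted in the discussion
m_s = 7.35e22   # Moon mass [kg]
m_p = 5.97e24   # Earth mass [kg]

t_lock_years = 6 * a**6 * R * mu / (m_s * m_p**2) * 1e10
print(f"t_lock ≈ {t_lock_years:.2e} years")   # ~4e6 years, matching the 3.8 Myr figure above
```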
2017-09-21 20:16:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7222625017166138, "perplexity": 1193.5903302660254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687837.85/warc/CC-MAIN-20170921191047-20170921211047-00172.warc.gz"}
http://cnx.org/content/m24561/latest/?collection=col10151/1.27
# Introduction to the MathML Editor Module by: Philip Schatz. Summary: Reference Manual for the MathML Editor ## Math Editor Features This module explains how to open the Math Editor, create math, edit existing math, and keyboard shortcuts. There is also a separate tutorial page with examples showcasing the features. At the end of this module are nuances and limitations of the editor. Please, let us know which ones you'd really like to see incorporated! ## Opening the Editor When editing a Module using Mozilla's Firefox browser, click on a part of the module to open a blue editing box. On the top-right hand side of the box is a "MathML Editor" link which will open up the editor. ## User Interface The Editor has 4 main sections, detailed below. The toolbar provides a way to insert new operations, a navigation tree to show where the cursor is located, and standard buttons for undo, preview, and source editing. The main editing area is located below the toolbar and contains the math that is being edited. ## Toolbar The toolbar contains a row of buttons representing categories of different mathematical operations. These are enabled when something is selected in the editing area. Note: The editor does not infer multiplication and addition. 
See Nuances for how to insert next to existing math by wrapping existing math. Explain the different sections, when it's enabled, how things get inserted, and Keyboard Entry for things. ### Path Shows the path to the cursor location. Math is organized in a tree-like hierarchy (see Navigating Math) and the path represents where in the tree the cursor currently is. The path (and context) are important because they define what can be inserted and where it will go. ### Undo/Redo These buttons allow the user to undo an operation such as deletion or insertion. See Keyboard Shortcuts for details on using these features from just the keyboard. ### Preview Shows what math will look like when module is published. To resume editing, one must click the Preview button a second time. ### View Source Math in Connexions is represented in an XML format known as MathML. Clicking the View Source button will allow editing of the raw MathML. ## Editing Area This is the main area for creating math. It begins empty, but math can be pasted directly in here from Connextions. The tutorials contain instructions on moving math from Connexions to the math editor and back. The editing area is the most important part of the editor and as several subsections, outlined below: • Math is structured like a tree. • Colors are used in this area to denote required information and contextual clues. • Content vs Presentation Math discusses the two different types of math the editor supports. • The cursor is discussed in detail below, including navigation and different editing modes. • Since the exact location of the cursor may at times be ambiguous, the context provides visual cues. • Keyboard strokes are discussed in detail. • Finally, empty blocks are discussed below. ## Math Tree Math in the editor is structured like a tree. It can be thought of as removing the precedence rules and just having parentheses. For example, the formula "a*x^2+b*x+c=0" which is displayed (using the editor) as ax2+bx+c=0 a x 2 b x c 0 and as a tree would look like Figure 3. The equal sign has the least precedence and so is on the top. Similarly, xx binds tighter to 22 through the power operation than to aa through the times operation. ## Colors ### Color notation (legend) _ f _ = x 1 if x < 0 x 2 if __ > 0 __ otherwise _ f _ = x 1 if x < 0 x 2 if __ > 0 __ otherwise • _ f _ _ f _ : The location where text is currently being entered is represented as a box with a blue border (see Text Input for more information on how to enter math). • x x and x 1 x 1 : Content MathML is represented in black while Presentation MathML is in a dark green (See Content vs. Presentation for editing Presentation MathML. • x < 0 x < 0 : The cursor context (when the cursor is next to a complex expression) is represented by having a gray background. See Context for details. • __ __ and __ __ : Empty blocks that need to be filled are denoted with a yellow background and optional blocks that can be filled but do not need to be filled are transparent with a dotted border. See Blocks for details. • x 2 x 2 : The current selection is denoted by a light blue background. See Copy and Paste back to Modules for details. ## Content vs. Presentation There are two subsets of the MathML language; Content MathML and Presentation MathML. Content, as the name implies, focuses on expressing operations like addition, integration, matrices, etc. Presentation focuses on how precisely math is displayed and contains elements like tables and subscripts. 
The editor supports creating and editing the Content Math subset while being able to navigate through Presentation MathML. Every thing that is entered into the Editor is entered as Content Math. For example, entering a*x^2+b*x+c=0 will be translated as the variable aa times xx to the power of 22 and added to bb times xx ... ## Cursor The Math Editor can be used entirely from the keyboard (See Keyboard Input). The cursor can be in one of four places. Either it is editing a variable or number, editing an empty block of text, next to a complicated expression, or has selected an expression. In each of these places there are several things that can be done. ### Editing a variable, number, or block At this point, the cursor is surrounded by a blue box and the user can type in expressions or even paste existing MathML. The expression will be parsed as soon as the cursor leaves the box or presses the Enter key (in the case of an expression) or immediately when MathML is pasted in. The user can leave the box by pressing clicking on the toolbar or by pressing the Left, Right, or Tab key. See Keyboard Input for more on expressions. ### Next to a Complicated Expression When a cursor is next to a complicated expression, the expression is shown with a light gray background (See Context). From this point, one of three things may be done. The user may add on to the expression. This is done by just typing. For example, if the cursor is to the left of (π)i , the user may type -1=e^ and parse the expression to yield 1=e(π)i 1 e One can select the expression by either pressing Shift+Right/Left (depending on whether the cursor is before or after the element), Delete, or Backspace key. See Selection for what can be done next. #### Selection When an expression is selected, several things can be done: • Pressing the Delete or Backspace key will remove it • Pressing Ctrl+X/C will cut/copy it • Pressing Ctrl+V will replace the selection with the contents of the clipboard • Clicking an item in the toolbar will replace the selected item ## Context Instead of using parentheses to denote which operations are grouped, the math editor highlights the current context for the operation. The context shows the position of the cursor relative to existing math in the editing area and is displayed using a gray background. An example of a confusing position can be shown using the following example. Suppose the editor contains the term a+bc a b c and the cursor is just after the cc. If the user enters "^2" it is not clear what should be squared. At that position the user may want to square cc, bc b c , or the entire term a+bc a b c . This produces very different math, namely a+bc2 a b c 2 , a+bc2 a b c 2 , and a+bc2 a b c 2 . In the above example, the context would highlight precisely the math that ended up being in parentheses. One can think of the context as defining where the parentheses should go once the new math is entered. ## Keyboard Input There are several places the user can enter text into the editor. Most of them behave the same way, but listed below are common uses and specifics: ### Common for all Text Entry Points • Pressing the Enter key or moving the cursor out of the text box (by pressing the Left/Right, Tab key, or clicking elsewhere) after entering will cause the Math to be parsed. • If the text cannot be converted to Math, it will appear with a red dashed line beneath it (like a spelling error) and must be corrected before saving. 
• Simple algebraic expressions, logic operations, trigonometric functions, and subscripts can be entered and will be converted into math. • If a shorthand notation exists for an operation, it will show up in the toolbar next to the name of the operation (See Toolbar. Shorthand notation is usually more natural (the operation, like addition, is between its arguments, like a+2 • If a shorthand notation does not exist for an operation, one can still enter the operation using the keyboard by typing the name of the operation which is also found in the menu (See Toolbar) ### Categories There are three categories of key presses and are enumerated in the table below. • Shortcuts are preceded by pressing the Ctrl key (or the ⌘ key on Apple computers) • Navigation keys move the cursor through the math • Modification keys change the math in some way Table 1 Category Key Condition Action Ctrl+ (Apple ⌘+) X Math is selected Cuts the selected Math to the clipboard and replaces it with an empty block (that can be deleted) C Math is selected Copies the selected Math to the clipboard V Math is selected Pastes MathML from the Clipboard (can be from other sources) Z   Undoes one step in the editor Y Ctrl+Z was just pressed Redoes one step in the editor E   Opens full-source editing Navigation Tab Shift+Tab   Moves to the next/previous free block Left / Right   Moves to the previous/next element in the Math Shift+Left / Shift+Right After / Before the Context Selects the Context element (right next to the cursor) Before / After the Context Selects the Context's parent Modification Enter   Attempts to parse the text entered next to the cursor Delete / Backspace Cursor next to Math Selects the Math Node (subsequent delete will remove the math) Math selected Removes the node and replaces it with an empty block (a second press will remove the block as well) Cursor in block Removes the empty block if it is allowed in MathML (in "a+b+c" any one variable can be removed, but addition requires at least 2 things to add) Table 2: Text Input Examples Type Input Math Output Calculator a*x^2+b*x+c=1/2 ax2+bx+c=12 a x 2 b x c 1 2 a && b || c != a -> b a and (b or c)ab a b c a b sin(x)^2+cos(x)^2=1 sin2x+cos2x=1 x 2 x 2 1 x_1+x_2<x_3 x 1 + x 2 < x 3 x 1 x 2 x 3 Templates sum=n*(n-1)/2 __ =n(n1)2 __ n n 1 2 MathML <pi /> π &#1207x; ҷ ## Text Entry This is a text entry place. See shortcuts. can paste MathML (Ctrl+V from Mathematica, MathType, etc), or enter simple algebra (see Shortcuts). Moving away using Enter, Tab, Left, Right will cause the input to be parsed and converted into Math. ## Blocks Blocks are holes that may need to be filled. (Click or Tab to them). Required blocks have a yellow background and optional ones are transparent and have a dotted border. Click, double-click, highlight, (only right-click inside a text box) ## Nuances / Limitations There are several nuances in the editor, and common ones are listed here, along with workarounds. Also listed are limitations of the editor and things we'd like to get working soon. • If more than two things are added or summed together, one cannot select only a subset of them. • One cannot easily change a "+" sign to "*". To do this, you will need to copy the entire "+" operation and paste it, then remove the unwanted children. • Moving children around by dragging is not possible. Unfortunately, this currently requires copying and pasting to the clipboard. ### Limitations #### Unable to change the domain of operations like Sum, Max, and Integrals. 
Operations like Sum, Max and Integrals may be over an interval, or when a certain condition holds (like x ∈ ℝ). The math editor allows editing these variations but does not always offer a way to create new operations. Currently, this must be done by hand by switching to the source edit view and manually replacing the <interval/> with a <condition/>. ### Wrapping Math with Math Sometimes it is necessary to add to existing mathematical operations. For example, adding higher terms to a polynomial. This can be done either by using the keyboard or with help of the toolbar. In the explanations below we start with "b*x+c=0" and create ax²+bx+c=0. #### Keyboard Only To add the ax² term: 1. Step 1. Move the cursor to the left of bx+c but make sure the context is only around bx+c and that bx+c is not selected. This can be done by clicking the "+" sign. 2. Step 2. Enter "a*x^2+" (without the quotes) and press the Enter key. #### Toolbar Using only the toolbar to insert math is a bit more difficult because the editor does not infer multiplication or addition when pasting right next to existing math. We will need to "wrap" the existing math with the combiner operation (usually +, *, or ^) and then add in the new math. 1. Step 1. Select bx+c but make sure only bx+c is selected. This can be done by double clicking the "+" sign. 2. Step 2. Cut the existing math. This should create an empty block. 3. Step 3. From the toolbar select the combiner operation. This should create at least one empty block. 4. Step 4. Paste the math that was cut earlier into one of the empty blocks. 5. Step 5. Select another empty block. 6. Step 6. From the toolbar, insert the operation.
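As a rough illustration of the tree structure described in the Math Tree and Content vs. Presentation sections above (not produced by the editor itself), the following Python sketch builds the Content MathML for the running example ax²+bx+c=0 using only the standard library; the element names (apply, eq, plus, times, power, ci, cn) are standard Content MathML.

```python
# Build the nested Content MathML "apply" tree for a*x^2+b*x+c=0.
import xml.etree.ElementTree as ET

def ci(name):               # variable
    el = ET.Element("ci"); el.text = name; return el

def cn(value):              # number
    el = ET.Element("cn"); el.text = str(value); return el

def apply(op, *args):       # <apply><op/>...</apply>
    el = ET.Element("apply")
    el.append(ET.Element(op))
    el.extend(args)
    return el

# eq( plus( times(a, power(x, 2)), times(b, x), c ), 0 )
math = ET.Element("math", xmlns="http://www.w3.org/1998/Math/MathML")
math.append(apply("eq",
                  apply("plus",
                        apply("times", ci("a"), apply("power", ci("x"), cn(2))),
                        apply("times", ci("b"), ci("x")),
                        ci("c")),
                  cn(0)))
print(ET.tostring(math, encoding="unicode"))
```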
2014-03-15 08:52:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3171035051345825, "perplexity": 2579.8233441328625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678696864/warc/CC-MAIN-20140313024456-00026-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.groundai.com/project/novel-bounds-on-the-capacity-of-the-binary-deletion-channel/
Novel Bounds on the Capacity of the Binary Deletion Channel # Novel Bounds on the Capacity of the Binary Deletion Channel ## Abstract We present novel bounds on the capacity of the independent and identically distributed binary deletion channel. Four upper bounds are obtained by providing the transmitter and the receiver with genie-aided information on suitably-defined random processes. Since some of the proposed bounds involve infinite series, we also introduce provable inequalities that lead to more manageable results. For most values of the deletion probability, these bounds improve the existing ones and significantly narrow the gap with the available lower bounds. Exploiting the same auxiliary processes, we also derive, as a by-product, a couple of very simple lower bounds on the channel capacity, which, for low values of the deletion probability, are almost as good as the best existing lower bounds. \centerfigcaptionstrue Binary deletion channel, channel capacity, capacity bounds. ## 1 Introduction We consider a binary deletion channel where each bit in the input sequence gets deleted, independently of the others, with probability , while the non-deleted bits are received without errors and in the correct order. The positions at which the deletions occur are unknown to both the transmitter and the receiver. Formally, let  be a sequence of  bits at the input of the channel, let  be the number of received bits, which is a random variable taking values in  according to the realization of the deletion process, and let  be the received sequence. The capacity per input bit of this channel, generally referred to as independent and identically distributed (IID) binary deletion channel, is defined as [1] C=limN→∞maxP(X)1NI(X;Y) (1) where is the distribution of the input sequence, and is the average mutual information between two random sequences [2]. The capacity (1) is unknown, and only some upper and lower bounds are available in the current literature. The first lower bound on the capacity of the deletion channel was derived by Gallager in [3], where he proved that, for , the capacity of interest is at least equal to that of a binary symmetric channel with bit-flipping probability . A number of lower bounds have since been proposed (see [4], [5], and references therein), among which the best bounds that we are aware of are the ones presented in [4] and [5]. In particular, the latter bound outperforms the former when , that is, for all values of  for which the authors of [5] could run the required computations whose execution time grows quickly as  increases. Throughout the paper, the reference lower bound will thus be the one in [5] for  and the one in [4] for . Only a few upper bounds have been derived on the capacity of the IID deletion channel. A simple upper bound is given by the capacity of an IID erasure channel with erasure probability , since the erasure channel is identical to the deletion channel, except that the receiver additionally knows the positions of the deleted bits [2]. A combinatorial bound proposed by Ullman in [6], which was originally derived for particular channels with synchronization errors, had been used for decades as an upper bound for the deletion channel. However, it is not a true upper bound, and it has been recently found to be violated by provable lower bounds on the capacity of the deletion channel [4]. 
The reason is due to the fact that Ullman focused on systems with null error probability, while the definition of capacity relies on the weaker condition of error probability that can be made arbitrarily low by increasing the length of the codewords [2]. The only non-trivial upper bound that we are aware of is the one presented in [7], which will be adopted here as a reference benchmark. This paper presents novel upper bounds on the capacity of the IID deletion channel that improve the existing ones for most values of the deletion probability $d$. All upper bounds are computed by considering the capacity of some auxiliary channels obtained by providing genie-aided information on suitable random processes related to the deletion process. In particular, we show that, when such auxiliary random processes are revealed to the transmitter and/or the receiver, we obtain memoryless channels whose capacity can be evaluated by means of the Blahut-Arimoto algorithm (BAA) [8, 9], leading to provable upper bounds on the capacity of interest. Moreover, we show that, based on the introduced auxiliary processes, lower bounds on the capacity of the deletion channel can be derived as well. The obtained lower bounds, yet close to the ones proposed in [4] and [5] for low values of $d$, do not improve them, and will only be considered as by-product results. The paper is organized as follows. Section 2 introduces an auxiliary channel based on which we derive three upper bounds on the capacity of the IID deletion channel, which are presented in Sections 3, 4, and 5, respectively. The fourth upper bound, evaluated by exploiting a different auxiliary channel, is introduced in Section 6. The main contributions in upper bounding the capacity of the deletion channel are summarized and discussed in Section 7. Finally, Section 8 introduces a couple of simple lower bounds, while Section 9 gives some concluding remarks.

## 2 A Useful Auxiliary Channel

Let $L$ and $R$ be two natural numbers such that $R\le L$, and let us define $D=L-R$. We consider a channel for which, at each use, the input consists of a sequence of $L$ bits and the output consists of a sequence of $R$ bits. The input/output relationship characterizing each channel use is the following: $D$ bits are deleted from the $L$ input bits, while the remaining $R$ bits are received without errors and in the correct order. At each channel use, the deletion pattern, that is, the positions at which the $D$ deletions occur, randomly takes on each of the possible $\binom{L}{D}$ realizations with equal probability, and is unknown to both the transmitter and the receiver. Also, deletion patterns in different channel uses are independent, so that the channel is memoryless. As an example, the transition probabilities characterizing the use of the channel are reported in Table 1 for a case with small $L$ and $R$. $A$ and $B$ denote the input sequence and the output sequence, respectively, while $P(B|A)$ denotes conditional probability. The capacity per use of the considered auxiliary channel is defined as
$$f(L,R)=\max_{P(A)}I(A;B), \qquad (2)$$
where $P(A)$ is the distribution of the input sequence. Since each channel output is a sequence of $R$ bits, the following upper bound holds
$$f(L,R)\le R. \qquad (3)$$
In some particular cases, it can be shown that $f(L,R)$ achieves the upper bound (3). These cases are listed and briefly discussed in the following.

• $R=0$. All input bits are deleted and no information can be delivered.

• $R=1$.
A capacity-achieving scheme consists of transmitting, at each channel use, either a sequence of zeros or a sequence of ones, with equal probability and independently of the previous/future transmissions. In this case, for each channel use, the only received bit fully determines the input sequence, irrespectively of the deletion pattern. Formally, adopting the standard notation for the entropy and the conditional entropy [2], we get
$$I(A;B)=H(A)-H(A|B)=H(A)=1$$
which achieves the upper bound (3).

• $R=L$. Since all transmitted bits are correctly received, the capacity is equal to $L$ bits per channel use, which is achieved by independent and uniformly distributed (IUD) input bits.

When $1<R<L$, we could not find a closed-form expression of the capacity $f(L,R)$. On the other hand, since the auxiliary channel is memoryless and has finite input/output alphabets, its capacity can be numerically evaluated by means of the BAA [8, 9]. To run the BAA, we only need the transition probabilities characterizing the channel, as those reported in Table 1. Hence, in principle, we can compute the capacity $f(L,R)$ based on similar tables, for all desired values of $L$ and $R$. Unfortunately, the implementation of the BAA becomes computationally infeasible for large values of $L$ — for example, only moderate values of $L$ could be managed for all possible values of $R$, while somewhat larger values of $L$ could be managed for $D=1$, which will be shown later to be a case of particular interest. Some values of $f(L,R)$ are reported in Table 2, where the results obtained by means of the BAA have a two-digit precision after the decimal point, and are rounded up to the next hundredth since, rigorously, the BAA can underestimate the true capacity if a finite number of iterations are performed [8, 9]. In the following, we introduce several lemmas that will be used in the remaining sections to manipulate the capacity of the auxiliary channel when running the BAA seems impossible. Before providing the lemmas, we define
$$\tilde f(L,D)=f(L,L-D), \qquad (4)$$
so that we can index the capacity of the auxiliary channel either by the number of received bits, using $f$, or by the number of deleted bits, using $\tilde f$. The following definitions will also be useful in the remaining sections:
$$\alpha(L,R)=R-f(L,R), \qquad (5)$$
$$\tilde\alpha(L,D)=\alpha(L,L-D)=L-D-\tilde f(L,D). \qquad (6)$$
Note that the coefficients $\alpha(L,R)$ and $\tilde\alpha(L,D)$ cannot be negative due to (3).

Lemma 1: For all values of $L$ and $R$, the following holds
$$f(L+1,R)\le f(L,R). \qquad (7)$$
The proof is based on the fact that, when additional information is provided to the transmitter, the capacity of a system cannot decrease [2]. In particular, the capacity $f(L+1,R)$ cannot decrease if, at each channel use, the transmitter knows one of the positions at which the deletions occur. Clearly, the bit transmitted in that position is irrelevant. Moreover, if the revealed position is chosen according to a uniform distribution over the deleted positions, the system is characterized by $L$ effective input bits, $R$ output bits, and IUD deletion patterns, that is, by definition, a system with capacity $f(L,R)$. Hence, the lemma is proved.

Lemma 2: For all values of $L$ and $R$, the following holds
$$\text{if } \hat L>L \text{ then } \alpha(\hat L,R)\ge\alpha(L,R). \qquad (8)$$
The proof that $\alpha(L+1,R)\ge\alpha(L,R)$ is simply derived from (5) and (7). The remainder of the lemma can then be proved by induction.

Lemma 3: For all values of $L$ and all positive values of $D$, the following holds
$$\tilde f(L+1,D)\le\frac{D}{L+1}\,\tilde f(L,D-1)+\left(1-\frac{D}{L+1}\right)\left[1+\tilde f(L,D)\right]. \qquad (9)$$
The proof is based on the fact that, when additional information is provided to both the transmitter and the receiver, the capacity of a system cannot decrease [2].
In particular, we consider the information on the binary event "the last bit of the $L+1$ transmitted bits is deleted", which occurs with probability $\frac{D}{L+1}$. When the event occurs, the last transmitted bit is irrelevant and the system is characterized by $L$ effective input bits and $D-1$ deletions on IUD positions, that is, the system has capacity $\tilde f(L,D-1)$. When the event does not occur, the last transmitted bit can be safely sent uncoded, while, for the first $L$ transmitted bits, the system is characterized by $L$ effective input bits and $D$ deletions on IUD positions, that is, the system has capacity $\tilde f(L,D)$. Hence, the lemma is proved.

Lemma 4: For all values of $L$ and all values of $D$, the following holds
$$\tilde\alpha(L+1,D)\ge\tilde\alpha(L,D)\left(1-\frac{D}{L+1}\right). \qquad (10)$$
The lemma is proved after straightforward manipulations of (9) based on (3) and (6).

The lemmas provided hereafter focus on a particular case, that is, the occurrence of exactly one deletion. The reader may skip them without affecting the arguments exploited in Sections 3, 4, 5, and 6. The interest for this case will become evident in Section 7.

Lemma 5: For all values of $L$, the following holds
$$\tilde f(nL,1)\le\tilde f(L,1)+(n-1)L,\quad\forall n>0. \qquad (11)$$
Let us partition the input sequence of $nL$ bits into $n$ subsequences of $L$ consecutive bits, and let us assume that both the transmitter and the receiver know in which of the subsequences the deletion occurs. By definition, this subsequence has capacity $\tilde f(L,1)$, while the remaining $n-1$ subsequences have capacity $L$ each. Hence, since the capacity $\tilde f(nL,1)$ cannot exceed that of the described genie-aided system, the lemma is proved.

Lemma 6: For all values of $L$, the following holds
$$\tilde\alpha(nL,1)\ge\tilde\alpha(L,1),\quad\forall n>0. \qquad (12)$$
The lemma directly follows from (11) by definition (6).

Lemma 7: For all values of $L$, the following holds
$$\tilde f(L+1,1)\ge\tilde f(L,1)+1-\frac{1}{L+1}-h\!\left(\frac{1}{L+1}\right) \qquad (13)$$
where $h(\cdot)$ is the binary entropy function [2]. To prove the lemma, we first notice that the equation $I(A;B)=I(A;B,C)-I(A;C|B)$ holds irrespectively of the definition of the random processes $A$, $B$, and $C$ [2]. Moreover, since $I(A;C|B)$ cannot be larger than the entropy $H(C)$ of the process $C$, we can write
$$I(A;B)\ge I(A;B,C)-H(C). \qquad (14)$$
In particular, let $A$ and $B$ be, respectively, the input sequence and the output sequence of the auxiliary channel considered in this section, when the input sequence includes $L+1$ bits and exactly one deletion occurs. Also, let $C$ be the binary event "the last bit of the $L+1$ transmitted bits is deleted", whose entropy is $h\!\left(\frac{1}{L+1}\right)$. Under these definitions, the inequality
$$\tilde f(L+1,1)\ge\max_{P(A)}I(A;B,C)-h\!\left(\frac{1}{L+1}\right) \qquad (15)$$
follows from (14). Note that the first term at the right-hand side of (15) is the capacity of a channel identical to the considered one, when the receiver is provided with side information on the event $C$, while the transmitter is not. According to the data-processing inequality [2], the capacity of this genie-aided system does not increase if, when the event $C$ occurs, the receiver deletes one of the received bits, selected with equal probability over the $L$ received bits. In this case, the channel consists of two independent subchannels: the former is characterized by $L$ input bits and one deletion on IUD positions, and thus has capacity $\tilde f(L,1)$, while the latter is an erasure channel with erasure probability $\frac{1}{L+1}$, and thus has capacity $1-\frac{1}{L+1}$. Hence, we can write
$$\max_{P(A)}I(A;B,C)\ge\tilde f(L,1)+1-\frac{1}{L+1}$$
which, combined with (15), proves the lemma.

Lemma 8: The following holds
$$\lim_{L\to\infty}\frac{\tilde\alpha(L+1,1)}{\tilde\alpha(L,1)}=1. \qquad (16)$$
To prove the lemma, we first notice that the inequality
$$\tilde\alpha(L+1,1)\le\tilde\alpha(L,1)+\frac{1}{L+1}+h\!\left(\frac{1}{L+1}\right) \qquad (17)$$
directly follows from (13) by definition (6).
Then, according to (10) and (17), we can write
$$1-\frac{1}{L+1}\le\frac{\tilde\alpha(L+1,1)}{\tilde\alpha(L,1)}\le 1+\frac{1}{\tilde\alpha(L,1)\,(L+1)}+\frac{1}{\tilde\alpha(L,1)}\,h\!\left(\frac{1}{L+1}\right)$$
which proves the lemma since both sides tend to one as $L$ tends to infinity.

Lemma 9: The following holds
$$\lim_{L\to\infty}\left[\tilde f(L+1,1)-\tilde f(L,1)\right]=1. \qquad (18)$$
To prove the lemma, we first notice that the inequalities
$$1-\frac{1}{L+1}-h\!\left(\frac{1}{L+1}\right)\le\tilde f(L+1,1)-\tilde f(L,1)\le 1+\frac{L-1-\tilde f(L,1)}{L+1} \qquad (19)$$
follow from (9) and (13) after simple manipulations. The left-hand side in (19) clearly tends to one as $L$ tends to infinity. Then, we notice that the limit
$$\lim_{L\to\infty}\frac{\tilde f(L,1)}{L}=1$$
follows from the fact that the binary channel with one deletion tends to the binary identity channel, whose capacity per input bit is one, as the length of the input sequence tends to infinity. Hence, the right-hand side in (19) tends to one as $L$ tends to infinity, and the lemma is proved. Note that (18) implies (16), but is stronger.

## 3 The First Upper Bound

In this section, we derive an upper bound on $C$ by providing side information on a random process $Z=\{Z_i\}$, defined in the following. Let $D$ be a non-negative integer parameter and let us assume that the total number of deleted bits is a multiple of $D+1$, so that $S$, the total number of deleted bits divided by $D+1$, is an integer — this assumption does not affect the capacity evaluation, where the limit $N\to\infty$ is to be considered. We define $Z$ such that $Z_1$ is equal to the position in the transmitted sequence of the $(D+1)$-th deleted bit and, for each value of $i$ in $\{2,\dots,S\}$, $Z_i$ is equal to the difference between the position in the transmitted sequence of the $[i(D+1)]$-th deleted bit and that of the $[(i-1)(D+1)]$-th deleted bit. An example is depicted in Fig. 1 and discussed in the related caption. Given the assumption of IID deletions, the process $Z$ is IID too, and each element of $Z$ takes on the value $L+1$ with probability
$$P(Z_i=L+1)=\binom{L}{D}d^{D+1}(1-d)^{L-D} \qquad (20)$$
according to the Pascal distribution [10], for all values of $L$ such that $L\ge D$. To point out various similarities between the bounds presented in this paper, it is useful to define, for $R\le L$ and $D=L-R$, the terms
$$p(L,R)=\binom{L}{R}d^{L-R}(1-d)^{R} \qquad (21)$$
$$\tilde p(L,D)=p(L,L-D)=\binom{L}{D}d^{D}(1-d)^{L-D} \qquad (22)$$
so that we get
$$P(Z_i=L+1)=d\cdot\tilde p(L,D). \qquad (23)$$
The realizations of the process $Z$ are actually unknown to both the transmitter and the receiver. Hence, an upper bound on the capacity of the deletion channel can be obtained by providing them with genie-aided information on $Z$. We will refer to the capacity per input bit of this genie-aided system as $C_1$. With this side information, we have $S$ blocks that do not interfere with each other, where the $i$-th block has $Z_i$ input bits, of which $D+1$ get deleted. The last input bit of each block is irrelevant, since both the transmitter and the receiver know that it gets deleted. The $i$-th block is thus characterized by $Z_i-1$ effective input bits and $D$ deletions on IUD positions, so that the related capacity is $\tilde f(Z_i-1,D)$, as defined in Section 2. Hence, defining the expectation operator $E[\cdot]$ and considering that $\lim_{N\to\infty}N/S=E[Z_i]$ by the law of large numbers [10], we get
$$C_1=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{S}\tilde f(Z_i-1,D)=\frac{1}{E[Z_i]}\lim_{S\to\infty}\frac{1}{S}\sum_{i=1}^{S}\tilde f(Z_i-1,D)=\frac{1}{E[Z_i]}\,E\!\left[\tilde f(Z_i-1,D)\right]$$
where the last equality follows from the law of large numbers. Finally, by exploiting the properties of the Pascal distribution [10], the upper bound yields
$$C_1=\frac{d^2}{D+1}\sum_{L=D}^{\infty}\tilde f(L,D)\,\tilde p(L,D)$$
which can be also written as
$$C_1=\frac{d^2}{D+1}\sum_{L=D}^{\infty}[L-D]\,\tilde p(L,D)-\frac{d^2}{D+1}\sum_{L=D}^{\infty}\left[L-D-\tilde f(L,D)\right]\tilde p(L,D)=1-d-\frac{d^2}{D+1}\sum_{L=D}^{\infty}\tilde\alpha(L,D)\,\tilde p(L,D). \qquad (24)$$
Since the coefficients $\tilde\alpha(L,D)$ cannot be negative, the bound (24) is at least as good as the trivial bound $C\le 1-d$.
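As a rough illustration of how a bound of the form (24) can be evaluated numerically, the sketch below (illustrative Python, not from the paper) computes the weights $\tilde p(L,D)$ and a truncated version of the series. Since every term of the series is non-negative, truncating it can only increase the right-hand side, so the truncated value is still a valid upper bound on $C$, although looser than the bound obtained by extrapolating the missing coefficients with Lemma 4. The dictionary `alpha`, mapping $L$ to $\tilde\alpha(L,D)$, is assumed to be supplied from elsewhere (e.g. BAA runs).

```python
from math import comb

def pascal_weight(L, D, d):
    """tilde p(L, D) = C(L, D) * d^D * (1 - d)^(L - D), as in (22)."""
    return comb(L, D) * d**D * (1 - d)**(L - D)

def c1_truncated(d, D, alpha):
    """Truncated form of (24): 1 - d - d^2/(D+1) * sum_L alpha[L] * tilde_p(L, D).
    `alpha` holds tilde_alpha(L, D) only for the values of L that are known."""
    s = sum(a * pascal_weight(L, D, d) for L, a in alpha.items())
    return 1 - d - d**2 / (D + 1) * s
```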
In particular, by combining Lemma 4 with the available outcomes of the BAA, it can be proved that the bound (24) equals $1-d$ when $D=0$, otherwise it is strictly better. Unless $D=0$, it seems infeasible to evaluate the coefficients $\tilde\alpha(L,D)$ for all values of $L$ required in (24). Let us assume that we know the coefficients $\tilde\alpha(L,D)$ for all values of $L$ such that $L\le L_{\mathrm{MAX}}$, but not for larger values of $L$ — in practice, $L_{\mathrm{MAX}}$ is set by the largest instances for which the BAA could be run. In this case, we can exploit the inequality in (10) to manipulate the coefficients $\tilde\alpha(L,D)$ for $L>L_{\mathrm{MAX}}$. The obtained results are reported in Fig. 2, for several values of $D$ and of the deletion probability $d$. The resulting bounds, referred to as $C_1^*$, are actually larger than the capacity $C_1$ in (24), because of the use of (10) for $L>L_{\mathrm{MAX}}$. Hence, the reported curves can be improved when an inequality tighter than (10) is exploited to manipulate the coefficients $\tilde\alpha(L,D)$ for $L>L_{\mathrm{MAX}}$. In Fig. 2, the upper bound proposed in [7] and the lower bounds proposed in [4] and [5], which are the best existing bounds that we are aware of, are also reported for comparison (see Footnote 1). We point out that the upper bound $C_1^*$ improves the upper bound presented in [7] for a wide range of $d$ values.

## 4 The Second Upper Bound

In this section, we derive an upper bound on $C$ by providing side information on a random process $W=\{W_i\}$, defined in the following. Let $R$ be a non-negative integer parameter and let us assume that the number of bits at the output of the deletion channel is a multiple of $R+1$, so that $T$, the number of received bits divided by $R+1$, is an integer — this assumption does not affect the capacity evaluation, as in the previous case. We define $W$ such that $W_1$ is equal to the position in the transmitted sequence of the $(R+1)$-th received bit and, for each value of $i$ in $\{2,\dots,T\}$, $W_i$ is equal to the difference between the position in the transmitted sequence of the $[i(R+1)]$-th received bit and that of the $[(i-1)(R+1)]$-th received bit. An example is depicted in Fig. 3 and discussed in the related caption. Given the assumption of IID deletions, the process $W$ is IID too, and each element of $W$ takes on the value $L+1$ with probability
$$P(W_i=L+1)=(1-d)\,p(L,R) \qquad (25)$$
according to the Pascal distribution [10], for all values of $L$ such that $L\ge R$. As in the previous case, an upper bound on the capacity of the deletion channel can be obtained by providing the transmitter and the receiver with genie-aided information on the realizations of $W$. We will refer to the capacity per input bit of this genie-aided system as $C_2$. Similarly to the previous case, we have $T$ blocks that do not interfere with each other, the $i$-th block having $W_i$ input bits and $R+1$ output bits. The last input bit of each block can be safely sent uncoded, since both the transmitter and the receiver know that it is correctly received. Hence, following the same arguments as in the previous section, we get
$$C_2=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{T}\left[f(W_i-1,R)+1\right]=\frac{1}{E[W_i]}\lim_{T\to\infty}\frac{1}{T}\sum_{i=1}^{T}\left[f(W_i-1,R)+1\right]=\frac{1}{E[W_i]}\,E\!\left[f(W_i-1,R)+1\right].$$
Finally, by exploiting (25) and the properties of the Pascal distribution, the upper bound yields
$$C_2=\frac{(1-d)^2}{R+1}\sum_{L=R}^{\infty}\left[f(L,R)+1\right]p(L,R)$$
which can be also written as
$$C_2=\frac{(1-d)^2}{R+1}\sum_{L=R}^{\infty}(R+1)\,p(L,R)-\frac{(1-d)^2}{R+1}\sum_{L=R}^{\infty}\left[R-f(L,R)\right]p(L,R)=1-d-\frac{(1-d)^2}{R+1}\sum_{L=R}^{\infty}\alpha(L,R)\,p(L,R). \qquad (26)$$
Since the coefficients $\alpha(L,R)$ cannot be negative, the bound (26) is at least as good as the trivial bound $C\le 1-d$. In particular, by combining Lemma 2 with the available outcomes of the BAA, it can be proved that the bound (26) equals $1-d$ when $R=0$, otherwise it is strictly better. When $R>0$, it seems infeasible to evaluate the coefficients $\alpha(L,R)$ for all values of $L$ required in (26).
Let us assume that we know the coefficients $\alpha(L,R)$ for all values of $L$ such that $L\le L_{\mathrm{MAX}}$, but not for larger values of $L$ — again, $L_{\mathrm{MAX}}$ is set by the largest instances manageable by the BAA. In this case, we can exploit (8) to manipulate the coefficients $\alpha(L,R)$ for $L>L_{\mathrm{MAX}}$, obtaining
$$C_2^*=\frac{(1-d)^2}{R+1}\sum_{L=R}^{L_{\mathrm{MAX}}}\left[\alpha(L_{\mathrm{MAX}},R)-\alpha(L,R)\right]p(L,R)+(1-d)\left[1-\frac{\alpha(L_{\mathrm{MAX}},R)}{R+1}\right] \qquad (27)$$
after a few straightforward manipulations — the bound is referred to as $C_2^*$ because it is actually larger than the capacity $C_2$ in (26). The obtained results are reported in Fig. 4, for several values of $R$ and of the deletion probability $d$. Clearly, such curves can be improved when an inequality tighter than (8) is exploited to manipulate the coefficients $\alpha(L,R)$ for $L>L_{\mathrm{MAX}}$. In Fig. 4, the upper bound proposed in [7] and the lower bounds proposed in [4] and [5] are also reported for comparison. We point out that the upper bound $C_2^*$ improves the upper bound presented in [7] for most values of $d$, and, for large values of $d$, the gap from the best lower bound is now roughly halved.

## 5 The Third Upper Bound

In this section, we derive an upper bound on $C$ by providing side information on a random process $V=\{V_i\}$, defined in the following. Let $L$ be a positive integer parameter, based on which we partition the input sequence $X$ into subsequences $X_i$ of $L$ consecutive bits. Formally, we define
$$X_i=(X_{(i-1)L+1},X_{(i-1)L+2},\dots,X_{iL}),\quad\forall i\ge 1.$$
For example, when $L=3$, the first subsequence collects bits 1 to 3, the second collects bits 4 to 6, and so on. We assume that $N$ is a multiple of $L$, and thus that there are exactly $Q=N/L$ subsequences $X_i$ — this assumption does not affect the capacity evaluation, as in the previous cases. We then partition the output sequence $Y$ into $Q$ subsequences $Y_i$, where, for each value of $i$ in $\{1,\dots,Q\}$, $Y_i$ includes the received bits related to the input subsequence $X_i$. Finally, we define the random process $V$ such that, for each value of $i$ in $\{1,\dots,Q\}$, $V_i$ denotes the number of bits in the subsequence $Y_i$. An example is depicted in Fig. 5 and discussed in the related caption. Given the assumption of IID deletions, the process $V$ is IID too, and each element of $V$ takes on the value $R$ in $\{0,1,\dots,L\}$ with probability $p(L,R)$, according to the binomial distribution. As in the previous cases, an upper bound on the capacity of the deletion channel can be obtained by providing the transmitter and the receiver with genie-aided information on the realizations of $V$. We will refer to the capacity per input bit of this genie-aided system as $C_3$. Similarly to the previous cases, we have $Q$ blocks that do not interfere with each other, the $i$-th block having $L$ input bits and $V_i$ output bits. Hence, using similar arguments as in the previous sections, we get
$$C_3=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{Q}f(L,V_i)=\frac{1}{L}\lim_{Q\to\infty}\frac{1}{Q}\sum_{i=1}^{Q}f(L,V_i)=\frac{1}{L}\sum_{R=0}^{L}f(L,R)\,p(L,R)$$
which can be also written as
$$C_3=\frac{1}{L}\sum_{R=0}^{L}R\,p(L,R)-\frac{1}{L}\sum_{R=0}^{L}\left[R-f(L,R)\right]p(L,R)=1-d-\frac{1}{L}\sum_{R=0}^{L}\alpha(L,R)\,p(L,R). \qquad (28)$$
Hence, since the coefficients $\alpha(L,R)$ cannot be negative, the bound (28) is at least as good as the trivial bound $C\le 1-d$. In particular, by combining Lemma 2 with the available outcomes of the BAA, it can be proved that the bound (28) equals $1-d$ when $L=1$, otherwise it is strictly better. Note that, unlike the previous cases, the bound $C_3$ does not involve an infinite series. The upper bound (28) is plotted in Fig. 6, together with the upper bound proposed in [7] and the lower bounds proposed in [4] and [5]. For each value of $L$ for which we could run the BAA, the bound $C_3$ improves as $L$ increases — we conjecture that this behavior holds for any value of $L$ (see Section 7). Note that the considered approach significantly improves the bound presented in [7] for most values of the deletion probability $d$.
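For small $L$, the whole pipeline behind (28) can be sketched in a few lines of Python (illustrative code, not from the paper): enumerate the Section 2 auxiliary channel, estimate each $f(L,R)$ with a basic Blahut-Arimoto iteration, and combine the results with the binomial weights $p(L,R)$. The brute-force enumeration over all $2^L$ inputs is exactly what makes large $L$ infeasible, as discussed above.

```python
import numpy as np
from math import comb
from itertools import combinations, product

def aux_channel_matrix(L, R):
    """Transition matrix P[a, b] of the Section-2 channel: L input bits,
    D = L - R deletions, each of the C(L, D) patterns equally likely."""
    D = L - R
    inputs = list(product((0, 1), repeat=L))
    outputs = list(product((0, 1), repeat=R))
    out_idx = {b: j for j, b in enumerate(outputs)}
    pats = list(combinations(range(L), D))
    P = np.zeros((len(inputs), len(outputs)))
    for i, a in enumerate(inputs):
        for pat in pats:
            b = tuple(bit for k, bit in enumerate(a) if k not in pat)
            P[i, out_idx[b]] += 1.0 / len(pats)
    return P

def blahut_arimoto(P, n_iter=500):
    """Capacity (bits per use) of a discrete memoryless channel P[x, y] = P(y|x)."""
    nx, _ = P.shape
    r = np.full(nx, 1.0 / nx)                      # input distribution
    for _ in range(n_iter):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True)          # posterior q(x | y)
        w = np.exp((P * np.log(q + 1e-300)).sum(axis=1))
        r = w / w.sum()                            # updated input distribution
    py = r @ P                                     # output distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P / py[None, :]), 0.0)
    return float((r[:, None] * terms).sum())

def c3_bound(d, L):
    """Evaluate (28): C3 = (1/L) * sum_R f(L, R) * p(L, R)."""
    total = 0.0
    for R in range(L + 1):
        # f(L,0) = 0 and f(L,L) = L are closed-form cases from Section 2
        f_LR = blahut_arimoto(aux_channel_matrix(L, R)) if 0 < R < L else float(R == L) * L
        total += f_LR * comb(L, R) * d**(L - R) * (1 - d)**R
    return total / L
```

Up to the convergence accuracy of the iteration, this should reproduce the closed-form cases $f(L,0)=0$, $f(L,1)=1$ and $f(L,L)=L$ noted in Section 2, and `c3_bound(d, 1)` returns exactly $1-d$.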
## 6 The Fourth Upper Bound

Given any positive value of the integer parameter $L$, we can define a system identical to the deletion channel, in which the receiver knows the realizations of the process $V$ defined in the previous section, while the transmitter does not. In this case, it is useful to think of the system as if there were a "parallel" channel that provides the sequence $V$ to the receiver. The capacity per input bit of this system, which will be denoted by $C_4$, is definitely an upper bound on the capacity (1), since, when the parallel output $V$ is neglected, the original deletion channel is obtained. Moreover, the upper bound $C_4$ cannot be larger than $C_3$ for the same value of $L$, since the system with capacity $C_3$ reduces to the system with capacity $C_4$ when the transmitter neglects the side information on the process $V$. As for the system considered in the previous section, we have $Q$ blocks that do not interfere with each other, so that a discrete memoryless channel results. For each use of this channel, we still have an input sequence of $L$ bits and, with probability $p(L,R)$, an output sequence of $R$ bits, but now the value of $R$ is unknown to the transmitter. Hence, all transmitted sequences must be taken from the same distribution, and no longer from a distribution matched to the number of deletions in the current channel use. Consequently, the results related to the auxiliary channel introduced in Section 2 cannot be exploited here. Formally, we get
$$C_4=\lim_{N\to\infty}\max_{P(X)}\frac{1}{N}I(X;Y,V)=\frac{1}{L}\lim_{Q\to\infty}\max_{P(X)}\frac{1}{Q}I(X;Y,V)=\frac{1}{L}\max_{P(X_i)}I(X_i;Y_i). \qquad (29)$$
When $L=1$, this auxiliary channel reduces to the erasure channel, so that $C_4=1-d$. In any other case, we could not find a closed-form expression of $C_4$, and still resorted to the BAA. To run the BAA, we need the transition probabilities characterizing the channel, as those reported in Table 3 for a case with small $L$. We point out that, unlike the auxiliary channel considered in Section 2, the transition probabilities now depend on the value of $d$, so that the BAA must be run for each value of the deletion probability. The upper bounds $C_3$ and $C_4$ are compared in Fig. 7 for three different values of $d$ — in both cases, the curves extend up to the largest $L$ for which we could run the BAA. We point out that the difference between the two bounds, although $C_4$ is rigorously tighter for each value of $L$, tends to vanish as $L$ increases. This is due to the fact that, for large values of $L$, the number of deletions for every $L$ transmitted bits is very likely to be close to $dL$, so that the advantage of knowing the actual number of such deletions (as it happens to the transmitter for the system with capacity $C_3$) tends to vanish. As for the bound $C_3$, for each value of $L$ for which we could run the BAA, the bound $C_4$ improves as $L$ increases, and we conjecture that this behavior holds for any value of $L$ (see Section 7).

## 7 Discussions on the Proposed Upper Bounds

In Table 4, we report a comparison between the best upper bounds found in this paper (using, in each range of $d$, the proposed bound with the largest parameter values we could manage) and the existing upper bounds that we are aware of. We remark that the proposed approaches lead to a new state-of-the-art upper bound on the capacity of the deletion channel for most values of $d$, as evident from the table (where the best values are shown in bold face). We believe that the values reported in Table 4 could be improved if it were possible to run the BAA for longer sequences. In particular, our conjecture is formalized in the following.
Conjecture 1: the bound $C_1$ does not worsen as $D$ increases; the bound $C_2$ does not worsen as $R$ increases; the bound $C_3$ does not worsen as $L$ increases; the bound $C_4$ does not worsen as $L$ increases. These conjectures are based on the amount of genie-aided information, that is, the entropy per input bit of the revealed processes. The idea is that the lower the entropy per input bit of the revealed information, the tighter the upper bound. For example, let us consider the bound $C_1$: if we reveal the position of one deletion every 100, we expect a tighter bound than if we reveal the position of one deletion every 3. Unfortunately, we could not completely prove the conjectures listed above, but we were able to derive closely related results. For example, we can prove that $C_4$ does not increase when $L$ is replaced by any positive multiple of $L$. It is sufficient to note that, when $L=\ell$, the revealed process $V$ carries the same information as when $L=n\ell$ (with $n$ a positive integer), plus some additional information. Hence, we get
$$\max_{P(X_i)}I(X_i;Y_i)\Big|_{L=n\ell}\le n\,\max_{P(X_i)}I(X_i;Y_i)\Big|_{L=\ell}$$
which, according to (29), proves that $C_4$ does not increase when $\ell$ is replaced by $n\ell$. We now discuss the behavior of the proposed upper bounds for limiting values of $d$, that is, $d\to 0$ and $d\to 1$. In particular, after straightforward manipulations, the following results can be obtained
$$\lim_{d\to 0^+}\frac{1-C_2^*}{d}=\alpha(R+1,R)+1=\tilde\alpha(R+1,1)+1 \qquad (30)$$
$$\lim_{d\to 0^+}\frac{1-C_3}{d}=\alpha(L,L-1)+1=\tilde\alpha(L,1)+1 \qquad (31)$$
$$\lim_{d\to 1^-}\frac{C_2^*}{1-d}=1-\frac{\alpha(L_{\mathrm{MAX}},R)}{R+1} \qquad (32)$$
which are valid for any finite value of $L$, $R$, and $L_{\mathrm{MAX}}$. The limits reported above are the only ones leading to closed-form expressions that do not reduce to the trivial erasure-channel bound. The limit for small values of $d$ is determined by the coefficient $\tilde\alpha(\cdot,1)$, some values of which are reported in Table 5 — note that the coefficients in (30) and (31) are identical, except for the name of the parameters. The best value that we have found so far is
$$\lim_{d\to 0^+}\frac{1-C_3}{d}=4.19 \qquad (33)$$
obtained for the largest value of $L$ that we could manage. Other than the erasure-channel bound, we are not aware of any upper bound that leads to closed-form limiting expressions comparable with the reported one. We believe that (33) could be improved if it were possible to run the BAA for longer sequences, as formalized in the following.

Conjecture 2: For all values of $L$, the following holds
$$\text{if } \hat L>L \text{ then } \tilde\alpha(\hat L,1)\ge\tilde\alpha(L,1). \qquad (34)$$
We wish to prove this conjecture since it would imply that the asymptotic upper bound (31) does not worsen as $L$ increases. Additionally, a strict inequality in (34), which holds for all available outcomes of the BAA, would imply that the asymptotic upper bound (31) improves as $L$ increases. Lemma 6 gives a partial proof of (34). We point out that the limiting value (31) may not be bounded, since (17) does not satisfy any convergence criterion [11]. The limit for large values of $d$ leads to similar considerations. In particular, the best value that we have found so far is
$$\lim_{d\to 1^-}\frac{C_2^*}{1-d}=0.49, \qquad (35)$$
obtained by (32) with the largest values of $R$ and $L_{\mathrm{MAX}}$ that we could manage. Note that, according to (8), the reported value could be improved by running the BAA for longer sequences, which unfortunately seems infeasible. We point out that (35) improves the limiting upper bound
$$\lim_{d\to 1^-}\frac{C}{1-d}\le 0.7918$$
derived in [7], and narrows the gap from the limiting lower bound
$$\lim_{d\to 1^-}\frac{C}{1-d}\ge 0.1185$$
derived in [12].

## 8 Two Simple Lower Bounds

In this section, we derive lower bounds on $C$ by exploiting the random process $V$ defined in Section 5. For any input distribution $P(X)$, the following equation holds
$$I(X;Y)=I(X;Y,V)-I(X;V|Y) \qquad (36)$$
by definition [2].
Moreover, since $I(X;V|Y)$ cannot be larger than the entropy $H(V)$ of the process $V$, we can write
$$I(X;Y)\ge I(X;Y,V)-H(V) \qquad (37)$$
from which we get the following lower bound on the capacity of the deletion channel
$$C\ge\lim_{N\to\infty}\frac{1}{N}I(X;Y,V)-\lim_{N\to\infty}\frac{1}{N}H(V). \qquad (38)$$
If we consider the process $V$ defined before, following the arguments given for the derivation of (29), we obtain
$$\lim_{N\to\infty}\frac{1}{N}I(X;Y,V)=\frac{1}{L}I(X_i;Y_i)\,,\qquad\lim_{N\to\infty}\frac{1}{N}H(V)=\frac{1}{L}H(V_i),$$
so that (38) can be written as
$$C\ge\frac{1}{L}I(X_i;Y_i)+\frac{1}{L}\sum_{R=0}^{L}p(L,R)\log_2\left[p(L,R)\right]. \qquad (39)$$
In Fig. 8, the lower bound (39) is compared with the best lower bound available in the literature, namely the one from [4] or the one from [5], depending on the value of $d$ (see Section 1). For the computation of (39), two different input distributions have been considered, that is, the distribution that maximizes $I(X_i;Y_i)$, which was considered in the previous section to derive $C_4$, and IUD input bits. Note that the difference between the curve related to the optimized input distribution and that related to IUD input bits is not significant for low values of $d$, which is compliant with the fact that IUD input bits are optimal when $d=0$. Interestingly, for low values of $d$, both distributions lead to a lower bound roughly as good as the reference benchmarks, as evident from Table 6 (where the best values are shown in bold face).

## 9 Conclusions

We have presented novel upper bounds on the capacity of the IID binary deletion channel. All bounds have been obtained by revealing side information on suitable random processes, and by computing the capacity of the resulting genie-aided systems. The proposed approaches lead to a new state-of-the-art upper bound for most values of the deletion probability $d$, and provide novel insights on the channel capacity in the limiting scenarios $d\to 0$ and $d\to 1$. As a by-product of our approach, we have also presented simple lower bounds, which turn out not to improve the existing ones.

### Footnotes

1. As explained in Section 1, the lower bound proposed in [5] is adopted for the smaller values of $d$ mentioned there, while the one proposed in [4] is adopted otherwise.

### References

1. R. L. Dobrushin, "Shannon's theorems for channels with synchronization errors," Problems of Information Transmission, vol. 3, no. 4, pp. 11–26, 1967.
2. T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: John Wiley & Sons, Inc., 1991.
3. R. Gallager, "Sequential decoding for binary channels with noise and synchronization errors," tech. rep., Lincoln Lab. Group Report, Oct. 1961.
4. E. Drinea and M. Mitzenmacher, "Improved lower bounds for the capacity of i.i.d. deletion and duplication channels," IEEE Trans. Inform. Theory, vol. 53, pp. 2693–2714, Aug. 2007.
5. E. Drinea and A. Kirsch, "Directly lower bounding the information capacity for channels with I.I.D. deletions and duplications," in Proc. IEEE International Symposium on Information Theory, pp. 1731–1735, 2007.
6. J. D. Ullman, "On the capabilities of codes to correct synchronization errors," IEEE Trans. Inform. Theory, vol. 13, pp. 95–105, Jan. 1967.
7. S. Diggavi, M. Mitzenmacher, and H. D. Pfister, "Capacity upper bounds for the deletion channels," in Proc. IEEE International Symposium on Information Theory, pp. 1716–1720, 2007.
8. R. E. Blahut, "Computation of channel capacity and rate distortion functions," IEEE Trans. Inform. Theory, vol. 18, pp. 460–473, Jan. 1972.
9. S. Arimoto, "An algorithm for calculating the capacity of an arbitrary discrete memoryless channel," IEEE Trans. Inform. Theory, vol. 18, pp. 14–20, Jan. 1972.
10. A. Papoulis, Probability, Random Variables and Stochastic Processes.
New York, NY: McGraw-Hill, 1991.
11. W. Rudin, Principles of Mathematical Analysis. New York: McGraw-Hill, 1974.
12. M. Mitzenmacher and E. Drinea, "A simple lower bound for the capacity of the deletion channel," IEEE Trans. Inform. Theory, vol. 52, pp. 4657–4660, Oct. 2006.
2019-03-21 13:29:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791792392730713, "perplexity": 624.2484677289109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202525.25/warc/CC-MAIN-20190321132523-20190321154523-00109.warc.gz"}
https://courses.lumenlearning.com/ivytech-collegealgebra/chapter/solving-equations-using-factoring/
## Solving Equations Using Factoring We have used factoring to solve quadratic equations, but it is a technique that we can use with many types of polynomial equations, which are equations that contain a string of terms including numerical coefficients and variables. When we are faced with an equation containing polynomials of degree higher than 2, we can often solve them by factoring. ### A General Note: Polynomial Equations A polynomial of degree n is an expression of the type ${a}_{n}{x}^{n}+{a}_{n - 1}{x}^{n - 1}+\cdot \cdot \cdot +{a}_{2}{x}^{2}+{a}_{1}x+{a}_{0}$ where n is a positive integer and ${a}_{n},\dots ,{a}_{0}$ are real numbers and ${a}_{n}\ne 0$. Setting the polynomial equal to zero gives a polynomial equation. The total number of solutions (real and complex) to a polynomial equation is equal to the highest exponent n. ### Example 4: Solving a Polynomial by Factoring Solve the polynomial by factoring: $5{x}^{4}=80{x}^{2}$. ### Solution First, set the equation equal to zero. Then factor out what is common to both terms, the GCF. $\begin{array}{l}5{x}^{4}-80{x}^{2}\hfill&=0\hfill \\ 5{x}^{2}\left({x}^{2}-16\right)\hfill&=0\hfill \end{array}$ Notice that we have the difference of squares in the factor ${x}^{2}-16$, which we will continue to factor and obtain two solutions. The first term, $5{x}^{2}$, generates, technically, two solutions as the exponent is 2, but they are the same solution. $\begin{array}{l}5{x}^{2}\hfill&=0\hfill \\ x\hfill&=0\hfill \\ {x}^{2}-16\hfill&=0\hfill \\ \left(x - 4\right)\left(x+4\right)\hfill&=0\hfill \\ x\hfill&=4\hfill \\ x\hfill&=-4\hfill \end{array}$ The solutions are $x=0\text{ (double solution),}$ $x=4$, and $x=-4$. ### Analysis of the Solution We can see the solutions on the graph in Figure 1. The x-coordinates of the points where the graph crosses the x-axis are the solutions–the x-intercepts. Notice on the graph that at the solution $x=0$, the graph touches the x-axis and bounces back. It does not cross the x-axis. This is typical of double solutions. Figure 1 ### Try It 4 Solve by factoring: $12{x}^{4}=3{x}^{2}$. Solution ### Example 5: Solve a Polynomial by Grouping Solve a polynomial by grouping: ${x}^{3}+{x}^{2}-9x - 9=0$. ### Solution This polynomial consists of 4 terms, which we can solve by grouping. Grouping procedures require factoring the first two terms and then factoring the last two terms. If the factors in the parentheses are identical, we can continue the process and solve, unless more factoring is suggested. $\begin{array}{l}{x}^{3}+{x}^{2}-9x - 9\hfill&=0\hfill \\ {x}^{2}\left(x+1\right)-9\left(x+1\right)\hfill&=0\hfill \\ \left({x}^{2}-9\right)\left(x+1\right)\hfill&=0\hfill \end{array}$ The grouping process ends here, as we can factor ${x}^{2}-9$ using the difference of squares formula. $\begin{array}{l}\left({x}^{2}-9\right)\left(x+1\right)\hfill&=0\hfill \\ \left(x - 3\right)\left(x+3\right)\left(x+1\right)\hfill&=0\hfill \\ x\hfill&=3\hfill \\ x\hfill&=-3\hfill \\ x\hfill&=-1\hfill \end{array}$ The solutions are $x=3$, $x=-3$, and $x=-1$. Note that the highest exponent is 3 and we obtained 3 solutions. We can see the solutions, the x-intercepts, on the graph in Figure 2. Figure 2 ### Analysis of the Solution We looked at solving quadratic equations by factoring when the leading coefficient is 1. When the leading coefficient is not 1, we solved by grouping. Grouping requires four terms, which we obtained by splitting the linear term of quadratic equations. 
We can also use grouping for some polynomials of degree higher than 2, as we saw here, since there were already four terms.
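The same factoring steps can be checked with a computer algebra system. The short sketch below uses the SymPy library for Python (an assumption about available tooling, not something the lesson requires); the printed order of roots and factors may vary between versions.

```python
import sympy as sp

x = sp.symbols('x')

# Example 4: 5x^4 = 80x^2, which factors as 5x^2 (x - 4)(x + 4) = 0
print(sp.solve(sp.Eq(5*x**4, 80*x**2), x))   # [-4, 0, 4]  (0 is the double solution)

# Example 5: x^3 + x^2 - 9x - 9 = 0, factored by grouping into (x - 3)(x + 3)(x + 1)
print(sp.factor(x**3 + x**2 - 9*x - 9))      # (x - 3)*(x + 1)*(x + 3)
print(sp.solve(x**3 + x**2 - 9*x - 9, x))    # [-3, -1, 3]
```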
2019-09-23 17:46:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6617305874824524, "perplexity": 307.71701775080044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577478.95/warc/CC-MAIN-20190923172009-20190923194009-00458.warc.gz"}
https://www.numerade.com/questions/use-the-given-graphs-of-f-and-g-to-evaluate-each-expression-or-explain-why-it-is-undefined-a-f-g2-b-/
# Use the given graphs of $f$ and $g$ to evaluate each expression, or explain why it is undefined.

(a) $f (g(2))$ (b) $g (f(0))$ (c) $(f \circ g) (0)$ (d) $(g \circ f) (6)$ (e) $(g \circ g) (-2)$ (f) $(f \circ f) (4)$

## (a) $g(2)=5,$ because the point (2,5) is on the graph of $g$. Thus, $f(g(2))=f(5)=4,$ because the point (5,4) is on the graph of $f$.
(b) $g(f(0))=g(0)=3$
(c) $(f \circ g)(0)=f(g(0))=f(3)=0$
(d) $(g \circ f)(6)=g(f(6))=g(6).$ This value is not defined, because there is no point on the graph of $g$ that has $x$-coordinate 6.
(e) $(g \circ g)(-2)=g(g(-2))=g(1)=4$
(f) $(f \circ f)(4)=f(f(4))=f(2)=-2$

### Video Transcript

All right. First, we're going to find part a, f of g of two. So we start by finding g of two. So we find where X equals two, and then we follow that up to where it meets the g graph, and that would be at five. So g of two is five. We substitute that in, and now we're finding f of five. So we find X equals five and then we go up to where that meets the f graph, and that's at four. So our answer is four.

Now we do something like that for part b. We start by finding f of zero. So we find X equals zero and we see where that matches the f graph, and that's at a height of zero. f of zero is zero. We substitute that in. Now we're finding g of zero, and so we go over to X equals zero and go up to where that meets the g graph. And that's at a height of three.

Next, let's do part c, f of g of zero, and we can write that this way if we want to: f of g of zero. So the first thing we want to find is g of zero. So we go to where X equals zero and we find that on the g graph, and that's three. Substitute that in, and now we have f of three. So we go over to where X equals three and we find that on the f graph, and that's at a height of zero. So the answer is zero.

And now, for part d, g of f of six, that can be written as g of f of six like this. So the first thing we're finding is f of six. So we go over to where X equals six and we go up on the f graph and we get six for the height. So we substitute that in, and now we're finding g of six. However, six is not in the domain of g; it doesn't go that far. So this one is undefined.

All right, here's part e. We can rewrite this as g of g of negative two, and so on the inside we're finding g of negative two. So we go over to where X equals negative two and find that on the g graph, and that's one, so we substitute that in, and now we're finding g of one. So we go over to where X equals one, and we find that on the g graph, and that's right about four. So the answer is four.

And now for f of f of four, which we can rewrite this way: f of f of four. We find the inside first, f of four. So we go over to where X equals four and we find that on the f graph. And that is, it's supposed to be two; mine's a little low, but if you look at the actual graph, it's two. So we're going to substitute that in. And now we're looking for f of two. So we go to where X equals two and we find that on the f graph, and that is negative two. So there's our answer.
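Reading values off the two graphs amounts to looking them up in a table. The small Python sketch below encodes only the points quoted in the solution above (no other points of the graphs are assumed) and mechanically reproduces each composition.

```python
f = {0: 0, 2: -2, 3: 0, 4: 2, 5: 4, 6: 6}   # points of f read off in the solution
g = {-2: 1, 0: 3, 1: 4, 2: 5}               # points of g read off in the solution

print(f[g[2]])    # (a) f(g(2)) = f(5) = 4
print(g[f[0]])    # (b) g(f(0)) = g(0) = 3
print(f[g[0]])    # (c) (f o g)(0) = f(3) = 0
print(g[g[-2]])   # (e) (g o g)(-2) = g(1) = 4
print(f[f[4]])    # (f) (f o f)(4) = f(2) = -2
# (d) (g o f)(6) = g(f(6)) = g(6): 6 is not a key of g, so the value is undefined.
```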
2021-09-27 01:57:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7110163569450378, "perplexity": 620.1558057307516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058222.43/warc/CC-MAIN-20210926235727-20210927025727-00258.warc.gz"}
https://imathworks.com/tex/tex-latex-how-to-understand-the-latex-warning-there-were-undefined-references/
# [Tex/LaTex] How to understand the LaTeX warning “there were undefined references” cross-referencingwarnings I am running this LaTeX file through pdflatex and getting these warnings: latex warning: there were undefined references. latex warning: label(s) may have changed. Rerun to get cross-reference right. and also some more warnings like this: latex warning: reference 'lastpage' on page 76 undefined on input line 3200. on every page end. My LaTeX file is like this \documentclass[a4paper,leqno,twoside]{article} \usepackage[latin1]{inputenc} \usepackage[english]{babel} \usepackage{multirow} \renewcommand{\familydefault}{\sfdefault} \usepackage{color} \usepackage{draftwatermark} \SetWatermarkText{EXPERIMENTAL} \SetWatermarkScale{0.6} \usepackage{parskip} \graphicspath{{Figure/}} \let\oldsection\section \renewcommand{\section}{\clearpage\oldsection} \begin{document} % Issued by {Name, acronym, department, phone} \issuedby{vrebwr} % Checked by \checkedby{-} % Approved by \approvedby{-} % Document title. Use \doctitleShort{} to insert a shorter title in the header. \doctitle{test numbers with id} \doctitleShort{document for tests} % Publish date \publishdate{\today} % Titlepage \frontmatter % Should be on first page \maketitle \vspace*{08eX} \begin{center} \subsection*{Abstract} \end{center} \noindent \newpage \mainmatter % Main pages. This command should be on page 2 or later. \phantom{phantom} \cleardoublepage \subsection*{Change Record} \begin{itemize} \item R.1: Initial version. \begin{itemize} \item generation of documentation using the vrebwr \end{itemize} \end{itemize} \cleardoublepage \setcounter{tocdepth}{2} \tableofcontents \cleardoublepage \section{ How to read the tset information} \begin{enumerate} \item information of tests \end{enumerate} \section{test numbers} \subsection*{1, temperature sensor. } \subsubsection*{data} some data regarding test where we performed \subsubsection*{values} temperature sensor value here three or more. \subsubsection*{reason} why we get like this. \newpage \subsection*{2, humidity sensor. } \subsubsection*{data} some data regarding test where we performed \subsubsection*{values} humidity sensor value here three or more. \subsubsection*{reason} why we get like this. \subsection*{3, pressure sensor. } \subsubsection*{data} some data regarding test where we performed \subsubsection*{values} pressure sensor value here three or more. \subsubsection*{reason} why we get like this. \newpage \end{document} In the above LaTeX file I have around 100 tests. I used the * symbol to eliminate numbering in subsections, because I am using test names with number so there is no need to add numbering. It looks bad in the PDF, so I tried to add that test name and number in table of contents by using this command: \addcontentsline{toc}{subsection}{1,temperature sensor.} When I used this command I am getting errors like as I explained above. I searched and I found some reasons why this error occurred, because of \label{} and \ref{} is not used like that. But I don't know how to use that in my file. Can anyone help me with it? Getting a warning about undefined references after the first LaTeX run is nothing to worry about. Just rerun LaTeX once or twice more and all cross-references should be resolved (including the one to lastpage).
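Regarding the \label{}/\ref{} part of the question: a minimal pattern (the label name below is just an example) is to put \label right after the sectioning command you want to point at, and use \ref or \pageref elsewhere in the document; as with the lastpage reference, the numbers only resolve after a second compilation pass, which is exactly why the "Rerun to get cross-references right" warning appears.

```latex
\section{test numbers}\label{sec:tests}   % attach a label to the numbered section

% ... later in the document ...
As listed in Section~\ref{sec:tests} (starting on page~\pageref{sec:tests}),
each test records its data, values and reason.
```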
2023-03-30 12:07:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744150400161743, "perplexity": 4915.738843983767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00077.warc.gz"}
https://global-sci.org/intro/article_detail/ijnam/19112.html
Volume 18, Issue 4

A Modified Primal-Dual Weak Galerkin Finite Element Method for Second Order Elliptic Equations in Non-Divergence Form

Int. J. Numer. Anal. Mod., 18 (2021), pp. 500-523. Published online: 2021-05

• Abstract

A modified primal-dual weak Galerkin (M-PDWG) finite element method is designed for the second order elliptic equation in non-divergence form. Compared with the existing PDWG methods proposed in [6], the system of equations resulting from the M-PDWG scheme could be equivalently simplified into one equation involving only the primal variable by eliminating the dual variable (Lagrange multiplier). The resulting simplified system thus has significantly fewer degrees of freedom than the one resulting from existing PDWG scheme. Optimal order error estimates are derived for the numerical approximations in the discrete $H^2$-norm, $H^1$-norm and $L^2$-norm respectively. Extensive numerical results are demonstrated for both the smooth and non-smooth coefficients on convex and non-convex domains to verify the accuracy of the theory developed in this paper.

• Keywords

Primal-dual, weak Galerkin, finite element methods, non-divergence form, Cordès condition, polyhedral meshes.

Chunmei Wang. (2021). A Modified Primal-Dual Weak Galerkin Finite Element Method for Second Order Elliptic Equations in Non-Divergence Form. International Journal of Numerical Analysis and Modeling. 18 (4). 500-523.
2021-12-05 07:25:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5520349740982056, "perplexity": 750.6462292643483}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00221.warc.gz"}
http://scholarpedia.org/article/Talk:Galactic_magnetic_fields
# Talk:Galactic magnetic fields The article is a nice and succinct introduction to the field. The main improvements would be to add a small number of references (see point 5) and more informative figure captions (point 6). 1. Introduction. The meaning of 'total', 'regular' and 'turbulent'/'random' magnetic field should be defined more precisely, along with their relation to the different observable quantities and the resolution of the telescope. I think it is preferable to refer to random rather than turbulent magnetic fields: tangling of field lines does not necessarily have to be caused by a classical turbulent flow. It could still be mentioned that compression or shear can produce anisotropy in a turbulent magnetic fluid. 'B-vectors' should be defined; my preference would be to not use the term B-vectors and just mention that the angle of the plane polarization is orthogonal to the magnetic field direction in the absence of Faraday rotation (short wavelength limit). 2. The Origin of Galactic Magnetic Fields. An additional problem for the primordial theory is finite pitch angle of the field: the field lines are not toroidal but spiral (as mentioned later in the article). The problems of the primordial theory call for a "mechanism to sustain and organise the magnetic field" (this does not HAVE to be a dynamo). Second last sentence: "...axisymmetric symmetry..." is a bit too symmetric. Last sentence: I suggest to reword as "These modes can be identified from the pattern of polarization angles and Faraday rotation in multi-wavelength observations." 3. Magnetic Field Strengths in Galaxies. The equipartition assumption may not be well founded at local scales in a galactic disc, but there is support for the field strength estimates from Faraday rotation estimates in the Milky Way and a few nearby galaxies. I do not think that "massive" is the right term for galaxies like M51 and NGC6946 in comparison to M31 --- how about "gas rich" or "galaxies with higher star forming rates". 4. Magnetic field structures. "M31 hosts a largely axisymmetric field" (not dominating) --- is it reallly true that axisymmetry is known to r=25 kpc? (Or just the presence of a magnetic field?) The lack of clear patterns in Faraday rotation in many galaxies may also be due to inadequate search techniques --- it is very difficult to separate by eye the superposition of 2 or 3 Fourier modes in noisy data! 5. A small number of references at the end would be very helpful: this is a vast topic and the article is heavily weighted towards the investigation of galactic magnetism via radio astronomy. Some review papers from the author's website could be linked as well as a 1996 article in Annual Reviews (available online via NED at http://nedwww.ipac.caltech.edu/level5/araa.html). 6. Some description in the figure captions about what data is being shown and a brief note of some key features that can be recognised would be very helpful; for example in Fig. 3 explain that M31 is highly inclined; that the radio emission is concentrated in a ring centered on a radius of 10 kpc; that the regular magnetic field can be seen to be highly ordered on scales of several kpc. There should also be full refernces or credits as the images could be easily copied for use elsewhere. Review of " Galactic Magnetic Fields" by R. Beck (second reviewer) The text is well written and informative. I have only a few minor comments: Usage of "bold face" in the beginning of introduction looks strange. against gravitation --> against gravity. 
spiralling around interstellar magnetic fields --> spiralling around interstellar magnetic field lines. over a large radio wavelength range --> over a large range of radio wavelengths "Faraday rotation" in italics could well be a reference to another web page I'd remove the plus/minus in connection with the 180 degrees ambiguity. This is an ambiguity between 0 and 180, not between -180 and +180. I believe the following suggested replacement represents better what is meant: Such a "primordial" field is hard to maintain because --> Explaining this as a "primordial" field is difficult because .... I suggest to replace electromagnetic energy by magnetic energy, because electric fields do not play an active role in the non-relativistic case. non-homogeneous --> non-uniform The expressions "regular magnetic field" and "total magnetic field" should be explained and synonyms such as large scale or mean magnetic fields could be introduced. Likewise for the fluctuating magnetic field. This involves the definition of an average which may not be very precise. It may be sufficient to say that it is customary to separate the magnetic field into a mean or regular component and fluctuating or turbulent one. Their sum is referred to as the total field. Some 10 or more references should be given, just like in regular reviews.
2020-11-29 13:38:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6889446973800659, "perplexity": 1131.0525822701738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141198409.43/warc/CC-MAIN-20201129123729-20201129153729-00301.warc.gz"}
http://onewebglobal.com/burst-error/burst-error-correction-for-cyclic-codes.php
Home > Burst Error > Burst-error Correction For Cyclic Codes # Burst-error Correction For Cyclic Codes ## Contents Notice that such description is not unique, because D ′ = ( 11001 , 6 ) {\displaystyle D'=(11001,6)} describes the same burst error. Its encoder can be written as c ( x ) = a ( x ) g ( x ) {\displaystyle c(x)=a(x)g(x)} . Initially, the bytes are permuted to form new frames represented by L 1 L 3 L 5 R 1 R 3 R 5 L 2 L 4 L 6 R 2 Cyclic codes using Fourier transform can be described in a setting closer to the signal processing. get redirected here Cyclic codes are considered optimal for burst error detection since they meet this upper bound: Theorem (Cyclic burst correction capability). We write the λ k {\displaystyle \lambda k} entries of each block into a λ × k {\displaystyle \lambda \times k} matrix using row-major order. Without loss of generality, pick i ⩽ j {\displaystyle i\leqslant j} . Vanstone, Paul C. ## Burst Error Correcting Codes C {\displaystyle {\mathcal {C}}} is called a cyclic code if, for every codeword c=(c1,...,cn) from C, the word (cn,c1,...,cn-1) in G F ( q ) n {\displaystyle GF(q)^{n}} obtained by a This will happen before two adjacent codewords are each corrupted by say 3 errors. S ( x ) {\displaystyle S(x)} = v ( x ) mod g ( x ) = ( a ( x ) g ( x ) + e ( x ) By the division theorem we can write: j − i = g ( 2 ℓ − 1 ) + r , {\displaystyle j-i=g(2\ell -1)+r,} for integers g {\displaystyle g} and r They are error-correcting codes that have algebraic properties that are convenient for efficient error detection and correction. Binary Reed–Solomon codes Certain families of codes, such as Reed–Solomon, operate on alphabet sizes larger than binary. Length of the pattern is given by deg b ( x ) + 1 {\displaystyle b(x)+1} . Burst Error Correcting Convolutional Codes For error detection cyclic codes are widely used and are called t − 1 {\displaystyle t-1} cyclic redundancy codes. Now, for cyclic codes, Let α {\displaystyle \alpha } be primitive element in G F ( q m ) {\displaystyle GF(q^{m})} , and let β = α q − 1 {\displaystyle Upon receiving c 1 {\displaystyle \mathbf … 1 _ … 0} hit by a burst b 1 {\displaystyle \mathbf − 7 _ − 6} , we could interpret that as if Suppose that we want to design an ( n , k ) {\displaystyle (n,k)} code that can detect all burst errors of length ⩽ ℓ . {\displaystyle \leqslant \ell .} A https://en.wikipedia.org/wiki/Cyclic_code Generated Wed, 05 Oct 2016 02:07:09 GMT by s_hv902 (squid/3.5.20) ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: http://0.0.0.9/ Connection On the other hand we have: n − w = number of zeros in  E = ( n − l e n g t h ( P 1 ) ) + Burst And Random Error Correcting Codes S ( x ) {\displaystyle S(x)} = v ( x ) mod g ( x ) = ( a ( x ) g ( x ) + e ( x ) The only vector v {\displaystyle v} in G F ( q ) n {\displaystyle GF(q)^{n}} of weight d − 1 {\displaystyle d-1} or less whose spectral components V j {\displaystyle V_{j}} Therefore, j − i {\displaystyle j-i} must be a multiple of p {\displaystyle p} . ## Burst Error Correction Using Hamming Code Each of the M {\displaystyle M} words must be distinct, otherwise the code would have distance < 1 {\displaystyle <1} . http://ieeexplore.ieee.org/iel5/18/22793/01057825.pdf Then two columns will never be linearly dependent because three columns could be linearly dependent with the minimum distance of the code as 3. 
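The syndrome computation S(x) = v(x) mod g(x) that this article keeps returning to is easy to prototype. The following Python sketch is my illustration, not part of the original page: polynomials over GF(2) are encoded as bit masks, and the particular generator and received word are made-up values chosen only to exercise the function.

```python
def gf2_mod(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2).
    Polynomials are bit masks: bit i is the coefficient of x**i."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift   # subtract (= XOR) the shifted divisor
    return dividend

# Hypothetical example: g(x) = x^3 + x + 1  ->  0b1011
generator = 0b1011
received  = 0b1100101                  # some received word v(x), made up
syndrome  = gf2_mod(received, generator)
print(bin(syndrome))                   # a non-zero remainder here signals an error
```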
Burst Error Correcting Codes To provide access without cookies would require the site to create a new session for every page you visit, which slows the system down to an unacceptable level. Burst Error Correction Example This defines a ( 2 m − 1 , 2 m − 1 − m ) {\displaystyle (2^{m}-1,2^{m}-1-m)} code, called Hamming code. If the code is cyclic, then 10001011 is again a valid codeword. http://onewebglobal.com/burst-error/burst-error-correction-example.php RSL-E-2, 1959. ^ Wei Zhou, Shu Lin, Khaled Abdel-Ghaffar. I am writing this message here to assure you that I own this page and I only will be doing the corresponding Wikipedia entry under the user name : script3r. One important difference between Fourier transform in complex field and Galois field is that complex field ω {\displaystyle \omega } exists for every value of n {\displaystyle n} while in Galois Burst Error Correcting Codes Ppt Cyclic codes on Fourier transform Applications of Fourier transform are widespread in signal processing. If it had a burst of length ⩽ 2 ℓ {\displaystyle \leqslant 2\ell } as a codeword, then a burst of length ℓ {\displaystyle \ell } could change the codeword to First we observe that a code can correct all bursts of length ⩽ ℓ {\displaystyle \leqslant \ell } if and only if no two codewords differ by the sum of two http://onewebglobal.com/burst-error/burst-error-correction-codes.php Therefore, j − i {\displaystyle j-i} cannot be a multiple of n {\displaystyle n} since they are both less than n {\displaystyle n} . We call the set of indices corresponding to this run as the zero run. Signal Error Correction Implications of Rieger Bound The implication of this bound has to deal with burst error correcting efficiency as well as the interleaving schemes that would work for burst error correction. Theorem (Burst error detection ability). ## We have q k {\displaystyle q^{k}} codewords. Cyclic codes using Fourier transform can be described in a setting closer to the signal processing. To define a cyclic code, we pick a fixed polynomial, called generator polynomial. In fact, cyclic codes can also correct cyclic burst errors along with burst errors. Burst Error Correction Pdf Location of burst - Least significant digit of burst is called as location of that burst. 2. Since p ( x ) {\displaystyle p(x)} is a primitive polynomial, its period is 2 5 − 1 = 31 {\displaystyle 2^{5}-1=31} . One such bound is constrained to a maximum correctable cyclic burst length within every subblock, or equivalently a constraint on the minimum error free length or gap within every phased-burst. Contents 1 Definition 2 Algebraic structure 3 Examples 3.1 Trivial examples 4 Quasi-cyclic codes and shortened codes 4.1 Definition 4.2 Definition 5 Cyclic codes for correcting errors 5.1 For correcting two http://onewebglobal.com/burst-error/burst-error-correction-ppt.php We can think of it as the set of all strings that begin with 1 {\displaystyle 1} and have length ℓ {\displaystyle \ell } . Therefore, the error correcting ability of the interleaved ( λ n , λ k ) {\displaystyle (\lambda n,\lambda k)} code is exactly λ ℓ . {\displaystyle \lambda \ell .} The BEC A quasi-cyclic code with b {\displaystyle b} equal to 1 {\displaystyle 1} is a cyclic code. Cyclic codes can be used to correct errors, like Hamming codes as a cyclic codes can be used for correcting single error. That means both that both the bursts are same, contrary to assumption. 
Also, the bit error rate is ideal (i.e 0) for more than 66.66% of the cases which strongly supports the user of interleaver for burst error correction. This article incorporates material from cyclic code on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. Cambridge, UK: Cambridge UP, 2004. Ensuring this condition, the number of such subsets is at least equal to number of vectors. Thus, this is in form of M X N array.
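The interleaving idea referred to above (writing codewords into the rows of a matrix and transmitting column by column, so that a channel burst is split among several codewords) can be illustrated with a small simulation. The Python sketch below is mine, with invented array dimensions, dummy bits standing in for codeword symbols, and an invented burst position; it is only meant to show the mechanism.

```python
# Minimal sketch of block interleaving: rows play the role of codewords,
# transmission is column by column, so a burst of channel errors is spread
# out and each codeword sees only a few of them.
rows, cols = 4, 7                                  # hypothetical lambda x n array
codewords = [[(r * cols + c) % 2 for c in range(cols)] for r in range(rows)]

# Interleave: read the array column-major for transmission.
tx = [codewords[r][c] for c in range(cols) for r in range(rows)]

# Channel flips a burst of 4 consecutive transmitted bits.
for i in range(10, 14):
    tx[i] ^= 1

# De-interleave back into rows and count errors per codeword.
rx = [[0] * cols for _ in range(rows)]
for idx, bit in enumerate(tx):
    c, r = divmod(idx, rows)
    rx[r][c] = bit
errors_per_row = [sum(a != b for a, b in zip(codewords[r], rx[r])) for r in range(rows)]
print(errors_per_row)                              # the burst of 4 lands as one error per codeword
```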
2018-08-20 22:52:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731850028038025, "perplexity": 1480.4731807590333}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217354.65/warc/CC-MAIN-20180820215248-20180820235248-00591.warc.gz"}
https://cs.stackexchange.com/questions/45772/does-a-reentrant-list-for-signal-queue-in-a-single-thread-environment-exist
# Does a reentrant list for signal queue in a single-thread environment exist? I need to handle Unix signals in a single-threaded application with the following goals: 1. Signals doesn't mask on receive (thus, the signal handler must be reentrant). 2. I am not allowed to lose signal data (thus, if a new signal comes before the handler of the previous returned, it also must be handled correctly). I have the common multi-threaded primitives (spinlocks, semaphores, etc). But they doesn't seem enough, because my higher-level data structures (even a such simple as a list) aren't thread-safe. My initial idea was the following: 1. I use a list, in which I store the data of the incoming signal fast, 2. and process them (call the possibily much slower running handlers) later, out of the critical section. The main problem with that, that the list data structure isn't thread safe. If I lock it, I can't store a second signal anywhere. I can't wait until the previous handler exits, because on the second signal it is essentially suspended in a critical section. Simply I don't have any idea, how to handle the following scenario: 1. signal1 comes, the process suspends, and the handler of signal1 starts 2. signal2 comes, the handler of signal1 suspends, and the handler of signal2 starts 3. Handler of signal2 returns, execution returns to the handler signal1 4. Handler of signal1 returns, execution returns to the main program. After thinking a lot on it, I have an impression, maybe my problem is unsolvable. Am I right? How do operating systems handle similar problems (for example, possibily bursting interrupts from hardware)? • @JustAnotherSoul I won't guarantee there aren't more signals delivered at once as many threads I have. And my current app is single-threaded, but it has to handle signals very well (i.e. no signal can be lost). – peterh says reinstate Monica Sep 1 '15 at 20:55 • What I mean is, why not just spin off a thread to add the signal data to the data structure. I.E. Signal 1 comes in, thread to add the data to the list is created. ... Signal n comes in, thread to add the data to the list is created. I'm also assuming you can't simply spin off a thread to handle each signal for some reason. – JustAnotherSoul Sep 1 '15 at 20:56 • Off-topic note: your architecture will lose signal data: if two signals are delivered to your application at almost the same time, they will be conflated, and your process will only receive one. On-topic note: pretty much any synchronization primitive does two things — test-and-set, clear-and-mask-interrupt (a.k.a. start signal handler and mask signal), test-for-input-and-block, ... – Gilles 'SO- stop being evil' Sep 1 '15 at 21:01 • @JustAnotherSoul Good assumption, although your solution looks fine. – peterh says reinstate Monica Sep 1 '15 at 21:01 Finally I found a solution, which handles the whole problem at once. It is relatively easy. They key to the solution: we can use the signal stack as a "to-do stack". Important to remember, that although this problem is also about race condition eliminiation, its solution differs significantly from the "lock everything what you use, do your task, release everything" solutions. It is because it is not about parallelisation, it is about reentrancy. The common lock-based solutions would lead to deadlock here, because the parent (in the example) signal1 handler will be surely suspended while the whole execution of the handler of signal2. So, this is a disadvantage, but it is an advantage as well. 
We can guarantee that signal1 won't do anything while our signal2 runs. We can't simply lock things, but we also don't need to: blocking locks are ruled out, so only non-blocking locks are left. What I invented is the following C code:

    #define STORE(a, b) __atomic_store(&(a), &(b), __ATOMIC_SEQ_CST)
    #define SWAP(a, b) __atomic_exchange(&(a), &(b), &(b), __ATOMIC_SEQ_CST)

    // action handler wrapper
    void ss_wrapper(int signum, siginfo_t* siginfo, ucontext_t* ucontext)
    {
        // currently top element on the signal stack
        static struct ss_hit *top = NULL;

        struct ss_hit *hit = ss_hit_new(signum, siginfo, ucontext);
        struct ss_hit *bkp;

    again:
        bkp = hit;
        SWAP(top, hit);
        if (!hit) {
            // we got the lock, we are the master
            ss_fire(bkp);
            SWAP(top, bkp->next); // release the lock, find out if there is a new element
            if (bkp->next) {
                // there IS
                hit = bkp;
                free(bkp);
                goto again;
            } else
                free(bkp);
        } else {
            // we didn't get the lock, but we got the old top in hit
            STORE(hit->next, bkp);
        }
    }

As we can see, it would have been very nice to have a separate, reentrant stack (and not list) data structure available. The main difficulty was to understand that

- adding a new element to the stack AND testing whether it was empty have to happen in a single atomic operation (this is why top serves both as a spinlock and as the pointer to the top of the stack);
- similarly, removing an element from the stack and learning whether it is now empty have to be done atomically as well.

I also learned, from some days of thinking about this, that

- constructing a reentrant algorithm is much harder than constructing a multithreaded one;
- the most important thing is that reentrant algorithms should have only very few variables through which they interact, and all operations on those variables should be atomic.
2019-11-13 17:42:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2383691668510437, "perplexity": 3588.623214822939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667319.87/warc/CC-MAIN-20191113164312-20191113192312-00090.warc.gz"}
https://testbook.com/question-answer/a-man-covers-a-distance-of-61-km-in-9-hours-he-tr--63450e68301d0fc2f2296b9b
# A man covers a distance of 61 km in 9 hours. He travels some distance by bicycle at 9 km per hour and some distance by foot at 4 km per hour. Then the distance traveled by him on the bicycle will be:

This question was previously asked in Maharashtra Talathi Official Paper 2019 (03.07.19)

1. 16 km
2. 25 km
3. 45 km
4. 50 km

Option 3 : 45 km

## Detailed Solution

Given: 61 km is covered in 9 hours

Formula used: Speed = Distance/Time

Calculation:

Let the distance traveled by him on the bicycle be "x". Then the distance traveled by him on foot = (61 - x) km

⇒ 9 = $${x \over 9}$$ + $${61 - x \over 4}$$

⇒ 9 = $${4x + 549 - 9x \over 36}$$

⇒ 9 = $${549 - 5x \over 36}$$

⇒ 9 × 36 = 549 - 5x

⇒ 324 - 549 = -5x

⇒ -225 = -5x

⇒ x = 45

∴ The distance traveled by him on the bicycle will be 45 km
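A quick sanity check (my addition, not part of the original solution): the two legs do add up to the given 9 hours,

$$\frac{45}{9} + \frac{61-45}{4} = 5 + 4 = 9 \ \text{hours}.$$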
2023-01-30 20:30:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5767642259597778, "perplexity": 2483.3734742533566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499829.29/warc/CC-MAIN-20230130201044-20230130231044-00160.warc.gz"}
https://www.physicsforums.com/threads/doppler-effect-of-a-laser.725790/
# Doppler Effect of a laser 1. Nov 30, 2013 ### GreenPrint 1. The problem statement, all variables and given/known data A laser emits a monchromic beam of wavelength λ, which is reflected normally from a plane mirror, receding at a speed v. What is the beat frequency between the incident and reflected light? 2. Relevant equations 3. The attempt at a solution The solutions starts off with this $f_{1} = \frac{f_{0}}{1 + \frac{v}{c}}$ But I'm not exactly sure where this equation came from. The solution uses $f_{0}$ frequency of source $f_{1}$ frequency of incident light (source) as measured by moving mirror $f_{2}$ frequency of reflected light as measured by the moving mirror I know that the Doppler effect is often stated as $\frac{λ^{'}}{λ} = \sqrt{\frac{1 - \frac{v}{c}}{1 + \frac{v}{c}}}$ So I'm not exactly sure where the first equation came from. Thanks for any help. 2. Nov 30, 2013 ### hjelmgart Are you sure about your equation? In my book it is the same, but without the square root. If you replace the wavelengths by the frequency and isolate for f1, you will get the equation, they start off with. Remember that the two velocities are of the observer and reciever. So one of them is zero. The top velocity in your bracket, that is. The lower velocity is positive, when the mirror moves away from a stationary source. So you get, what they have. 3. Nov 30, 2013 ### GreenPrint Ya. I just read underneath that part and it says that when $v << c$ Eq. (4-44) (the equation I posted in the first post for the Doppler Effect) is approximated by $\frac{λ^{'}}{λ} = 1 - \frac{v}{c}$ Which I didn't realize. I remember reading it earlier but it left my mind because I would rather use the equation in post one. It's interesting that your book doesn't mention it being a approximation, but that is probably because $v << c$ occurs a lot. Thanks for the help. 4. Nov 30, 2013 ### GreenPrint Wait hold on. What allows us to simply replace the wavelengths with the frequencies and just interchange them? I thought that the equation applied to wavelengths. Oh ok I know why lol. 5. Nov 30, 2013 ### hjelmgart Yeah, well the "approximation" is the most commonly used equation, which probably explains it. Anyway, all equations are based on approximations to a certain degree :-)
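To make the thread's target explicit, here is my summary of the standard double-Doppler argument (mirror treated first as a receding observer, then as a receding source), not a quote from the posters. To first order in $v/c$ the reflected frequency and the beat frequency are

$$f_2 = f_0\,\frac{1-\tfrac{v}{c}}{1+\tfrac{v}{c}} \approx f_0\left(1-\frac{2v}{c}\right), \qquad f_{\text{beat}} = f_0 - f_2 \approx \frac{2v f_0}{c} = \frac{2v}{\lambda}.$$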
2017-11-22 19:32:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7548900842666626, "perplexity": 695.6650591724732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806620.63/warc/CC-MAIN-20171122175744-20171122195744-00163.warc.gz"}
https://catboost.ai/docs/concepts/loss-functions-multilabel-classification
MultiLabel Classification: objectives and metrics Objectives and metrics MultiLogloss $\displaystyle\frac{-\sum\limits_{j=0}^{M-1} \sum\limits_{i=1}^{N} w_{i} (c_{ij} \log p_{ij} + (1-c_{ij}) \log (1 - p_{ij}) )}{M\sum\limits_{i=1}^{N}w_{i}} { ,}$ where $p_{ij} = \sigma(a_{ij}) = \frac{e^{a_{ij}}}{1 + e^{a_{ij}}}$ and $c_{ij} \in {0, 1}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true MultiCrossEntropy $\displaystyle\frac{-\sum\limits_{j=0}^{M-1} \sum\limits_{i=1}^{N} w_{i} (t_{ij} \log p_{ij} + (1-t_{ij}) \log (1 - p_{ij}) )}{M\sum\limits_{i=1}^{N}w_{i}} { ,}$ where $p_{ij} = \sigma(a_{ij}) = \frac{e^{a_{ij}}}{1 + e^{a_{ij}}}$ and $t_{ij} \in [0, 1]$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true Precision This function is calculated separately for each class k numbered from 0 to M – 1. $\frac{TP}{TP + FP}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true Recall This function is calculated separately for each class k numbered from 0 to M – 1. $\frac{TP}{TP+FN}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true F1 This function is calculated separately for each class k numbered from 0 to M – 1. $2 \frac{Precision * Recall}{Precision + Recall}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true Accuracy The formula depends on the value of the $type$ parameter: Classic $\displaystyle\frac{\sum\limits_{i=1}^{N}w_{i} \prod\limits_{j=0}^{M-1} [[p_{ij} > 0.5]==t_{ij}]}{\sum\limits_{i=1}^{N}w_{i}} { , }$ where $p_{ij} = \sigma(a_{ij}) = \frac{e^{a_{ij}}}{1 + e^{a_{ij}}}$ PerClass This function is calculated separately for each class k numbered from 0 to M – 1. $\frac{TP + TN}{\sum\limits_{i=1}^{N} w_{i}}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true type The type of calculated accuracy. Possible values: • Classic • PerClass Default: Classic HammingLoss $\displaystyle\frac{\sum\limits_{j=0}^{M-1} \sum\limits_{i = 1}^{N} w_{i} [[p_{ij} > 0.5] == t_{ij}]]}{M \sum\limits_{i=1}^{N} w_{i}}$ User-defined parameters: use_weights Use object/group weights to calculate metrics if the specified value is true and set all weights to 1 regardless of the input data if the specified value is false. Default: true Used for optimization Name Optimization MultiLogloss + MultiCrossEntropy + Precision - Recall - F1 - Accuracy - HammingLoss -
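The MultiLogloss definition above is straightforward to reproduce numerically. The snippet below is a minimal NumPy sketch of that formula written for this page, not code from CatBoost itself; the function name and the raw scores, targets and weights are made-up values used only to exercise it.

```python
import numpy as np

def multi_logloss(raw_scores, targets, weights):
    """MultiLogloss as defined above: sigmoid per class, weighted mean over
    objects and classes. raw_scores, targets: N x M arrays; weights: length N."""
    p = 1.0 / (1.0 + np.exp(-raw_scores))                 # p_ij = sigma(a_ij)
    per_cell = targets * np.log(p) + (1 - targets) * np.log(1 - p)
    numerator = -(weights[:, None] * per_cell).sum()      # sum over i and j
    return numerator / (raw_scores.shape[1] * weights.sum())

# Hypothetical values just to exercise the function
a = np.array([[2.0, -1.0], [0.5, 0.0], [-2.0, 3.0]])
c = np.array([[1, 0], [1, 1], [0, 1]])
w = np.array([1.0, 1.0, 2.0])
print(multi_logloss(a, c, w))
```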
2021-10-21 19:08:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6045827865600586, "perplexity": 2579.854351903655}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00050.warc.gz"}
https://journal.psych.ac.cn/adps/EN/10.3724/SP.J.1042.2021.01657
ISSN 1671-3710 CN 11-4766/R Advances in Psychological Science ›› 2021, Vol. 29 ›› Issue (9): 1657-1668. • Regular Articles • ### Do positive stereotypes have a negative impact? WANG Zhen, GUAN Jian 1. Department of Social Psychology, Zhou Enlai School of Government, Nankai University, Tianjin 300071, China • Received:2021-02-07 Online:2021-09-15 Published:2021-07-22 Abstract: Positive stereotypes are defined as positive traits describing social groups. Previous research on stereotypes has mainly focused on negative stereotypes while overlooking positive stereotypes, especially their negative effects. Here, we will discuss positive and negative effects of positive stereotypes from racial, gender and aging stereotypes and conditions for their emergence and further future research The positive effects of positive stereotypes are mainly evinced through the stereotype boost. For example, activation of positive racial stereotypes, positive gender stereotypes and positive aging stereotypes has a positive effect on targets' minds and behaviors. The negative effects of positive stereotypes on targets' behaviors and cognition are caused by the choking under pressure effect and compensation effect of social cognition, respectively. For example, targets with positive racial stereotypes have negative attitudes and evaluations towards the stereotyper. Targets are prone to underperform in stereotyped domains in positive gender stereotypes situation. As for positive aging stereotypes, the mental and psychical health of targets can be adversely affected. Generally, positive stereotypes still induce negative effect similar to negative stereotypes in certain conditions, although having the positive side. The effects (positive or negative) of positive stereotypes depend on the following four moderators: (1) Activation of positive stereotypes. Compared with the subtle activation of positive stereotypes, blatantly activating positive stereotypes easily cause the “choking under pressure” of targets and their sense of being depersonalized, finally resulting in a negative impact. (2) Accuracy of expressing positive stereotypes. Compared with accurately expressing positive stereotypes, the one who states positive stereotypes in an extreme way tends to generate the feeling of untruth, resulting in conflicted response of targets. (3) Individuals who state positive stereotypes. Compared with an ingroup member, positive stereotypes stated by an outgroup member easily cause the prejudice by targets, which then result in targets' negative attitudes and evaluations towards the stereotyper. (4) Culture context of positive stereotypes. Compared with collectivistic culture, positive stereotypes in individualistic culture are prone to have a sense of being depersonalized and be thread. Further research on positive stereotypes can be discussed from the following aspects: (1) Exploration of effects of positive stereotypes in collectivistic culture. For example, China is the representative country of collectivistic cultures which emphasize “fundamental connectedness of human beings to each other”, and positive stereotypes as positive beliefs about members of social groups based on the category membership. Therefore, the Chinese feel less depersonalized when the stereotyper describe them in ways related positive stereotypes. (2) Exploration of positive stereotypes from research fields and targets, such as fields of sexual orientation and academic discipline. 
Academic discipline stereotypes hold that science students are superior to arts students in science, and arts students are superior to science students in arts. As a result, male science students may underperform on a science test and female arts students may underperform on an arts test when their major and gender identities are primed simultaneously, due to the feeling of untruth that arises when two positive stereotypes are activated at once. In addition, researchers can explore positive stereotypes in children, as children under 7 show no stereotype awareness, and such awareness is one of the prerequisites for positive stereotypes to influence targets. (3) Exploration of interventions to reduce the negative effects of positive stereotypes. So far there is no research on interventions against the negative effects of positive stereotypes. However, it is reasonable to expect that reducing these negative effects would be difficult because of the complimentary nature of positive stereotypes. (4) Exploration of positive effects of negative stereotypes. To our knowledge, only two studies have found that negative stereotypes have positive consequences. Once more empirical evidence confirms these findings, it would play a significant role in the domain of stereotype research, especially for interventions against the negative effects of stereotypes.
2022-08-13 03:11:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21908114850521088, "perplexity": 6330.54584828263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00363.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-5-exponents-and-polynomials-chapters-1-5-cumulative-review-problem-set-page-234/46
# Chapter 5 - Exponents and Polynomials - Chapters 1-5 Cumulative Review Problem Set: 46

$x=9$

#### Work Step by Step

We isolate the variable on one side of the equation to obtain: $1.6-2.4x=5x-65$ $-2.4x-5x=-65-1.6$ $-7.4x=-66.6$ $x=\frac{-66.6}{-7.4}$ $x=\frac{666}{74}$ $x=9$
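A substitution check (my addition, not part of the textbook solution) confirms the answer: both sides of the original equation take the same value at $x=9$,

$$1.6 - 2.4(9) = -20 = 5(9) - 65.$$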
2018-08-21 20:31:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46758460998535156, "perplexity": 900.773483075951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218899.88/warc/CC-MAIN-20180821191026-20180821211026-00533.warc.gz"}
http://academickids.com/encyclopedia/index.php/Absolute_convergence
# Absolute convergence

In mathematics, a series $\sum_{n=1}^\infty a_n$ or an integral $\int_A f(x)\,dx$ is said to converge absolutely if the series or integral of the corresponding absolute value is finite, i.e.

$\sum_{n=1}^\infty \left|a_n\right|<\infty$ or, respectively, $\int_A \left|f(x)\right|\,dx<\infty.$

Absolute convergence entails that rearrangement of the series $\sum_{n=1}^\infty a_{\sigma(n)}$, where σ is a permutation of the natural numbers, does not alter the sum to which the series converges. Similar results apply to integrals. See Cauchy principal value and an elegant rearrangement of a conditionally convergent iterated integral. In the light of Lebesgue's theory of integration, sums may be treated as special cases of integrals, rather than as a separate case. Series or integrals that converge but do not converge absolutely are said to converge conditionally.
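A standard example may help fix the distinction (added here for concreteness; it is not in the captured article): the alternating harmonic series converges, but not absolutely, so it converges only conditionally,

$$\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = \ln 2, \qquad \sum_{n=1}^\infty \left|\frac{(-1)^{n+1}}{n}\right| = \sum_{n=1}^\infty \frac{1}{n} = \infty.$$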
2022-08-11 04:59:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9346898794174194, "perplexity": 1538.0396420755485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00180.warc.gz"}
https://www.r-bloggers.com/2020/06/select-columns-from-a-data-frame/
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. To select only a specific set of interesting data frame columns dplyr offers the select() function to extract columns by names, indices and ranges. You can even rename extracted columns with select(). • Learn to use the select() function • Select columns from a data frame by name or index • Rename columns from a data frame select(my_data_frame, column_one, column_two, ...) select(my_data_frame, new_column_name = current_column, ...) select(my_data_frame, column_start:column_end) select(my_data_frame, index_one, index_two, ...) select(my_data_frame, index_start:index_end) ## Selecting by name select(my_data_frame, column_one, column_two, ...) select(my_data_frame, new_column_name = current_column, ...) select(my_data_frame, column_start:column_end) select(my_data_frame, index_one, index_two, ...) select(my_data_frame, index_start:index_end) In this chapter we will have a look at the pres_results dataset from the politicaldata package. It contains data about US presidential elections since 1976, converted to a Tibble for nicer printing. # A tibble: 561 x 6 year state total_votes dem rep other 1 1976 AK 123574 0.357 0.579 0.0549 2 1976 AL 1182850 0.557 0.426 0.0163 3 1976 AR 767535 0.650 0.349 0.00134 # … with 558 more rows For this example, we will have a look at the number of total votes in different states at different elections. Since we are only interested in the number of people who voted we would like to create a custom version of the pres_results data frame that only contains the columns year, state and total_votes. For such filtering, we can use the select() fiction from the dplyr package. The select() function takes a data frame as an input parameter and lets us decide which of the columns we want to keep from it. The output of the function is a data frame with all rows, but containing only the columns we explicitly select. We can reduce our dataset to only year, state and total_votes in the following way: select(pres_results, year, state, total_votes) # A tibble: 561 x 3 1 1976 AK 123574 2 1976 AL 1182850 3 1976 AR 767535 # … with 558 more rows As the first parameter we passed the pres_results data frame, as the remaining parameters we passed the columns we want to keep to select(). Apart from keeping the columns we want, the select() function also keeps them in the same order as we specified in the function parameters. If we change the order of the parameters when we call the function, the columns of the output change accordingly: select(pres_results, total_votes, year, state) # A tibble: 561 x 3 1 123574 1976 AK 2 1182850 1976 AL 3 767535 1976 AR # … with 558 more rows ## Exercise: Life expectancy in Austria The gapminder_austria dataset contains information about the economic and demographic change in Austria over the last decades. To inspect how the life expectancy in Austria changed over time, create a subset of the tibble that contains only the necessary columns for this task: 1. Use the dplyr select() function and define gapminder_austria as the input tibble. 2. Keep only the columns year and lifeExp in the output dataset. Start Exercise ## Renaming columns select(my_data_frame, column_one, column_two, ...) select(my_data_frame, new_column_name = current_column, ...) select(my_data_frame, column_start:column_end) select(my_data_frame, index_one, index_two, ...) select(my_data_frame, index_start:index_end) In addition to defining the columns we want keep, we can also rename them. 
To do this, we need to set the new column name inside the select() function using the command new_column_name = current_column In the following example, we select the columns year, state and total_votes but rename the year column to Election in the output: select(pres_results, Election = year, state, total_votes) # A tibble: 561 x 3 1 1976 AK 123574 2 1976 AL 1182850 3 1976 AR 767535 # … with 558 more rows ## Exercise: Rename columns The gapminder_india dataset contains information about the economic and demographic change in India over the last decades. Inspect how the population in India changed over time: 1. Use the select() function and define gapminder_india as the input tibble. 2. Keep only the columns year and pop. 3. Rename the pop column to population in the output tibble. Start Exercise ## Selecting by name range select(my_data_frame, column_one, column_two, ...) select(my_data_frame, new_column_name = current_column, ...) select(my_data_frame, column_start:column_end) select(my_data_frame, index_one, index_two, ...) select(my_data_frame, index_start:index_end) When we use the select() function and define the columns we want to keep, dplyr does not actually use the name of the columns but the index of the columns in the data frame. This means, when we define the first three columns of the pres_results data frame, year, state and total_votes, dplyr converts these names to the index values 1, 2 and 3. We can therefore also use the name of the columns, apply the : operator and define ranges of columns, that we want to keep: select(pres_results, year:total_votes) # A tibble: 561 x 3 1 1976 AK 123574 2 1976 AL 1182850 3 1976 AR 767535 # … with 558 more rows What the year:total_votes does, can be translated to 1:3, which is simply creating a vector of numerical values from 1 to 3. Then, the select() function takes the pres_results data frame and outputs a subset of it, keeping only the first three columns. ## Exercise: Select a name range The gapminder_europe_2007 dataset contains economic and demographic information about European countries for the year 2007: # A tibble: 30 x 6 country continent year lifeExp pop gdpPercap 1 Albania Europe 2007 76.4 3600523 5937. 2 Austria Europe 2007 79.8 8199783 36126. 3 Belgium Europe 2007 79.4 10392226 33693. # … with 27 more rows Create a subset of the tibble and compare the life expectancy in different European countries for the year 2007: 1. Apply the select() function on the gapminder_europe_2007 tibble. 2. Use the : operator and select the columns from country to lifeExp. Start Exercise ## Select() by indices select(my_data_frame, column_one, column_two, ...) select(my_data_frame, new_column_name = current_column, ...) select(my_data_frame, column_start:column_end) select(my_data_frame, index_one, index_two, ...) select(my_data_frame, index_start:index_end) The select() function can be used with column indices as well. Instead of using names we need to specify the columns we want to select by their indices. Compared to other programming languages the indexing in R starts with one instead of zero. 
To select the first, fourth and fifth column from the pres_results dataset we can write select(pres_results, 1,4,5) # A tibble: 561 x 3 year dem rep 1 1976 0.357 0.579 2 1976 0.557 0.426 3 1976 0.650 0.349 # … with 558 more rows Similarly to defining ranges of columns using their names, we can define ranges (or vectors) of index values instead: select(pres_results, 1:3) # A tibble: 561 x 3 1 1976 AK 123574 2 1976 AL 1182850 3 1976 AR 767535 # … with 558 more rows ## Exercise: Select by indices The gapminder_europe_2007 dataset contains economic and demographic information about European countries for the year 2007. # A tibble: 30 x 6 country continent year lifeExp pop gdpPercap 1 Albania Europe 2007 76.4 3600523 5937. 2 Austria Europe 2007 79.8 8199783 36126. 3 Belgium Europe 2007 79.4 10392226 33693. # … with 27 more rows Create a subset of the dataset and compare the GDP per capita of the European countries for the year 2007: 1. Apply the select() function on the gapminder_europe_2007 tibble. 2. Keep the columns country and gdpPercap, but use only the index of the columns (1and 6) for this step. Start Exercise Select columns from a data frame is an excerpt from the course Introduction to R, which is available for free at quantargo.com VIEW FULL COURSE
2020-10-27 21:49:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17830729484558105, "perplexity": 4850.717578332586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00168.warc.gz"}
https://zbmath.org/?q=ut%3Aintegrable+boundary+conditions
## Found 129 Documents (Results 1–100) 100 MathJax Full Text: Full Text: Full Text: Full Text: Full Text: ### A dual construction of the isotropic Landau-Lifshitz model. (English)Zbl 1453.37060 MSC:  37K10 35Q82 35Q40 Full Text: ### On a Neumann boundary value problem for Ermakov-Painlevé III. (English)Zbl 1438.34110 MSC:  34B30 34B16 Full Text: Full Text: ### Dressing the boundary: on soliton solutions of the nonlinear Schrödinger equation on the half-line. (English)Zbl 1420.35388 MSC:  35Q55 37K15 35C08 Full Text: ### Soliton solutions of the sine-Gordon equation on the half line. (English)Zbl 1407.35052 MSC:  35C08 35L71 35L20 Full Text: Full Text: Full Text: ### Interplay between the inverse scattering method and Fokas’s unified transform with an application. (English)Zbl 1388.37073 MSC:  37K15 35Q55 35A22 Full Text: ### The Fokas-Lenells equation on the finite interval. (English)Zbl 1399.35336 MSC:  35Q55 37K10 Full Text: Full Text: ### On integrable boundaries in the 2 dimensional $$O(N)\sigma$$-models. (English)Zbl 1382.81131 MSC:  81T10 81T13 81R12 Full Text: ### The open XXX spin chain in the SoV framework: scalar product of separate states. (English)Zbl 1373.82020 MSC:  82B20 82B23 Full Text: ### Inverse scattering transform for the defocusing nonlinear Schrödinger equation with fully asymmetric non-zero boundary conditions. (English)Zbl 1415.35217 MSC:  35P25 35Q55 37K10 Full Text: ### Boundary conditions for infinite conservation laws. (English)Zbl 1384.35114 MSC:  35Q53 37K10 35Q55 Full Text: Full Text: Full Text: ### Optimal lower bound for the first eigenvalue of the fourth order equation. (English)Zbl 1360.34170 MSC:  34L15 34B09 49K35 Full Text: ### Local well-posedness of KP-I initial value problem on torus in the Besov space. (English)Zbl 1342.35078 MSC:  35G31 37K10 Full Text: Full Text: Full Text: ### Homogenization of systems with equi-integrable coefficients. (English)Zbl 1326.35020 MSC:  35B27 49K20 Full Text: ### Boundary energy of the open XXX chain with a non-diagonal boundary term. (English)Zbl 1285.82020 J. Phys. A, Math. Theor. 47, No. 3, Article ID 032001, 9 p. (2014); corrigendum ibid. 47, No. 7, Article ID 079501, 2 p. (2014). MSC:  82B20 82B23 Full Text: ### Wave breaking and measure of momentum support for an integrable Camassa-Holm system with two components. (English)Zbl 1304.35229 MSC:  35G61 37K10 35Q35 Full Text: ### Heisenberg XXX model with general boundaries: eigenvectors from algebraic Bethe ansatz. (English)Zbl 1288.82020 MSC:  82B23 81R12 82B20 Full Text: Full Text: ### Algebraic Bethe ansatz for the six vertex model with upper triangular $$K$$-matrices. (English)Zbl 1284.82026 MSC:  82B23 81R12 Full Text: ### Algebraic Bethe ansatz for open XXX model with triangular boundary matrices. (English)Zbl 1280.82003 MSC:  82B23 81R12 Full Text: Full Text: ### Nonlinear degenerate diffusion problems with a singularity. (English)Zbl 1226.35038 Reviewer: Jiaqi Mo (Wuhu) Full Text: Full Text: ### The spectrum of an open vertex model based on the $$U_q[SU(2)]$$ algebra at roots of unity. (English)Zbl 1204.82010 MSC:  82B20 82B23 Full Text: Full Text: ### On NLS equations on BD.I symmetric spaces with constant boundary conditions. (English)Zbl 1187.35235 Sekigawa, Kouei (ed.) et al., Trends in differential geometry, complex analysis and mathematical physics. Proceedings of 9th international workshop on complex structures, integrability and vector fields, Sofia, Bulgaria, August 25–29, 2008. Hackensack, NJ: World Scientific (ISBN 978-981-4277-71-6/hbk). 
83-91 (2009). MSC:  35Q55 37K05 37K10 37K30 35P05 ### Eigenvectors of the Baxter-Bazhanov-Stroganov $$\tau^{(2)}(t_q)$$ model with fixed-spin boundary conditions. (English. Russian original)Zbl 1157.82309 Theor. Math. Phys. 155, No. 1, 585-597 (2008); translation from Teor. Mat. Fiz. 155, No. 1, 94-108 (2008). MSC:  82B20 Full Text: ### Continuous spectrum and square-integrable solutions of differential operators with intermediate deficiency index. (English)Zbl 1165.34052 MSC:  34L05 47E05 Full Text: ### Completely integrable systems associated with classical root systems. (English)Zbl 1192.81190 MSC:  81R12 70H06 37K10 Full Text: ### C-series discrete chains. (English)Zbl 1177.37075 Theor. Math. Phys. 146, No. 2, 170-182 (2006); translation from Teor. Mat. Fiz. 146, No. 2, 208-221 (2006). MSC:  37K60 Full Text: ### Bethe ansatz and boundary energy of the open $$\mathrm{spin}$$-$$\frac12$$ $$XXZ$$ chain. (English)Zbl 1125.82007 MSC:  82B20 82B23 Full Text: ### Dirichlet and Neumann problems for string equation, Poncelet problem and Pell-Abel equation. (English)Zbl 1092.35054 MSC:  35L20 14H70 35L05 Full Text: ### The Riemann problem and matrix-valued potentials with a convergent Baker-Akhiezer function. (English)Zbl 1178.30048 Theor. Math. Phys. 144, No. 3, 1264-1278 (2005); translation from Teor. Mat. Fiz. 144, No. 3, 453-471 (2005). MSC:  30E25 37K10 37K20 Full Text: ### Mixed integrable $$\text{SU}(N)$$ vertex model with arbitrary twists. (English)Zbl 1119.82304 MSC:  82B23 81R12 Full Text: ### Algebraic Bethe ansatz for the Zamolodchikov-Fateev and Izergin-Korepin models with open boundary conditions. (English)Zbl 1123.82326 MSC:  82B23 81R12 Full Text: ### Numerical discretization of boundary conditions for first order Hamilton–Jacobi equations. (English)Zbl 1058.65102 MSC:  65M60 35L60 35Q58 65M12 49L25 Full Text: ### Excited TBA equations. II: Massless flow from tricritical to critical Ising model. (English)Zbl 1072.82010 MSC:  82B20 82B27 Full Text: ### Boundary scattering, symmetric spaces and the principal chiral model on the half-line. (English)Zbl 1018.58021 MSC:  58J50 81T10 81U20 Full Text: Full Text: ### On integrability of many-body problems with point interactions. (English)Zbl 1064.70015 Albeverio, S. (ed.) et al., Operator methods in ordinary and partial differential equations. Proceedings of the S. Kovalevsky symposium, Stockholm, Sweden, June 2000. Basel: Birkhäuser (ISBN 3-7643-6790-3/hbk). Oper. Theory, Adv. Appl. 132, 67-76 (2002). MSC:  70H06 81U10 Full Text: ### Spectrum of boundary states in $$N=1$$ SUSY sine-Gordon theory. (English)Zbl 0999.81040 MSC:  81T10 81T60 Full Text: ### Parabolic equations: Asymptotic behavior and dynamics on invariant manifolds. (English)Zbl 1002.35001 Fiedler, Bernold (ed.), Handbook of dynamical systems. Volume 2. Amsterdam: Elsevier. 835-883 (2002). ### Conformal boundary conditions. (English)Zbl 1024.81037 Ahn, Changrim (ed.) et al., Integrable quantum field theories and their applications. Proceedings of the APCTP winter school, Cheju Island, Republic of Korea, February 28-March 4, 2000. Singapore: World Scientific. 318-342 (2001). ### Finite size effects in integrable quantum field theories. (English)Zbl 0995.81050 Horváth, Zalán (ed.) et al., Non-perturbative QFT methods and their applications. Proceedings of the 24th Johns Hopkins workshop on current problems in particle theory 24, Budapest, Hungary, August 19-21, 2000. Singapore: World Scientific. 199-264 (2001). 
Full Text: ### On the boundary conditions for products of Sturm-Liouville differential operators. (English)Zbl 1006.34026 Reviewer: Pavel Rehak (Brno) MSC:  34B24 47A55 47E05 ### Seocond order quantum corrections to the classical reflection factor of the sinh-Gordon model. (English)Zbl 1022.81034 MSC:  81T10 81T15 81R12 Full Text: ### On Birkhoff coordinates for KdV. (English)Zbl 1017.76015 MSC:  76B25 37K10 35Q53 Full Text: ### The minimal LM(3,4) lattice model and the two-dimensional Ising model with cylindrical boundary conditions. (English. Russian original)Zbl 1018.82003 Theor. Math. Phys. 126, No. 1, 48-65 (2001); translation from Teor. Mat. Fiz. 126, No. 1, 63-83 (2001). MSC:  82B20 82B28 Full Text: ### Approximate solutions of the Kuramoto-Sivashinsky equation for periodic boundary value problems and chaos. (English)Zbl 0986.65126 MSC:  65P20 35Q58 35Q53 37J45 37D45 Full Text: ### Correspondence between the $$XXZ$$ model in roots of unity and the one-dimensional quantum Ising chain with different boundary conditions. (English)Zbl 1042.82009 Pakuliak, Stanislav (ed.) et al., Integrable structures of exactly solvable two-dimensional models of quantum field theory. Proceedings of the NATO Advanced Research workshop on dynamical symmetries of integrable quantum field theory and lattice models, Kiev, Ukraine, September 25–30, 2000. Dordrecht: Kluwer Academic Publishers (ISBN 0-7923-7183-6). NATO Sci. Ser. II, Math. Phys. Chem. 35, 321-331 (2001). MSC:  82B20 82B10 ### External voltage sources and tunnelling in quantum wires. (English)Zbl 0982.81025 MSC:  81Q99 81V80 82C99 Full Text: ### Boundary remnant of Yangian symmetry and the structure of rational reflection matrices. (English)Zbl 0977.81058 MSC:  81T05 81T10 Full Text: ### Surface tension of a metal-electrolyte boundary: Exactly solvable model. (English)Zbl 0990.82033 MSC:  82D10 82B05 Full Text: ### Multi-symplectic Fourier pseudospectral method for the nonlinear Schrödinger equation. (English)Zbl 0980.65108 MSC:  65M70 65P10 37K10 37J05 65M15 35Q55 37J45 Full Text: ### Integrable and conformal boundary conditions for $$\widehat{\text{sl}}(2) A\text{-}D\text{-}E$$ lattice models and unitary minimal conformal field theories. (English)Zbl 0989.82016 MSC:  82B20 81T40 81T27 Full Text: MSC:  82B20 Full Text: ### Normal modes on average for purely stochastic systems. (English)Zbl 0965.82018 MSC:  82C20 37H99 Full Text: Full Text: Full Text: ### Boundary spectrum in the sine-Gordon model with Dirichlet boundary conditions. (English)Zbl 1032.81533 MSC:  81T10 81R12 37K99 Full Text: ### Integrable boundary conditions for nonlinear lattices. (English)Zbl 0958.37054 Levi, Decio (ed.) et al., SIDE III - Symmetries and integrability of difference equations. Proceedings of the 3rd conference, Sabaudia, Italy, May 16-22, 1998. Providence, RI: American Mathematical Society (AMS). CRM Proc. Lect. Notes. 25, 173-180 (2000). MSC:  37K15 37K10 37K60 Full Text: ### KAM tori for 1D nonlinear wave equations with periodic boundary conditions. (English)Zbl 0956.37054 MSC:  37K55 37K10 35L70 Full Text: ### The reconstruction of local quantum operators for the boundary $$XXZ$$ spin-$$\frac 12$$ Heisenberg chain. (English)Zbl 0956.82006 MSC:  82B20 82B23 Full Text: ### Near-integrability of periodic FPU-chains. (English)Zbl 1060.37503 MSC:  37J35 37J45 Full Text: ### Integration of the $$\text{SL}(2,{\mathbb{R}})/\text{U}(1)$$ gauged WZNW model with periodic boundary conditions. 
(English)Zbl 0951.81070 MSC:  81T40 81T75 Full Text: ### Particle reflection amplitudes in $$a^{(1)}_n$$ Toda field theories. (English)Zbl 0958.81027 MSC:  81T10 81R12 Full Text: ### Equations of motion and conserved quantities in non-abelian discrete integrable models. (English. Russian original)Zbl 0991.81038 Theor. Math. Phys. 119, No. 1, 420-430 (1999); translation from Teor. Mat. Fiz. 119, No. 1, 34-46 (1999). MSC:  81R12 82B23 39A12 Full Text: ### Boundary breathers in the sinh-Gordon model. (English)Zbl 0945.81023 MSC:  81T10 81T40 81Q20 Full Text: ### Open $$t$$-$$J$$ chain with boundary impurities. (English)Zbl 0961.82010 MSC:  82B23 82B20 Full Text: ### New solutions to the reflection equation and the projecting method. (English)Zbl 0988.82011 MSC:  82B20 81R50 Full Text: ### Integrable mappings of KdV type and hyperelliptic addition theorems. (English)Zbl 0928.37015 Clarkson, Peter A. (ed.) et al., Symmetries and integrability of difference equations. Proceedings of the 2nd international conference, Canterbury, UK, July 1–5, 1996. Cambridge: Cambridge University Press. Lond. Math. Soc. Lect. Note Ser. 255, 64-78 (1999). ### Classical backgrounds and scattering for affine Toda theory on a half-line. (English)Zbl 0958.81024 MSC:  81T10 37K40 Full Text: ### Two-component one-dimensional gas with integrable open boundary conditions. (English)Zbl 0952.82006 MSC:  82B23 82B10 Full Text: ### Ruijsenaars-Macdonald-type difference operators from $$\mathbb{Z}_n$$ Belavin model with open boundary conditions. (English)Zbl 0953.82020 MSC:  82B23 81R50 39A70 Full Text: ### A one-dimensional many-body integrable model from $$Z_n$$ Belavin model with open boundary conditions. (English)Zbl 0928.17028 MSC:  81V70 81R50 82B23 Full Text: ### Nine classes of integrable boundary conditions for the eight-state supersymmetric fermion model. (English)Zbl 0936.82009 MSC:  82B23 82B20 Full Text: ### Lax pair formulation for a small-polaron chain with integrable boundaries. (English)Zbl 0920.58026 MSC:  37J35 37K10 Full Text: ### Canonically conjugate variables for the Volterra lattice with periodic boundary conditions. (English. Russian original)Zbl 0916.58017 Math. Notes 64, No. 1, 98-109 (1998); translation from Mat. Zametki 64, No. 1, 115-128 (1998). MSC:  37J35 37K10 Full Text: ### Reconstruction of the velocity from the vorticity in three-dimensional fluid flows. (English)Zbl 0919.76075 MSC:  76M30 76B47 35Q35 Full Text: Full Text: Full Text: ### Boundary conditions for integrable equations. (English)Zbl 0927.35093 MSC:  35Q53 37J35 58J70 Full Text: Full Text: ### On the numerical solution of the sine-Gordon equation. I: Integrable discretizations and homoclinic manifolds. (English)Zbl 0866.65064 MSC:  65M70 35Q53 Full Text: ### Fusion procedure for the $$Z_n$$ Belavin model with open boundary conditions. (English)Zbl 0905.17019 MSC:  17B37 82B23 81R50 Full Text: Full Text: ### Integrable boundary conditions for the Toda lattice. (Russian. English summary)Zbl 0858.35118 Kalyakin, L. A. (ed.) et al., Asymptotics and symmetries in nonlinear dynamical systems. Ufa: Institut Matematiki s Vychislitel’nym Tsentrom RAN. 7-30 (1995). MSC:  35Q58 37J35 37K10 Full Text: all top 5 all top 5 all top 5 all top 3
2022-05-25 01:11:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5671706199645996, "perplexity": 5657.398909536708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00331.warc.gz"}
https://proofwiki.org/wiki/Center_is_Characteristic_Subgroup
# Center is Characteristic Subgroup ## Theorem Let $G$ be a group. Then its center $Z(G)$ is characteristic in $G$.
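The original page stops at the statement above. A short proof sketch along standard lines (added here for completeness, not quoted from the page): let $\phi$ be any automorphism of $G$ and let $z \in Z(G)$. Since $\phi$ is surjective, every $g \in G$ can be written as $g = \phi(h)$ for some $h \in G$, and then $\phi(z) g = \phi(z)\phi(h) = \phi(zh) = \phi(hz) = \phi(h)\phi(z) = g\,\phi(z)$, so $\phi(z) \in Z(G)$. Hence $\phi(Z(G)) \subseteq Z(G)$ for every automorphism $\phi$, and applying the same argument to $\phi^{-1}$ gives $\phi(Z(G)) = Z(G)$, which is exactly what it means for $Z(G)$ to be characteristic in $G$.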
2018-08-21 02:24:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6242052912712097, "perplexity": 723.5111615080168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217909.77/warc/CC-MAIN-20180821014427-20180821034427-00334.warc.gz"}
https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-migrate
# dotnet migrate ## Name dotnet migrate - Migrates a Preview 2 .NET Core project to a .NET Core SDK-style project. ## Synopsis dotnet migrate [<SOLUTION_FILE|PROJECT_DIR>] [--format-report-file-json <REPORT_FILE>] [-r|--report-file <REPORT_FILE>] [-s|--skip-project-references [Debug|Release]] [--skip-backup] [-t|--template-file <TEMPLATE_FILE>] [-v|--sdk-package-version] [-x|--xproj-file] dotnet migrate -h|--help ## Description This command is deprecated. The dotnet migrate command is no longer available starting with .NET Core 3.0 SDK. It can only migrate a Preview 2 .NET Core project to a 1.x .NET Core project, which is out of support. By default, the command migrates the root project and any project references that the root project contains. This behavior is disabled using the --skip-project-references option at run time. Migration can be performed on the following assets: • A single project by specifying the project.json file to migrate. • All of the directories specified in the global.json file by passing in a path to the global.json file. • A solution.sln file, where it migrates the projects referenced in the solution. • On all subdirectories of the given directory recursively. The dotnet migrate command keeps the migrated project.json file inside a backup directory, which it creates if the directory doesn't exist. This behavior is overridden using the --skip-backup option. By default, the migration operation outputs the state of the migration process to standard output (STDOUT). If you use the --report-file <REPORT_FILE> option, the output is saved to the file specify. The dotnet migrate command only supports valid Preview 2 project.json-based projects. This means that you cannot use it to migrate DNX or Preview 1 project.json-based projects directly to MSBuild/csproj projects. You first need to manually migrate the project to a Preview 2 project.json-based project and then use the dotnet migrate command to migrate the project. ## Arguments PROJECT_JSON/GLOBAL_JSON/SOLUTION_FILE/PROJECT_DIR The path to one of the following: • a project.json file to migrate. • a global.json file: the folders specified in global.json are migrated. • a solution.sln file: the projects referenced in the solution are migrated. • a directory to migrate: recursively searches for project.json files to migrate inside the specified directory. Defaults to current directory if nothing is specified. ## Options --format-report-file-json <REPORT_FILE> Output migration report file as JSON rather than user messages. -h|--help Prints out a short help for the command. -r|--report-file <REPORT_FILE> Output migration report to a file in addition to the console. -s|--skip-project-references [Debug|Release] Skip migrating project references. By default, project references are migrated recursively. --skip-backup Skip moving project.json, global.json, and *.xproj to a backup directory after successful migration. -t|--template-file <TEMPLATE_FILE> Template csproj file to use for migration. By default, the same template as the one dropped by dotnet new console is used. -v|--sdk-package-version <VERSION> The version of the sdk package that's referenced in the migrated app. The default is the version of the SDK in dotnet new. -x|--xproj-file <FILE> The path to the xproj file to use. Required when there is more than one xproj in a project directory. 
## Examples Migrate a project in the current directory and all of its project-to-project dependencies: dotnet migrate Migrate all projects that global.json file includes: dotnet migrate path/to/global.json Migrate only the current project and no project-to-project (P2P) dependencies. Also, use a specific SDK version: dotnet migrate -s -v 1.0.0-preview4
2022-10-01 05:32:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.334839791059494, "perplexity": 14921.10735337801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00752.warc.gz"}
https://socratic.org/questions/why-is-the-line-in-bold-valid-picture-added-below
# Why is the line in bold valid (Picture added below)?

Jul 22, 2017

Refer to the Explanation.

#### Explanation:

So far it has been proved that
${\left(1 + p\right)}^{k + 1} \ge 1 + k p + p + k {p}^{2} \ldots \left({\ast}^{1}\right)$
Now note that $p > - 1$ together with $p \ne 0$ (as in the statement being proved) gives ${p}^{2} > 0$; also $k \in {\mathbb{Z}}^{+}$, so $k {p}^{2} > 0$.
Adding $1 + k p + p$ to both sides of this inequality, we get
$k {p}^{2} + 1 + k p + p > 1 + k p + p \ldots \left({\ast}^{2}\right)$
Combining $\left({\ast}^{1}\right)$ and $\left({\ast}^{2}\right)$, we have
${\left(1 + p\right)}^{k + 1} \ge 1 + k p + p + k {p}^{2} > 1 + k p + p$, i.e.,
${\left(1 + p\right)}^{k + 1} > 1 + \left(k + 1\right) p$.
In simple form, the line in bold means that if $a \ge b + c$ and $c > 0$, then dropping the $c$ from $b + c$ still leaves $a > b$.
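As a quick, purely illustrative sanity check (not part of the original answer), the finished inequality $(1+p)^{k+1} > 1 + (k+1)p$ for $p > -1$, $p \ne 0$ and positive integers $k$ can be spot-checked numerically; the sampled values below are arbitrary choices.

```python
# Spot-check Bernoulli's inequality (1+p)^(k+1) > 1 + (k+1)p
# for a few sample values with p > -1, p != 0, and integer k >= 1.
for p in (-0.5, -0.1, 0.3, 2.0):
    for k in range(1, 8):
        lhs = (1 + p) ** (k + 1)
        rhs = 1 + (k + 1) * p
        assert lhs > rhs, (p, k, lhs, rhs)
print("inequality holds for all sampled (p, k)")
```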
2022-07-02 17:48:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6023438572883606, "perplexity": 3691.0881993750036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104189587.61/warc/CC-MAIN-20220702162147-20220702192147-00434.warc.gz"}
https://tex.stackexchange.com/questions/174742/how-does-the-package-pst-labo-work-to-draw-chemical-equipment
# How does the package pst-labo work to draw chemical equipment? I would like to draw some chemical equipment and I found this pst-labo package. However, it does not work for me and I am sure I have the package installed. This should be a very simple example of its usage which does not work for me: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{pst-labo} \title{Blank} \date{April 2014} \begin{document} \maketitle \section{Introduction} \psset{unit=0.5cm} \pstTubeEssais \end{document} • How are you trying to compile the document? pdflatex? pstricks (the package working behind the scenes) requires latex->dvips->ps2pdf to be used. – Paul Gessler May 1 '14 at 21:46 • Or you can add \usepackage[pdf]{pstricks} to your preamble, and compile with pdflatex. – Bernard May 1 '14 at 21:50 • By adding \usepackage[pdf]{pstricks} to my preamble to the code showed before still reports error and does not compile with PdfLaTeX. – Garret May 1 '14 at 22:58 • If you go back to the original code, does latex work? – cfr May 2 '14 at 2:00 Run the document with xelatex (without loading package inputenc) or run it with pdflatex --shell-escape <file> with loading package auto-pst-pdf: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage[crop=off]{auto-pst-pdf} \usepackage{pst-labo} \title{Blank} \date{April 2014} \begin{document} \maketitle \section{Introduction} \begin{postscript} \psset{unit=0.5cm} \pstTubeEssais \end{postscript} \end{document} My output with xelatex and current TeXLive 2013 • Using that code and compiling with pdflatex still does not show the equipment figure. Texkstudio says: auto-pst-pdf: . Or turn off auto-pst-pdf.}. Furthermore, copying the code also into sharelatex does not show the figure either. – Garret May 2 '14 at 6:51 • I suppose that you are missing the option --shell-escape. Howver, can't you use xelatex then there is no need for using auto-pst-pdf – user2478 May 2 '14 at 6:58 • Even using xelatex I cannot see the equipment. Did you try the code personally? – Garret May 3 '14 at 11:04 • see my added image in my answer – user2478 May 3 '14 at 11:48 • That works for me using xelatex from texlive 2014. I had to change the postscript to pspicture though. – ezietsman Aug 20 '14 at 8:34
2020-01-18 06:48:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9007083773612976, "perplexity": 3730.916381259924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00531.warc.gz"}
http://clay6.com/qa/130083/for-1-molal-aqueous-solution-of-the-following-compounds-which-one-will-show
# For 1 molal aqueous solution of the following compounds, which one will show the highest freezing point? (A) $[Co(H_2O)_6]Cl_3$ (B) $[Co(H_2O)_3Cl_3].3H_2O$ (C) $[Co(H_2O)_5Cl]Cl_2.H_2O$ (D) $[Co(H_2O)_4Cl_2]Cl.2H_2O$
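For orientation only, a reasoning sketch that is not part of the original item: freezing-point depression is $\Delta T_f = i\,K_f\,m$, so at equal molality the highest freezing point belongs to the compound producing the fewest particles in solution. Of the four, $[Co(H_2O)_3Cl_3].3H_2O$ keeps all three chlorides inside the coordination sphere and releases no counter-ions, so it has the smallest van 't Hoff factor, which points to option (B).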
2019-04-25 00:38:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40290477871894836, "perplexity": 5395.89255751909}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578675477.84/warc/CC-MAIN-20190424234327-20190425020327-00271.warc.gz"}
http://www.realmagick.com/diagonal-matrix-other-properties/
# Diagonal Matrix Other Properties Diagonal Matrix Other Properties is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Diagonal Matrix Other Properties books and related discussion. ## Suggested Pdf Resources The Restricted Isometry Property for Block Diagonal Matrices this paper, we study block diagonal measurement matrices where each block on the main .. Extremal Properties of Balanced Tri-Diagonal Matrices Extremal Properties of. Balanced Tri-Diagonal Matrices. By Peter A. Hidden Breit-Wigner Distribution and Other Properties of Random Hidden Breit-Wigner Distribution and Other Properties of Random Matrices with. Matrix Algebra: Determinants, Inverses, Eigenvalues This Chapter discusses more specialized properties of matrices, such as When at least one row (or column) of a matrix is a linear combination of the other rows ( or columns) j2 = 2, ... jn = n, which is the product of the main diagonal entries. The electronic structure and properties of cumulative systems Here we shall restrict ourselves to those properties of cumulenes which can be studied without calculating the non- diagonal matrix elements. ## Suggested Web Resources Diagonal matrix - Wikipedia, the free encyclopedia The eigenvalues of diag(a1, ..., an) are a1, ..., an with associated The adjugate of a diagonal matrix is again diagonal.
Symmetric matrix - Wikipedia, the free encyclopedia Every diagonal matrix is symmetric, since all off-diagonal entries are zero. PlanetMath: diagonal matrix From the definition, we see that an $n\times n$ diagonal matrix is completely determined by the $n$ entries on the diagonal; all other entries are zero. Matrix Reference Manual: Matrix Properties Jan 14, 2011 The properties of the characteristic matrix are described in the section on eigenvalues. indefinite if xHAx is > 0 for some x and < 0 for some other x. An introduction to MATRICES
2015-07-06 05:12:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4481998383998871, "perplexity": 2919.576659279801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098059.60/warc/CC-MAIN-20150627031818-00033-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-11-section-11-3-integrated-review-summary-on-solving-quadratic-equations-page-792/23
Algebra: A Combined Approach (4th Edition) $x_{1}= \sqrt{12} = 2\sqrt{3}$ and $x_{2}= -\sqrt{12} = -2\sqrt{3}$ Given $4x^2-48=0$ $1.)$ Divide both sides by 4: $4x^2-48=0 \longrightarrow x^2-12=0$ $2.)$ Apply the difference of two squares: $x^2-12=0 \longrightarrow (x+\sqrt{12})(x-\sqrt{12})=0$ Therefore the solutions are $x_{1}= \sqrt{12} = 2\sqrt{3}$ and $x_{2}= -\sqrt{12} = -2\sqrt{3}$
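For comparison, an equivalent route that skips the explicit factoring (not part of the book's printed solution): from $4x^2-48=0$ we get $x^2 = 12$, hence $x = \pm\sqrt{12} = \pm 2\sqrt{3}$, matching the two solutions above.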
2018-07-19 00:38:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6065940856933594, "perplexity": 466.75939520429574}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590362.13/warc/CC-MAIN-20180718232717-20180719012717-00065.warc.gz"}
http://clay6.com/qa/9774/the-volume-generated-when-the-region-bounded-by-y-x-y-1-x-0-is-rotated-abou
# The volume generated when the region bounded by $y=x,y=1,x=0$ is rotated about the $y$-axis is $\begin{array}{ll}(1)\;\frac{\pi}{4} & (2)\;\frac{\pi}{2}\\(3)\;\frac{\pi}{3} & (4)\;\frac{2\pi}{3}\end{array}$ The working given treats the solid as a cylinder minus a cone: $\qquad \pi \times 1^2 \times 1 - \frac{1}{3} \pi \times 1^2 \times 1 = \pi - \frac{\pi}{3} = \frac{2\pi}{3}$
2016-12-05 15:00:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844561219215393, "perplexity": 788.2169764596757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541697.15/warc/CC-MAIN-20161202170901-00335-ip-10-31-129-80.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1016827/an-obstacle-in-a-proof-of-lagranges-mean-value-theorem-by-nested-interval-theor
# An obstacle in a proof of Lagrange's mean value theorem by Nested Interval theorem I was trying to prove Lagrange's mean value theorem by Nested Interval theorem and there's step where I got stuck ; let me write down to the step Let $f:[x_1,x_2]\to \mathbb R$ be continuous on $[x_1,x_2]$ and differentiable in $(x_1,x_2)$ . Define $m:=\dfrac{f(x_1)-f(x_2)}{x_1-x_2} , y:=\dfrac{x_1+x_2}2 , m_1:=\dfrac {f(y)-f(x_1)}{h} , m_2:=\dfrac {f(x_2)-f(y)}{h}$ , where $h:=\dfrac{x_2-x_1}2$ then $\min\{m_1,m_2\}\le m \le \max \{m_1,m_2\}$ so defining the continuous function $g(x):=\dfrac {f(x+h)-f(x)}h$ , we see by intermediate value theorem , for some $a_1 \in [x_1,x_2] , m=\dfrac{f(b_1)-f(a_1)}{b_1-a_1} ,$ where $b_1-a_1=h=\dfrac {x_2-x_1}2$ so we can repeat this process to get a nested sequence of intervals $[a_n,b_n]$ , where $b_n-a_n=\dfrac {x_2-x_1}{2^n}$ and $m=\dfrac {f(a_n)-f(b_n)}{a_n-b_n}$ for each $n$ , so there is a unique point $x \in \cap_{n=1}^ \infty [a_n,b_n]$ I am planning to show that $f'(x)=m=\dfrac{f(x_1)-f(x_2)}{x_1-x_2}$ , and I can except one difficulty , I am not being able to show that $x$ lies strictly between $x_1,x_2$ that is I cannot show $x \in (x_1,x_2)$ ; Is it true that $x \in (x_1,x_2)$ or is my construction not valid ? Consider an affine linear function to see that it is possible to choose $a_n = x_1$ for all $n$, and then you have $x = x_1 \notin (x_1,x_2)$. To get the proof working, you need to show that you can arrange it so that $x \in (x_1,x_2)$, that is, there is an $n_1$ with $a_{n_1} > x_1$, and an $n_2$ with $b_{n_2} < x_2$. Then we have $x_1 < a_n < b_n < x_2$ for all $n \geqslant \max \{n_1,n_2\}$ and the proof is done. If $m_1 < m < m_2$ (or $m_2 < m < m_1$), then we must have $x_1 < a_1 < y$, and therefore also $b_1 < x_2$, and everything is fine. So we need only consider the case where we have $m_1 = m$ or $m_2 = m$. But if that is the case, then $$m = \frac{f(x_2) - f(x_1)}{x_2-x_1} = \frac{1}{2}\cdot \frac{f(x_2) - f(y) + f(y) - f(x_1)}{h} = \frac{1}{2}\left(m_2 + m_1\right)$$ shows that we have $m_1 = m = m_2$. Then: If at the first step we have $m_1 = m = m_2$, choose $a_1 = x_1$ to get $b_1 < x_2$. If at the second step we also have $m_1 = m = m_2$, then choose $a_2 = \frac{a_1+b_1}{2}$ to get $x_1 < a_2$ and we are done. If at the second step the two slopes are different, we necessarily have $x_1 < a_2 < b_2 < x_2$ and are happy too.
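To make the halving construction in the question concrete, here is a small numerical sketch (illustrative only: the test function, interval and iteration counts are arbitrary choices, and bisection is just one way to carry out the intermediate value theorem step).

```python
# Sketch of the nested-interval construction: at each step, find a half-length
# subinterval [a', a' + h] whose secant slope equals the global slope m,
# using bisection to realize the intermediate value theorem step.
def halve(f, a, b, m, iters=60):
    h = 0.5 * (b - a)
    g = lambda x: (f(x + h) - f(x)) / h      # secant slope over [x, x + h]
    lo, hi = a, a + h                        # g(lo) and g(hi) bracket m
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (g(lo) - m) * (g(mid) - m) <= 0:
            hi = mid
        else:
            lo = mid
    return lo, lo + h

f = lambda x: x**3 - x                       # arbitrary smooth test function
x1, x2 = 0.0, 2.0
m = (f(x2) - f(x1)) / (x2 - x1)              # equals 3 for this choice
a, b = x1, x2
for _ in range(20):                          # after 20 halvings, b - a = 2**-19
    a, b = halve(f, a, b, m)
print(a, (f(b) - f(a)) / (b - a))            # a is near 2/sqrt(3); slope stays 3
```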
2019-10-18 01:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9543552994728088, "perplexity": 67.71867958230867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00262.warc.gz"}
https://aacrjournals.org/clincancerres/article/20/7/1719/251221/It-Shouldn-t-Take-Another-50-Years-Accelerating
Many thought it could never happen. In an action once considered inconceivable, the major pharmacy chain CVS Caremark recently announced that it will stop selling all tobacco products throughout its 7,600 outlets nationwide (1). Declaring that the sale of tobacco represented an unacceptable paradox for a business trying to promote health, CVS Caremark leaders began phasing out tobacco sales in February 2014 and will end all such sales by October 2014. And thus a new social norm has come to pass. As extraordinary as it was, this private sector action represents only one of three major milestones that have already made 2014 a standout year for tobacco control. A second milestone was the release of The Health Consequences of Smoking—50 Years of Progress (2). On one hand, the 50th Anniversary Report noted striking advances over the past half century. Since 1964, the country has witnessed declining adult per capita cigarette consumption by more than 70%, and the prevalence of adult smoking has decreased by over half [from 42.7% (1964) to 18.1% (2012)]. Tobacco control has helped to avert at least eight million early deaths over that period. Furthermore, the incidence of lung cancer, an uncommon disease in the early 20th century, which then increased markedly to become the leading cause of cancer death, has begun a decline, first among men and more recently among women. And of note, the report states that quitting smoking improves the prognosis of cancer patients. But the new report also summarizes major challenges for the future. More than 42 million smokers still struggle with tobacco dependence, and 16 million current and former smokers live with smoking-related illnesses (2). New estimates now indicate annual tobacco-related deaths approach half a million in the United States, with more than five million a year worldwide. Fifty years after the first Surgeon General's Report, evidence now links tobacco as a new causal factor for malignancies, such as colorectal cancer and liver cancer, as well as nonmalignant conditions, such as diabetes mellitus and rheumatoid arthritis. And in a startling conclusion, the report states that 5.6 million youth alive today will die prematurely because of tobacco dependence if we continue at current smoking rates. The 50th Anniversary Report urges enhanced attention to both new and proven strategies for tobacco control to make the next generation tobacco free. Such strategies are sorely needed as the tobacco industry spends $8 billion a year—nearly a million dollars an hour—to advertise and market cigarettes and smokeless tobacco, thereby outspending current state tobacco control programs by a factor of 18 to 1. Last year, customers consumed over 14 billion packs of cigarettes, and over time these cigarettes have become more complex and more addictive, and the risks from smoking have become more deadly. Furthermore, each year the tobacco industry adds to an ever-growing array of novel products, such as flavored small cigars and e-cigarettes, that may have special appeal to young people. In particular, significant questions remain about the net public health benefits of e-cigarettes. In short, the 50th Anniversary Report underscores again that tobacco dependence represents a pediatric disease. Of note, most smokers start before age 18, with the median age for initiation being only 13 years of age. And for each adult who succumbs to tobacco, two younger replacement smokers stand ready to take their place, thereby perpetuating a cycle of dependence. 
Too many kids still routinely see tobacco use everywhere and assume it is part of the social norm. Half of the U.S. states still lack comprehensive smoke-free laws for public places. Images glamorizing tobacco use still reach kids through the Internet, at retail stores, and in movies (3). And because tobacco causes damage that may be unseen until many years later, youth can easily dismiss the risks involved. To capture their attention, vulnerable kids need to see, and feel, what the true risks of tobacco actually are. That's why the third major 2014 tobacco control milestone, the February launch of a$115-million mass media youth prevention campaign by the U.S. Food and Drug Administration (FDA), represents yet another major opportunity to reduce tobacco use. Entitled “The Real Cost,” this media campaign has been constructed based on critical scientific and marketing research on how best to reach at-risk youth. The campaign, which will extend over a period of a year, vividly and viscerally conveys graphic messages about tobacco use, building on lessons learned from previous successful national and statewide mass media campaigns. The FDA campaign will have a rigorous evaluation, involving thousands of youth. Other notable related events include the unveiling by the Centers for Disease Control (CDC) of the latest version of its “Tips from Former Smokers” campaign designed to encourage smokers to quit, along with the Legacy Foundation's planned mass media efforts to take place later this year (3). Through all of these efforts, more public attention to tobacco use may well be generated in 2014 than at any time in recent memory. These new 2014 milestones accelerate momentum stemming from the release from the U.S. Department of Health and Human Services (HHS) of the first ever Strategic Action Plan for Tobacco Control Ending the Tobacco Epidemic, released in 2010 (4). The Strategic Action Plan urges reinvigorated tobacco control efforts, emphasizing pillars of improving the public's health, engaging the public and changing social norms, leading by example, and advancing research. Major accomplishments emerging from the Strategic Action Plan already include expanded coverage of tobacco cessation counseling to approximately five million Medicare tobacco users (not just those with tobacco-related diseases), new Medicaid guidelines that mandate full cessation coverage for pregnant women, and 50% reimbursement to states providing telephone “quit line” support to beneficiaries who smoke. The Strategic Action Plan has also served as a framework for enhanced access to robust evidence-based cessation treatments for all federal employees (as well as retirees and dependents as ensured by the federal Office of Personnel Management); ongoing FDA contracts to states to enforce tobacco marketing, sale, and distribution laws and regulations at retail locations; CDC grants to communities for tobacco prevention; and a requirement for no cost sharing for tobacco counseling and cessation services for millions in new private health plans made available through the Affordable Care Act. In addition to these accomplishments, we need further action to realize the full impact of proven strategies such as fully funded comprehensive statewide tobacco control programs at CDC-recommended levels, higher prices on cigarettes and other tobacco products that will drive down consumption, and complete protection of our entire country—not just half the population as is seen currently—through smoke-free indoor air policies. 
But accelerating progress for the future will also require more innovative actions like the decision made by CVS Caremark. An area of great need is to bring more attention to reduction of tobacco-related disparities and to the disproportionate burden tobacco places on those with mental illness and substance use disorders. Individuals with mental illness smoke at rates approximately double those of adults without mental illness and comprise more than half of nicotine-dependent smokers (2, 4). Moreover, tobacco use drives much of the excess mortality among people with serious mental illness. We need greater attention to cessation and prevention policies for this key population. Another issue relates to the current and future extent of FDA regulatory authority over tobacco products. The 2009 Family Smoking Prevention and Tobacco Control Act gave the FDA explicit regulatory authority over cigarettes, smokeless tobacco, and “roll-your-own” but was silent on other tobacco products. However, the 2009 act also provided FDA the option to extend its authority and thereby “deem” other tobacco products to be subject to regulation. Ultimately, such extended authority through the issuance of proposed regulations could include products that are not currently regulated. A promising future direction revolves around possible “end game” strategies. The 50th Anniversary Report summarizes such strategies, which could include requiring the reduction of nicotine content to nonaddicting levels (2). Through notice and comment rule making, the FDA can issue product standards, supported by science and as “appropriate for the protection of public health.” Such product standards, as part of a comprehensive nicotine regulatory strategy, have the potential to fundamentally change the market for the spectrum of nicotine products in the future. In short, 2014 has already delivered historic actions for tobacco control that accelerate public health momentum. The ultimate goal for the nation should be to relegate the morbidity and mortality associated with tobacco to the history books as a conquered epidemic (4). And it shouldn't take another 50 years. Achieving that goal will require even more committed leadership through every sector of society that rejects the status quo and envisions a healthier future. In 2014, we honor new milestones but also must explore the promise of a true end game for tobacco. It is time to reclaim the social norm as one that is tobacco free and to bring better health to future generations. 1. Brennan TA , Schroeder SA . Ending sales of tobacco products in pharmacies . JAMA 2014 Feb 5 [Epub ahead of print] . 2. U.S. Department of Health and Human Services . The Health Consequences of Smoking—50 Years of Progress : A Report of the Surgeon General. Atlanta, GA : U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, 2014 . Available from http://www.surgeongeneral.gov/library/reports/50-years-of-progress/ 3. Schroeder SA , Koh HK . Tobacco control 50 years after the 1964 Surgeon General's report . JAMA 2014 ; 311 : 141 3 . 4. Koh HK , Sebelius KG . Ending the tobacco epidemic . JAMA 2012 ; 308 : 767 8 .
2022-05-27 11:56:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2037973403930664, "perplexity": 6503.250851420374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662647086.91/warc/CC-MAIN-20220527112418-20220527142418-00480.warc.gz"}
https://en.wikipedia.org/wiki/Symplectic_matrix
# Symplectic matrix In mathematics, a symplectic matrix is a 2n×2n matrix M with real entries that satisfies the condition ${\displaystyle M^{\text{T}}\Omega M=\Omega \,,}$ (1) where MT denotes the transpose of M and Ω is a fixed 2n×2n nonsingular, skew-symmetric matrix. This definition can be extended to 2n×2n matrices with entries in other fields, e.g. the complex numbers. Typically Ω is chosen to be the block matrix ${\displaystyle \Omega ={\begin{bmatrix}0&I_{n}\\-I_{n}&0\\\end{bmatrix}}}$ where In is the n×n identity matrix. The matrix Ω has determinant +1 and has an inverse given by Ω−1 = ΩT = −Ω. Every symplectic matrix has unit determinant, and the 2n×2n symplectic matrices with real entries form a subgroup of the special linear group SL(2n, R) under matrix multiplication, specifically a connected noncompact real Lie group of real dimension n(2n + 1), the symplectic group Sp(2n, R). The symplectic group can be defined as the set of linear transformations that preserve the symplectic form of a real symplectic vector space. An example of a group of symplectic matrices is the group of three symplectic 2x2-matrices consisting in the identity matrix, the upper triagonal matrix and the lower triangular matrix, each with entries 0 and 1. ## Properties Every symplectic matrix is invertible with the inverse matrix given by ${\displaystyle M^{-1}=\Omega ^{-1}M^{\text{T}}\Omega .}$ Furthermore, the product of two symplectic matrices is, again, a symplectic matrix. This gives the set of all symplectic matrices the structure of a group. There exists a natural manifold structure on this group which makes it into a (real or complex) Lie group called the symplectic group. It follows easily from the definition that the determinant of any symplectic matrix is ±1. Actually, it turns out that the determinant is always +1 for any field. One way to see this is through the use of the Pfaffian and the identity ${\displaystyle {\mbox{Pf}}(M^{\text{T}}\Omega M)=\det(M){\mbox{Pf}}(\Omega ).}$ Since ${\displaystyle M^{\text{T}}\Omega M=\Omega }$ and ${\displaystyle {\mbox{Pf}}(\Omega )\neq 0}$ we have that det(M) = 1. When the underlying field is real or complex, elementary proof is obtained by factoring the inequality ${\displaystyle \det(M^{\text{T}}M+I)\geq 1}$.[1] Suppose Ω is given in the standard form and let M be a 2n×2n block matrix given by ${\displaystyle M={\begin{pmatrix}A&B\\C&D\end{pmatrix}}}$ where A, B, C, D are n×n matrices. The condition for M to be symplectic is equivalent to the two following equivalent conditions[2] ${\displaystyle A^{\text{T}}C,B^{\text{T}}D}$ symmetric, and ${\displaystyle A^{\text{T}}D-C^{\text{T}}B=I}$ ${\displaystyle AB^{\text{T}},CD^{\text{T}}}$ symmetric, and ${\displaystyle AD^{\text{T}}-BC^{\text{T}}=I}$ When n = 1 these conditions reduce to the single condition det(M) = 1. Thus a 2×2 matrix is symplectic iff it has unit determinant. With Ω in standard form, the inverse of M is given by ${\displaystyle M^{-1}=\Omega ^{-1}M^{\text{T}}\Omega ={\begin{pmatrix}D^{\text{T}}&-B^{\text{T}}\\-C^{\text{T}}&A^{\text{T}}\end{pmatrix}}.}$ The group has dimension n(2n + 1). This can be seen by noting that the group condition implies that ${\displaystyle \Omega M^{\text{T}}\Omega M=-I}$ this gives equations of the form ${\displaystyle -\delta _{ij}=\sum _{k=1}^{n}m_{k,i+n}m_{n+k,j}-m_{n+k,i+n}m_{n,j}-m_{k,i}m_{n+k,j}+m_{k,i}m_{k,j}}$ where ${\displaystyle m_{ij}}$ is the i,j-th element of M. 
The sum is antisymmetric with respect to indices i,j, and since the left hand side is zero when i differs from j, this leaves n(2n-1) independent equations. ## Symplectic transformations In the abstract formulation of linear algebra, matrices are replaced with linear transformations of finite-dimensional vector spaces. The abstract analog of a symplectic matrix is a symplectic transformation of a symplectic vector space. Briefly, a symplectic vector space is a 2n-dimensional vector space V equipped with a nondegenerate, skew-symmetric bilinear form ω called the symplectic form. A symplectic transformation is then a linear transformation L : VV which preserves ω, i.e. ${\displaystyle \omega (Lu,Lv)=\omega (u,v).}$ Fixing a basis for V, ω can be written as a matrix Ω and L as a matrix M. The condition that L be a symplectic transformation is precisely the condition that M be a symplectic matrix: ${\displaystyle M^{\text{T}}\Omega M=\Omega .}$ Under a change of basis, represented by a matrix A, we have ${\displaystyle \Omega \mapsto A^{\text{T}}\Omega A}$ ${\displaystyle M\mapsto A^{-1}MA.}$ One can always bring Ω to either the standard form given in the introduction or the block diagonal form described below by a suitable choice of A. ## The matrix Ω Symplectic matrices are defined relative to a fixed nonsingular, skew-symmetric matrix Ω. As explained in the previous section, Ω can be thought of as the coordinate representation of a nondegenerate skew-symmetric bilinear form. It is a basic result in linear algebra that any two such matrices differ from each other by a change of basis. The most common alternative to the standard Ω given above is the block diagonal form ${\displaystyle \Omega ={\begin{bmatrix}{\begin{matrix}0&1\\-1&0\end{matrix}}&&0\\&\ddots &\\0&&{\begin{matrix}0&1\\-1&0\end{matrix}}\end{bmatrix}}.}$ This choice differs from the previous one by a permutation of basis vectors. Sometimes the notation J is used instead of Ω for the skew-symmetric matrix. This is a particularly unfortunate choice as it leads to confusion with the notion of a complex structure, which often has the same coordinate expression as Ω but represents a very different structure. A complex structure J is the coordinate representation of a linear transformation that squares to −1, whereas Ω is the coordinate representation of a nondegenerate skew-symmetric bilinear form. One could easily choose bases in which J is not skew-symmetric or Ω does not square to −1. Given a hermitian structure on a vector space, J and Ω are related via ${\displaystyle \Omega _{ab}=-g_{ac}{J^{c}}_{b}}$ where ${\displaystyle g_{ac}}$ is the metric. That J and Ω usually have the same coordinate expression (up to an overall sign) is simply a consequence of the fact that the metric g is usually the identity matrix. 
## Diagonalisation and decomposition • For any positive definite real symplectic matrix S there exists U in U(2n,R) such that ${\displaystyle S=U^{\text{T}}DU\quad {\text{for}}\quad D=\operatorname {diag} (\lambda _{1},\ldots ,\lambda _{n},\lambda _{1}^{-1},\ldots ,\lambda _{n}^{-1}),}$ where the diagonal elements of D are the eigenvalues of S.[3] ${\displaystyle S=UR\quad {\text{for}}\quad U\in \operatorname {U} (2n,\mathbb {R} ){\text{ and }}R\in \operatorname {Sp} (2n,\mathbb {R} )\cap \operatorname {Sym} _{+}(2n,\mathbb {R} ).}$ • Any real symplectic matrix can be decomposed as a product of three matrices: ${\displaystyle S=O{\begin{pmatrix}D&0\\0&D^{-1}\end{pmatrix}}O',}$ such that O and O' are both symplectic and orthogonal and D is positive-definite and diagonal.[4] This decomposition is closely related to the singular value decomposition of a matrix and is known as an 'Euler' or 'Bloch-Messiah' decomposition. ## Complex matrices If instead M is a 2n×2n matrix with complex entries, the definition is not standard throughout the literature. Many authors [5] adjust the definition above to ${\displaystyle M^{*}\Omega M=\Omega \,.}$ (2) where M* denotes the conjugate transpose of M. In this case, the determinant may not be 1, but will have absolute value 1. In the 2×2 case (n=1), M will be the product of a real symplectic matrix and a complex number of absolute value 1. Other authors [6] retain the definition (1) for complex matrices and call matrices satisfying (2) conjugate symplectic.
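A quick numerical illustration of the defining condition, the determinant statement and the inverse formula above; this is only a sketch, and the particular matrix is an arbitrary block-diagonal example built from the $A^{\text{T}}D-C^{\text{T}}B=I$ criterion with $B=C=0$.

```python
import numpy as np

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])   # the standard Omega

# Example symplectic matrix: take B = C = 0 and D = (A^T)^{-1}, so A^T D = I.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
M = np.block([[A, np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(A).T]])

print(np.allclose(M.T @ Omega @ M, Omega))      # defining condition holds
print(np.isclose(np.linalg.det(M), 1.0))        # determinant is +1
M_inv = np.linalg.inv(Omega) @ M.T @ Omega      # inverse formula from the text
print(np.allclose(M_inv @ M, np.eye(2 * n)))    # it really is the inverse
```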
2017-07-22 10:21:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 27, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968929648399353, "perplexity": 293.98873386794634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00171.warc.gz"}
https://socratic.org/questions/how-do-you-prove-1-tan-2-x-sec-2-x-1
# How do you prove 1+tan^2 (x) = sec^2 (x)? Oct 1, 2016 See explanation... #### Explanation: Starting from: ${\cos}^{2} \left(x\right) + {\sin}^{2} \left(x\right) = 1$ Divide both sides by ${\cos}^{2} \left(x\right)$ to get: ${\cos}^{2} \frac{x}{\cos} ^ 2 \left(x\right) + {\sin}^{2} \frac{x}{\cos} ^ 2 \left(x\right) = \frac{1}{\cos} ^ 2 \left(x\right)$ which simplifies to: $1 + {\tan}^{2} \left(x\right) = {\sec}^{2} \left(x\right)$
2020-02-22 01:12:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9811253547668457, "perplexity": 7313.335677188686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00316.warc.gz"}
http://jokerwang.com/wp-content/one/314.html
There are $$2n$$ complex numbers that satisfy both $$z^{28} - z^{8} - 1 = 0$$ and $$|z| = 1$$. These numbers have the form $$z_{m} = \cos\theta_{m} + i\sin\theta_{m}$$, where $$0\leq\theta_{1} < \theta_{2} < \ldots < \theta_{2n} < 360$$ and angles are measured in degrees. Find the value of $$\theta_{2} + \theta_{4} + \ldots + \theta_{2n}$$. (19th AIME II, 2001, Problem 14)
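A brief solution sketch, added for context rather than taken from the original post (details should be checked independently): writing $z = \operatorname{cis}\theta$, the equation rearranges to $z^{8}(z^{20}-1)=1$. Taking absolute values gives $|z^{20}-1|=1$, so $z^{20}$ is a point of the unit circle at distance $1$ from $1$, i.e. $z^{20}=\operatorname{cis}(\pm 60^\circ)$. Matching arguments then forces either $20\theta \equiv 60^\circ$ with $8\theta \equiv 240^\circ$, or $20\theta \equiv 300^\circ$ with $8\theta \equiv 120^\circ$ (mod $360^\circ$). Solving these congruences gives $\theta \in \{15, 75, 105, 165, 195, 255, 285, 345\}$ degrees, so $2n = 8$ and $\theta_{2}+\theta_{4}+\theta_{6}+\theta_{8} = 75+165+255+345 = 840$.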
2017-08-21 21:25:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631528615951538, "perplexity": 120.61282240359607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109670.98/warc/CC-MAIN-20170821211752-20170821231752-00671.warc.gz"}
http://www.exampleproblems.com/wiki/index.php/Pole_(complex_analysis)
# Pole (complex analysis) In complex analysis, a pole of a holomorphic function is a certain type of simple singularity that behaves like the singularity $1/z^n$ at z = 0. A pole of the function f(z) is a point z = a such that f(z) approaches infinity as z approaches a. [Figure: the absolute value of the Gamma function, which becomes infinite at its poles (left); on the right the function has no poles and just increases quickly.] Formally, suppose U is an open subset of the complex plane C, a is an element of U and f : U − {a} → C is a holomorphic function. If there exists a holomorphic function g : U → C and a natural number n such that ${\displaystyle f(z)={\frac {g(z)}{(z-a)^{n}}}}$ for all z in U − {a}, then a is called a pole of f. If n is chosen as small as possible, then n is called the order of the pole. A pole of order 1 is called a simple pole. Equivalently, a is a pole of order n ≥ 0 for a function f if there exists an open neighbourhood U of a such that f : U - {a} → C is holomorphic and the limit ${\displaystyle \lim _{z\to a}(z-a)^{n}f(z)}$ exists and is different from 0. The point a is a pole of order n of f if and only if all the terms of the Laurent series expansion of f around a below degree −n vanish and the term in degree −n is not zero. A pole of order 0 is a removable singularity. In this case the limit $\lim_{z\to a} f(z)$ exists as a complex number. If the order is bigger than 0, then $\lim_{z\to a} f(z) = \infty$. If the first derivative of a function f has a simple pole at a, then a is a branch point of f. (The converse need not be true.) A non-removable singularity that is not a pole or a branch point is called an essential singularity. A holomorphic function whose only singularities are poles is called meromorphic.
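A small worked example in the spirit of the definition above (added for illustration; it is not part of the original page): the function $f(z) = \frac{1}{z(z-1)^{2}}$ has a simple pole at $z = 0$, since $f(z) = g(z)/z$ with $g(z) = 1/(z-1)^{2}$ holomorphic and nonzero near $0$, and a pole of order $2$ at $z = 1$, since $f(z) = h(z)/(z-1)^{2}$ with $h(z) = 1/z$ holomorphic and nonzero near $1$. Correspondingly, $\lim_{z\to 0} z\,f(z) = 1 \neq 0$ and $\lim_{z\to 1} (z-1)^{2} f(z) = 1 \neq 0$.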
2020-10-31 17:09:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693662524223328, "perplexity": 239.3613090905976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00289.warc.gz"}
http://cms.math.ca/cjm/kw/quasi-commutativity
General Preservers of Quasi-Commutativity Let ${ M}_n$ be the algebra of all $n \times n$ matrices over $\mathbb{C}$. We say that $A, B \in { M}_n$ quasi-commute if there exists a nonzero $\xi \in \mathbb{C}$ such that $AB = \xi BA$. In the paper we classify bijective not necessarily linear maps $\Phi \colon M_n \to M_n$ which preserve quasi-commutativity in both directions. Keywords: general preservers, matrix algebra, quasi-commutativity. Categories: 15A04, 15A27, 06A99
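A concrete illustration of the definition, added for context and not part of the abstract: the matrices $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ in $M_2$ satisfy $AB = -BA$, so they quasi-commute with $\xi = -1$ even though they do not commute.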
2015-02-01 22:44:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364655017852783, "perplexity": 370.9941238260059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122189854.83/warc/CC-MAIN-20150124175629-00227-ip-10-180-212-252.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/165604/score-multiplier-based-on-lowest-time
# Score multiplier based on lowest time I'm making a score system that will be based on several factors, how many collectibles you have obtained, how few hits you have taken, etc. I want to add in a multiplier based on how quickly the player manages to finish the game. So far the only thing I've thought of is to have a maximum time, and to subtract the final time from that number. So as an example, say I set the maximum number to 300 seconds and somebody beat the game in 220 seconds, that would leave 80. This could be divided into a smaller number and then used as a multiplier on the total score. It works but feels a little limited. What would be a better way to achieve this? • Can you describe the specific way in which this feels "limited" to you? In what way would you like to see this improved? – DMGregory Nov 24 '18 at 8:21 • Well I'm just not a huge fan of this dependency on another arbitrary number really. It feels like there's a more clever way to mathematically achieve this. I suppose I may be wrong but I just wanted to hear some other thoughts on it. – Danny Santos Nov 24 '18 at 8:34 So it sounds like you want an equation that starts at a large number where t = 0, and approaches 1 as t approaches infinity, yes? For example it would be a multiplier of 10 if they complete the level in 0 seconds, and a multiplier of 1 if they complete the level in infinity seconds. Here’s a starting point: $$\frac{9}{100 ^ {t/300}} + 1$$ Where t is the amount of time taken, 9 is 1 less than the maximum multiplier (10 in this case), and 100 and 300 can be tuned to adjust how quickly the multiplier approaches 1. With 100 and 300, the multiplier moves down from 10 to 2 at about 150 time units, and is very close to 1 at 300 time units. You can use this formula: $$\frac{t_{half}}{t_{half} + t} \cdot {x}$$ If t = 0, score equals to x, the maximum score. If t = thalf, score equals to half of x. If t goes to infinity, score goes to 0.
2020-06-02 09:17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6716545224189758, "perplexity": 313.497218914089}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347423915.42/warc/CC-MAIN-20200602064854-20200602094854-00022.warc.gz"}
https://forum.allaboutcircuits.com/threads/how-does-voltage-lead-current-in-an-inductive-circuit.123680/
# HOW does voltage lead current in an inductive circuit? #### Sparky82 Joined May 1, 2016 2 Hello. Thanks in advance for anyone able to provide clarity to my questions. I'm desperately trying to make sense of PHASE SHIFT. I've read statement after statement describing how voltage leads current in an inductive circuit, and how current leads voltage in a capacitive circuit; however, every statement leaves a lot of open ends through a lack of specifics and/or practical visuals. Yes, I can interpret a sine chart showing the 90 degree phase displacements and how at peak X we have zero value Y...that's wonderful and all; however, HOW is that happening in the real world before math is used to describe it?!?! Does the 90 degree flux line from a coiled conductor cut back on itself, essentially imposing a self-manifested wall on source current but voltage is still able to truck on by unfettered? Wtf? A textbook's typical description is something like "an inductor opposes the change in current...blah-blah-blah" Oh? Is that a current change that's increasing or decreasing at the same time the flux is doing X? Is the back emf of the collapsing mag flux pushing source voltage back FROM the supply or TO the supply? Is the "voltage" that's leading an "EMF" voltage or Potential Difference voltage or Resultant Voltage or Apple-Pie in the Sky voltage?!?! Like, Jesus Christ, why is this so hard to get a clear answer to? Literally hundreds of sources repeating the same garb. I'd be forever appreciative if someone can set the record straight and provide some explicitly concise detail and visuals. Links to suggested sources are appreciated and will be combed over. Thanks #### Papabravo Joined Feb 24, 2006 12,853 Sorry man. AFAIK there is no hand waving "magic bullet" explanation that will satisfy you. Either you understand what is happening or you don't. If it is any consolation to you I didn't understand it either the first time I came across it. I kept reading and experimenting and nipping around the edges and it came upon me suddenly. If this stuff was easy then everybody could do it and salaries for degreed EE's would plummet. Oh....wait.....my bad, that is exactly what has happened. I guess it is not as tough as I thought. Either that or employers no longer care. If you reject and cannot understand the precise formulation that leads to the insight then I fear there is no helping you. By all means let everybody try, but I fear it is a fool's errand. #### Sparky82 Joined May 1, 2016 2 Sorry man. AFAIK there is no hand waving "magic bullet" explanation that will satisfy you. Either you understand what is happening or you don't. If it is any consolation to you I didn't understand it either the first time I came across it. I kept reading and experimenting and nipping around the edges and it came upon me suddenly. If this stuff was easy then everybody could do it and salaries for degreed EE's would plummet. Oh....wait.....my bad, that is exactly what has happened. I guess it is not as tough as I thought. Either that or employers no longer care. Well, I appreciate the laugh, that's for sure. I just find it funny that the world electrical community that prides itself on "reason and clarity" is anything but....at least the scribes aren't.
I'm not a hapless student, I've received honours in previous studies and am top-of-class with electrical studies....along with continually stumping my profs, I'm just fed up that you can get so far and know very little; it simply won't do. (Minor edit by moderator) Last edited by a moderator: #### #12 Joined Nov 30, 2010 18,190 First, you have to apply voltage to the inductor. The inductor refuses to allow current to change instantly because the energy you just applied is being used to create a magnetic field. For any voltage, there is a ramp up of current inversely proportional to the inductance. In steady flow, the magnet field just stands there and the impedance seems to be the ohmic resistance of the wire that was used to make the inductor. When you try to open the circuit, the magnetic field collapses and its energy wants to keep the current flowing the same way it was already flowing. With a capacitor, it's like a bucket. You have to pour current into it before the voltage rises. Current leads voltage. With an inductor, you have to apply voltage and wait for the current to increase. Voltage leads current. #### WBahn Joined Mar 31, 2012 25,217 Hello. Thanks in advance for anyone able to provide clarity to my questions. I'm desperately trying to making sense of PHASE SHIFT. I've read statement after statement regarding the description of how voltage leads current in an inductive circuit, and how current leads voltage in an inductive circuit ; however, every statement leaves a lot of open ends through lack of description due to specifics and/or not using practical visuals. Yes, I can interpret a sine chart showing the 90 degree phase displacements and how at peak X we have zero value Y...that's wonderful and all; however, HOW is that happening in the real world before math is used to describe it?!?! Does the 90 degree flux line from a coiled conductor cut back on itself, essentially imposing a self-manifested wall on source current but voltage is still able to truck on by unfettered? Wtf? A textbooks typical description is typically something like "an inductor opposes the change in current...blah-blah-blah" Oh? Is that a current change that's increasing or decreasing at the same time the flux is doing X? Is the back emf of the collapsing mag flux pushing source voltage back FROM the supply or TO the supply? Is the "voltage" that's leading an "EMF" voltage or Potential Difference voltage or Resultant Voltage or Apple-Pie in the Sky voltage?!?! Like, Jesus Christ, why is this so hard to get a clear answer to? Literally hundreds of sources repeating the same garb. I'd be forever appreciative if someone can set the record straight and provide some explicitly concise detail and visuals. Links to suggested sources are appreciated and will be combed over. Thanks Hate to break it to you, but that math "garb" that you want to avoid IS the language of physics and, as a consequence, electronics. There is only so much understanding you can achieve without understanding the math that describes it. By the definition of inductance, the voltage across an inductor and the current through an inductor are related by the following constitutive relation: $$V_L \; = \; L \frac{dI_L}{dt}$$ If the current is sinusoidal then take the derivative and you will see that the voltage will be a sinusoid that leads that current by ninety degrees. A similar, but opposite, relationship holds for a capacitor. 
To look more closely as some of your questions: A textbooks typical description is typically something like "an inductor opposes the change in current...blah-blah-blah" Oh? Is that a current change that's increasing or decreasing at the same time the flux is doing X? Is the back emf of the collapsing mag flux pushing source voltage back FROM the supply or TO the supply? Is the "voltage" that's leading an "EMF" voltage or Potential Difference voltage or Resultant Voltage or Apple-Pie in the Sky voltage?!?! Like, Jesus Christ, why is this so hard to get a clear answer to? Literally hundreds of sources repeating the same garb. Yes, an inductor produces a voltage across it that opposes any change in the current through it. If you try to lower the current, the induced voltage will be of a polarity that would attempt to increase the current, while if you try to increase the current the induced voltage will be of a polarity that would attempt to decrease the current. This is a direct consequence of the fact that the current in the inductor produces a magnetic flux through the coil. When you try to change the flux (by changing the current), the changing flux produces a voltage across those same coil windings. If the flux is increasing the voltage is one polarity and if the flux is decreasing the voltage is the opposite polarity. That the induced voltage tends to oppose the change in flux that is producing it is captured by Lenz's Law (which, like most "laws" of this type are merely mathematical descriptions of observed phenomena). Many of the phrases you are using make little sense. You don't "push source voltage back" either from or to the supply. The induced voltage appears as a voltage difference across the inductor. #### Papabravo Joined Feb 24, 2006 12,853 I'm tempted to give odds on the insufficiency of those answers, as excellent as they are. #### crutschow Joined Mar 14, 2008 24,295 First thing to note is that inductors store energy in their inductance. The analog of this in the mechanical world is the inertia of a mass. If you apply a force (voltage) to a mass (inductance) then it will slowly pick up velocity (current). The rate of this increase is directly proportional to the force (voltage) and inversely proportional to the mass (inductance). The velocity of the mass (current in the inductance) stores energy (1/2 mV² for mass and 1/2 LI² for inductance). Now if you remove the force (voltage) this stored energy will tend to keep the mass (current) moving. Even if you now reversed the applied force (voltage) the mass (current) will still want to keep moving in the same direction, but slowing down until all the energy has dissipated. If the force (voltage) is periodically applied and reversed in a sinusoidal fashion then the system will settle into a steady-state oscillation where the velocity (current) will be such that it keeps increasing in the forward direction until the forward force (voltage) returns to zero, at which time the velocity (current) will have reached its peak value. This is true because the velocity (current) keeps increasing as long as there is any value of force (voltage) being applied. Similarly the velocity (current) reaches a peak in the reverse direction when the reverse force (voltage) again returns to zero. This means the velocity (current) always lags the force (voltage) by 90°. Does that help make sense of it for you? 
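As an editorial aside, not part of the original thread: the relation $V_L = L \frac{dI_L}{dt}$ quoted earlier and the 90° statements in the two replies above are easy to check numerically. This NumPy sketch differentiates a sinusoidal current and measures the phase of the resulting inductor voltage; the inductance and frequency are arbitrary placeholder values.

```python
import numpy as np

L = 1e-3                                  # 1 mH, arbitrary placeholder value
f = 50.0                                  # 50 Hz drive, two full periods below
t = np.linspace(0.0, 0.04, 4001)
i = np.sin(2 * np.pi * f * t)             # sinusoidal current through the inductor
v = L * np.gradient(i, t)                 # v_L = L * di/dt, computed numerically

def phase_at(signal):
    # Phase (radians) of the signal's component at the drive frequency f.
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f * t)))

lead_deg = np.degrees(phase_at(v) - phase_at(i))
print(round(lead_deg, 1))                 # ~90.0: the voltage leads the current by a quarter cycle
```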
#### wayneh Joined Sep 9, 2010 16,228 FWIW, I never got it until I became very comfortable with the calculus and differential equations describing it all. I had math first and actually learned it well enough that I still use it decades later. Then when I encountered electronics, I could just "see" it all in terms that made sense. I frankly can't imagine having any feel for it at all without that solid foundation of math. For me, knowing the math first made it like having F=ma and then deriving the laws of motion. It's easy. But trying to just remember or intuit them is impossible. #### Papabravo Joined Feb 24, 2006 12,853 If anybody thinks classical mechanics is tough to understand without the mathematics, then quantum mechanics and general relativity will positively blow you to Bermuda! Last edited: #### wayneh Joined Sep 9, 2010 16,228 I aced P-chem (quantum mechanics) but I feel like I learned less in that class, with more pain, than any other class I took in college. #### jpanhalt Joined Jan 18, 2008 8,711 #### nsaspook Joined Aug 27, 2009 6,885 Well, I appreciate the laugh, that's for sure. I just find it funny that the world electrical community that prides itself on "reason and clarity" is anything but....at least the scribes aren't. I'm not a hapless student, I've received honours in previous studies and am top-of-class with electrical studies....along with continually stumping my profs, I'm just fed up that you can get so far and know dick-shit; it simply won't do. You know a lot more than you think you do. This brick wall is pretty common because you don't yet have the proper geometric theory in your head. You can keep banging at the circuit equations with the help of phasor diagrams until you can do them in your sleep and finally understand what they mean or you can step back a bit and examine Electromechanics a bit as 4-D spacetime (space and time) fields to understand what we commonly think of the separate (electric) voltage and (magnetic) current elements are really different views of one dual-entity EM field. This won't likely help with your present questions about EMF but it might open a new vista on how to interpret your studies. So when we see the voltage current waveform in a purely inductive circuit you can note the relationship between the rapid rate of change of the charges at the zero crossing matches to the peaks of current (magnetic field) while the slow rates of change at the voltage peaks matches current (magnetic field) nulls. A changing magnetic fields give rise to a changing opposing electric field that limits the current. The energy of this circuit is in the fields not the charge carrier electrons (current) so the stored EM energy in the inductor is just sloshing back and forth instead of being dissipated. Simple harmonic motion How we view this (EM energy as electric or magnetic) depends on the 'projection' we see as it moves in space and time. http://www.technick.net/public/code/cp_dpage.php?aiocp_dp=guide_dft_projection_circular_motion Last edited: #### crutschow Joined Mar 14, 2008 24,295 FWIW, I never got it until I became very comfortable with the calculus and differential equations describing it all. I had math first and actually learned it well enough that I still use it decades later. Then when I encountered electronics, I could just "see" it all in terms that made sense. I frankly can't imagine having any feel for it at all without that solid foundation of math. 
For me, knowing the math first made it like having F=ma and then deriving the laws of motion. It's easy. But trying to just remember or intuit them is impossible. I really admire anyone who can "see" in mathematical terms. I can't. I'm a visual guy and math is not visual to me. I learned all the differential equations describing circuit behavior and can use them when necessary but they do little to help my comprehension of the circuit operation. For that I need to have (for lack of a better term) an intuitive feel for what's happening. The math is there to help me quantify the circuit operation, but that's it. For example, I had little real understanding of how an inductor works based upon the inductive magnetic field equations. It was sort of black magic until I came upon the inertia analogy. Then it became very clear to me how inductors work. For example it was then easy to see why inductive voltage spikes occur and the correct polarity of those spikes, no math needed. (I've seen a number of people on these forums mangle that nature of that effect by their misinterpretation of the equations.) The analogy also made it easy for me to see how an LC oscillation is simply the periodic transfer of energy back and forth between the capacitor voltage electric field and the inductive current magnetic field. #### recklessrog Joined May 23, 2013 985 I really admire anyone who can "see" in mathematical terms. I can't. I'm a visual guy and math is not visual to me I was trying to explain how a moving magnetic field in a coil induced a voltage in a coil that could produce a magnetic force. No mater how I drew it or explained it, one guy couldn't get it. To show him in a "real physical way" I got a 12" length of plumbers copper pipe, a rare earth high power button magnet and dropped it down the tube. There was look of sheer amazement on the face of the guy as he saw that it dropped 12" through the tube slower than in free air. He then grasped the principle that the moving magnet induced an electric current in the copper tube that in turn created a magnetic field that opposed the movement of the magnet thereby slowing it's decent. Then we applied the maths which he fully comprehended having seen it in action . #### BR-549 Joined Sep 22, 2013 4,938 Math is the BANE of understanding. No one should be taught math until after graduating. You can not "reverse engineer a equation" to determine process or cause. This is what modern science does. This is why they teach the fallacy of the standard model for 100 years. To satisfy an equation. If math is useful as experimental tool..................why after 100 years, why can't modern science tell me what an electron is, what is looks like, and how does it physically change energy level? Yeah right.....................math clears everything up. Cause and process will give you the right math. But the math will not give you the right process. Electrons do NOT orbit protons. Silly silly math. #### hp1729 Joined Nov 23, 2015 2,304 Hello. Thanks in advance for anyone able to provide clarity to my questions. I'm desperately trying to making sense of PHASE SHIFT. I've read statement after statement regarding the description of how voltage leads current in an inductive circuit, and how current leads voltage in an inductive circuit ; however, every statement leaves a lot of open ends through lack of description due to specifics and/or not using practical visuals. 
Yes, I can interpret a sine chart showing the 90 degree phase displacements and how at peak X we have zero value Y...that's wonderful and all; however, HOW is that happening in the real world before math is used to describe it?!?! Does the 90 degree flux line from a coiled conductor cut back on itself, essentially imposing a self-manifested wall on source current but voltage is still able to truck on by unfettered? Wtf? A textbooks typical description is typically something like "an inductor opposes the change in current...blah-blah-blah" Oh? Is that a current change that's increasing or decreasing at the same time the flux is doing X? Is the back emf of the collapsing mag flux pushing source voltage back FROM the supply or TO the supply? Is the "voltage" that's leading an "EMF" voltage or Potential Difference voltage or Resultant Voltage or Apple-Pie in the Sky voltage?!?! Like, Jesus Christ, why is this so hard to get a clear answer to? Literally hundreds of sources repeating the same garb. I'd be forever appreciative if someone can set the record straight and provide some explicitly concise detail and visuals. Links to suggested sources are appreciated and will be combed over. Thanks No math required. As current starts to flow a magnetic field starts to build up around each wire in the coil. The expanding magnetic field crosses other wires inducing a current in the opposite direction. So current flow is inhibited but not the voltage. #### WBahn Joined Mar 31, 2012 25,217 I aced P-chem (quantum mechanics) but I feel like I learned less in that class, with more pain, than any other class I took in college. For me it was statistical thermodynamics. The professor walked into class the first day and threw two six-sided die on the table and asked what was the probability of rolling a seven. We figured it out and told him and then he said, "From these humble beginnings we will hence forth derive all of classical thermodynamics." And then he spent the entire semester doing precisely that! It was interesting and fascinating and I walked away with virtually nothing useful actually learned (but I'm still glad I took it and sometime I wouldn't mind retaking it because I think I am in a much better position to appreciate the subtle details that went sailing past me before). #### WBahn Joined Mar 31, 2012 25,217 I really admire anyone who can "see" in mathematical terms. I can't. I'm a visual guy and math is not visual to me. Same here. I can sometimes get glimpses into the mathematical world, but usually have to make an effort to map the mathematical ramifications into a world in which I can visualize a reasonable analogy. I have a colleague that CAN think purely in terms of the math -- and he does the opposite in that he makes an effort to map what he sees into mathematical terms and relationships. He is scary brilliant and both a lot of fun and a bit intimidating to work with. Fortunately my strengths offset some of his few weaknesses, so while I am definitely the junior member of the team, we make a remarkably good team, each finding things that the other missed. #### wayneh Joined Sep 9, 2010 16,228 ...(but I'm still glad I took it and sometime I wouldn't mind retaking it because I think I am in a much better position to appreciate the subtle details that went sailing past me before). I feel that way about many of my college classes, but most assuredly NOT quantum mechanics. I don't think I'm particularly gifted in math. 
My advantage was getting exposed to calculus in high school and then having a very rigorous and advanced calculus class my first semester of college. After that, it was easy to internalize anything that came later in physics and science classes. When vectors were introduced in Physics, most of my fellows were blown away and struggling. It was trivial to me and even felt dumbed-down. I was able to think about the actual physics involved and didn't have to waste brainpower on scalars and vectors and cross products. None of that helped in P.Chem. I'd never seen most of the math that gets used in P.Chem and I suffered along with everybody else. #### WBahn Joined Mar 31, 2012 25,217 I don't recall my P-chem class involving quantum mechanics, but I took it as a first semester freshman and was quite unprepared for the math. I was taking Calc II at the same time and also Physics I. It was the first time that I had dealt with partial derivatives and most of my effort was struggling with the mechanics and I wasn't able to focus on the concepts. It's very possible that the material was straight out of quantum mechanics and that either it wasn't presented in that vein or I wasn't in a position to appreciate it. When I hit quantum later in my physics curriculum I was much better prepared for it. But I sure agree that having a solid math understanding makes it SO much easier to learn and actually comprehend any subject that uses that math -- and that the reverse is also very true in that if you have to fight the math, you will have a much harder time understanding the concepts involved in the subject you are studying. Last edited:
2020-03-28 20:26:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.510888397693634, "perplexity": 962.4842804132592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00316.warc.gz"}
http://codeforces.com/blog/entry/83998
kanisht_09's blog By kanisht_09, history, 3 months ago, Can someone please explain distributing dp in detail. I found it in the editorial of the atcoder problem Leaping Tak. Given below is the editorial link: https://atcoder.jp/contests/abc179/editorial/133 » 3 months ago, Distributing dp is also often referred to as push dp, and receiving dp is also called pull dp. To briefly go over the differences between them, let's consider this problem. In both methods, our definition of dp is the same: $dp[x] = \text{minimum number of coins required to make value x}$ Push dp: For push dp, you push results from currently available results. So, if you consider adding a coin with value $c$ to your knapsack, and considering you already have the optimal value for some $x$ in $dp[x]$, then, from using $dp[x]$ coins to make value $x$, you can transition to using $dp[x]+1$ coins to make value $x+c$. More programmatically, we update $dp[x+c] = min( dp[x+c], dp[x] + 1 )$. Notice how we "pushed" from an already computed value $dp[x]$ to $dp[x+c]$. Pull dp: For pull dp, you pull results for the current state from previously computed results. You want to compute $dp[x]$; let's say the last coin used to get to the current state was of value $c$, then your update looks like $dp[x] = min( dp[x], dp[x-c] + 1 )$. Here, you pull the result from the already computed value of $dp[x-c]$. Additional Note: If you think about these methods, you realise the importance of the order of processing. If the dp for your current state is not optimally calculated, and you push from there, and later optimally compute the dp for that state, then you can see that you would probably not get the optimal overall result. Thus, it leads you to the idea of a topological ordering of states, according to dependence on other states' results.
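A small Python sketch of the two update styles for the coin-change dp described above. The function names and the sample coin set are made up for illustration; only the two update rules come from the post.

```python
INF = float("inf")

def push_dp(coins, target):
    # dp[x] = minimum number of coins required to make value x
    dp = [INF] * (target + 1)
    dp[0] = 0
    for x in range(target + 1):            # states processed in increasing order,
        if dp[x] == INF:                   # so dp[x] is already final when we push from it
            continue
        for c in coins:
            if x + c <= target:
                dp[x + c] = min(dp[x + c], dp[x] + 1)   # push dp[x] forward to x + c
    return dp[target]

def pull_dp(coins, target):
    dp = [INF] * (target + 1)
    dp[0] = 0
    for x in range(1, target + 1):
        for c in coins:
            if c <= x:
                dp[x] = min(dp[x], dp[x - c] + 1)       # pull from the computed dp[x - c]
    return dp[target]

print(push_dp([1, 3, 4], 6), pull_dp([1, 3, 4], 6))     # both print 2 (3 + 3)
```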
2021-01-21 18:41:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6393530964851379, "perplexity": 1067.822400048617}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00457.warc.gz"}
https://www.strathweb.com/2020/09/introduction-to-quantum-computing-with-q-part-8-superdense-coding/
## Introduction to quantum computing with Q# – Part 8, Superdense coding Last time, we discussed the quantum teleportation protocol, which relies on the phenomenon of quantum entanglement to move an arbitrary quantum state from one qubit to another, even if they are spatially separated. Today, we shall continue exploring the scenarios enabled by entanglement, by looking at the concept called "superdense coding". It allows sending two classical bits of information by physically moving only a single qubit around, and is sometimes referred to as a conceptual inverse of teleportation. ### Superdense coding The protocol was first proposed by Charles Bennett and Stephen Wiesner in their 1992 paper Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. Just like in the case of quantum teleportation, superdense coding starts off with two actors, Alice and Bob, sharing an entangled pair (EPR pair) of qubits. Once the entanglement is established, they can go their separate ways. Later on, Alice decides that she wants to send Bob a classical two-bit message – $00$, $01$, $10$ or $11$. It turns out that, thanks to superdense coding, she can convey these two bits of classical information using a single qubit that is currently in her possession. Of course, at first glance this doesn't seem particularly exciting; after all, as we already know, a single qubit quantum state is continuous – the coefficients $\alpha$ and $\beta$ could be any values as long as $|\alpha|^2 + |\beta|^2 = 1$ – so there should be plenty of room to encode two classical bits there and send that over to Bob. The problem is of course that this information cannot be extracted out of the qubit – Bob would need to measure the qubit and receive one of the orthogonal computational basis states $\ket{0}$ or $\ket{1}$. The clever solution to this problem proposed by Bennett and Wiesner was to use a Bell state, or more specifically, one of the four orthogonal Bell states representing the basis for two-qubit systems. As we learnt earlier, the Hilbert space spanned by these orthogonal states is four-dimensional. Based on the message that she wants to send, Alice will encode the classical bits into the entangled pair by applying a relevant unitary transformation to her qubit and send it over to Bob using a quantum channel (e.g. an optical link between them). Upon a joint measurement in the Bell basis, Bob will decode the two classical bits from the two qubits now in his possession – exactly those that Alice planned. Ultimately, as Bennett and Wiesner point out, we still had to leverage two qubits to send two classical bits of information, which is not much different from having two classical bits sent over between the two parties in a classical way. However, in the quantum case, one of these qubits could be shared upfront and the communication protocol is later completed by physically moving only one of them. The communication of two bits via two particles, one of which remains fixed while the other makes a round trip, is no more efficient in number of particles or number of transmissions than the obvious scheme of directly encoding each bit in one transmitted particle. Nevertheless, the EPR scheme has the advantage of allowing some of the particle transmissions to take place before the message has been decided upon, perhaps at cheaper "off-peak" rates. The superdense coding protocol is an important theoretical construct in quantum information science.
At the same time, currently, its practical benefits as a "lower bandwidth" type of solution, as suggested in the original paper, are still rather questionable, especially given the infant state of quantum hardware. Generally speaking, using bits wherever possible is still dramatically cheaper and preferred over using the precious qubits. On top of that, in the field of quantum communications, there are still a lot of complexities related to stably moving and storing entangled particles over long distances. This was highlighted by David Mermin in his book Quantum Computer Science: An Introduction: Like dense coding, many tricks of quantum information theory, including (…) teleportation, rely on two or more people sharing entangled Qbits, prepared some time ago, carefully stored in their remote locations awaiting an occasion for their use. While the preparation of entangled Qbits (in the form of photons) and their transmission to distant places has been achieved, putting them into entanglement-preserving, local, long-term storage remains a difficult challenge. Despite these difficulties, there are additional application scenarios for the superdense coding protocol – for example, it is now clear that the field of quantum security can benefit greatly. The prerequisite of physically requiring both qubits to decode the encoded message makes superdense coding a very attractive tool for secure communications – it means that even if the qubit in transit is intercepted by an eavesdropper, it alone cannot be used to recover the initial message. One such approach was suggested by a group of Chinese researchers back in 2005 in their paper Quantum secure direct communication with high-dimension quantum superdense coding. Superdense coding was experimentally confirmed using a pair of entangled photons by K. Mattle, H. Weinfurter, P. Kwiat and A. Zeilinger from the University of Innsbruck in their 1996 paper Dense Coding in Experimental Quantum Communication. A diligent reader would at this point notice that, as we learnt in the previous part, the first realization of the teleportation protocol also happened at the University of Innsbruck under the leadership of Anton Zeilinger. ### Mathematics of superdense coding How does it actually work? It all becomes quite obvious very quickly, once we realize that the reason why superdense coding is often referred to as the inverse of quantum teleportation is that the decoding step that Bob needs to apply in the teleportation protocol is actually identical to the encoding step Alice has to apply in the superdense coding protocol. Alice and Bob start off with a shared pair of entangled qubits. We shall refer to it as $\ket{\varphi_{ab}}$ but it is actually one of the four Bell states, the maximally entangled $\ket{\Phi^+}$. $$\ket{\varphi_{ab}} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11}) = \ket{\Phi^+}$$ Alice and Bob can then set off in their separate directions, each one with a qubit of their own in their possession. Alice would then like to encode two bits of classical information into this shared entangled pair – and she can do it by applying a single-qubit unitary transformation to her qubit only.
• to encode a classical $00$, she applies the no-op $I$ transformation • to encode a classical $01$, she applies the $X$ transformation • to encode a classical $10$, she applies the $Z$ transformation • to encode a classical $11$, she applies the $Y$ transformation By doing so, she may end up transforming the state of the entangled pair into one of the three other Bell states $\ket{\Psi^+}$, $\ket{\Phi^-}$ or $\ket{\Psi^-}$. This is summarized algebraically below: $$(I \otimes I)\ket{\varphi_{ab}} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11}) = \ket{\Phi^+}$$ $$(X \otimes I)\ket{\varphi_{ab}} = \frac{1}{\sqrt{2}}(\ket{10} + \ket{01}) = \ket{\Psi^+}$$ $$(Z \otimes I)\ket{\varphi_{ab}} = \frac{1}{\sqrt{2}}(\ket{00} - \ket{11}) = \ket{\Phi^-}$$ $$(Y \otimes I)\ket{\varphi_{ab}} = \frac{1}{\sqrt{2}}(\ket{01} - \ket{10}) = \ket{\Psi^-}$$ (the last equality holds up to an unobservable global phase of $-i$) At that point Alice sends off her qubit to Bob using a quantum communications channel. Bob can decode the two bits of information using a reverse Bell circuit – a $CNOT$ on both qubits, followed by an $H$ transformation on the qubit he received. As explained in the last post, this allows Bob to decompose a joint measurement in the Bell basis into standard single-qubit measurements in the computational basis. Depending on the encoding operation applied by Alice, as soon as Bob runs the qubits through a $CNOT$ gate, he receives one of the following states: $$CNOT(\frac{1}{\sqrt{2}}(\ket{00} + \ket{11})) = \\ \frac{1}{\sqrt{2}}(\ket{00} + \ket{10}) = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \otimes \ket{0}$$ $$CNOT(\frac{1}{\sqrt{2}}(\ket{10} + \ket{01})) = \\ \frac{1}{\sqrt{2}}(\ket{11} + \ket{01}) = \frac{1}{\sqrt{2}}(\ket{1} + \ket{0}) \otimes \ket{1}$$ $$CNOT(\frac{1}{\sqrt{2}}(\ket{00} - \ket{11})) = \\ \frac{1}{\sqrt{2}}(\ket{00} - \ket{10}) = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) \otimes \ket{0}$$ $$CNOT(\frac{1}{\sqrt{2}}(\ket{01} - \ket{10})) = \\ \frac{1}{\sqrt{2}}(\ket{01} - \ket{11}) = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) \otimes \ket{1}$$ The $H$ will then modify the overall state accordingly (for each of the four cases): $$(H \otimes I) \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \otimes \ket{0} = \ket{00}$$ $$(H \otimes I) \frac{1}{\sqrt{2}}(\ket{1} + \ket{0}) \otimes \ket{1} = \ket{01}$$ $$(H \otimes I) \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) \otimes \ket{0} = \ket{10}$$ $$(H \otimes I) \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) \otimes \ket{1} = \ket{11}$$ In other words, Bob now has 2 qubits that will, upon measurement, with 100% probability yield the two classical bits that Alice encoded – $00$, $01$, $10$ or $11$. This completes the protocol. The overall schematics of it are shown on the circuit below. It illustrates the case for encoding $01$ – so using the Pauli $X$ gate, but, as we discussed moments ago, the other three would look identical, with the only difference being that particular gate. ### Q# implementation of superdense coding Let's now shift our attention to Q# code, and try to implement the protocol using that language. Of course the protocol itself is predominantly relevant in quantum communications – where a qubit is physically transferred from one location to another. Doing it in a single Q# application process does not bring a lot of value, but it is nevertheless a useful experiment to run, to try to confirm our understanding of the foundations of quantum information theory. We start off by allocating two qubits and entangling them, creating the Bell state $\ket{\Phi^+}$.
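(As a quick aside before the Q# walkthrough: the algebra above can also be cross-checked numerically. The NumPy sketch below is written for this summary and is not the Q# code the post builds; it applies each of the four encodings to $\ket{\Phi^+}$, runs the reverse Bell circuit, and prints the resulting measurement probabilities.)

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control = Alice's qubit (first tensor factor)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

for bits, gate in {"00": I, "01": X, "10": Z, "11": Y}.items():
    state = np.kron(gate, I) @ phi_plus          # Alice encodes on her qubit
    state = np.kron(H, I) @ (CNOT @ state)       # Bob's reverse Bell circuit
    probs = np.abs(state) ** 2
    outcome = format(int(np.argmax(probs)), "02b")
    print(bits, "->", outcome, np.round(probs, 6))   # the outcome always equals the encoded bits
```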
We showed already in the earlier posts that while it is possible to manually invoke the necessary gates, we can also use the helper operation $PrepareEntangledState$ from the $Microsoft.Quantum.Preparation$ namespace. We'd like to test the dense coding protocol for all four cases – encoding of $00$, $01$, $10$ and $11$, so it makes sense to parameterize our operation with two input bits, represented by two $Booleans$. What follows, are two operations that are not defined yet – $Encode$, which will mimic what Alice did in our theoretical example from earlier on, and $Decode$, which will pretend to be Bob and his process of decoding of the two classical bits out of the two qubits. The code to encode the two classical bits is shown next. As explained before, depending on what two-bit message we (or Alice) would like to convey, we would choose a no-op $I$ transformation or one of the three Pauli gates – $X$, $Z$ or $Y$. This way, the initial Bell state $\ket{\Phi^+}$ might be transformed into $\ket{\Psi^+}$, $\ket{\Phi^-}$ or $\ket{\Psi^-}$, or might stay the same. For the sake of tracking what we really end up encoding, we use a local mutable variable, that we can use to print out the encoded state. Finally, the decoding operation is shown below. It consists of a reverse Bell circuit and a measurement, with reset, on both qubits in the Pauli Z basis. To round things off, we need to add an entry point for our application that will invoke the $TestDenseCoding$ operation with various classical bit configurations. Equipped with such a Q# program, we can now execute it to verify the correctness of the superdense coding protocol. We run four variants, corresponding to classical bits $00$, $01$, $10$ and $11$ being encoded, and we expect the decoded output to be the same. And this is exactly what we should see as the output of our program. Once again, we are thrilled with the result and the opportunities that quantum computing presents us with. Something that not that long ago required a complex experimental setup and some of the greatest experimental physicists on the planet, can now be achieved and verified with several lines of Q# code, and (soon) executed on quantum hardware in the cloud using Azure Quantum. ### Summary In today's post we discussed the historical background, as well as the mathematical foundations of the superdense coding protocol. We then proceed to implement it in Q#, verifying that our algebraic reasoning was indeed correct. Similarly to teleportation, we can think of superdense coding as requiring entanglement as a resource that gets consumed when the protocol is used. In the next parts of this series we will move towards discussing quantum security and cryptography concepts.
2020-12-03 19:10:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6421707272529602, "perplexity": 555.454977961431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732696.67/warc/CC-MAIN-20201203190021-20201203220021-00552.warc.gz"}
https://socratic.org/questions/5900d9e811ef6b57e2fb292b
# Question #b292b Apr 26, 2017 Details below... #### Explanation: The balanced equation is $2\,\mathrm{NaBr} + \mathrm{Mg(NO_3)_2} \rightarrow 2\,\mathrm{NaNO_3} + \mathrm{MgBr_2}$ In ionic form, this would be $2\,\mathrm{Na^+} + 2\,\mathrm{Br^-} + \mathrm{Mg^{2+}} + 2\,\mathrm{NO_3^-} \rightarrow 2\,\mathrm{Na^+} + 2\,\mathrm{NO_3^-} + \mathrm{Mg^{2+}} + 2\,\mathrm{Br^-}$ There is no net ionic equation really, because there is no reaction among these species. (All substances formed are soluble.) The net ionic equation would result in complete cancellation of all ions.
2021-10-21 09:11:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4412979781627655, "perplexity": 12617.402294200823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00078.warc.gz"}
https://math.stackexchange.com/questions/2007678/proving-that-a-vector-field-is-conservative-using-only-green-theorem
Proving that a vector field is conservative using only Green Theorem Consider a vector field $F: \mathbb{R}^2/\{\vec{0}\} \to \mathbb{R}^2$ $$F=(F_1(x,y),F_2(x,y))$$ $\mathbb{R}^2/\{\vec{0}\}$ is not a simply connected domain . Suppose also that $F$ is irrotational. Can the following procedure be valid to prove that $F$ is conservative? • I find one closed curve $\gamma_1$ that goes around the origin and such that $$\oint_{\gamma_1} F \cdot ds=0$$ (assuming that such curve exists). • If I consider any other closed curve $\gamma_2$ that goes around the origin, then I can use Green Theorem and see the union of $\gamma_2 \cup \gamma_1$ as the border of a regular domain in $\mathbb{R}^2$. Let's say this domain is $D$: then its border is $\partial D=\gamma_1 \cup \gamma_2$. I have to choose the positive orientation for $\partial D$ but suppose that $\gamma_1$ and $\gamma_2$ already have the right orientations in order to have $+\partial D$ (I think that this is not restrictive). Therefore I can finally say that: $$\oint_{+\partial D} F \cdot ds=\oint_{\gamma_1 \cup \gamma_2 } F \cdot ds=\oint_{\gamma_1 } F \cdot ds+\oint_{\gamma_2 } F \cdot ds= \int \int_{D} \partial_{x} F_2- \partial_{y} F_1=0$$ $$\implies \oint_{\gamma_1 } F \cdot ds=-\oint_{\gamma_2 } F \cdot ds$$ Since $F$ is irrotational. But $\oint_{\gamma_1} F \cdot ds=0$, therefore $$\oint_{\gamma_2 } F \cdot ds=0$$ This is valid for any curve $\gamma_2$ that goes around the origin. (indipendently from the fact that $\gamma_1$ and $\gamma_2$ are indeed oriented in the proper way to have $+\partial D$). • For any curve $\gamma_3$ that does not go around the origin it is possible to find a subset $A \subset \mathbb{R}^2/\{\vec{0}\}$ simply connected such that $\gamma_3 \subset A$, and $F$ is irrotational, $F$ is conservative in $A$ and $$\oint_{\gamma_3} F \cdot ds=0$$ • So in conclusion we have $$\oint_{\gamma} F \cdot ds=0 \,\,\, \forall \gamma \subset \mathbb{R}^2$$ and $F$ is conservative. Is the previous proof valid or are there any mistakes? I think it should be valid, but it looks strange because if it is, I would conclude that $F$ is conservative looking at one only curve $\gamma_1$, which seems a bit reductive I think? Theorem. If the field $F=(P,Q)$ defined in $\Omega:={\mathbb R}^2\setminus\{0\}$ has vanishing curl: $Q_x-P_y\equiv0$, and if $\int_{\gamma_*}F\cdot dz=0$ for a single generating cycle $\gamma_*$, then $F$ is conservative. In order to prove this theorem you have to prove that $\int_\gamma F\cdot dz=0$ for all closed curves $\gamma\subset\Omega$. For this you cannot directly appeal to Green's theorem, because such a curve $\gamma$ may have selfintersections, or go around the origin several times, etc. In any case, you cannot assume that $\gamma$ and $\gamma_*$ together form the boundary of a nice domain $D$. Instead we proceed by constructing first a function $f:\>\Omega\to{\mathbb R}$ with $\nabla f=F$ on $\Omega$, i.e., a potential of $F$. For this construction we shall only need Green's theorem for rectangles. For simplicity assume that you can choose four points $z_i=(x_i,y_i)$ $(i\ {\rm mod}\ 4)$ on your "witness curve" $\gamma_*$, such that $z_i$ is in the open quadrant $Q_i$, and such that the part $\gamma_i$ of $\gamma$ connecting $z_{i-1}$ with $z_i$ completely lies in the halfplane containing $Q_{i-1}\cup Q_i$. 
Choose an arbitrary $c_0\in{\mathbb R}$ and define a function $f_R$ in the right halfplane $H_R$ by $$f_R(x,y):=c_0+\int_{x_0}^x P(x',y_0)\>dx'+\int_{y_0}^y Q(x,y')\>dy'\ .$$ It follows that ${\partial f_R\over \partial y}=Q$ on $H_R$. Applying Green's theorem to the obvious rectangle we see that we also have $$f_R(x,y)=c_0+\int_{y_0}^y Q(x_0,y')\>dy'+\int_{x_0}^x P(x',y)\>dx'\ ,$$ and this shows that ${\partial f_R\over \partial x}=P$ on $H_R$ as well. It follows that $$\nabla f_R(x,y)=F(x,y)\qquad\bigl((x,y)\in H_R\bigr)\ .$$ We have $f(z_0)=c_0$ and, using a standard fact about potentials, $$f(z_1)=c_0+\int_{\gamma_1} F\cdot dz=:c_1\ .$$ In the same way we construct a potential $f_U$ in the upper halfplane requiring $f_U(z_1)=c_1$. Since $f_U(z_1)=f_R(z_1)$ we can conclude that $f_U$ and $f_R$ coincide in the intersection $Q_1$ of their domains; furthermore $$f_U(z_2)=c_1+\int_{\gamma_2} F\cdot dz=:c_2\ .$$ In the same way we construct a potential $f_L$ in the left halfplane requiring $f_L(z_2)=c_2$, and then a potential $f_B$ in the bottom halfplane requiring $$f_B(z_3)=f_L(z_3)=c_2+\int_{\gamma_3} F\cdot dz=:c_3\ .$$ The function $f_B$ is also defined in the fourth quadrant $Q_4$,where we already have $f_R$. Since $\nabla f_B-\nabla f_R=F-F=0$ in $Q_4$ we know that $f_B$ and $f_R$ differ by a constant there. We therefore compute $$f_B(z_0)=c_3+\int_{\gamma_4}F\cdot dz=\ldots= c_0+\int_{\gamma_*}F\cdot dz=c_0\ .$$ This shows that in fact $f_B=f_R$ in $Q_4$, so that $f_R$, $F_U$, $f_L$, $f_B$ together constitute a globally defined potential function $f:\>\Omega\to{\mathbb R}$ of $F$. Therewith it is proven that $F$ is conservative on $\Omega$. • Thanks a lot for this great answer! Can I extend this to the three dimensional case? That is: if $F: \Omega \to \mathbb{R}^3$, with $\Omega=\{(x,y,z) \in \mathbb{R}^3 : x^2+y^2\neq0 \}$ has vanishing curl and $\oint_{\gamma^*} F \cdot ds =0$ for a single cycle $\gamma^*$ that goes around the $z$ axis then $F$ is conservative? – Gianolepo Nov 16 '16 at 10:48
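An illustrative aside, standard textbook material rather than something drawn from the question or answer above: the hypothesis about the witness cycle $\gamma_*$ cannot be dropped. For the vortex field $$F(x,y)=\Bigl(\frac{-y}{x^2+y^2},\ \frac{x}{x^2+y^2}\Bigr)$$ one checks $Q_x-P_y\equiv0$ on $\Omega$, yet the circulation along the unit circle equals $2\pi\neq0$; hence no cycle around the origin with vanishing circulation exists, and indeed $F$ is not conservative on $\Omega$.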
2019-04-25 02:05:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579439163208008, "perplexity": 102.1949032892977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578678807.71/warc/CC-MAIN-20190425014322-20190425040322-00236.warc.gz"}
https://socratic.org/questions/what-is-an-example-of-a-nuclear-equations-practice-problem
# What is an example of a nuclear equations practice problem? Jan 16, 2015 The two most common types of problems you'll see in nuclear chemistry involve either nuclear half-life calculations, or balancing nuclear equations. I'll show you an example of how nuclear equations pop up in exams or tests. More often than not you will be asked to complete a certain nuclear equation and name the reaction, like ${}_{0}^{1}\text{n} + {}_{92}^{235}\text{U} \rightarrow \ldots \rightarrow {}_{56}^{141}\text{Ba} + {}_{36}^{92}\text{Kr} + \ldots$ When balancing nuclear equations, it is very important to know that the sum of the atomic masses must be equal on both sides of the equation; likewise, the sum of the atomic numbers must be equal on both sides. An isotope's atomic mass is represented by the top number, while its atomic number is represented by the bottom number. In the above example, $\text{U}$'s atomic mass is 235 and its atomic number is 92. So, we know that matter must be conserved in any type of nuclear equation - this includes both protons and neutrons, of course. Let's take the first stage of this equation ${}_{0}^{1}\text{n} + {}_{92}^{235}\text{U} \rightarrow \ldots$ Here, a U-235 isotope is bombarded with a neutron - notice that the neutron has no charge - the bottom number is 0 - and a mass of 1 - the top number. Since the atomic number will be unchanged, we know for sure that we are dealing with another uranium isotope, but this time its atomic mass will be $235 + 1 = 236$. So, ${}_{0}^{1}\text{n} + {}_{92}^{235}\text{U} \rightarrow {}_{92}^{236}\text{U}$ Now for the second stage ${}_{92}^{236}\text{U} \rightarrow {}_{56}^{141}\text{Ba} + {}_{36}^{92}\text{Kr} + \ldots$ Here, two elements, which are called daughter nuclei fragments, are formed. Let's solve for the missing particle's atomic mass and atomic number $236 = 141 + 92 + \text{unknown atomic mass}$ $92 = 56 + 36 + \text{unknown atomic number}$ We get that the particle's atomic number is 0 and that its atomic mass is 3, which implies that we are dealing with not one, but three neutrons emitted in the second stage. So, the balanced nuclear equation is ${}_{0}^{1}\text{n} + {}_{92}^{235}\text{U} \rightarrow {}_{92}^{236}\text{U} \rightarrow {}_{56}^{141}\text{Ba} + {}_{36}^{92}\text{Kr} + 3\,{}_{0}^{1}\text{n}$ This particular nuclear equation describes the formation of a possible pair of fission fragments for the fission of uranium-235.
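A tiny Python sketch of the bookkeeping described above; the tuples and helper name are made up for illustration. It simply checks that mass numbers (top) and atomic numbers (bottom) add up to the same totals on both sides of the final equation.

```python
def balanced(lhs, rhs):
    # Each side is a list of (mass_number, atomic_number) tuples.
    return (sum(a for a, _ in lhs) == sum(a for a, _ in rhs) and
            sum(z for _, z in lhs) == sum(z for _, z in rhs))

neutron, u235 = (1, 0), (235, 92)
ba141, kr92 = (141, 56), (92, 36)

# n + U-235 -> Ba-141 + Kr-92 + 3 n
print(balanced([neutron, u235], [ba141, kr92] + [neutron] * 3))   # True
```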
2020-02-20 01:17:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7578558325767517, "perplexity": 446.4241514979221}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144498.68/warc/CC-MAIN-20200220005045-20200220035045-00454.warc.gz"}
https://www.gamedev.net/forums/topic/397617-depth-sorting/
# Depth sorting ## Recommended Posts I'm working on an isometric game where the tiles are squares, and having a hard time figuring out how to depth-sort objects. Have a look: http://mail.rochester.edu/~mabernet/isometric.swf (use arrow keys to move the red box, space to jump). At the moment, it sorts from front to back, then top to bottom where there are ties. If you jump on the low wall to the right and move towards the back, you'll see the problem: the wall gets displayed over the box, even though it ought to be the other way around. The problem is that since the box's front-back coordinate is at that point less than the wall's, it gets sorted behind. What's the proper way of computing depth in a situation like this? Is there any linear ordering of objects based on their coordinates that gives what should be in front of what? [Edited by - _Flecko on June 10, 2006 12:09:28 AM] ##### Share on other sites From the looks of that sample, just sort from front to back based on the Y-coordinate of the bottom of each object. Is that what you're doing, or have I misinterpreted the problem? ##### Share on other sites That was more or less what I was doing. However, after a good deal of struggling, I realized why I'm having so much trouble making it work: because it is -impossible- to do it the way I was trying to :) There is no linear order on the depth of whole objects - cubes, faces, etc. That is, if you're treating objects like a cube as a single unit, you can't find a strict order to draw objects in. If my scanner was set up I'd show you proof; basically, if you assume there is a linear ordering, you can construct a situation where you have two objects a and b where it is simultaneously true that a is in front of b and that b is in front of a, which contradicts that such an ordering could exist. So, now I gotta find a different way to do it. [Edited by - _Flecko on June 10, 2006 5:45:19 PM] ##### Share on other sites Well, of course such situations exist. That's why you, as the designer, connive yourself up some clever way of preventing the situation. It seemed to me in the example that the collision bounding boxes of those objects were at and for a ways above the base of the object. This prevents an object from suddenly "popping" ahead or behind the other one, which makes it so you never get a situation where it matters. Think oldschool RPGs. A player appears in front of the trunk of a tree, but behind its branches. This is done by having the branches always drawn "in front" of the player, and by making it so you can't walk through the trunk, only either behind it or in front of it. Considering your boxes; you can jump on top of them. I assume that you have an X, Y, and pseudo-vertical-Z coordinate for objects, and that this XYZ position represents the center-front-bottom of the object. Are you sorting them by world Y, or by screen Y? You should be sorting by world position. For ties, you sub-sort by world Z, like you're doing now.
 _____
/    /|
/___/ |
|   | /
|_._|/
The "." is where the object is sorted by; at the X center, Y front, Z bottom of the object.
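A short Python sketch of the sorting rule suggested in the last reply, assuming world Y grows toward the viewer so that smaller Y means further back. The object tuples and sample values are invented for illustration; the rule itself, sort by world Y and break ties with world Z, is the one described above.

```python
# Each object: (name, world_x, world_y, world_z), with the position taken at the
# X-center, Y-front, Z-bottom of the object, as described in the reply above.
objects = [
    ("crate_back",  2.0, 3.0, 0.0),
    ("crate_front", 2.0, 7.0, 0.0),
    ("box_on_wall", 4.0, 5.0, 2.0),
    ("wall",        4.0, 5.0, 0.0),   # same world Y as the box -> Z breaks the tie
]

# Draw back-to-front on world Y; for ties, draw the lower object first.
draw_order = sorted(objects, key=lambda o: (o[2], o[3]))
print([name for name, *_ in draw_order])
# ['crate_back', 'wall', 'box_on_wall', 'crate_front']
```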
2017-12-17 02:53:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2151300311088562, "perplexity": 702.8136829332773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00126.warc.gz"}
https://cs.stackexchange.com/questions/49153/assignment-based-on-ranked-preference/49157#49157
# Assignment based on ranked preference Assume that there are n students, who have to be evenly assigned to m groups. For every student, a preference ranking of the m groups is given. I partially order assignments by pointwise preference, i.e. one is better than or equal to another if, for every student, the assigned group is ranked higher or equal. What algorithm can I use to find "locally optimal" solutions, i.e. assignments for which there are no strictly better solutions? I assume there will be multiple locally optimal solutions. Is there a sensible way to order them without giving the students an incentive to be dishonest in their ranking, i.e. without encouraging strategic voting? If so, can that be solved? And finally: what are the right terms to search for research that solves this and related problems?
• Check out the "stable marriage" problem. Nov 6 '15 at 12:30
• Right, that's related. I'll see what I can find starting from there. Nov 6 '15 at 12:49
• Did you end up solving this problem in a satisfactory manner? It appears to be a non-trivial step to go from the SMP to this particular problem. Mar 24 '16 at 15:19
• I did not solve it at all, sorry. Mar 24 '16 at 17:30
• This post describes a similar problem and might be of use to you. – Pim Feb 23 '21 at 21:08
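As a side note (not part of the original question or its comments): the "pointwise preference" order described above is easy to state in code, which helps when experimenting with small instances. Below is a minimal Python sketch; it assumes each student's preferences are given as a list of group indices from most to least preferred, and an assignment maps each student to a group.

```python
def rank(prefs, student, group):
    """Position of group in this student's preference list (0 = most preferred)."""
    return prefs[student].index(group)

def dominates(a, b, prefs):
    """True if assignment a is at least as good as b for every student
    and strictly better for at least one (pointwise better)."""
    at_least_as_good = all(rank(prefs, s, a[s]) <= rank(prefs, s, b[s]) for s in prefs)
    strictly_better = any(rank(prefs, s, a[s]) < rank(prefs, s, b[s]) for s in prefs)
    return at_least_as_good and strictly_better

# Tiny example: 4 students, 2 groups (0 and 1), two students per group.
prefs = {"s1": [0, 1], "s2": [0, 1], "s3": [1, 0], "s4": [1, 0]}
a = {"s1": 0, "s2": 0, "s3": 1, "s4": 1}   # everyone gets their first choice
b = {"s1": 1, "s2": 0, "s3": 0, "s4": 1}
print(dominates(a, b, prefs))   # True, so b is not "locally optimal" in the question's sense
```

A "locally optimal" assignment is then one that no other feasible assignment dominates; commonly used search terms for this family of problems include Pareto-efficient assignment, house allocation, serial dictatorship, and top trading cycles.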
2022-01-16 20:08:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6312879920005798, "perplexity": 864.1937899515608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00189.warc.gz"}
https://www.physicsforums.com/threads/does-this-limit-exist.192162/
# Does this limit exist

1. Oct 18, 2007

### azatkgz

I answered the test in this way:

1) If $$\lim_{x\rightarrow a}f(x)$$ and $$\lim_{x\rightarrow a}g(x)$$ do not exist, then $$\lim_{x\rightarrow a}(f(x)+g(x))$$ may or may not exist.

2) If $$\lim_{x\rightarrow a}f(x)$$ and $$\lim_{x\rightarrow a}(f(x)+g(x))$$ exist, then $$\lim_{x\rightarrow a}g(x)$$ must exist.

2. Oct 18, 2007

### morphism

And what are your thoughts on the problems?

3. Oct 18, 2007

### azatkgz

1) I think it usually does not exist, but the limit of the sum of some such functions may be a number, like $$\frac{|x|}{x}+\frac{|x|}{x}$$.

2) Here, I thought that if $$\lim_{x\rightarrow a}g(x)$$ does not exist, then $$\lim_{x\rightarrow a}(f(x)+g(x))$$ does not exist either.

4. Oct 18, 2007

### JasonRox

You didn't give a solution to the limit in 1). You basically said the limit of a function of x is another function of x, when x approaches something. I find that hard to believe. You're adding two limits that don't exist. Is it possible, when adding two functions whose limits don't exist, for the limit of the sum to actually exist? Think in terms of graphs and how the graph looks when a limit does not exist. Using the practice from 1), you should be able to handle 2).
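For reference (this is not from the thread itself), a standard counterexample for 1) is to take, near $$a = 0$$, $$f(x)=\frac{|x|}{x},\qquad g(x)=-\frac{|x|}{x}.$$ Neither $$\lim_{x\rightarrow 0}f(x)$$ nor $$\lim_{x\rightarrow 0}g(x)$$ exists, yet $$f(x)+g(x)=0$$ for every $$x\neq 0$$, so $$\lim_{x\rightarrow 0}(f(x)+g(x))=0$$ does exist. For 2), writing $$g(x) = \bigl(f(x)+g(x)\bigr) - f(x)$$ and applying the algebra of limits to the two limits assumed to exist shows that $$\lim_{x\rightarrow a}g(x)$$ must exist, confirming the second statement.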
2016-12-10 20:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5328659415245056, "perplexity": 852.691730904377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543567.64/warc/CC-MAIN-20161202170903-00376-ip-10-31-129-80.ec2.internal.warc.gz"}
http://www.lecture-notes.co.uk/susskind/quantum-entanglements/lecture-5/violation-of-bells-theorem/
# Violation of Bell's theorem

### Bell's theorem

Bell's theorem is a result from set theory. In classical terms, it is unremarkable, but we shall show that it does not hold for the singlet state - the simplest quantum system that exhibits entanglement. It shouldn't be too surprising, since states in quantum theory are complex vectors, rather than elements of sets. The violation of Bell's theorem is a very simple way to see that there is no underlying classical interpretation of quantum mechanics. Historically, this was known before Bell, but his theorem is perhaps the most elegant demonstration. Later, Alain Aspect proved the result for entangled photons rather than electrons. The measurements made on the photons were sufficiently simultaneous that no light signal (information) could travel between them, hence completely eliminating any chance that the result was due to anything other than entanglement.

Figure 5.3 - Bell's theorem

Let $A,B,C \subset U$ be three finite subsets of some universal set $U$. We use the notation $$A^c = \left\{x \in U: x \notin A\right\}$$ for the complement of $A$, and then \begin{align*}A \setminus B &= A \cap B^c\\ &= \left\{x \in U: x \in A \text{ and } x \notin B\right\}\end{align*} We now define a function, $N$ say, that just counts the number of elements in a set. Then, Bell's theorem states that $$N\left(A \setminus B\right) + N\left(B \setminus C\right) \geq N\left(A \setminus C\right)$$ To prove it, note that, from figure 5.3, \begin{align*}N\left(A \setminus B\right) &= N(1) + N(4)\\N\left(B \setminus C\right) &= N(2) + N(3)\\N\left(A \setminus C\right) &= N(1) + N(2)\end{align*} So, we get \begin{align*}N\left(A \setminus B\right) + N\left(B \setminus C\right) &= N(1) + N(4) + N(2) + N(3)\\&\geq N(1) + N(2)\\&= N\left(A \setminus C\right)\end{align*}

We can easily convert this into statements about probabilities of propositions. If $N(U)$ is the total number of elements in a universal set of propositions, then the probability that a particular proposition, $X$, is true is the function $$P(X) = \frac{N(X)}{N(U)}$$ Bell's theorem then becomes $$P\left(A \setminus B\right) + P\left(B \setminus C\right) \geq P\left(A \setminus C\right)$$

### Propositions about the singlet state

We shall consider the spin of the singlet state along the directions $$\hat{z}, \hat{x}, \hat{w} = \frac{\hat{z} + \hat{x}}{\sqrt{2}}$$ Consider the following propositions,

• $A$: the spin of the first electron is measured to be $+1$ in the $\hat{z}$ direction.
• $B$: the spin of the first electron is measured to be $+1$ in the $\hat{w}$ direction.
• $C$: the spin of the first electron is measured to be $+1$ in the $\hat{x}$ direction.

We also need the complement propositions $B^c,C^c$ for the theorem. Due to the nature of the singlet state, if the spin of the first electron is not measured to be $+1$ in the $\hat{n}$ direction, say, then that must mean that the spin of the second electron is measured to be $+1$ in the $\hat{n}$ direction. Thus,

• $B^c$: the spin of the second electron is measured to be $+1$ in the $\hat{w}$ direction.
• $C^c$: the spin of the second electron is measured to be $+1$ in the $\hat{x}$ direction.
For ease of notation, write $$[\hat{m}, \hat{n}] \rightarrow \left\{\begin{matrix}1\text{st electron measured } +1 \text{ in } \hat{m} \text{ direction }\\2\text{nd electron measured } +1 \text{ in } \hat{n} \text{ direction }\end{matrix}\right.$$ Figure 5.4 - Propositions about the singlet state We can now combine these propositions, • $A \setminus B \rightarrow [\hat{z}, \hat{w}]$ • $B \setminus C \rightarrow [\hat{w}, \hat{x}]$ • $A \setminus C \rightarrow [\hat{z}, \hat{x}]$ We shall show that, contrary to Bell's theorem, the probabilities that these propositions are true satisfy $$P[\hat{z}, \hat{w}] + P[\hat{w}, \hat{x}] \not \geq P[\hat{z}, \hat{x}]$$ ### Calculating the probabilities We first note that rotational symmetry implies that $P[\hat{z}, \hat{w}] = P[\hat{w}, \hat{x}]$, since only the angle between the pair of electrons determines the probability and this angle is $45^{\circ}$ for both $P[\hat{z}, \hat{w}]$ and $P[\hat{w}, \hat{x}]$. So, in order to contradict Bell's theorem, we need to show that $$2P[\hat{z}, \hat{w}] \not\geq P[\hat{z}, \hat{x}]$$ The projection operator associated with the combined proposition $[\hat{m}, \hat{n}]$ is $$\frac{\mathbf{I} + \boldsymbol{\sigma}_m}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_n}{2}$$ and the corresponding probability that the spin of each electron of the singlet state, $\ket{S}$ is measured as up in the given directions is $$P[\hat{m}, \hat{n}] = \bra{S}\frac{\mathbf{I} + \boldsymbol{\sigma}_m}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_n}{2}\ket{S}$$ Now, in matrix form, the general projection operator for the first electron is $$\frac{\mathbf{I} + \boldsymbol{\sigma}_m}{2} = \tfrac{1}{2} \begin{bmatrix} 1 + m_z & 0 & m_- & 0 \\ 0 & 1 + m_z & 0 & m_-\\ m_+ & 0 &1 - m_z & 0 \\ 0 & m_+ & 0 & 1 - m_z \end{bmatrix}$$ We only need to consider the $\hat{z}$ operator, $$\frac{\mathbf{I} + \boldsymbol{\sigma}_z}{2} = \tfrac{1}{2} \begin{bmatrix} 1 + 1 & 0 & 0 & 0 \\ 0 & 1 + 1 & 0 & 0\\ 0 & 0 &1 - 1 & 0 \\ 0 & 0 & 0 & 1 - 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$ The general projection operator for the second electron is $$\frac{\mathbf{I} + \boldsymbol{\tau}_n}{2} = \tfrac{1}{2} \begin{bmatrix} 1 + n_z & n_- & 0 & 0 \\ n_+ & 1 - n_z & 0 & 0 \\ 0 & 0 & 1 + n_z & n_- \\ 0 & 0 & n_+ & 1 - n_z \end{bmatrix}$$ We need two second-electron operators, for the $\hat{w}, \hat{x}$ directions. 
They are $$\frac{\mathbf{I} + \boldsymbol{\tau}_w}{2} = \tfrac{1}{2} \begin{bmatrix} 1 + \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 & 0 \\ \tfrac{1}{\sqrt{2}} & 1 - \tfrac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 1 + \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ 0 & 0 & \tfrac{1}{\sqrt{2}} & 1 - \tfrac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{2\sqrt{2}} \begin{bmatrix} \sqrt{2} + 1 & 1 & 0 & 0 \\ 1 & \sqrt{2} - 1 & 0 & 0 \\ 0 & 0 & \sqrt{2} + 1 & 1 \\ 0 & 0 & 1 & \sqrt{2} -1 \end{bmatrix}$$ and $$\frac{\mathbf{I} + \boldsymbol{\tau}_x}{2} = \tfrac{1}{2} \begin{bmatrix} 1 + 0 & 1 & 0 & 0 \\ 1 & 1 - 0 & 0 & 0 \\ 0 & 0 & 1 + 0 & 1 \\ 0 & 0 & 1 & 1 - 0 \end{bmatrix} = \tfrac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}$$ Thus, the two relevant compound projections in matrix form are \begin{align*}[\hat{z}, \hat{w}] &= \frac{\mathbf{I} + \boldsymbol{\sigma}_z}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_w}{2}\\&= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \frac{1}{2\sqrt{2}} \begin{bmatrix} \sqrt{2} + 1 & 1 & 0 & 0 \\ 1 & \sqrt{2} - 1 & 0 & 0 \\ 0 & 0 & \sqrt{2} + 1 & 1 \\ 0 & 0 & 1 & \sqrt{2} -1 \end{bmatrix}\\&= \frac{1}{2\sqrt{2}} \begin{bmatrix} \sqrt{2} + 1 & 1 & 0 & 0 \\ 1 & \sqrt{2} - 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\end{align*} and \begin{align*}[\hat{z}, \hat{x}] &= \frac{\mathbf{I} + \boldsymbol{\sigma}_z}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_x}{2}\\&= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \tfrac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}\\&= \tfrac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\end{align*} The singlet state can be written as $$\ket{S} = \tfrac{1}{\sqrt{2}}\begin{bmatrix}0 \\ 1 \\ -1 \\ 0 \end{bmatrix}$$ and so now we are in a position to calculate the probabilities \begin{align*}2P[\hat{z}, \hat{w}] &= 2\bra{S}\frac{\mathbf{I} + \boldsymbol{\sigma}_z}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_w}{2}\ket{S}\\&= 2\left(\tfrac{1}{\sqrt{2}} \begin{bmatrix}0 & 1 & -1 & 0 \end{bmatrix}\right) \left(\frac{1}{2\sqrt{2}} \begin{bmatrix} \sqrt{2} + 1 & 1 & 0 & 0 \\ 1 & \sqrt{2} - 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\right) \left(\tfrac{1}{\sqrt{2}}\begin{bmatrix}0 \\ 1 \\ -1 \\ 0 \end{bmatrix}\right)\\&= \frac{1}{2\sqrt{2}} \begin{bmatrix}0 & 1 & -1 & 0 \end{bmatrix} \begin{bmatrix}1 \\ \sqrt{2} - 1 \\ 0 \\ 0 \end{bmatrix}\\&= \frac{\sqrt{2} - 1}{2\sqrt{2}} \approx 0.146\end{align*} Also, \begin{align*}P[\hat{z}, \hat{x}] &= \bra{S}\frac{\mathbf{I} + \boldsymbol{\sigma}_z}{2} \frac{\mathbf{I} + \boldsymbol{\tau}_x}{2}\ket{S}\\&= \left(\tfrac{1}{\sqrt{2}} \begin{bmatrix}0 & 1 & -1 & 0 \end{bmatrix}\right) \left(\tfrac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\right) \left(\tfrac{1}{\sqrt{2}}\begin{bmatrix}0 \\ 1 \\ -1 \\ 0 \end{bmatrix}\right)\\&= \tfrac{1}{4} \begin{bmatrix}0 & 1 & -1 & 0 \end{bmatrix} \begin{bmatrix}1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\\&= \tfrac{1}{4} = 0.25\end{align*} Clearly, $$2P[\hat{z}, \hat{w}] \approx 0.146 \not\geq 0.25 = P[\hat{z}, \hat{x}]$$ hence we have shown that the singlet state violates Bell's theorem.
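For readers who want to double-check the arithmetic, here is a small numerical sketch (not part of the original lecture notes) that reproduces the two probabilities with Python/NumPy. It assumes the same basis ordering used for the matrices above (first spin ⊗ second spin, i.e. ++, +-, -+, --) and the convention $\hat{w} = (\hat{z}+\hat{x})/\sqrt{2}$.

```python
import numpy as np

# Pauli matrices for a single spin, basis (+, -)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sw = (sz + sx) / np.sqrt(2)          # spin component along w = (z + x)/sqrt(2)

def P_up(op):
    """Projector onto the +1 eigenspace of a single-spin operator."""
    return (np.eye(2) + op) / 2

# Singlet state (|+-> - |-+>)/sqrt(2) in the basis |++>, |+->, |-+>, |-->
S = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def prob(op1, op2):
    """P(first spin up along op1 and second spin up along op2) in the singlet."""
    proj = np.kron(P_up(op1), P_up(op2))   # first spin (x) second spin
    return S @ proj @ S

p_zw = prob(sz, sw)
p_zx = prob(sz, sx)
print(2 * p_zw, p_zx)    # about 0.1464 and 0.25, so 2*P[z,w] < P[z,x]
```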
2017-11-17 19:35:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000053644180298, "perplexity": 155.34591223167612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00163.warc.gz"}
http://fricas.github.io/api/IntegrationResultRFToFunction.html
# IntegrationResultRFToFunction R

complexExpand(i) returns the expanded complex function corresponding to i.

complexIntegrate(f, x) returns the integral of f(x)dx where x is viewed as a complex variable.

expand(i, x) returns the list of possible real functions of x corresponding to i.

integrate(f, x) returns the integral of f(x)dx where x is viewed as a real variable.

split(u(x) + sum_{P(a)=0} Q(a, x)) returns u(x) + sum_{P1(a)=0} Q(a, x) + ... + sum_{Pn(a)=0} Q(a, x) where P1, ..., Pn are the factors of P.
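As an illustration of the real/complex distinction these operations draw (this worked example is ours, not taken from the FriCAS documentation): for $f(x) = \frac{1}{x^2+1}$, integration over a complex variable naturally produces a sum of logarithms over the roots of $x^2+1$, $$\int \frac{dx}{x^2+1} = \frac{1}{2i}\,\log\frac{x-i}{x+i} + C,$$ while the corresponding real function of $x$ is $\arctan x$; the two antiderivatives differ only by a constant on the real line.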
2017-07-26 00:38:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706597328186035, "perplexity": 3465.1717974452135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425737.60/warc/CC-MAIN-20170726002333-20170726022333-00425.warc.gz"}
https://puzzling.stackexchange.com/questions/74687/bike-riding-puzzle
Bike Riding Puzzle Eight persons, A through H, were standing along a circular track, not necessarily in the same order, such that the distance along the track between any two persons standing adjacent to each other was exactly 1 km. At exactly 9:00 AM, all of them started riding along the track in the clockwise direction. However, four of them started riding at a speed of 1 km/h, while the other four started riding at a speed of 3 km/h. The following information is known about their relative positions along the track at exactly 10:00 AM, as they were riding: No two persons were at the same position. G was immediately behind D, while E was diametrically opposite C. B was not immediately behind H, while F was diametrically opposite A. The following information is known about their relative positions along the track at exactly 11:00 AM, as they were riding: Neither F nor B was either immediately ahead of or immediately behind H. E was either immediately ahead of or immediately behind D. PS: It can be solved completely. Find the arrangement at 9 AM,10 AM and 11 AM ? The solution I found is: The arrangement and speed (k/hr) of people at 9:00am are D G A E B H F C : 3 1 3 1 3 1 3 1 Note: Bold writing indicates information that is given or has been deduced earlier. I solved this by making the following inferences from the given information: At 10:00am No two persons were at the same position. The order of 3k/hr and 1k/hr riders alternates (i.e. it is 3,1,3,1,3,1,3,1) G was immediately behind D G and D travel at different speeds E was diametrically opposite C E and C travel at the same speed F was diametrically opposite A F and A travel at the same speed We also know 4 riders ride at each speed. For the above statements to all be true, E & C must travel at a different speed to F & A At 11:00am E was either immediately ahead of or immediately behind D E and D travel at different speeds D, F and A travel at the same speed G, E and C travel at the same speed Thus B and H travel at different speeds Something to realize at this point is that any two riders who are 'adjacent' (and thus travelling at different speeds) cannot be 'adjacent' in +/-2 hours. Also, any rider (Alice) who is not adjacent to another rider (Bob), who rides at a different speed must then be adjacent to the rider (Charlie) who is diametrically opposite of the other rider (Bob). Put simply; if Alice rides at a different speed to both Bob and Charlie, Alice is adjacent to exclusively Bob or Charlie if Bob and Charlie form a diametrically opposite pair. At 11:00am E and D cannot be adjacent at 9:00am E was diametrically opposite C E and D travel at different speeds D and C must be adjacent at 9:00am Neither F nor B was either immediately ahead of or immediately behind (adjacent to) H B and H travel at different speeds B and H must be adjacent at 9:00am . At this point, some visual representations are helpful. Consider 9:00am E and C are diametrically opposite 09:00 _ _ _ E _ _ _ C 09:00 _ _ _ E _ _ D C 09:00 D _ _ E _ _ _ C We can deduce that the latter must be correct and thus C must be immediately behind D at 9:00am because G is immediately behind D (at 10:00am) 09:00 D G _ E _ _ _ C (if G rides 1k/hr) 09:00 D _ _ E _ G _ C (if G rides 3k/hr) Since A and F are diametrically opposite and travelling at a different speed to E and C, they are either each immediately behind E and C or each immediately ahead of E and C. In this case D is already immediately ahead of C so A and F must each be immediately behind E and C. 
However, we do not know which is A or F 09:00 D G X E _ _ X C (if G rides 1k/hr) 09:00 D _ X E _ G X C (if G rides 3k/hr) B and H must be adjacent at 9:00am B was not immediately behind H at 10:00am 09:00 D G X E B H X C (G rides 1k/hr) B, D, F and A travel at the same speed H, G, E and C travel at the same speed F was not immediately ahead of or immediately behind H at 11:00am F and H travel at different speeds F and H must be adjacent at 9:00am 09:00 D G A E B H F C (G rides 1k/hr) • Your first solution is correct. Sequence at 9 AM is DGAEBHFC. – sam Nov 5 '18 at 13:27 • Thanks. I realized why the 2nd solution doesn't hold. Edited answer accordingly – SockPastaRock Nov 5 '18 at 13:53 It can be deduced that: The length of the track is 8 km. At 10:00 As there weren't any people occupying the same position, then: The only possible combination of speeds in kmph is 3 1 3 1 3 1 3 1 or 1 3 1 3 1 3 1 3. (I am still not sure of this, but any combination that fits here must have each 1 preceded by a 3 by a distance not equal to 2 km). Since E was diametrically opposite to C, then: E and C are moving at the same speed. Since F was diametrically opposite to A, then: F and A are also moving at the same speed. Since G was immediately behind D, then: G and D are not moving at the same speed. At 11:00 The combination of speeds is similar to that at 10:00, with the positions of the 3's and 1's swapped. And: E and C are still on the same diameter. F and A are also on the same diameter. Since E was either immediately ahead of or immediately behind D, then: E and D are not moving at the same speed. • You are on the right track. Good going! – sam Nov 4 '18 at 16:41 9am - DGFBHECA 10am - CAGDBFEH 11am - FCHGABDE - the distance along the track between any two persons standing adjacent to each other was exactly 1 km This indicates that the track is exactly 8km long, since the distance between the last and first person is also 1km (taking from any starting person). This also indicates that no one is at the same position at 9am. Moving on to 10am: - No two persons were at the same position Taking a sample position grid: Position i,ii,iii,iv,v,vi,vii,viii 9am 1,2,3,4,5,6,7,8 10am assuming that Person i is travelling at 3km, it would be at position 4 now. Since no person were at the same position at 10am, it means that person iii cannot be travelling at 1km. Repeating the same logic, it seems that alternate person at travelling at 1km/h or 3km/h respectively. Hence 10am position will be: 10am > 4,3,6,5,8,7,2,1 11am > 7,4,1,6,3,8,5,2 - G was immediately behind D Assume case 1: i = D, ii = G or case 2: i = G, iv = D Lets further see a possible arrangement of the positions with the following statements: - ... E was diametrically opposite C. - ... F was diametrically opposite A. 123 8 4 765 so 1 is diametrically opposite 7, 2/6 etc ..... For both cases above, since 4 & 3/5 is occupied, only 1/7 & 2/6 is left for the E/C & F/A pair. Lets visualize the 4 main cases possible (noting that E/C & F/A could switch): DGFBHEAC DGEBHFCA GBFDHEAC GBEDHFCA • B was not immediately behind H B/H is either in iv/v or ii/V, which is position 3,5,8 respectively. Hence this condition is fulfilled. Moving on to 11am: - Neither F nor B was either immediately ahead of or immediately behind H Position ii & v is adjacent to each other (4/3), hence the last 2 cases are invalid. If H is iv/v (6/3 at 11am), F cannot be vii/viii (5/2 at 11am) respectively. 
- E was either immediately ahead of or immediately behind D Since D is at i, it is at position 7 at 11am. D has to be at 6/8, making position vi (8) the only case. Final answer is therefore starting position of DGFBHEAC. • Pls have a recheck of your solution. It is not correct. – sam Nov 4 '18 at 15:04
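One way to double-check a proposed answer (or search for all answers) is a brute-force enumeration. The sketch below is not from the thread; it assumes positions are measured in whole kilometres clockwise around the 8 km track, that "immediately behind X" means 1 km behind X along the direction of travel, and that "diametrically opposite" means 4 km apart. Rider A is pinned to the 0 km mark to remove rotated duplicates.

```python
from itertools import permutations, combinations

PEOPLE = "ABCDEFGH"
TRACK = 8  # km; 8 riders spaced 1 km apart

def behind(p, q, where):
    """p is immediately behind q: 1 km back along the clockwise direction of travel."""
    return where[p] == (where[q] - 1) % TRACK

def adjacent(p, q, where):
    return behind(p, q, where) or behind(q, p, where)

def opposite(p, q, where):
    return (where[p] - where[q]) % TRACK == TRACK // 2

found = []
for order in permutations(PEOPLE[1:]):            # fix A at the 0 km mark
    start = {"A": 0}
    start.update({p: km + 1 for km, p in enumerate(order)})
    for fast in combinations(PEOPLE, 4):          # the four riders doing 3 km/h
        speed = {p: 3 if p in fast else 1 for p in PEOPLE}
        at10 = {p: (start[p] + speed[p]) % TRACK for p in PEOPLE}
        at11 = {p: (start[p] + 2 * speed[p]) % TRACK for p in PEOPLE}
        if len(set(at10.values())) < len(PEOPLE):
            continue                              # two riders at the same spot at 10:00
        if not (behind("G", "D", at10) and opposite("E", "C", at10)
                and opposite("F", "A", at10) and not behind("B", "H", at10)):
            continue
        if adjacent("F", "H", at11) or adjacent("B", "H", at11):
            continue
        if not adjacent("E", "D", at11):
            continue
        found.append((start, fast))

for start, fast in found:
    order_9am = sorted(PEOPLE, key=lambda p: start[p])
    print("".join(order_9am), "3 km/h riders:", "".join(fast))
```

With these conventions the search covers only 7! × C(8,4), roughly 350,000 cases, so it finishes in well under a second.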
2020-01-29 13:44:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7476385235786438, "perplexity": 1560.6837877882786}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00087.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=0562503
MathSciNet bibliographic data: MR562503 10D15 (22E55). Jacquet, Hervé. Automorphic forms on ${\rm GL}(2)$. Part II. Lecture Notes in Mathematics, Vol. 278. Springer-Verlag, Berlin-New York, 1972. xiii+142 pp.
2016-05-27 09:16:06
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919453859329224, "perplexity": 3865.0018114502595}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276564.72/warc/CC-MAIN-20160524002116-00104-ip-10-185-217-139.ec2.internal.warc.gz"}
https://edge-conference.de/Pob/4344_word-equation-for-the-extraction-of-iron-from-the-ore-iron-oxide.html
#### word equation for the extraction of iron from the ore iron oxide

Word equations for iron ore extraction? (Answers, Oct 11, 2009): carbon monoxide + iron oxide → iron + carbon dioxide, or 3 CO + Fe2O3 → 2 Fe + 3 CO2. Another might be iron oxide + carbon → carbon dioxide + iron.

Extracting iron, Iron and aluminium (GCSE Chemistry); Extraction of iron, metallurgy, blast furnace and reactions: the extraction of iron from its ore is a long and subdued process that helps in separating the useful components from the waste materials such as slag. What happens in the blast furnace? The purpose of a blast furnace is to reduce the concentrated ore chemically to its liquid metal state.

Method of extraction and energy change, iron: because iron is not a reactive metal, it can be extracted using carbon in a blast furnace. This is because iron is less reactive than carbon. The iron is then displaced from the oxide by high temperatures, and this is known as a displacement reaction. Reduction of the ore: a blast furnace is used for extracting iron.

Iron production (Chemistry LibreTexts, Aug 15, 2020): the production of iron from its ore involves an oxidation-reduction reaction carried out in a blast furnace. Iron ore is usually a mixture of iron and vast quantities of impurities such as sand and clay, referred to as gangue. The iron found in iron ores is in the form of iron oxides. As a result of these impurities, iron ...

Extracting iron (Secondary Science 4 All, Jul 21, 2014): the calcium oxide then reacts with the acidic impurities (mainly silica) present in the iron ore to form molten slag (calcium silicate): CaO(s) + SiO2(s) → CaSiO3(l). The slag floats on top of the molten iron, because it is less dense than the iron, and is tapped off separately.

Displacement reactions of metals (3) activity: in a blast furnace, iron is heated with coke (carbon) and air blasted through the mixture. Carbon monoxide is formed and this reacts with the iron oxide ore to form iron. (a) Apart from forming carbon monoxide, what is the other use of the coke in a blast furnace? (b) Write an equation for the formation of iron.

Reaction mechanism and thermodynamics of segregation: iron particles recovered after magnetic separation, both from ferric oxide and mill scale, are studied by electron probe microscopy analyzer. Keywords: segregation roasting, iron oxide, alkali chloride, thermodynamics, reaction mechanism, EPMA, SEM. 1. Introduction: the segregation process of copper oxide ore is a technique ...

What is the word equation for the reduction of iron ore? (Oct 19, 2009): what substance in the iron extraction process reduces iron oxide to iron? It is carbon (in the form of coke) that is added to blast furnaces to reduce iron oxide and recover the iron.

Extraction of iron (651 words, 123 Help Me): extraction of iron. Iron, perhaps the most important element to all civilization, is also one of Earth's most abundant. Like the majority of metal ores, iron ores are not pure compounds. Rather, most iron ore compounds are polluted with sand, rock and silica.
The process of extracting iron; Extraction of iron, concentration of ore, an overview (Sep 23, 2019): the extraction of iron from its ore is the third and the penultimate process in metallurgy, which is the process of separating metals from their ores. The common ores of iron are iron oxides. These oxides can be reduced to iron by heating them with carbon in the form of coke. Heating coal in the absence of air produces coke.

### The case of word equation for the extraction of iron from the ore iron oxide

University of Cambridge International: aluminium oxide / aluminium 2100; iron oxide / iron 425; nickel oxide / nickel 475; zinc oxide / zinc 925. (a) (i) Use the information in the table to arrange aluminium, iron, nickel and zinc in order of their reactivity, least reactive to most reactive. [1] (ii) Suggest why aluminium is extracted by electrolysis rather than by heating with carbon.

reactivityofmetals2.pdf: 1. A student was trying to ... (c) A chemical reaction for the extraction of iron is Fe2O3 + 3 CO → 2 Fe + 3 CO2. (i) Complete the word equation for this chemical reaction: _____ + carbon monoxide → iron + _____ (2) (ii) Draw a ring around the correct answer to complete the sentence: iron is extracted from its ore by decomposition / oxidation / reduction.

Choose gases from this list to complete the word equations: (a) Iron is extracted from iron ore. Part of the process involves reduction of the ore with carbon monoxide. Iron ore contains iron oxide (Fe2O3). Write a balanced equation for the reaction of iron oxide with carbon monoxide.

What is the role of coke in the extraction of iron? Actually the basic need of coke is to reduce the iron oxide to iron. However, coke is also used to make carbon dioxide, which can trap heat. Carbon dioxide is a greenhouse gas and so it is better at trapping and retaining the heat required for melti...

Word equations for iron ore extraction? (Answers, Oct 11, 2009): carbon monoxide + iron oxide → iron + carbon dioxide, or 3 CO + Fe2O3 → 2 Fe + 3 CO2. Another might be iron oxide + carbon → carbon dioxide + iron.

Extracting iron, Iron and aluminium (GCSE Chemistry): iron(III) oxide + carbon → iron + carbon dioxide; 2 Fe2O3(s) + 3 C(s) → 4 Fe(l) + 3 CO2(g). In this reaction, the iron(III) oxide is reduced to iron, and the carbon is...

Smelting: smelting is a process of applying heat to ore in order to extract a base metal. It is a form of extractive metallurgy. It is used to extract many metals from their ores, including silver, iron, copper, and other base metals. Smelting uses heat and a chemical reducing agent to decompose the ore, driving off other elements as gases or slag and leaving the metal base behind.

The extraction of iron (Chemistry LibreTexts, Jul 12, 2019): contributed by Jim Clark, former head of chemistry and head of science at Truro School in Cornwall. This page looks at the use of the blast furnace in the extraction of iron from iron ore, and the conversion of the raw iron from the furnace into various kinds of steel.

How to extract iron from its ore, word equation; Extracting iron: to be able to describe what happens in extracting iron; iron ore; making sure you have written in the three word equations for making ...

What is the equation of the exothermic reaction in a blast furnace? (Nov 24, 2008): in a blast furnace a mixture of iron oxide (Fe2O3), coke (carbon derived from coal), and limestone are heated to produce elemental iron (Fe) plus residual slag.
2021-12-03 06:28:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4261924922466278, "perplexity": 8915.072846755187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00466.warc.gz"}
https://gamedev.stackexchange.com/questions/55054/using-graphicsdevice-viewport-project-with-a-small-farplanedistance
# Using GraphicsDevice.Viewport.Project with a small farPlaneDistance

I am using "GraphicsDevice.Viewport.Project()" to create a "3D" waypoint system like so; I have a farPlaneDistance of 5000 in my camera: objProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), flAspectRatio, 1.0f, 5000.0f); Obviously, when the waypoint is farther away than 5000, it can't be found within the viewport and the display bugs out. Is there another way to display waypoints on screen (within a 3D world) without using GraphicsDevice.Viewport.Project()? Or is increasing my z-buffer range out to 100,000 not that big of an issue (I find this hard to believe)? Learning as I go, thanks in advance! Linuxx

• Is it only the waypoints that suffer from this problem? If so, you might be able to relocate that part of the drawing into some sort of 2D overlay along with the rest of your user interface. – Seth Battin May 6 '13 at 0:39
• As Seth mentioned, the better solution would be to implement a 2D overlay. Additionally, if you want the indicator to be rendered in the 3D space, simply clip it to the end of the view frustum to ensure it is visible if the actual waypoint is further away. – Evan May 6 '13 at 21:03

## 1 Answer

The problem with a large farPlaneDistance is floating point errors. This can cause things to draw in the wrong order because multiple verts end up falling within the same range of values that a float can represent. I recommend doing something like the below to lock the object a certain distance from the camera if it is too far away.

// Clamp the waypoint to at most 5000 units from the camera.
var location = Vector3.Subtract(obj.Vector3, camera.Vector3);
if (location.Length() > 5000)
{
    location.Normalize();
    location = Vector3.Multiply(location, 5000);
    obj.Vector3 = Vector3.Add(camera.Vector3, location);
}

Another common solution is to use point sprites, but they are not supported in XNA 4.0.

• XNA 4.0 doesn't seem to have a Scale() method, so I created my own version of this and failed miserably. pastebin.com/uiAGPGS1 – Linuxx May 6 '13 at 17:25
• My bad, it's Vector2.Multiply. – ClassicThunder May 6 '13 at 20:14
• w00t, it worked. Thanks a ton. Sorry for the noobish, I'm still learning. Here is the end result, just in case someone wants to see it. pastebin.com/YFed0DKZ – Linuxx May 7 '13 at 1:26
2019-10-14 17:48:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17852698266506195, "perplexity": 1650.50955704561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986654086.1/warc/CC-MAIN-20191014173924-20191014201424-00429.warc.gz"}
https://cburrell.wordpress.com/2007/11/30/academic-genealogies/
November 30, 2007 Yesterday I had an interesting discussion with a friend about his academic genealogy. In an academic family tree, one’s thesis advisor is one’s “parent”, and his advisor is one’s “grandparent”, and so on. I was inspired to look up my own academic genealogy, which I was able to do through the SPIRES database. The results were fascinating, and I thought I would share them. We’ll begin with my own humble self, and work backward: • Craig Burrell (Toronto, 2004) • Michael Luke (Harvard, 1991) Howard Georgi (Yale, 1971) $\hspace{1cm}$Dirac Medal 2000; Sakurai Prize 1995 • Charles M. Sommerfield (Harvard, 1957) Julian Schwinger (Columbia, 1939) $\hspace{1cm}$ → Nobel Prize 1965 Isidor Isaac Rabi (Columbia, 1927) $\hspace{1cm}$ → Nobel Prize 1944 Albert Potter Wills (Clark, 1897) Arthur Gordon Webster (Humboldt, 1890) Hermann Ludwig Ferdinand von Helmholtz (Humboldt, 1842) Johannes Peter Müller (Bonn, 1819-24) Karl Asmund Rudolphi (Greifswald, 1795) Christian Ehrenfried von Weigel (Göttingen, 1771) Johann Christian Polycarp Erxleben (Göttingen, 1775) Abraham Gotthelf Kästner (Leipzig, 1739) Christian August Hausen (Wittenberg, 1712) Johann Christoph Wichmannshausen (Leipzig, 1685) Otto Mencke (Leipzig, 1666/8) That’s sixteen generations stretching back over 330 years. I am agog. • My academic roots are clearly German, the transition to the New World having come in the late nineteenth century with Arthur Gordon Webster, who acquired his doctorate at Humboldt University in Berlin, but taught at Clark University in Worcester, MA (the second oldest graduate school in the United States, I note). His name suggests he was an American by birth, and must have made the journey to Germany to study under Helmholtz. He sounds like an interesting character. • Hold on. If Wikipedia is to be believed, there is more to Arthur Gordon Webster than meets the eye. He was the founder of the American Physical Society! His area of research was mechanics and acoustics, he worked on the gyroscope, and he had a talent for languages, being “fluent in Latin, Greek, German, French, and Swedish, with a good knowledge of Italian and Spanish and competency in Russian and modern Greek.” Sadly, he committed suicide in 1923. • Webster’s advisor was Hermann von Helmholtz, one of the great scientists of the nineteenth century. He is primarily remembered today through the things that bear his name: the Helmholtz equation and the Helmholtz coil in electromagnetics, and the Helmholtz theorem in vector calculus. He did important work on the conservation of energy, and in thermodynamics. Apparently he also worked extensively on the physiology and optics of the eye, and in acoustics, publishing a book on the physiological basis of the aesthetics of music. Among his students are many illustrious names: Hertz, Michelson (of Michelson-Morley fame), Wilhelm Wundt (the “father of experimental psychology”), and even William James. • Helmholtz is the first “physicist”, in the modern sense, in my genealogy. His advisor, Johannes Müller, was an anatomist and physiologist, as was Müller’s advisor Karl Rudolphi, who also did work in botany and zoology. Rudolphi is remembered (happily) for arguing that plants are constituted from cells, and (unhappily) for proposing that the various human races should properly be considered different species. The Rudolphi whale is named for him. He is the founder of helminthology, which sounds great until you know what it is. 
• Rudolphi studied under Christian von Weigel, a professor of chemistry, botany, pharmacy, and mineralogy. The genus Weigela is named for him. • Weigel studied with Johann Christian Polycarp Erxleben. My suspicion is that Erxleben was a Catholic, for Protestants just don’t name their children “Polycarp”. He was a professor of veterinary science, and founded the first German academic veterinary school. He authored a book called Systema regni animalis (1777), which might indicate that he took an interest in the work of his contemporary Carl Linnaeus. • Erxleben studied with Abraham Kästner, a mathematician. Kästner wrote voluminous surveys of mathematical topics, and books of original poetry. He directed the observatory at the university in Göttingen, and the Kästner lunar crater is named for him. • Next in line is Christian Hausen. I am unable to find much information about him, other than to say that he did early (very early) work on electricity and electrical generation. I note with interest, however, that he worked at the University of Leipzig at the same time that J.S. Bach was living and working in the city! Is it possible that Hausen saw the great man with his own eyes and heard him with his own ears? My friends, I think it is very possible. Marvelous! • Curiously, Hausen’s advisor, who bore the very impressive name of Johann Wichmannshausen, was a philologist and professor of Near Eastern languages. He wrote a book on the Templars (De extinctione ordinis Templariorum) and his thesis was on an important (and still relevant) point of moral philosophy (Disputationem Moralem De Divortiis Secundum Jus Naturae). • With Wichmannshausen’s advisor, Otto Mencke, the trail dries up. Mencke studied at Leipzig, but I am unable to discover who his advisor was, if indeed the advisor–student relationship existed at that time in the form we have come to expect. Mencke was a distinguished man, founding Germany’s first scientific journal, Acta Eruditorum, and being a correspondent of Isaac Newton. His thesis was entitled Ex Theologia naturali – De Absoluta Dei Simplicitate, Micropolitiam, id est Rempublicam In Microcosmo Conspicuam. If I were to hazard a translation, I would say “From Natural Theology – On the Absolute Simplicity of God, the Small State, that is, the Republic as a Miniature Cosmos”. I admit that this translation makes no sense. Anyone care to help? • It is nice to see those two Nobel prizes in my past. Julian Schwinger’s was for his work on quantum electrodynamics, of course (awarded also to Feynman and Tomonoga). Isidor Rabi’s prize was for work on magnetic resonance properties of nuclei. It was Rabi who famously greeted the news of the discovery of the muon by asking, “Who ordered that?” Personally, I’m rather fond of the muon, but then children, even academic great-great-great-grandchildren, often seem to their elders to have peculiar tastes. I’m quite delighted by my genealogy. There are quite a number of distinguished men in that lineage, and I am proud to be a part of it, however inconsequential. I acknowledge, however — as I bring this post back to the conversation that spawned it — that it is not half so distinguished as my friend’s, whose family tree includes, if you can believe it, Schur, Frobenius, Weierstrass, Gauss, Hilbert, Klein, Dirichlet, Poisson, Fourier, Lagrange, Euler, and Bernoulli! Well done! 
(Incidentally, my lineage does intersect with his, and in the following way: Abraham Kästner was advisor not only to Johann Erxleben, but also to Johann Pfaff, who was in turn the advisor of Gauss.) UPDATE: Here is a diagram showing the lineage discussed in this post. Lyman Alexander Page, Jr. (MIT, 1989) Stephan Meyer (Princeton, 1979) David Wilkinson (Michigan U., 1962) Horace Richard Crane (Caltech, 1934) Charles Christian Lauritsen (Caltech, 1929) The trail runs cold rather quickly . . . it doesn’t say on SPIRES who Crane’s adviser was, but an internet search seemed to yielded Lauritsen; no similar luck with Lauritsen, who originally came from Denmark. The earlier guys were nuclear physicists. Wilkinson is the eponym of the WMAP satellite. 2. cburrell Says: This is splendid fun. I did some hunting on Lauritsen, and it appears his advisor was Robert Millikan. The trail is hot again — very hot! I’d love to know where the trail leads from there. Do tell! 3. Jim Says: What a lovely parlour game for a winter afternoon. Alas, I have to use Digital Dissertations and have a far shorter genology, but mine goes: Jim Farney (Toronto 2008?) Donald Forbes (Yale 1976) Robert Dahl (Yale 1940) F.W. Coker (??) Before Coker, I’m not sure, but then political scientists didn’t really get into this PhD business until the turn of the 20th century anyway. I may have to dig around more, for it would be fascinating to go farther back. Dahl (who is still publishing at 92) is probably my best claim to fame in the genealogical department 4. cburrell Says: Thanks for playing, Jim. Do you suppose Robert Dahl is any relation to Roald Dahl? No, probably not. 5. […] genealogies II December 28, 2007 Several weeks ago I wrote about some aspects of my academic genealogy. I have continued to investigate, and happily I have […] 6. […] Welcome to “Academic Genealogy: The von Haller Edition”. In a series of previous posts (I, II) I have been exploring my academic genealogy, and one of the men I encountered was Albrecht von […] 7. […] genealogies IV March 13, 2008 In several previous posts (I, II, III) I have been exploring my academic genealogy. At the end of the last post I had two […]
2016-10-20 19:33:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4296337068080902, "perplexity": 6322.142720989423}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00499-ip-10-171-6-4.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/229674/howto-align-mount-holes-in-2-pcb-the-right-way
# howto align mount holes in 2 pcb the right way? I'm tring to make some sort of shield board (it's not an arduino): • first board was made in orcad, I have source files gerbers and so on... • I'm making a 2nd board in eagle. When trying to align the 2 boards mount holes and connectors I am having some trouble. I tried exporting pdfs of each board and importing into gimp and inkscape, but the boards get rezised and im afraid it doesnt work. Sometimes I get huge files and computer crashes... I would apreciate ideas on how I could align the boards. What would be the right way to do this? ## Use DXF... (it's designed for this) DXF is a file format. It stands for Drawing Exchange Format... as in exchanging CAD data between systems. 1. Export your OrCAD PCB outline and drill holes as a DXF drawing 2. Import your new DXF as a layer into your EAGLE layout 3. Use the new layer to align your EAGLE parts (you may need to do this inside of a part if you are footprinting a shield template) 4. Remove or disable the drawing layer (it's just a reference, EAGLE can't do anything else with it) Try the importdxf ulp that's up on www.cadsoftusa.com -> Downloads -> User Language Programs All you have to do to use it is type RUN followed by enter in one of the editors. Browse to the ULP and hit open. The ULP will now run and present you with an easy to follow dialog. ### Alternatively... ...use an in-between format. I found that inkscape (freeware) is good at reading DXF and can convert to HPGL format. HPGL format is easily converted to EAGLE script format. The mount holes are likely to be a specific drill size and there's usually not many of them. Open the drill file (usually Excellon format and it'll be supplied with the Gerbers) in any text editor and read the raw coordinates, they won't be too hard to find. I don't know Eagle so I won't tell you how to place your new holes on exact positions. When you're done, generate a drill file from the new board and compare holes for the appropriate drill size with the original. (Use a spreadsheet to subtract offsets if your new board has a different origin position) You can open the Orcad design file (probably the .MAX file for the PCB) in the (free, downloadable) viewer and look at the mounting hole coordinates. Or use Orcad itself if you have it installed. Click on one of the pads, then open the spreadsheet for 'footprints' and you will see something like this (the one you clicked should be at the top and highlighted): If you want to use Brian's method, the default Orcad Excellon file name is a text file named THRUHOLE.TAP and the relevant section would look something like this: T12C0.138F200S100 X001000Y002750 X001000Y022750 X026000Y002750 X026000Y022750 This is tool 12 (T12), diameter (in inches in this case) of 0.138 with some more-or-less random downfeed rate (200 inches per minute) and spindle RPM (100,000)- those numbers will be replaced by the PCB manufacturer most likely anyway. The tools are in sequence from T1 onward, not necessarily sorted by size. This file was (as usual) done with absolute coordinates, not incremental, and the holes are at coordinates (in inches). The number of digits may vary, as may the units, but inches are still most common. In inches, the coordinates are as follows: (0.1, 0.275),(0.1, 2.275),(2.6,0.275),(2.6,2.275) If you are trying to align connectors, I suggest using the pad locations rather than the component locations. This is a big pain, especially when switching EDA systems. 
Recently I saw hundreds of boards scrapped because the designer misaligned a couple of mounting holes by a bit over 1 mm. Not much, but enough. Fortunately, it was caught before the boards were populated. So I suggest listing and double/triple checking all the critical alignment coordinates. Comparing the Gerbers might be worthwhile.

• thanks for your answer: how do I access the footprint spreadsheet on OrCAD 16? – Cristian Mardones Apr 21 '16 at 16:19
• The icon that looks like a grid and has a rollover tooltip "View Spreadsheet". That's assuming Layout. If it's another OrCAD PCB program, I don't know. We switched from OrCAD before converting. – Spehro Pefhany Apr 21 '16 at 16:22

The trick was:

- take reference coordinates from OrCAD
- do a calculation in Excel to transpose them to Eagle-relative coordinates
- apply the relative coordinates to the Eagle board
- export to DXF in Eagle
- LASER CUT the Eagle board in plexiglass

This way I was able to test 100% of the alignment: mount holes + connector pins!! Marvelous, the board arrived, NO SURPRISES!!!!! I slept like a million dollars!!
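As a footnote to the drill-file approach above, here is a small sketch of how the Excellon hole coordinates could be pulled out and offset programmatically. This is only an illustration of the idea, not code from the answers: the function names are made up, and it assumes the absolute, leading-zero, 2.4 inch format shown in the THRUHOLE.TAP excerpt, so check your own file's header before relying on it.

```python
import re

def excellon_holes(path, tool="T12", decimals=4):
    """Collect (x, y) hole coordinates, in inches, for one tool from an
    Excellon drill file with absolute coordinates and leading-zero formatting
    (e.g. X001000Y002750 -> (0.1, 0.275) with 4 implied decimals)."""
    scale = 10 ** decimals
    holes, current_tool = [], None
    with open(path) as f:
        for line in f:
            line = line.strip()
            t = re.match(r"(T\d+)", line)              # tool lines, e.g. T12C0.138F200S100
            if t:
                current_tool = t.group(1)
            xy = re.match(r"X(-?\d+)Y(-?\d+)$", line)  # hole lines, e.g. X001000Y002750
            if xy and current_tool == tool:
                holes.append((int(xy.group(1)) / scale, int(xy.group(2)) / scale))
    return holes

def shift_origin(holes, dx, dy):
    """Subtract an origin offset, e.g. when the new board uses a different origin."""
    return [(round(x - dx, 4), round(y - dy, 4)) for x, y in holes]

# Hypothetical usage: excellon_holes("THRUHOLE.TAP", "T12")
# would return [(0.1, 0.275), (0.1, 2.275), (2.6, 0.275), (2.6, 2.275)]
```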
2020-07-11 14:50:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3306021988391876, "perplexity": 4254.824142751723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655933254.67/warc/CC-MAIN-20200711130351-20200711160351-00180.warc.gz"}
https://zbmath.org/?q=an:0801.35001
# zbMATH — the first resource for mathematics

Homogenization of differential operators. (Усреднение дифференциальных операторов.) (Russian. English summary) Zbl 0801.35001 Moskva: Izdatel’skaya Firma “Fiziko-Matematicheskaya Literatura”. 464 p. (1993).

This interesting monograph concerns the already well-developed theory of homogenization. It provides the mathematical foundations of the theory and indicates various relations with other areas of mathematics, physics and mechanics, for example ergodic theory and the theory of composite materials. The book covers, for instance, the homogenization of differential equations of elliptic or parabolic type with periodic, almost periodic or random coefficients. There are many examples and solutions of important physical problems such as diffusion in random media and elasticity problems in perforated or stratified domains. Some of the 17 chapters of the book are devoted to such topics as $$G$$-convergence of differential operators, $$\Gamma$$-convergence of functionals, homogenization of nonlinear variational problems, spectral problems of homogenization theory, boundary value problems in perforated random domains, and homogenization and percolation. Many references to the current literature on the subject are also given.

##### MSC:
35-02 Research exposition (monographs, survey articles) pertaining to partial differential equations
35B27 Homogenization in context of PDEs; PDEs in media with periodic structure
47F05 General theory of partial differential operators
74E05 Inhomogeneity in solid mechanics
74E30 Composite and mixture properties
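For readers new to the subject, the prototypical problem treated in homogenization theory can be stated as follows; this is a generic textbook formulation added here for orientation, not a quotation from the book under review. For a coefficient matrix $$A(y)$$ that is periodic and uniformly elliptic, one studies

$$-\,\nabla\cdot\Bigl(A\bigl(\tfrac{x}{\varepsilon}\bigr)\nabla u_{\varepsilon}\Bigr)=f \ \text{ in } \Omega, \qquad u_{\varepsilon}=0 \ \text{ on } \partial\Omega,$$

and shows that, as $$\varepsilon\to 0$$, the solutions $$u_{\varepsilon}$$ converge to the solution $$u_{0}$$ of a homogenized problem $$-\nabla\cdot(A^{\mathrm{hom}}\nabla u_{0})=f$$ with constant effective coefficients $$A^{\mathrm{hom}}$$. Notions such as $$G$$-convergence mentioned above make this passage to the limit precise for general sequences of operators.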
2021-10-22 19:11:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5148745775222778, "perplexity": 1590.2884884279997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00405.warc.gz"}
http://apniphysics.com/viva/expt2-viva-questions-to-determine-the-wavelength-of-the-laser-light-by-diffraction-grating/
# Diffraction grating practical lab: Practice Quiz

If you use this practice test, please share your experience through the comment box below, so I can prepare tests for other experiments too if it is beneficial for you all.

Viva Questions: Determine the wavelength of the laser light by diffraction grating, and demonstration.

1. What is a diffraction grating?
2. What is diffraction?
3. Can diffracted waves interfere?
4. What are the characteristics of laser light?
5. What is the formula to find the percentage error? (Observed value = A, Standard value = B)
   1. $Error = \frac{A-B}{B}\times 100$
   2. $Error = \frac{A-B}{A}\times 100$
   3. $Error = \frac{A-B}{2B}\times 100$
   4. $Error = \frac{2A-B}{B}\times 100$
6. What is the condition for constructive interference?
7. What is population inversion?
8. What is the grating element?
9. How do you find the grating element of a given diffraction grating having 1500 Lines Per Inch (LPI)?
10. What is the standard value of the red-light wavelength of a He-Ne gas LASER?
11. What are the spontaneous and stimulated emission processes?
12. What is the full form of LASER?
13. What is interference?
14. Why do you use the $n\lambda = d\sin\theta$ formula? What is its origin, and how is it derived for this experiment?

#### Video: Just click the link and watch the video

In the above video, you can watch the demonstration of the experiment, which includes the diffraction grating practical lab. The grating is a glass slab that carries a large number of lines, characterized by the grating element. You may have noticed the text on the glass slab, i.e., 15000 LPI. It means 15,000 lines per inch.

### How to perform the diffraction grating practical lab

You know that one inch is equal to 2.54 cm. So if you have to determine the grating element, you have to work it out mathematically. You can perform the diffraction grating practical lab with any laser. You can learn more from the quiz.

Join the Courses http://apniphysics.com/courses/
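As a quick worked example of the grating-element calculation mentioned above (the diffraction angle used below is an assumed illustrative value, not a measurement): for a grating with 15,000 lines per inch, the grating element is

$d = \frac{2.54\ \text{cm}}{15000} \approx 1.69\times10^{-4}\ \text{cm} \approx 1.69\ \mu\text{m}.$

If the first-order ($n = 1$) maximum were observed at, say, $\theta = 22^{\circ}$, then $n\lambda = d\sin\theta$ would give $\lambda \approx 1.69\ \mu\text{m} \times \sin 22^{\circ} \approx 0.63\ \mu\text{m} = 630\ \text{nm}$, close to the standard He-Ne red line of 632.8 nm.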
2019-12-09 21:55:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3818179666996002, "perplexity": 1287.534377860427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00290.warc.gz"}
https://en.m.wikipedia.org/wiki/Tangent_vectors
# Tangent vector (Redirected from Tangent vectors) In mathematics, a tangent vector is a vector that is tangent to a curve or surface at a given point. Tangent vectors are described in the differential geometry of curves in the context of curves in Rn. More generally, tangent vectors are elements of a tangent space of a differentiable manifold. Tangent vectors can also be described in terms of germs. Formally, a tangent vector at the point ${\displaystyle x}$ is a linear derivation of the algebra defined by the set of germs at ${\displaystyle x}$. ## Motivation Before proceeding to a general definition of the tangent vector, we discuss its use in calculus and its tensor properties. ### Calculus Let ${\displaystyle \mathbf {r} (t)}$  be a parametric smooth curve. The tangent vector is given by ${\displaystyle \mathbf {r} ^{\prime }(t)}$ , where we have used a prime instead of the usual dot to indicate differentiation with respect to parameter t.[1] The unit tangent vector is given by ${\displaystyle \mathbf {T} (t)={\frac {\mathbf {r} ^{\prime }(t)}{|\mathbf {r} ^{\prime }(t)|}}\,.}$ #### Example Given the curve ${\displaystyle \mathbf {r} (t)=\{(1+t^{2},e^{2t},\cos {t})|\ t\in \mathbb {R} \}}$ in ${\displaystyle \mathbb {R} ^{3}}$ , the unit tangent vector at ${\displaystyle t=0}$  is given by ${\displaystyle \mathbf {T} (0)={\frac {\mathbf {r} ^{\prime }(0)}{\|\mathbf {r} ^{\prime }(0)\|}}=\left.{\frac {(2t,2e^{2t},\ -\sin {t})}{\sqrt {4t^{2}+4e^{4t}+\sin ^{2}{t}}}}\right|_{t=0}=(0,1,0)\,.}$ ### Contravariance If ${\displaystyle \mathbf {r} (t)}$  is given parametrically in the n-dimensional coordinate system xi (here we have used superscripts as an index instead of the usual subscript) by ${\displaystyle \mathbf {r} (t)=(x^{1}(t),x^{2}(t),\ldots ,x^{n}(t))}$  or ${\displaystyle \mathbf {r} =x^{i}=x^{i}(t),\quad a\leq t\leq b\,,}$ then the tangent vector field ${\displaystyle \mathbf {T} =T^{i}}$  is given by ${\displaystyle T^{i}={\frac {dx^{i}}{dt}}\,.}$ Under a change of coordinates ${\displaystyle u^{i}=u^{i}(x^{1},x^{2},\ldots ,x^{n}),\quad 1\leq i\leq n}$ the tangent vector ${\displaystyle {\bar {\mathbf {T} }}={\bar {T}}^{i}}$  in the ui-coordinate system is given by ${\displaystyle {\bar {T}}^{i}={\frac {du^{i}}{dt}}={\frac {\partial u^{i}}{\partial x^{s}}}{\frac {dx^{s}}{dt}}=T^{s}{\frac {\partial u^{i}}{\partial x^{s}}}}$ where we have used the Einstein summation convention. Therefore, a tangent vector of a smooth curve will transform as a contravariant tensor of order one under a change of coordinates.[2] ## Definition Let ${\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$  be a differentiable function and let ${\displaystyle \mathbf {v} }$  be a vector in ${\displaystyle \mathbb {R} ^{n}}$ . 
We define the directional derivative in the ${\displaystyle \mathbf {v} }$  direction at a point ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$  by ${\displaystyle D_{\mathbf {v} }f(\mathbf {x} )=\left.{\frac {d}{dt}}f(\mathbf {x} +t\mathbf {v} )\right|_{t=0}=\sum _{i=1}^{n}v_{i}{\frac {\partial f}{\partial x_{i}}}(\mathbf {x} )\,.}$ The tangent vector at the point ${\displaystyle \mathbf {x} }$  may then be defined[3] as ${\displaystyle \mathbf {v} (f(\mathbf {x} ))\equiv (D_{\mathbf {v} }(f))(\mathbf {x} )\,.}$ ## Properties Let ${\displaystyle f,g:\mathbb {R} ^{n}\to \mathbb {R} }$  be differentiable functions, let ${\displaystyle \mathbf {v} ,\mathbf {w} }$  be tangent vectors in ${\displaystyle \mathbb {R} ^{n}}$  at ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ , and let ${\displaystyle a,b\in \mathbb {R} }$ . Then 1. ${\displaystyle (a\mathbf {v} +b\mathbf {w} )(f)=a\mathbf {v} (f)+b\mathbf {w} (f)}$ 2. ${\displaystyle \mathbf {v} (af+bg)=a\mathbf {v} (f)+b\mathbf {v} (g)}$ 3. ${\displaystyle \mathbf {v} (fg)=f(\mathbf {x} )\mathbf {v} (g)+g(\mathbf {x} )\mathbf {v} (f)\,.}$ ## Tangent vector on manifolds Let ${\displaystyle M}$  be a differentiable manifold and let ${\displaystyle A(M)}$  be the algebra of real-valued differentiable functions on ${\displaystyle M}$ . Then the tangent vector to ${\displaystyle M}$  at a point ${\displaystyle x}$  in the manifold is given by the derivation ${\displaystyle D_{v}:A(M)\rightarrow \mathbb {R} }$  which shall be linear — i.e., for any ${\displaystyle f,g\in A(M)}$  and ${\displaystyle a,b\in \mathbb {R} }$  we have ${\displaystyle D_{v}(af+bg)=aD_{v}(f)+bD_{v}(g)\,.}$ Note that the derivation will by definition have the Leibniz property ${\displaystyle D_{v}(f\cdot g)(x)=D_{v}(f)(x)\cdot g(x)+f(x)\cdot D_{v}(g)(x)\,.}$
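As a small added illustration of the directional-derivative definition above (this example is not part of the original article), take $f(x,y)=x^{2}y$ and $\mathbf{v}=(1,2)$ at the point $\mathbf{x}=(1,1)$. Then

$D_{\mathbf{v}}f(\mathbf{x}) = 1\cdot\frac{\partial f}{\partial x} + 2\cdot\frac{\partial f}{\partial y} = 2xy + 2x^{2}, \qquad D_{\mathbf{v}}f(1,1) = 2 + 2 = 4,$

and one can check directly that the map $f \mapsto D_{\mathbf{v}}f(\mathbf{x})$ is linear and satisfies the Leibniz property listed above, so it defines a tangent vector in the sense of the definition.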
2021-09-21 06:19:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 43, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775469303131104, "perplexity": 564.1705667066069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00047.warc.gz"}
https://www.springerprofessional.de/smoothing-additive-manufactured-parts-using-ns-pulsed-laser-radi/18850360?fulltextView=true
09.02.2021 | Full Research Article | Issue 2/2021 Open Access
Journal: Progress in Additive Manufacturing > Issue 2/2021
Authors: Florian Kuisat, Fernando Lasagni, Andrés Fabián Lasagni

## 1 Introduction

Additive manufacturing (AM) processes are becoming more and more important for industrial applications because complex parts can be manufactured easily. A wide range of materials can be used for AM processes; common metals are Ti64 and Scalmalloy®. Due to their low weight and excellent strength properties, both materials are well suited to a wide range of applications, such as in the aerospace industry [ 1, 2]. The AM components are fabricated layer by layer from three-dimensional models, as opposed to traditional subtractive fabrication technologies. Numerous technologies such as Powder Bed Fusion (PBF) or Direct Energy Deposition (DED) exist for this purpose. Most of these technologies have in common that the powder particles are locally melted and solidified [ 3]. In the case of the PBF process, a laser or electron beam scans and melts the powder layer to create the component. Due to the nature of the fabrication process, the manufactured parts have a relatively high roughness level on their surface, typically between 8 and 30 µm (Ra) for Laser PBF, which in some cases reduces their fatigue behavior and mechanical performance [ 4]. For instance, Sakar et al. demonstrated an improvement in fatigue life of laser-treated AM components of more than 100% compared to their as-built counterparts [ 5]. Other researchers have also shown enhancement of the stress resistance caused by surface post-treatment processes [ 6, 7]. Surface quality is therefore an extremely important characteristic, relevant to improving the technical capacities of these components such as their fatigue lifetime [ 8, 9]. To improve the surface quality of AM parts, further production steps are frequently necessary [ 10]. Therefore, the roughness has to be decreased by means of a surface finishing process. Common surface finishing techniques are milling, blasting and vibration grinding [ 4, 11, 12]. However, the limited ability to treat complex shapes and geometries is often a challenge for these technologies [ 13, 14]. Another flexible post-processing technology for improving the quality of 3D-manufactured parts is laser surface treatment. Laser polishing has gained significant importance in the last 10 years and has several advantages over conventional polishing techniques [ 15–18]. For example, laser-based methods work without applying mechanical forces and deformations and make it possible to polish complex three-dimensional geometries or workpieces made of thin materials [ 19]. Due to the characteristics of the laser treatment process, the surface morphology can be changed by re-melting, without the addition of polishing agents, chemicals or grinding materials [ 20]. Many ongoing research works concentrate on laser-based surface smoothing techniques. Mainly, continuous wave (cw) laser sources are used for macro-applications with remelting depths up to 200 µm. In contrast, pulsed laser sources (pw) are generally used for micro polishing with remelting depths of several micrometers [ 21]. Lambarri et al.
investigated the laser surface smoothing process of nickel-based alloys and were able to show a reduction in roughness as a function of the scanning speed and laser power using cw laser radiation [ 22]. Similarly, Marimuthu et al. showed that cw laser polishing of SLM components can be an effective way to improve the surface quality of Ti-6Al-4V [ 23]. Pulsed laser radiation is partially used for micro polishing processes, where the beam irradiates the material with laser pulses at a fluence level that causes surface melting and ablation. For example, Perry et al. showed a reduction of the average surface roughness Ra from 0.206 to 0.070 µm using a Nd:YAG laser source with 650 ns long pulses [ 24]. Also, Chow et al. have indicated that the surface quality can be improved by adjusting the focal offset position of the laser radiation [ 25]. Many other research studies concerning laser polishing have been published by Propawe and other researchers [ 26–29], reporting that the quality of the smoothing process can be controlled by diverse parameters such as the pulse duration, the laser power or the focal position. In addition, the quality of the smoothed surface depends on the base material and the initial surface topography and roughness. The results obtained so far have shown that laser smoothing can successfully compete with conventional polishing processes. However, only a few publications are available on laser smoothing of the innovative additive manufactured Scalmalloy® or Titanium 64 substrates [ 16, 23–25, 30], which defines the main objective of the present research. In particular, the use of an industrial laser system providing pulses in the nanosecond range for the post-treatment of AM materials has not been sufficiently researched. This work examines the utilization of nanosecond pulsed laser smoothing as an innovative method to improve the surface quality of additive manufactured parts of Scalmalloy® and Titanium 64. An important aspect of the investigation is to determine the achievable surface roughness levels. The experiments were conducted using a nanosecond-pulsed infrared laser source with variable pulse durations between 8 and 200 ns. The topography of the treated samples is investigated using confocal microscopy as well as scanning electron microscopy.

## 2 Experimental procedure

### 2.1 Materials

Specimens of Ti6Al4V, also known as Titanium 64 (Ti64), and Scalmalloy® (Al–Mg–Sc) were used in this study. The samples were produced by laser Powder Bed Fusion, which belongs to the category of additive manufacturing processes. This technology offers the best resolution and accuracy of current metal AM methods. These materials are characterized by high strength and high ductility combined with light weight and are today the most used additively manufactured alloys for aerospace components [ 31, 32]. The Ti6Al4V and Scalmalloy® powders used were supplied by Renishaw and TOYAL Europe, respectively. The chemical composition of the main alloying elements for the Al–Mg–Sc alloy used in this study ranged from 4.5–4.9 Mg, 0.68–0.80 Sc, 0.2–0.4 Zr and 0.2–0.7 Mn (all compositions in wt%). More detailed information can be found in [ 1]. The particle size distribution for this alloy was D10 = 30.6 µm, D50 = 48.0 µm, D90 = 69.1 µm, with an average particle size of 51.0 µm. Particles were mostly spherical, with circularity values (obtained from optical microscopy) between 0.78 and 1.00 for more than 50% of the particles.
For the case of Ti64, the composition of the main alloying elements ranged between 5.5 and 6.75 Al, 3.5–4.5 V (see details in [ 33]). The average particle size was 36.8 µm, with the following particle size distribution: D10 = 24.2 µm, D50 = 35.8 µm, D90 = 61.2 µm. Also in this case, mostly spherical particles were observed, with circularity values between 0.85 and 1.00 for more than 50% of the particles. The samples were manufactured on Renishaw AM250 and RenAM 500 systems for Ti6Al4V and Scalmalloy®, respectively, under argon atmosphere. The layer thickness was set to 30 µm in both cases. Samples were also thermally treated before extraction, at 325 °C for 4 h (annealing/aging) in air atmosphere followed by slow cooling for Scalmalloy®, and at 730 °C for 1.5 h (annealing) in a vacuum with argon atmosphere followed by slow cooling for Ti64. As a result of the manufacturing process, the initial surface roughness (Sa) was 21.20 ± 2.85 µm for Ti64 and 19.58 ± 9.04 µm for Scalmalloy®. Prior to the laser smoothing process, the samples were cleaned with isopropanol to remove dirt particles.

### 2.2 Direct laser smoothing

The laser experiments were carried out using direct laser writing (DLW) technology on an industrial laser system (GF Machining Solutions P 600). The setup is illustrated in Fig. 1a. The DLW workstation uses an ytterbium fiber laser source which emits light at a wavelength of 1064 nm with a maximal output power of 30 W, depending on the pulse duration. The maximal available laser power decreases for shorter pulse durations at constant repetition rate. For instance, the maximal laser power at a frequency of 30 kHz is approximately 30 W for a pulse duration of 200 ns, 14 W for 100 ns and 2.5 W for 8 ns. The pulse duration can be set between 4 and 200 ns. The laser beam was focused on the sample with an f-theta lens with a focal length of 254 mm. This results in a working distance of approx. 280 mm from the laser head. In this study, the highest laser power was used for all pulse durations. The laser fluence F per pulse can be adjusted by changing the laser spot diameter at the irradiated region, which is controlled by the working position. This methodology has already been used by Mai and Lim [ 34] as well as Chow et al. [ 25]. The focused laser beam has a spot diameter of approx. 70 µm, where the highest fluence level can be achieved. Three different working positions were utilized. The laser beam was scanned over the surface using a galvanometer scanner, with a maximal speed of 3.6 m/s. To achieve a homogeneous smooth surface, two different scan strategies were examined: a parallel process strategy and a 90° rotated process strategy, as shown in Fig. 1b, c. The number of scans for both process strategies was also examined and varied between 1 and 10. The pulse overlap in the y-direction and the hatch distance in the x-direction were varied between 80 and 99%; these can be calculated from the pulse separation distance and the laser spot diameter [ 35]. The pulse overlap was adjusted by triggering the laser pulses at a constant 30 kHz and changing the scan speed (30 mm/s up to 3600 mm/s). Using Eq. ( 1), it is possible to calculate the laser fluence ( F) at the laser spot: $$F = \frac{P_{L}}{f\,A},$$ (1) where P L is the average laser power, f is the repetition rate (or frequency) and A is the irradiated spot area. The parameters used in this study are summarized in Table 1.
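To make Eq. (1) and the overlap definition concrete before the parameter overview in Table 1, the short sketch below computes the per-pulse fluence from the average power, repetition rate and spot diameter, and the pulse-to-pulse overlap from the scan speed. The helper names and the example call are illustrative assumptions, not code or data from the study.

```python
import math

def fluence_J_per_cm2(avg_power_W, rep_rate_Hz, spot_diameter_um):
    """Per-pulse fluence F = P_L / (f * A), Eq. (1), with A the spot area."""
    area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2.0) ** 2   # um -> cm
    return avg_power_W / (rep_rate_Hz * area_cm2)

def pulse_overlap_percent(scan_speed_mm_s, rep_rate_Hz, spot_diameter_um):
    """Overlap from the pulse separation (scan speed / repetition rate) and spot size."""
    separation_um = scan_speed_mm_s / rep_rate_Hz * 1e3          # mm -> um
    return 100.0 * (1.0 - separation_um / spot_diameter_um)

# Example with values quoted in the text: 30 W, 30 kHz, 70 um focused spot.
print(fluence_J_per_cm2(30.0, 30e3, 70.0))       # ~26 J/cm2, same order as the ~23-24 J/cm2 reported
print(pulse_overlap_percent(105.0, 30e3, 70.0))  # 105 mm/s at 30 kHz -> 95 % overlap
```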
Table 1: Overview of selected parameters

| Parameter | Value/range |
| --- | --- |
| Laser fluence (J/cm²) | 0.57–24.0 |
| Focus offset position (mm) | 0.0, 3.0, 6.0 |
| Frequency (kHz) | 30 |
| Pulse duration (ns) | 8, 20, 50, 100, 200 |
| Overlap/hatch distance (%) | 0–99 |
| Number of scans | 1–10 |

For the smoothing tests, matrices with areas of 2 × 2 mm were processed for each parameter set. All experiments were performed in air, at normal conditions of pressure and temperature.

### 2.3 Surface characterization

For the examination of the morphology of the laser-treated surfaces, a confocal microscope (Sensofar S Neox) with 20× magnification was used. The roughness parameter Sa (arithmetical mean height), which is an extension of Ra to a surface, was measured according to ISO 25178. For statistical analyses, different positions in each treated area were measured. A Scanning Electron Microscope (ZEISS Supra 40 VP) was used to analyze the surface topology of the samples at an operating voltage of 5.0 kV.

## 3 Results and discussion

Using the infrared DLW setup, Ti64 and Scalmalloy® substrates were laser treated to examine the influence of the processing parameters on the surface roughness. Firstly, the effect of the pulse duration and the laser fluence was investigated, as shown in "Effect of pulse duration and focus position on the surface roughness". The laser fluence was varied by changing the offset position of the DLW configuration. A second set of experiments was performed with the previously selected focus offset position and pulse duration for both materials. In this case, the influence of the pulse overlap, moving direction and number of scans on the surface roughness was tested. These results are shown in "Surface smoothing by changing the number of passes and moving directions". Finally, the ablation caused by the laser treatment process with the used ns-pulses is described in detail in "Material removal due to the smoothing process".

### 3.1 Effect of pulse duration and focus position on the surface roughness

In the first set of experiments, the effects of the pulse duration and of the amount of energy applied to the workpiece on the surface roughness were investigated. Three different focus positions were chosen to set the spot diameter and, therefore, the energy which was applied to the material. Depending on the position where the beam interacts with the surface, the laser can either heat, melt or ablate the material. The process strategy follows the approach described in detail by Chow et al. [ 25]. The laser fluence also depends on the pulse duration, which was varied between 8 and 200 ns; shorter pulse durations are accompanied by smaller laser fluences. A summarized overview of the parameters used here is shown in Table 2.

Table 2: Overview of parameter screening set 1

| Focus position (mm) | Spot diameter (µm) | Laser fluence (J/cm²) |
| --- | --- | --- |
| 0.0 | 70 | 2.1 (8 ns)–23.4 (200 ns) |
| 3.0 (position 1) | 90 | 1.3 (8 ns)–14.1 (200 ns) |
| 6.0 (position 2) | 135 | 0.5 (8 ns)–6.2 (200 ns) |

Based on the state of the art, it is clear that the choice of parameters plays an important role in the quality of the final surface finish. In this regard, the change of Sa obtained by varying the focus position between 0 and 6 mm as well as the pulse duration between 8 and 200 ns is shown in Fig. 2. As can be observed, these parameters strongly influenced the surface topography, depending on the material. For a better understanding of the effect of the laser processes on the surface roughness, the initial surface roughness is marked in green in Fig. 2, where the dashed lines denote the standard deviation.
The large observed standard deviation for the reference surfaces can be explained by the quite irregular topography of these specimens. The obtained results show that the pulse duration is the most relevant parameter influencing the surface roughness, as can be observed for Ti64 in Fig. 2a and for Scalmalloy® in Fig. 2b. The treatment of the Ti-alloy (Fig. 2a) follows a comparable scheme for all pulse durations, depending on the focus positions. No significant variation in surface roughness was observed for pulse durations below 20 ns. This suggests that the applied energy was insufficient and the material was only heated. By increasing the pulse duration, the surface roughness begins to decrease, which can be explained by the higher energy applied to the material. As a result of the higher energy input, the material begins to melt and the initial high roughness peaks are reduced by re-melting. This phenomenon can be observed at all examined focus offset positions. By increasing the pulse duration up to 100 ns, the roughness of the Ti64 samples could be significantly reduced. For example, the Sa value was reduced from 21.20 to 7.9 µm when using focus offset position 2. In general, it can be observed that a focus offset has a positive effect on the roughness, which can be explained by the increased spot diameter and thus a decreased applied energy per area. A further increase of the pulse duration to 200 ns led to an increase of the roughness, which indicates that the amount of energy introduced into the material was too high, producing a larger volume of molten material and even starting an ablation process. For this pulse duration (200 ns), the strongest increase of the surface roughness was observed on the sample treated at focus offset position 1. In this case, the surface roughness increases from 21.2 to 41.7 µm. In consequence, the optimal process condition for smoothing the surface of the titanium alloy was a pulse duration of 100 ns and a focus offset position of 6.0 mm, and these settings were used for the rest of the experiments performed in this study. A different behavior was observed for Scalmalloy®. In this case, the effectiveness of the roughness reduction using short pulse durations (up to 100 ns) is low. As can be seen in Fig. 2b, all measured surface roughness values were in the range of the statistical standard deviation of the reference sample (from 10.5 to 28.6 µm). A positive effect on the surface roughness was only visible when longer pulses (200 ns) were used. This leads to the assumption that the thermal energy input was too low to sufficiently affect the material with pulse durations below 200 ns. For instance, at the focus position, the surface roughness decreased from 19.6 to 8.6 µm. This suggests that the complete irradiated area was remelted using the focused beam. As a result, the molten material flows into the valleys and reduces the roughness. However, for the focus offset position 2, the roughness was not affected, while at focus position 1, the surface roughness increased to 38.4 µm. This effect can be explained by the fact that the applied thermal energy was too low at the focus offset position 2 to affect the material; the material was neither ablated nor remelted. When using the focus offset position 1, the thermal energy introduced into the material was higher, which resulted in more molten material.
Nevertheless, the applied energy was not sufficient to remelt the roughness peaks and thus reduce the roughness. It also seems that the valleys of the material were affected by the laser and material craters were created. The created craters and the initial roughness peaks had a negative effect on the roughness, which resulted in an increase in the roughness value. The optimal parameters in these experiments for Scalmalloy® were a pulse duration of 200 ns and the focused laser spot, without offset (0.0 mm). In summary, the first experiments performed have shown that laser pulses with longer durations are more effective for reducing surface roughness: 100 ns and 200 ns for Ti64 and Scalmalloy®, respectively. Based on these results, the above-mentioned pulse durations were used in the rest of this study.

### 3.2 Surface smoothing by changing the number of passes and moving directions

After the preliminary laser treatment experiments described in Sect. 3.1, a second set of experiments was performed. In this case, a large number of parameter variations were realized to find a correlation between the pulse-to-pulse overlap (OV = 80–99%) and the scanning strategy (parallel and 90° rotated, see Fig. 1) with the aim of reducing the surface roughness. Figure 3 shows representative 3D images of the surface topography of both materials depending on the pulse overlap (OV). The surface topography images for the Ti64 (a–c) and Scalmalloy® (d–f) samples provide evidence that the surface roughness can be significantly influenced by the laser process. In all cases, 10 passes were used. The confocal microscope images of Fig. 3 show that, for example, for a pulse overlap of 99% a very rough topography is obtained in both materials, which can be explained by the large amount of molten material produced due to the very high cumulated energy density (see Fig. 3a, d). In contrast, when the pulse overlap was lower, the amount of molten material was reduced and different surface conditions, depending on the material, were observed, as shown in Fig. 3b, c, e–f. Further evaluations were done to quantify the effects in detail. The measured results in terms of varying pulse-to-pulse distance and movement strategy are summarized in Fig. 4a, b for Ti64 and in Fig. 4c, d for Scalmalloy®. The measured surface roughness, or arithmetical mean height Sa, illustrates the strong effect of the laser treatment. As can be seen in the figure, the highest pulse overlap (99%) led to the highest surface roughness (see confocal images in Fig. 3a, d for both materials). In this case, the amount of molten material strongly increased due to the high pulse overlap and thus the high amount of cumulated energy. This effect was observed for all numbers of performed scans (passes). A different behavior was observed for smaller pulse overlaps. In this case, the effectiveness of roughness reduction using pulse overlaps between 80 and 95% was higher. As can be seen in Fig. 4, many measured roughness values were below the statistical standard deviation of the reference samples. In particular, the surface roughness strongly decreased with the number of passes and the lowest value was achieved after 10 passes. Clearly, the most important observation is that both materials can be smoothed, but with different parameters. In the case of Ti64, the best results were obtained at 100 ns (as explained in the previous section) and a laser fluence of 3.1 J/cm².
In contrast, for Scalmalloy®, the optimal conditions were 200 ns pulses and a significantly higher laser fluence of 23.4 J/cm². These noticeable differences can be attributed to the different optical and thermal properties of the used materials, such as the reflectivity and thermal conductivity. For example, the reflectivity at normal incidence R (φ = 0) is approximately 0.55 for pure titanium and 0.95 for pure aluminum in the near-infrared region [ 36]. Due to the higher reflection and thus lower absorption of the aluminum alloy (Scalmalloy®), more energy per unit area is required to treat the material. In addition, the higher thermal conductivity of Scalmalloy® is responsible for a longer thermal diffusion length, and thus higher fluences are needed to reach higher temperatures at the material surface. From Fig. 4a, b, it can also be seen that in the case of Ti64, the smoothest surface was achieved using a pulse overlap of 95%. Based on the surface topography analysis, it was found that for lower pulse overlaps (90%) and fewer passes, the surface roughness was only marginally affected by the laser treatment. An important effect on the roughness reduction was only visible after 10 passes. In this case, the surface roughness was decreased from 21.20 to 14.20 µm, using the "parallel" strategy (10 passes and 90% overlap). This means that each laser scan partially remelted the material surface, contributing positively to the reduction of the surface roughness. Further improvements were observed for a pulse-to-pulse overlap of 95%. In this case, the roughness decreases continuously as a function of the number of passes. This leads to the assumption that, due to the higher cumulated energy, a continuous smoothing process takes place since more material peaks were remelted with each pass. The average surface roughness for the mentioned overlap after 10 passes was 7.22 ± 2.31 µm and 3.45 ± 1.30 µm for the two strategies used, parallel (Fig. 4a) and rotated (Fig. 4b), respectively. In consequence, the smoothest surface was obtained with a pulse overlap of 95% and 10 passes with the rotated scanning strategy, which means a reduction of 84% of the initial surface roughness. As indicated before, no positive effect could be observed when further increasing the pulse overlap (e.g., 99%). In this case, these high overlaps significantly increased the cumulated energy and, in consequence, a significant amount of material can be molten or ablated (note that 95% overlap denotes that 20 pulses reached the same positions, while for 99%, 100 pulses provide the laser energy to the material surface). In contrast, in the case of Scalmalloy®, different processing conditions were found to be more satisfactory for reducing the surface roughness. First of all, the standard deviation of the initial surface roughness of Scalmalloy® was significantly higher compared to Ti64, as can be seen in Fig. 4c, d (see green area). As mentioned before, using a pulse overlap of 99% resulted in a strong increase in the roughness. This is an indication that the surface was strongly affected by the laser process. The higher pulse overlap results in a larger amount of molten material and even in local material ablation, which leads to additional features on the surface. Similarly to Ti64, when using pulse overlaps of 90% and 95%, no significant variation in the roughness was observed, since all evaluated conditions are in the range of the standard deviation of the initial surface roughness.
However, the lowest surface roughness for both used strategies was achieved when using a pulse overlap of 80%, which means that at 90% and 95% the laser process is also modifying the Scalmalloy® surface. This leads to the assumption that the material was mainly heated and only sparsely melted or ablated. Furthermore, when increasing the number of passes (up to 10 runs) at a pulse overlap of 90% or 95%, an increase in surface roughness was observed, which leads to the assumption that not only were the roughness peaks remelted, but also that material was ablated and melt bulges were created. For a pulse overlap of 80%, the surface roughness was reduced by increasing the number of passes up to 10. In the latter case, the measured surface roughness was significantly lower than the initial roughness. The average surface roughness after 10 runs for the two scanning strategies was 6.84 ± 3.73 µm (Fig. 4c) and 8.60 ± 3.58 µm (Fig. 4d), respectively. To examine more closely the difference between the initial surface and the treated surface, SEM images of both states and materials were considered. The images of the initial and selected laser-treated surfaces are shown in Fig. 5. The laser-treated samples with 10 scans show the best outcomes, as explained above. Comparing the original surface (Fig. 5a, c) and the treated surface (b, d), it is clear that the laser process has strongly modified the surface of both materials. As can be observed, the untreated surfaces are characterized by drop-shaped roughness peaks, which is typical of non-treated surfaces in the as-manufactured condition. They have the largest roughness levels, mainly caused by the partially melted powder particles attached to the surfaces. In contrast, the topography of the treated surfaces is very different. In the case of Ti64 (Fig. 5a, b), it is visible that the treated surface shows a very flat and homogeneous surface topography. In addition, very few melt drops with sizes below 2–5 µm are observed. In the case of Scalmalloy®, it is also visible that large droplets in the untreated material were removed (Fig. 5c), but the final surface topography (Fig. 5d) is quite different compared to Ti64. A probable explanation for the observed topography is related to nanoparticles and clusters of molten metal that are deposited from the ablation cloud formed during the ns laser process. Such an effect has already been observed in other aluminum-based alloys, as described by Boinovich et al. [ 37]. Furthermore, other processes also take place during multi-pulse laser treatment, such as high-temperature interactions between the melted surface layers and droplets of molten material as well as its suboxides in the laser ablation plume, which could favor the formation of this topography. Based on the experimental data presented before, several statements can be made about adjusting laser parameters and the general laser smoothing process. Firstly, it is clear that an improvement of the surface quality, in terms of reducing the mean roughness Sa, can be achieved by adjusting the pulse-to-pulse overlap. The results also indicate that several passes of the laser beam over the same area can have a positive effect on surface flattening if the correct overlap is used. Thus, in general, it can be said that it is more effective to work with a lower pulse overlap but more laser scan runs. A further observation is that the surface roughness was not significantly influenced by the scan strategy.
In addition, it has to be mentioned that the standard deviation of the surface roughness after the laser process is significantly lower than the deviation of the initial surface roughness, which was observed for both materials. This means that the roughness peaks have been remelted and that the surface has been leveled.

### 3.3 Material removal due to the smoothing process

After evaluating and defining the most relevant processing parameters for the smoothing process of the different materials, the surface levels of the untreated and treated areas were measured (ablation depth). A graphical representation of the increase of the ablation depth with the number of passes is shown in Fig. 6a. To quantify the ablation depth, the height difference between the untreated and laser-treated areas was measured. An example is shown in Fig. 6b for Ti64 treated with an overlap of 95% and 10 passes. From Fig. 6a, it can be seen that the ablation depth is relatively small for one or two laser scans (< 2.5 µm). However, a significant increase in the depth can be observed for more than two passes. The increased ablation depth is clearly related to the cumulated applied energy due to the repeated passes. Thus, these results indicate that the material was not only melted but also ablated. The maximum obtained ablation depths were 44.6 ± 7.5 µm for Ti64 and 60.4 ± 7.6 µm for Scalmalloy®. These ablation depths were produced after 10 passes and pulse durations of 100 ns and 200 ns, respectively. This effect is nearly independent of the movement strategy, as mentioned in the previous section. In general, it can be said that the ablation depth is slightly higher for Scalmalloy® than for Ti64, which can be attributed to the higher laser fluence used (3.1 J/cm² and 23.4 J/cm² for Ti64 and Scalmalloy®, respectively). In consequence, the material removal due to the laser smoothing process has to be considered in the design of the 3D part to match the required dimensions.

## 4 Summary and conclusions

In this study, experimental research was carried out to better understand the smoothing process of additive manufactured components of Ti64 and Scalmalloy® using ns-laser pulses. The following most relevant findings can be drawn from this work:

(i) First, the capability of nanosecond pulses for reducing the surface roughness of additive manufactured Ti64 and Scalmalloy® alloys was demonstrated.
(ii) The achieved surface roughness significantly depended on the material and on the parameters used, such as the applied fluence, pulse duration, pulse-to-pulse overlap and number of passes.
(iii) The laser treatments permitted the reduction of the surface roughness from 21.20 ± 2.85 µm to 3.45 ± 1.30 µm for Ti64 and from 19.58 ± 9.04 µm to 6.84 ± 3.73 µm for Scalmalloy®, which corresponds to reductions of 84% and 65%, respectively.
(iv) It was also shown that not only the roughness can be reduced, but also its standard deviation (by more than 50%), which means that the larger particles are molten or ablated. This leads to a more homogeneous surface topography and could consequently increase the mechanical performance.
(v) It was observed that the pulsed laser treatment method partially ablates the treated materials, and that the ablation depth strongly depends on the number of passes.
In consequence, the material removal due to the laser smoothing process has to be considered in the design of the 3D part.

In future investigations, the effect of laser polishing on the alloy fatigue life will be explored, also assessing the microstructural differences of the heat-affected zone in comparison with the bulk material.

## Compliance with ethical standards

### Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-04-22 16:12:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6319591403007507, "perplexity": 1278.4760259707411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00075.warc.gz"}
https://mathoverflow.net/questions/414033/do-classifying-spaces-determine-categories-of-principal-bundles
# Do classifying spaces determine categories of principal bundles? If $$X$$ is a topological space, $$G$$ a topological group and $$E G \to BG$$ a universal bundle, isomorphism classes of numerable principal $$G$$-bundles over $$X$$ are in one-to-one correspondence with homotopy classes of maps $$X \to BG$$. I am curious if this fact can be refined to obtain a homotopic description of the category of principal $$G$$-bundles over $$X$$. In other words, is there some category of maps $$X \to BG$$ (or a variation of this) equivalent to the category of principal $$G$$-bundles over $$X$$? • When $G=(\mathbb R,+)$, then $BG$ is contractible. For example, we may take $BG$ to be a single point. You can't recover the category of principal bundles for the group $\mathbb R$ from the data of a single point. Jan 17 at 12:52 • Category of principal $\mathbb R$-bundles over $X$ is equivalent to a category with one object (trivial bundle $X \times \mathbb R$). Automorphisms of $X \times \mathbb R$ are identified with maps $X \to \mathbb R$. So some description in terms of maps from $X$ exists. Of course it can't be recovered from data of a single point, but maybe so from the universal bundle $\mathbb R \to \mathrm{pt}$? Jan 17 at 13:16 • If you interpret the whole question homotopy theoretically, then yes - the mapping space $map(X,BG)$ can be viewed as an $\infty$-groupoid, and this is the $\infty$-groupoid of principal $G$-bundles (morphisms are morphisms of $G$-bundles, and then you allow homotopies between those and so on and so forth). I'm not sure how this translates to the point-set topological setting though, so the answer will probably depend on what you're looking for Jan 17 at 14:28 • I think it's not going to work in a purely topological setting. But one also doesn't have to go to infinity: consider the classifying space to be the topological groupoid $BG$ with a single object. Then, principal $G$-bundles are the same as continuous anafunctors $X \to BG$. So this is your category of maps. Jan 17 at 14:33 • @MaximeRamzi This formalism works for groups in any topos, and in particular, in the topos of condensed sets (modulo cardinality issues), so if the topological space is nice enough (say, compactly generated weak Hausdorff), I think that this would lead to a satisfactory answer. – Z. M Jan 17 at 14:46
2022-05-22 16:36:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745688199996948, "perplexity": 167.37166730028005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00726.warc.gz"}
http://www.cmstatistics.org/RegistrationsV2/CMStatistics2018/viewSubmission.php?in=917&token=p368onnn1pq83opon0s57os14r29s294
CMStatistics 2018 - View Submission B0917
Title: Novel model-free estimation approaches of linear dimension reduction
Authors: Lukas Fertl - TU Vienna (Austria) [presenting]
Abstract: The purpose is to introduce a new way of estimating the dimension reduction matrix $B$ in the dimension reduction model $y=f(B' x) + \epsilon$, where $B$ is a $p \times d$ ($d < p$) unknown matrix of parameters and $\epsilon$ is a random error independent of $x$. The idea is based on using the variance of $y$, conditional on $x$ lying in the span of a direction vector $v$, as an estimating equation. This estimator falls into the class of semi-parametric methods, and we refer to it as the conditional variance estimator. The performance of the estimator is competitive compared to currently used ones. Its main advantage is that it is more robust across a wide range of distributions for $x$ and nonlinear $f()$. Extensions to other estimation or testing problems will also be presented. Furthermore, it can also be used when the sample size is smaller than the number of covariates ($n < p$).
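As a purely illustrative aid (the abstract does not specify an algorithm, so no estimator is implemented here), the following sketch simulates data from the stated model $y=f(B'x)+\epsilon$; the dimensions, link function and noise level are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, n = 10, 2, 500                            # assumed: ambient dim, reduced dim, sample size
B, _ = np.linalg.qr(rng.normal(size=(p, d)))    # unknown p x d dimension reduction matrix

x = rng.normal(size=(n, p))                     # predictors
z = x @ B                                       # reduced predictors B'x
f = lambda z: np.sin(z[:, 0]) + z[:, 1] ** 2    # an arbitrary nonlinear link (assumption)
y = f(z) + 0.1 * rng.normal(size=n)             # y = f(B'x) + eps, eps independent of x

# Any estimate Bhat of span(B), e.g. from a conditional-variance-based criterion,
# could then be scored via the projection distance ||B B' - Bhat Bhat'||_F.
```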
2020-08-06 19:33:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7503810524940491, "perplexity": 245.46182075314192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737019.4/warc/CC-MAIN-20200806180859-20200806210859-00453.warc.gz"}
http://fluidsengineering.asmedigitalcollection.asme.org/issue.aspx?journalid=122&issueid=936147
### Research Papers: Flows in Complex Systems

J. Fluids Eng. 2017;139(6):061101-061101-13. doi:10.1115/1.4035876. This paper considers the inherent unsteady behavior of the three-dimensional (3D) separation in the corner region of a subsonic linear compressor cascade equipped with 13 NACA 65-009 profile blades. Detailed experimental measurements were carried out at different sections in the spanwise direction, acquiring simultaneously unsteady wall pressure signals on the surface of the blade and velocity fields by time-resolved particle image velocimetry (PIV) measurements. Two configurations of the cascade were investigated, with incidences of 4 deg and 7 deg, both at $Re=3.8\times 10^{5}$ and $Ma = 0.12$ at the inlet of the facility. The intermittent switch between two statistically preferred sizes of separation, large and almost suppressed, is called bimodal behavior. The present PIV measurements provide, for the first time, time-resolved flow visualizations of the separation switch with an extended field of view covering the entire blade section. Random large structures of the incoming boundary layer are found to destabilize the separation boundary. The recirculation region, therefore, enlarges when these high vorticity perturbations blend with larger eddies situated in the aft part of the blade. Such a massive detached region persists until its main constituting vortex suddenly breaks down and the separation almost completely vanishes. The increase of the blockage during the separation growth phase appears to be responsible for this mechanism. Consequently, a proper orthogonal decomposition (POD) analysis is carried out to decompose the flow modes and to help clarify the underlying cause-effect relations, which dominate the dynamics of the present flow scenario.

J. Fluids Eng. 2017;139(6):061102-061102-14. doi:10.1115/1.4035877. A numerical (Reynolds-averaged Navier–Stokes (RANS)) investigation into the role of span and wing angle in determining the performance of an inverted wing in ground effect located forward of a wheel is described, using a generic simplified wheel and NACA 4412 geometry. The complex interactions between the wing and wheel flow structures are investigated to explain the increases or decreases in the downforce and drag produced by the wing and wheel when compared to the equivalent body in isolation. Geometries that allowed the strongest primary wing vortex to pass along the inner face of the wheel resulted in the most significant reductions in lift and drag for the wheel. As a result, the wing span and angle combination that would produce the most downforce, or least drag, in the presence of the wheel does not coincide with what would be assumed if the two bodies were considered only in isolation, demonstrating the significance of optimizing these two bodies in unison. Topics: Vortices, Wheels, Wings

J. Fluids Eng. 2017;139(6):061103-061103-13. doi:10.1115/1.4036084. The effectiveness of ventilated helmets in providing thermal comfort to a motorcycle rider is studied. Computational fluid dynamics (CFD) simulations of the human thermoregulation system and the air flow in the air gap of a full-face motorcycle helmet are carried out. The thermal comfort of a rider is predicted using apparent temperature (AT) and wet-bulb global temperature (WBGT) heat indices.
The effect of an increase in ambient temperature and relative humidity (RH) of air on the air flow and temperature in the region above the head is studied to predict the thermal comfort of the rider wearing full-face helmets. The effect of increasing the air gap between the head and the helmet is also studied. The results are then compared with the conditions when the rider is not wearing helmet. It is observed that the ventilated helmet is effective in providing thermal comfort to the rider only if the ambient air temperature is less than normal body temperature. For air temperature higher than the body temperature, vents do not provide any cooling to the head and the nonventilated helmet is more comfortable. Furthermore, CFD simulations are performed to investigate the effect of increase in RH in the ambient air on the thermal comfort of the rider. The increase in RH of air from 50% to 90% at a fixed ambient air temperature leads to an increase in AT and WBGT, indicating reduced thermal comfort of the rider. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061104-061104-19. doi:10.1115/1.4035952. Hybrid bearings are mostly used in high-speed and load situations due to their better stability and loading capacity. They are typically designed with recess grooves to enhance both static and dynamic performance of the bearing. Previous theoretical studies on the influence of the recess geometrical shapes often utilize the Reynolds equation method. The aim of this paper is to analytically study the influence of various recess geometrical shapes on hybrid journal bearings. A three-dimensional (3D) computational fluid dynamics (CFD) model of a hybrid journal bearing is built, and a new method of response surface model is employed to determine the equilibrium position of the rotor. Based on the response surface model, an optimization scheme is used to search around the equilibrium position to get an accurate solution. The current analysis includes the geometries of rectangular, circular, triangular, elliptical, and annular shaped recesses. All these different shapes are studied assuming the same operating conditions, and static properties are used as the indices of the bearing performance. This study proposes a new design process using a CFD method with the ability of calculating the equilibrium position. The flow rate, fluid film thickness, and recess flow pattern are analyzed for various recess shapes. The CFD model is validated by published experimental data. The results show that the response surface model method is fast and robust in determining the rotor equilibrium position, even though a 3D-CFD model is utilized. The results suggest that recess shape is a dominant factor in hybrid bearing design. Commentary by Dr. Valentin Fuster ### Research Papers: Fundamental Issues and Canonical Flows J. Fluids Eng. 2017;139(6):061201-061201-9. doi:10.1115/1.4035636. In this paper, the effect of transverse magnetic field on a laminar liquid lead lithium flow in an insulating rectangular duct is numerically solved with three-dimensional (3D) simulations. Cases with and without buoyancy force are examined. The stability of the buoyant flow is studied for different values of the Hartmann number from 0 to 120. We focus on the combined influence of the Hartmann number and buoyancy on flow field, flow structure in the vicinity of walls and its stability. Velocity and temperature distributions are presented for different magnetic field strengths. 
It is shown that the magnetic field damps the velocity and leads to flow stabilization in the core fluid and generates magnetohydrodynamic (MHD) boundary layers at the walls, which become the main source of instabilities. The buoyant force is responsible of the generation of vortices and enhances the velocities in the core region. It can act together with the MHD forces to intensify the flow near the Hartmann layers. Two critical Hartmann numbers (Hac1 = 63, Hac2 = 120) are found. Hac1 is corresponding to the separation of two MHD regimes: the first one is characterized by a core flow maximum velocity, whereas the second regime is featured by a maximum layer velocity and a pronounced buoyancy effect. Hac2 is a threshold value of electromagnetic force indicating the onset of MHD instability through the generation of small vortices close to the side layers. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061202-061202-14. doi:10.1115/1.4035947. This study reports an efficient reduction of the drag exerted by a flow on a cylinder when the former is forced with a plasma actuator. A three-electrode plasma device (TED) disposed on the surface of the body is considered, and the effect of the actuation frequency and amplitude is studied. Particle image velocimetry (PIV) measurements provided a detailed information that was processed to obtain the time-averaged drag force and to compare the performances of TED actuator and the canonical dielectric discharge barrier actuator. For the Reynolds number considered (Re = 5500), excitations with the TED actuator were more efficient, achieving drag reductions that attained values close to 40% with high net energy savings. The reduction of coherent structures using the instantaneous vorticity fields and a clustering technique allowed us to gain insight into the physical mechanisms involved in these phenomena. This highlights that the symmetrical forcing of the wake flow at its resonant frequency with the TED promotes symmetrical vorticity patterns which favor drag reductions. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061203-061203-9. doi:10.1115/1.4035945. Aspect ratio is an important parameter in the study of flow through noncircular microchannel. In this work, three-dimensional numerical study is carried out to understand the effect of cross aspect ratio (height to width) on flow in diverging and converging microchannels. Three-dimensional models of the diverging and converging microchannels with angle: 2–14 deg, aspect ratio: 0.05–0.58, and Reynolds number: 130–280 are employed in the simulations with water as the working fluid. The effects of aspect ratio on pressure drop in equivalent diverging and converging microchannels are studied in detail and correlated to the underlying flow regime. It is observed that for a given Reynolds number and angle, the pressure drop decreases asymptotically with aspect ratio for both the diverging and converging microchannels. At small aspect ratio and small Reynolds number, the pressure drop remains invariant of angle in both the diverging and converging microchannels; the concept of equivalent hydraulic diameter can be applied to these situations. Onset of flow separation in diverging passage and flow acceleration in converging passage is found to be a strong function of aspect ratio, which has not been shown earlier. The existence of a critical angle with relevance to the concept of equivalent hydraulic diameter is identified and its variation with Reynolds number is discussed. 
Finally, the effect of aspect ratio on fluidic diodicity is discussed which will be helpful in the design of valveless micropump. These results help in extending the conventional formulae made for uniform cross-sectional channel to that for the diverging and converging microchannels. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061204-061204-9. doi:10.1115/1.4035944. The compressible Rayleigh–Taylor (RT) instability is studied by performing a suite of large eddy simulations (LES) using the Miranda and Ares codes. A grid convergence study is carried out for each of these computational methods, and the convergence properties of integral mixing diagnostics and late-time spectra are established. A comparison between the methods is made using the data from the highest resolution simulations in order to validate the Ares hydro scheme. We find that the integral mixing measures, which capture the global properties of the RT instability, show good agreement between the two codes at this resolution. The late-time turbulent kinetic energy and mass fraction spectra roughly follow a Kolmogorov spectrum, and drop off as k approaches the Nyquist wave number of each simulation. The spectra from the highest resolution Miranda simulation follow a Kolmogorov spectrum for longer than the corresponding spectra from the Ares simulation, and have a more abrupt drop off at high wave numbers. The growth rate is determined to be between around 0.03 and 0.05 at late times; however, it has not fully converged by the end of the simulation. Finally, we study the transition from direct numerical simulation (DNS) to LES. The highest resolution simulations become LES at around t/τ ≃ 1.5. To have a fully resolved DNS through the end of our simulations, the grid spacing must be 3.6 (3.1) times finer than our highest resolution mesh when using Miranda (Ares). Commentary by Dr. Valentin Fuster ### Research Papers: Multiphase Flows J. Fluids Eng. 2017;139(6):061301-061301-8. doi:10.1115/1.4035928. A transport-equation-based homogeneous cavitation model previously assessed and validated against experimental data is used to investigate and explain the efficiency alteration mechanisms in Kaplan turbines. On the one hand, it is shown that the efficiency increase is caused by a decrease in energy dissipation due to a decreased turbulence production driven by a drop in fluid density associated with the cavitation region. This region also entails an increase in torque, caused by the modification of the pressure distribution throughout the blade, which saturates on the suction side. On the other hand, the efficiency drop is shown to be driven by a sharp increase in turbulence production at the trailing edge. An analysis of the pressure coefficient distribution explains such behavior as being a direct consequence of the pressure-altering cavitation region reaching the trailing edge. Finally, even though the efficiency alteration behavior is very sensitive to the dominant cavitation type, it is demonstrated that the governing mechanisms are invariant to it. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061302-061302-12. doi:10.1115/1.4035946. Chemical reactors, air lubrication systems, and the aeration of the oceans rely, either in part or in whole, on the interaction of bubbles and their surrounding liquid. Even though bubbly mixtures have been studied at both the macroscopic and bubble level, the dissipation field associated with an individual bubble in a shear flow has not been thoroughly investigated. 
Exploring the nature of this phenomenon is critical not only when examining the effect a bubble has on the dissipation in a bulk shear flow but also when a microbubble interacts with turbulent eddies near the Kolmogorov length scale. In order to further our understanding of this behavior, this study investigated these interactions both analytically and experimentally. From an analytical perspective, expressions were developed to model the dissipation associated with the creeping flow fields in and around a fluid particle immersed in a linear shear flow. Experimentally, tests were conducted using a simple test setup that corroborated the general findings of the theoretical investigation. Both the analytical and experimental results indicate that the presence of bubbles in a shear flow causes elevated dissipation of kinetic energy. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061303-061303-15. doi:10.1115/1.4035929. A novel scaling law for the tip vortex cavitation (TVC) noise was determined, employing the Rankine vortex model, the Rayleigh–Plesset equation, the lifting surface theory, the boundary layer effect, and the number of bubbles generated per unit time $(N0)$. All terms appearing in the final derived scaling law are well known three-dimensional (3D) lifting surface parameters, except for $N0$. In this study, the dependence of $N0$ with inflow velocity and hydrofoil dimension is investigated experimentally while trying to retain the same TVC patterns among different experimental conditions. Afterward, the effect of $N0$ on the TVC noise is analyzed. Optimal TVC observation conditions are determined from consideration of cavitation number and Reynolds number of two comparable conditions. Two geometrically scaled hydrofoils are concurrently placed in a cavitation tunnel for the hydrofoil size variation experiment. Wall effects and flow field interaction are prevented with the aid of computational fluid dynamics. Images taken with a high‐speed camera are used to count $N0$ by visual inspection. The noise signals at all conditions are measured and an acoustic bubble counting technique, to supplement visual counting, is devised to determine $N0$ acoustically from the measured noise data. The broad-band noise scaling law incorporating $N0$ and the International Towing Tank Conference (ITTC) cavitation noise estimation rule for hydrofoil are both applied to estimate the TVC noise level for comparison with the measured noise level. The noise level estimated by the broad-band noise scaling law accounting for the acoustically estimated $N0$ gives the best agreement with the measured noise level. Commentary by Dr. Valentin Fuster J. Fluids Eng. 2017;139(6):061304-061304-6. doi:10.1115/1.4035949. Flow-induced vibration of hydrofoils affects pressure pulsations on their surfaces and influences cavitation inception and desinence. As these pulsations depend on the hydrofoil material, cavitation inception and desinence numbers for hydrofoils of the same shape made from different metals can be substantially different. This conclusion is based on the comparison of the multistep numerical analysis of fluid–structure interaction for hydrofoils Cav2003 with earlier obtained experimental data for them. The material impact on cavitation must be taken into account in future experiments. Commentary by Dr. Valentin Fuster
2017-06-27 07:06:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3929539918899536, "perplexity": 1185.7246913171946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321025.86/warc/CC-MAIN-20170627064714-20170627084714-00393.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-25-capacitance-problems-page-744/67e
# Chapter 25 - Capacitance - Problems - Page 744: 67e $120\ V$ #### Work Step by Step The voltage across $C_2$ can be found using the formula: $V_2 =\frac{q_2}{C_2}$ Because the capacitors are arranged in series, they carry the same charge, so $q_1 =q_2 =480\ \mu C$. Thus: $V_2 =\frac{480\ \mu C}{4\ \mu F} = 120\ V$
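A quick numeric check of this series-capacitor step (a minimal sketch; the values come from the problem, the helper name is my own):

```python
def voltage_across(charge_coulombs, capacitance_farads):
    """V = q / C for a single capacitor."""
    return charge_coulombs / capacitance_farads

# Capacitors in series carry the same charge.
q = 480e-6   # 480 microcoulombs
c2 = 4e-6    # 4 microfarads
print(voltage_across(q, c2))  # 120.0 volts
```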
2018-12-12 10:40:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7386374473571777, "perplexity": 1251.6587571552416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00334.warc.gz"}
https://web2.0calc.com/questions/write-3-10-as-an-entire-radical-what-is-the-number
# write 3√10 as an entire radical. what is the number under the radical sign? Guest Oct 15, 2018 #1 $$3\sqrt{10} = \sqrt{3^2}\times\sqrt{10}= \sqrt{9}\times\sqrt{10}= \sqrt{9\times10} =\sqrt{90}$$ The number under the radical sign is $$90$$ Hope this helps! Dimitristhym Oct 15, 2018
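A one-line numeric check of the identity (my own sketch, not part of the original answer):

```python
import math

# 3*sqrt(10) equals sqrt(90), since 3 = sqrt(9) and 9 * 10 = 90.
print(math.isclose(3 * math.sqrt(10), math.sqrt(90)))  # True
```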
2018-11-17 01:17:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855310320854187, "perplexity": 5887.554147390371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743247.22/warc/CC-MAIN-20181116235534-20181117021534-00236.warc.gz"}
http://www.csam.or.kr/journal/view.html?uid=1760&&vmd=Full
TEXT SIZE CrossRef (0) A comparative study of the Gini coefficient estimators based on the regression approach aDepartment of Statistics, Payame Noor University, Iran, bDepartment of Statistics, Ferdowsi University of Mashhad, Iran Correspondence to: Department of Statistics, Ordered and Spatial Data Center of Excellence, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran. E-mail: grmohtashami@um.ac.ir Received December 3, 2016; Revised May 1, 2017; Accepted June 28, 2017. Abstract Resampling approaches were the first techniques employed to compute a variance for the Gini coefficient; however, many authors have shown that an analysis of the Gini coefficient and its corresponding variance can be obtained from a regression model. Despite the simplicity of the regression approach method to compute a standard error for the Gini coefficient, the use of the proposed regression model has been challenging in economics. Therefore in this paper, we focus on a comparative study among the regression approach and resampling techniques. The regression method is shown to overestimate the standard error of the Gini index. The simulations show that the Gini estimator based on the modified regression model is also consistent and asymptotically normal with less divergence from normal distribution than other resampling techniques. Keywords : bootstrap technique, Gini coefficient, jackknife method, Lorenz curve, modified regression model, resampling techniques 1. Introduction Measures of inequality are used to analyzing incomes, welfare, and poverty issues. They can also be helpful to measure the level of social stratification and polarization. Among many inequality measures, the Gini index is of the most widely known measures of income inequality due to its easy interpretation. The Gini coefficient is the popular measure of income inequality; however, it is usually reported without any information about variance and ideally, with a standard error (SE). The reason for this is that most of the formulations of its proposed SE are mathematically complex or require considerable numerical computation. Different methods to compute the SE of Gini index have been shown in various research on statistics and economics. Some related references are Woo and Yoon (2001), Xu (2007), Davidson (2009), Langel and Tillé (2013), and Yitzhaki and Schechtman (2013). Many authors have proposed using resampling approaches such as jackknife and bootstrap techniques to compute variance for the Gini estimator (Kang and Cho, 1996; Mills and Zandvakili, 1997; Moran, 2006; Palmitesta and Provasi, 2006; Yitzhaki, 1991). Some authors have demonstrated that estimates of the Gini coefficient can be obtained from ordinary linear regression based on data and their ranks, thereby providing an exact analytic SE. Lerman and Yitzhaki (1984) derived a convenient method to calculate the Gini coefficient, using the covariance between data and corresponding ranks. In this way, Shalit (1985) proposed a regression model based on a ranked variable for the set of natural numbers to compute the Gini index. Ogwang (2000) provided a method to compute the Gini index by an ordinary least square (OLS) regression, as well as discussed how to use the regression to simplify the computation of the jackknife SE. Giles (2004) later showed that the OLS SE from this regression could be used directly in order to compute the SE of the Gini index. 
Modarres and Gastwirth (2006) showed with simulations that the regression method overestimates the SE of the Gini estimator. They noted that the defect is due to the dependency between error terms in the proposed regression model, and they recommended more complex or computationally intensive methods instead. However, Ogwang (2006) and Giles (2006) disagreed with this suggestion. In this paper, given the simplicity of the regression approach, we evaluate and compare this method with other resampling techniques. In this regard, we examine some special situations where the underlying distribution follows the log-logistic (LL) and exponential (Exp) distributions, as particular cases of the generalized Beta II distribution (McDonald, 1984). In the next section, we introduce the concept of the Gini index as a measure of income inequality. Section 3 then deals with resampling techniques, such as the bootstrap and jackknife, for estimating the SE and confidence interval of the Gini estimator. In Section 4, after discussing the shortcomings of the regression approach, we study this method for the variance estimation of the Gini estimator. Section 5 provides simulation evidence that supports the main conclusions of the paper and compares inferential properties of the Gini estimators, such as consistency, divergence from the normal distribution, and asymptotic behaviour, across the methods. Some graphical comparisons are also given. Section 6 illustrates the results of the paper for real data on income inequality in Britain in 1994/1995. Conclusions are left for the last section. 2. The Gini coefficient The most well-known member of the income inequality family is the Gini coefficient. It is widely used to measure income inequality because of its clear economic interpretation. This measure can be defined in various ways (Xu, 2003; Yitzhaki, 1998). The best known definition of the Gini index of inequality is as twice the area between the 45°-line (equality line) and the Lorenz curve, as demonstrated in Figure 1. Therefore, it can be expressed as $G=2\int_0^1 \bigl(p-L(p)\bigr)\,dp,$ such that p = F(x) is a cumulative distribution function (cdf) of income and L(p) is the Lorenz function given by $L(p)=\frac{1}{\mu}\int_0^p F^{-1}(t)\,dt$ (Gastwirth, 1971), where $\mu = E(X) > 0$ and $F^{-1}(p) = \inf\{x \mid F(x) \ge p;\ p \in [0, 1]\}$. The Gini index takes values between 0 and 1. The value 0 corresponds to complete equality and 1 corresponds to complete inequality. This index is also defined as $G=\frac{\Delta}{2\mu},$ where $\Delta=\int_0^\infty\int_0^\infty \lvert x-y\rvert \,dF(x)\,dF(y)$ is the population mean difference. Using Expression (2.1), integration by parts and applying a change of variable p = F(x), it can be found that (Xu, 2003): $G=\frac{2}{\mu}\left[\int_0^1 xF(x)\,dF(x)-\frac{\mu}{2}\right].$ This formula clearly shows an interpretation of the Gini coefficient in terms of covariance via $G=\frac{2}{\mu}\operatorname{Cov}\bigl(X,F(X)\bigr).$ Suppose that an independent and identically distributed (iid) sample of size n is drawn randomly from the population, and let its empirical distribution function be denoted $\hat{F}_n$. Using (2.2), the natural plug-in estimator of G, obtained by estimating F with $\hat{F}_n$, is defined as $\hat{G}=\frac{2}{\hat{\mu}}\left[\int_0^1 x\hat{F}_n(x)\,d\hat{F}_n(x)-\frac{\hat{\mu}}{2}\right].$ In the context of a discrete income distribution, Xu (2003) showed that the Gini index can be estimated by $\hat{G}=\frac{2\sum_{i=1}^{n} i\,Y_i}{n^2\bar{Y}}-\frac{n+1}{n},$ where $0 \le Y_1 \le \cdots \le Y_n$ are the ordered income data. Davidson (2009) proposed an approximate expression for the bias of Ĝ, from which he subsequently derived the following bias-corrected estimator of the Gini coefficient, denoted $\tilde{G}$, which is given by: $\tilde{G}=\frac{n}{n-1}\hat{G};$ the estimator is still biased, but its bias is of order $n^{-1}$.
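As a concrete illustration of the discrete estimator above, here is a short Python sketch (my own addition, not from the paper; the function names gini_hat and gini_tilde are invented for the example) computing the plug-in estimator Ĝ and Davidson's bias-corrected G̃ from a sample of incomes:

```python
import numpy as np

def gini_hat(y):
    """Plug-in estimator: G_hat = 2*sum(i * y_(i)) / (n^2 * ybar) - (n+1)/n."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    i = np.arange(1, n + 1)                      # ranks of the ordered incomes
    return 2.0 * np.sum(i * y) / (n ** 2 * y.mean()) - (n + 1) / n

def gini_tilde(y):
    """Davidson's bias-corrected estimator: G_tilde = n/(n-1) * G_hat."""
    n = len(y)
    return n / (n - 1) * gini_hat(y)

# Example: exponential incomes, whose true Gini index is 0.5.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=1000)
print(gini_hat(sample), gini_tilde(sample))      # both close to 0.5
```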
Sometimes this estimator is recommended because the bias-corrected estimator is not only easier to compute than the other estimators, but its bias also converges to 0 faster as n→∞. 3. Resampling techniques The complicated formulas proposed for the variance estimation of the Gini index have prompted significant research in statistics and economics. Most of the formulations of the variance of the Gini estimator proposed in the literature are mathematically complex or require considerable numerical computation. To avoid these mathematical difficulties, various authors have proposed using resampling techniques such as the jackknife and bootstrap methods, as follows. ### 3.1. The jackknife method The jackknife provides a general-purpose approach to estimating the bias and variance of an estimator. It is particularly useful when standard methods for computing bias and variance cannot be applied or are difficult to apply. Suppose that Ĝ is an estimator of the Gini coefficient (G) based on a sample of iid random variables $X_1, \ldots, X_n$. Let $\hat{G}_{(i)}$ be the Gini estimator for the subsample of the initial sample where the ith observation has been deleted; then the jackknife estimator of the Gini coefficient based on the n values of $\hat{G}_{(i)}$ is defined as: $\hat{G}_J=\hat{G}+\frac{n-1}{n}\sum_{i=1}^{n}\bigl(\hat{G}-\hat{G}_{(i)}\bigr),$ and $b_n^J=\frac{n-1}{n}\sum_{i=1}^{n}\bigl(\hat{G}-\hat{G}_{(i)}\bigr),$ which is the jackknife bias estimator (Yitzhaki, 1991). It follows that $\hat{G}_J$ will be asymptotically unbiased (Jiang, 2010). The jackknife method can also be used to estimate the SE of the Gini estimator. This was first noted by Yitzhaki (1991), whose SE estimate has the following expression: $\hat{\sigma}_J=\sqrt{\frac{n-1}{n}\sum_{i=1}^{n}\bigl(\hat{G}_{(i)}-\hat{G}_\bullet\bigr)^2},$ where $\hat{G}_\bullet=\frac{1}{n}\sum_{i=1}^{n}\hat{G}_{(i)}.$ ### 3.2. The Bootstrap method The bootstrap method is an alternative approach to the variance estimation of the Gini estimator. This technique is relatively straightforward, yet analytically powerful. Although its mathematical justification can be quite sophisticated, the bootstrap method requires no theoretical calculations, applies identically to any income inequality measure, and is available no matter how mathematically complicated the parameter estimate or its asymptotic SE may be (Mills and Zandvakili, 1997). The bootstrap procedure is: • Given a sample $X_1, \ldots, X_n$ of size n and an estimator Ĝ. • Draw B bootstrap samples of size n with replacement from $X_1, \ldots, X_n$. • Calculate the estimator for each one of them and obtain B values of the estimator, denoted by $\hat{G}_1^*,\ldots,\hat{G}_B^*$. Now, these values are used to estimate the variance of the original estimator. Namely, the sample variance of $\hat{G}_1^*,\ldots,\hat{G}_B^*$ is used as the bootstrap estimator of the variance of the original statistic Ĝ. The bootstrap SE of Ĝ can then be estimated as: $\hat{\sigma}_{Boot}=\sqrt{\frac{1}{B-1}\sum_{b=1}^{B}\bigl(\hat{G}_b^*-\bar{G}^*\bigr)^2},$ where $\bar{G}^*=\frac{1}{B}\sum_{b=1}^{B}\hat{G}_b^*.$ 4. The regression method The regression approach is the simplest way of computing a SE for the Gini coefficient. The computational difficulties and mathematical complexities associated with conventional formulas for the variance of the Gini coefficient make the simpler regression-based approach attractive. Lerman and Yitzhaki (1984) first stated the Gini coefficient in terms of a covariance between the variable and its rank. Shalit (1985) proposed a regression model to compute the Gini index in the following form: $y_i=\alpha+\beta i+\xi_i, \quad i=1,2,\ldots,n,$ such that $0 \le y_1 \le y_2 \le \cdots \le y_n$ and $\xi_1, \xi_2, \ldots, \xi_n$ are errors with zero mean and homogeneous variance $\sigma^2$.
He calculated the Gini estimator based on the estimated slope of this regression model as follows: $\hat{G}=\left(\frac{n^2-1}{6n}\right)\frac{\hat{\beta}}{\bar{y}},$ where $\bar{y}$ is the sample mean and β̂ is the OLS estimator of β. To see an alternative regression interpretation of the Gini index, Ogwang (2000) also showed that Ĝ can be written as: $\hat{G}=\frac{2\hat{\theta}}{n}-\frac{n+1}{n},$ such that $\hat{\theta}=\sum_{i=1}^{n} i\,y_i/\sum_{i=1}^{n} y_i$ is the weighted least squares estimator of θ in the following regression model: $i\,y_i=\theta\, y_i+v_i, \quad i=1,2,\ldots,n,$ where $v_1, v_2, \ldots, v_n$ are errors with mean zero and homogeneous variance $\sigma^2$. Davidson (2009) proposed a modified version of the regression model in (4.4) as: $\left(\frac{2i-1}{n}-1\right)y_i=\theta^* y_i+\xi_i, \quad i=1,2,\ldots,n.$ In this model, the Gini coefficient is obtained directly as the estimator of the slope of the regression model, and the SE of the Gini estimate can be computed. ### 4.1. Shortcomings of the regression model approach The regression model for estimating the SE of the Gini estimator cannot produce reliable results because it does not account for the following potential shortcomings: • The regression model takes no account of the sampling design. It applies only to simple random sampling. • The independent variable in an ordinary linear regression model is measured with no error; however, this model assumes that the independent variable is random. • The normality of the error terms may not be satisfied and should be tested. • The relationship between ordered income and its rank is convex, as demonstrated in Figure 2. Therefore, the error terms are dependent, because the variance-covariance matrix of the errors is not diagonal. It has nonnegative elements $\sigma_{ij}/n$, such that $\sigma_{ij}=\frac{p_i(1-p_j)}{f(\zeta_{p_i})\,f(\zeta_{p_j})}, \quad i\le j,\ i,j=1,2,\ldots,n,$ and $\sigma_{ij}=\sigma_{ji}$ for $i>j$, where $\zeta_{p_j}$ is the $p_j$th population quantile and $f(\zeta_{p_j})$ is the positive and continuous density-quantile function for $0 < p_j < 1$ (David and Nagaraja, 2003). It is important to note that actual data have these defects; the method is nevertheless used in practice despite them. We must be cautious when using a regression-based approach to construct a SE for the Gini coefficient; in addition, tests for the validity of the assumptions in the regression model must be formally conducted. ### 4.2. Asymptotic behavior based on regression approach If the regression model in (4.5) is true, we can suppose that the estimator of the Gini coefficient is unbiased and its asymptotic properties can be derived. Based on the modified regression model in (4.5), the slope estimate of the regression model equals the Gini estimator: $\hat{G}=\frac{\sum_{i=1}^{n}\left(\frac{2i-1}{n}-1\right)y_i}{\sum_{i=1}^{n} y_i}.$ The Gini estimator in (4.6) is the ratio of two functions that are linear combinations of order statistics (L-statistics); therefore, asymptotic normality and consistency can be obtained from the theory of L-statistics (Sendler, 1979). Ĝ is also asymptotically normally distributed based on the theory of U-statistics under some regularity conditions (Xu, 2007). 5. Simulation study For a comparative study of the Gini coefficient estimators based on the bootstrap, the jackknife and the regression approach in equation (4.5), we examined a special situation where the underlying distributions follow the generalized Beta II (GB2) distribution. The most general distribution which has been proposed for fitting income data is the GB2 introduced in McDonald (1984).
Its density is: $f_{GB2}(x\mid a,b,p,q)=\frac{a\,x^{ap-1}}{b^{ap}\,B(p,q)\,\bigl[1+(x/b)^a\bigr]^{p+q}},$ with x > 0, a, b > 0, and where B(p, q) is the Beta function given by $B(p,q)=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}=\int_0^\infty \frac{t^{p-1}}{(1+t)^{p+q}}\,dt, \quad p,q>0.$ The GB2 has the advantage that many densities can be obtained as particular cases and therefore constitutes a nice framework for discussion. Because of the complexity of the mathematical expression, we concentrate on the Exp distribution with cdf: $F(x)=1-e^{-x}, \quad x>0,$ and the LL distribution with cdf: $F(x)=1-\frac{1}{1+x^{a}}, \quad x>0,$ where a > 0 is a shape parameter (a > 2 leads to existence of the second moment of X). In each stage, the simulations were undertaken by drawing 10,000 independent samples of size n = 10, 20, 50, 100, 500, 1,000 from the underlying distributions. ### 5.1. Variance inflation of Gini estimator in regression approach It is important to have a correct method available to compute the SE of the Gini coefficient. In a hypothesis test, a difference can appear significant when the SE from one method is used, yet not significant when the SE from another method is used. In Table 1, we report the values of the SE of the Gini estimator for the regression model and the other resampling techniques under the Exp and LL distributions, with parameter a = 5 as a benchmark. From these values, it is clear that the regression method underestimates the true value of the Gini index and also overestimates the SE of the Gini estimator relative to the other resampling techniques. The three methods, the bootstrap, jackknife and the regression approach, provide comparable results. SE estimates using the bootstrap and jackknife are close to the real values, while those of the suggested regression model are noticeably inflated. The weakness of the regression-based approach is essentially a finite sample matter, and its importance should diminish as n → ∞. Figure 3 shows corresponding results for the Exp distribution. Table 2 reports the bias, SE and mean square error (MSE) of the Gini estimator under the regression model for the LL distribution with parameter a = 3, 10, 20. It is clear that these properties are influenced by the sample size and the values of a. ### 5.2. Consistency of Gini estimator in regression model Table 3 presents the values of the MSE of the Gini index under the regression model for the Exp and LL distributions, with a = 3 as a benchmark. The results, shown in the table, were obtained by drawing 10,000 independent samples of the Gini index for n = 10, 20, 50, 100, 500, 1,000. The simulations show that the Gini coefficient based on the parameter estimate in the modified regression model is consistent. ### 5.3. Asymptotic behavior First, in order to see whether the asymptotic normality assumption yields a good approximation, simulations are undertaken by drawing 10,000 independent samples of size n = 100 for the Gini index from the Exp distribution, with cdf in (5.1). Using normality tests, it is evident that the agreement of the Gini estimator with the normal distribution is high. To compare the normality of the estimators based on the jackknife and the regression approach, Figure 4 shows the empirical distributions for n = 100 of the statistics $\tau_{Reg} = (\hat{G}-G_0)/\hat{\sigma}_{Reg}$ and $\tau_{Jack} = (\hat{G}_J-G_0)/\hat{\sigma}_{Jack}$. Here $\hat{\sigma}_{Reg}$ is the SE estimator from regression (4.5), $\hat{\sigma}_{Jack}$ is the jackknife SE estimator from (3.3), and $\hat{G}_J$ is given by (3.1). Note that $G_0$, the true value of the Gini index for the Exp distribution, is 0.5.
Both of these statistics have distributions that are close to the standard normal distribution, but the jackknife estimator has better behavior of normality than the regression estimator. ### 5.4. Gini regression estimate deviation from the normal distribution In this section, we examine the divergence of Gini estimator in regression approach to normal distribution using the Kolmogorov distance under LL distribution with parameter a. The results are shown in Figure 5, where we show the Kolmogorov distances of sampling indices from a normal distribution influenced by parameter values of the distribution and sample size values. The concave curves express that the parameter value has the opposite effect. Here, we compare the deviation of Gini estimates from normal distribution. Figure 6 explains divergence from normal distribution for jackknife, bootstrap and the regression approach estimators under Exp distribution. It is evident that the deviation of these estimates are asymptotically equivalent (especially for jackknife and bootstrap). It can be seen that the Gini estimate based on the jackknife method has a good performance in deviation from normal distribution. ### 5.5. Comparison of confidence intervals In this section, we proposed three 95% confidence intervals for the Gini estimates of Exp distribution. Recall that the true value of the Gini index for this distribution is 0.5. The first column of Table 4 uses SE of the jackknife with N(0, 1) critical values, the second is based on the percentile-t bootstrap confidence interval (Mills and Zandvakili, 1997), the third are confidence intervals based on the regression model. It is evident that the asymptotic bootstrap and jackknife intervals are very similar and both are very much narrower than those computed with the SEs based on regression approach. 6. Empirical illustration Here, we refer to the real data based on income inequality in Britain on the fiscal years 1994–1995. Estimation is based on the unit record data used to calculate the official income statistics, derived from Family Resources Surveys available from the UK Data Service’s web site ( https://www.ukdataservice.ac.uk). We first performed an analysis on a comparison of the SEs and three 95% confidence intervals. Table 5 provides the corresponding results. According to probability plots and quantile plots, we fit the GB2 distribution to real data with scale parameter equal to b = 227.84 and shape parameters equal to a = 2.99, p = 1, and q = 1. This result has also been reported in Jenkins (2009). The parameters considered are the maximum likelihood estimates of the GB2 distribution based on incomes. The distribution was well-characterized by a Fisk distribution with a = 2.99 and b = 227.84. Here, we refer to Monte Carlo samples drawn from a Fisk distribution as well as performed an analysis on a comparison of the divergence of Gini estimates from normal distribution by using the Kolmogorov distance. Table 6 summarizes the results for better interpretation. Table 7 presents the empirical MSEs of the Gini estimator based on resampling and regression model. It is clear that all of these methods have asymptotically equal MSE values. Table 8 reports the relative frequencies of the 95% confidence intervals containing the true value of the inequality measure (coverage probability) and the sizes of confidence intervals (average size) of 10,000 confidence intervals, for different methods of the fitted distribution to real data. 
The results show that the coverage accuracy of the resampling techniques confidence intervals is reasonably close to the nominal confidence level for a large sample size. As expected, there was no substantial difference in coverage probability (CP) and in average size (AS) for the two resampling techniques. The jackknife confidence interval performs best in terms of CP and AS at the cost of providing larger confidence intervals. 7. Conclusion The regression approach is the simplest way to estimate the Gini coefficient and its SE; however, this analysis of the Gini estimator can produce weaker results because it does not account for potential shortcomings introduced in the proposed regression model. It is important to note that actual data has defects when this method is used, despite the known shortcomings. The Gini estimator based on the regression model is consistent and asymptotically normal with less divergence from normal distribution than other resampling techniques. This method does not require the grouping of individual data to economize on computations. In addition, the estimator can be analyzed by using standard statistical software. The weakness of this method decreases as the sample size grows; therefore, we should be cautious when using regression-based approach to analyze the Gini coefficient in small samples size. Figures Fig. 1. The area between the equality-line and the Lorenz curve. Fig. 2. The dependency between ordered incomes under exponential distribution. Fig. 3. Comparison of the standard errors under exponential distribution. Fig. 4. Comparison of the empirical distributions of jackknife and regression statistics. Fig. 5. The trend of Kolmogorov distances with respect to the values of parameter a. Fig. 6. The divergence from N(0, 1) for jackknife, bootstrap, and regression approach. TABLES ### Table 1 Comparison the standard errors of Gini estimates nExpLL BootJackRegBootJackReg 100.08250.09040.14780.03910.04650.1867 200.05970.06360.10160.03140.03630.1292 500.03850.04030.06300.02250.02410.0807 1000.02780.02880.04430.01680.01750.0568 5000.01270.01280.01970.00810.00810.0253 1,0000.00900.00910.01390.00580.00580.0178 LL = loglogistic distribution; Exp = exponential distribution; Boot = bootstrap; Jack = jackknife; Reg = regression. ### Table 2 Summary of regression approach under loglogistic distribution anBiasStandard errorMean square error 310−0.044960.17800.0337 20−0.022110.12360.0157 50−0.009860.07740.0060 100−0.003690.05450.0029 500−0.000290.02430.0005 1,000−0.000170.01720.0002 1010−0.010570.19020.0363 20−0.004890.13140.0173 50−0.001860.08200.0067 100−0.000650.05770.0033 500−0.000130.02570.0006 1,000−0.000060.01810.0003 2010−0.005210.19110.0365 20−0.002420.13200.0174 50−0.000890.08230.0067 100−0.000320.05790.0033 500−0.000070.02580.0006 1,000−0.000030.01820.0003 ### Table 3 Consistency of Gini estimator in regression model for exponential distribution n 1020501005001,000 MSEExp0.02430.01090.00480.00190.00030.0001 LL0.03370.01570.00610.00300.00050.0002 MSE = mean square error; Exp = exponential distribution; LL = loglogistic distribution. 
### Table 4 Confidence intervals for Gini estimates of exponential distribution nJackknifeBootstrapRegression 10(0.3222, 0.6363)(0.2626, 0.6794)(0.1596, 0.7399) 20(0.3735, 0.6012)(0.3497, 0.6265)(0.2758, 0.6747) 50(0.4199, 0.5682)(0.4115, 0.5799)(0.3662, 0.6138) 100(0.4437, 0.5503)(0.4395, 0.5570)(0.4085, 0.5822) 500(0.4748, 0.5239)(0.4739, 0.5251)(0.4605, 0.5377) 1,000(0.4820, 0.5174)(0.4816, 0.5177)(0.4723, 0.5270) ### Table 5 Comparison the Gini estimates in Britain in fiscal years 1994–1995 MethodĜStandard error(Ĝ)Confidence Interval Bootstrap0.331850.00272[0.32578, 0.33715] Jackknife0.331860.00190[0.32623, 0.33750] Regression0.331850.00337[0.32524, 0.33846] ### Table 6 Divergence of Gini estimates from N(0, 1) based on fitted distribution nJackknifeBootstrapRegression 100.23210.27160.2297 200.20610.23890.1932 500.16180.18760.1809 1000.12590.13720.1441 5000.06120.10790.1304 1,0000.05480.06630.1268 ### Table 7 MSEs of the Gini estimates based on fitted distribution nMSE JackknifeBootstrapRegression 100.0074510.0055920.033719 200.0038620.0029520.015768 500.0019640.0014980.006093 1000.0010200.0008610.002993 5000.0002570.0002470.000593 1,0000.0001320.0001330.000296 MSE = mean square error. ### Table 8 CP and AS of fitted distribution to real data nJackknifeBootstrapRegression CPASCPASCPAS 100.79710.28640.70850.23580.69550.6219 200.83660.22980.76830.19440.75020.4804 500.87050.16550.83050.14710.80430.3033 1000.89160.12530.87410.11580.86470.2136 5000.92230.06190.91110.06050.90120.0954 1,0000.93130.04510.92330.04420.91890.0675 CP = coverage probability; AS = average size. References 1. David, HA, and Nagaraja, HN (2003). Order Statistics. New York: John & Wiley 2. Davidson, R (2009). Reliable inference for the Gini index. Journal of Econometrics. 150, 30-40. 3. Gastwirth, JL (1971). A general definition of the Lorenz curve. Econometrica. 39, 1037-1039. 4. Giles, DEA (2004). Calculating a standard error for the Gini coefficient: some further results. Oxford Bulletin Economics and Statistics. 66, 425-433. 5. Giles, DEA (2006). A cautionary note on estimating the standard error of the Gini index of inequality: comment. Oxford Bulletin Economics and Statistics. 68, 395-396. 6. Jenkins, SP (2009). Distributionally-sensitive inequality indices and the GB2 income distribution. Review of Income and Wealth. 55, 392-398. 7. Jiang, J (2010). Large Sample Techniques for Statistics. New York: Springer Science 8. Kang, SB, and Cho, YS (1996). Estimation of Gini index of the exponential distribution by bootstrap method. Communications for Statistical Applications and Methods. 3, 291-297. 9. Langel, M, and Tillé, Y (2013). Variance estimation of the Gini index: revisiting a result several times published. Journal of the Royal Statistical Society Series A (Statistics in Society). 176, 521-540. 10. Lerman, RI, and Yitzhaki, S (1984). A note on the calculation and interpretation of the Gini index. Economics Letters. 15, 363-368. 11. McDonald, JB (1984). Some generalized functions for the size distribution of income. Econometrica. 52, 647-665. 12. Mills, JA, and Zandvakili, S (1997). Statistical inference via bootstrapping for measures of inequality. Journal of Applied Econometrics. 12, 133-150. 13. Modarres, R, and Gastwirth, JL (2006). A cautionary note on estimating the standard error of the Gini index of inequality. Oxford Bulletin Economics and Statistics. 68, 385-390. 14. Moran, TP (2006). Statistical inference for measures of inequality with a cross-national bootstrap application. 
Sociological Methods & Research. 34, 296-333. 15. Ogwang, T (2000). A convenient method of computing the Gini index and its standard error. Oxford Bulletin Economics and Statistics. 62, 123-129. 16. Ogwang, T (2006). A cautionary note on estimating the standard error of the Gini index of inequality: comment. Oxford Bulletin Economics and Statistics. 68, 391-393. 17. Palmitesta, GMGP, and Provasi, C (2006). Asymptotic and bootstrap inference for the generalized Gini indices. Metron. 64, 107-124. 18. Sendler, W (1979). On statistical inference in concentration measurement. Metrika. 26, 109-122. 19. Shalit, H (1985). Calculating the Gini index of inequality for individual data. Oxford Bulletin Economics and Statistics. 47, 185-189. 20. Woo, JS, and Yoon, GE (2001). Estimations of Lorenz curve and Gini index in a Pareto distribution. Communications for Statistical Applications and Methods. 8, 249-256. 21. Xu, K (2003). How has the literature on Gini’s index evolved in the past 80 years?. China Economic Quarterly. 3. 22. Xu, K (2007). U-statistics and their asymptotic results for some inequality and poverty measures. Econometric Reviews. 26, 567-577. 23. Yitzhaki, S (1991). Calculating jackknife variance estimators for parameters of the Gini method. Journal of Business and Economic Statistics. 9, 235-239. 24. Yitzhaki, S (1998). More than a dozen alternative ways of spelling Gini. Research in Economic Inequality. 8, 13-30. 25. Yitzhaki, S, and Schechtman, E (2013). The Gini Methodology: A Primer on a Statistical Methodology. New York: Springer Science
2017-11-18 06:11:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 31, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7343778014183044, "perplexity": 1103.304824934573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804666.54/warc/CC-MAIN-20171118055757-20171118075757-00374.warc.gz"}
https://www.thepoorcoder.com/hackerrank-repeated-string-solution/
# Hackerrank - Repeated String Solution Lilah has a string, s, of lowercase English letters that she repeated infinitely many times. Given an integer, n, find and print the number of letter a's in the first n letters of Lilah's infinite string. For example, if the string is s = aba and n = 10 (as in Sample 0 below), the substring we consider is abaabaabaa, the first 10 characters of her infinite string. There are 7 occurrences of a in the substring. Function Description Complete the repeatedString function in the editor below. It should return an integer representing the number of occurrences of a in the prefix of length n in the infinitely repeating string. repeatedString has the following parameter(s): • s: a string to repeat • n: the number of characters to consider Input Format The first line contains a single string, s. The second line contains an integer, n. Constraints • For  of the test cases, . Output Format Print a single integer denoting the number of letter a's in the first n letters of the infinite string created by repeating s infinitely many times. Sample Input 0 aba 10 Sample Output 0 7 Explanation 0 The first 10 letters of the infinite string are abaabaabaa. Because there are 7 a's, we print 7 on a new line. Sample Input 1 a 1000000000000 Sample Output 1 1000000000000 Explanation 1 Because all of the first 1000000000000 letters of the infinite string are a, we print 1000000000000 on a new line. ### Solution in Python

def repeatedString(s, n):
    # The first n characters consist of x full copies of s plus a prefix of length y.
    x, y = divmod(n, len(s))
    # Letters in the first y positions of s appear x+1 times;
    # the remaining letters of s appear x times.
    return s[:y].count("a") * (x + 1) + s[y:].count("a") * x

s = input()
n = int(input())
print(repeatedString(s, n))
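As a quick sanity check (my own addition, not part of the original post), the closed-form count can be compared against a brute-force count on small inputs; repeatedString refers to the function defined above:

```python
def brute_force(s, n):
    # Build the first n characters explicitly and count the a's.
    reps = (n // len(s)) + 1
    return (s * reps)[:n].count("a")

for s, n in [("aba", 10), ("a", 7), ("abcac", 38)]:
    assert repeatedString(s, n) == brute_force(s, n)
print("all checks passed")
```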
2022-05-18 09:14:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35506531596183777, "perplexity": 6503.116438933716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00445.warc.gz"}
https://www.intmath.com/forum/functions-and-graphs-36/not-getting-how-to-calculate-function-equation-from:98
IntMath Home » Forum home » Functions and Graphs » Not getting how to calculate function equation from graph # Not getting how to calculate function equation from graph [Solved!] ### My question For the MR513, I'm trying to find the value of x (alcohol content) with the help of Y (the result: vout). For the MQ3 sensor, I'm not getting how to formulate an equation from the magnitude Bode plot which I can implement in C. ### Relevant page https://www.futurlec.com/Datasheet/Sensor/MR513.pdf ### What I've done so far Hi Everyone, I'm working on integrating 2 alcohol sensors with a micro-controller. I'm pretty comfortable with the micro-controller and I'm getting results with change in alcohol vapor. I have attached the characteristic graph of both sensors, and also attached the datasheet (please find the URL) with the mail. I tried finding an order-3 polynomial equation for the MR513 graph with the help of Excel and I'm getting 'Y = (6E-07)x^3 - (0.0015)x^2 + (1.2839)x - 4.3287', an equation which is a function of X, where Y is the result and x is the alcohol content. But after implementing it I'm getting the result, i.e. Y, and I need a formula which will find the value of x for a given value of Y. ## Re: Not getting how to calculate function equation from graph Hello Harshal For the first graph (ppm vs vout), what happens near and beyond 1000 ppm? That is, it seems to become almost linear for values greater than 600. Does it level out to some vout value around 450 or 500, or does it continue to increase? This will affect how we model the curve. Regards Murray ## Re: Not getting how to calculate function equation from graph Hello Murray, Thank you so much Murray for your reply and regret about the delay, I was stuck in some other thing. Beyond 1000 the sensor value gets saturated between 400 and 450, and basically my region of interest is between 0 PPM and 800 PPM.
## Re: Not getting how to calculate function equation from graph Hi again OK. I think your easiest approach is to go for a logarithmic curve. Here's my attempt with your original data and my log curve: The inverse of a logarithmic function is exponential, and I obtained the following reverse values, as you required:

| y | x (Chart) | x (Model) |
|---|---|---|
| 0 | 0 | 34.9 |
| 115 | 100 | 90.4 |
| 200 | 200 | 184.7 |
| 305 | 400 | 446.2 |
| 350 | 600 | 651.3 |
| 375 | 800 | 803.6 |
| 395 | 1000 | 950.6 |

(It uses x=e^((y+421)/119).) The values are within the ball park, but not wonderful. You could always get a better fit with more data points than I used. Try it with better data and see how you go.
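A small Python sketch of the same idea (my own illustration, not from the thread): fit vout = a·ln(ppm) + b to the charted points and invert the fit to recover ppm from a measured vout. The fitted coefficients will differ slightly from Murray's e^((y+421)/119), depending on which points are used.

```python
import numpy as np

# (ppm, vout) points read off the sensor chart; the (0, 0) point is
# skipped because ln(0) is undefined.
ppm  = np.array([100, 200, 400, 600, 800, 1000], dtype=float)
vout = np.array([115, 200, 305, 350, 375, 395], dtype=float)

# Least-squares fit of vout = a*ln(ppm) + b
a, b = np.polyfit(np.log(ppm), vout, 1)

def ppm_from_vout(v):
    # Invert the logarithmic fit: ppm = exp((v - b) / a)
    return np.exp((v - b) / a)

print(a, b, ppm_from_vout(305))  # roughly recovers the charted 400 ppm (the fit is approximate)
```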
2018-04-22 21:36:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45963019132614136, "perplexity": 2408.508500617994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945660.53/warc/CC-MAIN-20180422212935-20180422232935-00016.warc.gz"}
http://clay6.com/qa/21735/let-f-0-infty-rightarrow-r-and-f-x-int-0-xf-t-dt-if-f-x-2-x-2-1-x-then-f-4-
# Let $f:(0,\infty)\rightarrow \mathbb{R}$ and $F(x)=\int_0^x f(t)\, dt$. If $F(x^2)=x^2(1+x)$ then $f(4)$ equals $(a)\;5/4\qquad(b)\;7\qquad(c)\;4\qquad(d)\;2$

## 1 Answer

$F(x)=\int_0^x f(t)\,dt \Rightarrow F'(x)=f(x)$ by the fundamental theorem of calculus.

Also $F(x^2)=x^2(1+x)=x^2+x^3$, so differentiating both sides with respect to $x$ gives $F'(x^2)\cdot 2x=2x+3x^2$.

Putting $x=2$: $F'(4)\times 4=4+12=16$, so $F'(4)=4$.

Since $F'(4)=f(4)$, we get $f(4)=4$.

Hence (c) is the correct answer.

answered Jan 2, 2014
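As a quick cross-check (my addition, not part of the original answer), one can read off $f$ explicitly from $F(x^2)=x^2+x^3$:

$$F(u)=u+u^{3/2}\ (u>0),\qquad f(u)=F'(u)=1+\tfrac{3}{2}\sqrt{u},\qquad f(4)=1+\tfrac{3}{2}\cdot 2=4.$$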
2018-02-22 10:34:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638151526451111, "perplexity": 10971.957549545552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814101.27/warc/CC-MAIN-20180222101209-20180222121209-00508.warc.gz"}
https://tex.stackexchange.com/questions/503036/how-to-cite-part-of-online-entry-website-chapter-in-biblatex/503109
# How to cite part of online entry (website chapter) in BibLaTex? I want to cite a part of a web page. Something analogical to a chapter of a book. But from the documentation, the @online entry only has subtitle and titleaddon fields which seem somehow relevant, but from my understanding serve to display a title referencing the whole article, not part of it. (The website I want to reference.) What I'm currently doing is: @online{parallel-computing, author = "Blaise Barney", title = "Introduction to Parallel Computing", url = "https://computing.llnl.gov/tutorials/parallel_comp/#Whatis" } And I'd like to specifically refer to the part What is Parallel Computing?, which I'm currently referencing with the URL (don't know if that's appropriate). • I guess you could just use @inbook. It should look OK. Use author, title, booktitle, url, and date. – David Purton Aug 6 at 12:17 For this particular example (and I guess for most website with their own URL) I would just use @online to refer to the complete website and give the 'section' in the postnote of the citation to refer to the specific part. Much like one normally adds the complete @book to the bibliography, but only cites a specific page. \documentclass[british]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{csquotes} \usepackage[style=authoryear, backend=biber]{biblatex} \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @online{parallel-computing, author = {Blaise Barney}, title = {Introduction to Parallel Computing}, url = {https://computing.llnl.gov/tutorials/parallel_comp}, urldate = {2019-08-06}, organization = {Lawrence Livermore National Laboratory}, } \end{filecontents} \begin{document} Lorem ipsum \autocite[section \enquote{What is Parallel Computing?}]{parallel-computing}. \printbibliography \end{document} To me the most natural, self-contained unit in this case just appears to be the complete website. If you insist that the specific section be referenced in the bibliography, you can follow David Purton's advice from the comments and (ab)use the @inbook entry type. \documentclass[british]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{csquotes} \usepackage[style=authoryear, backend=biber]{biblatex} \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @inbook{barney:whatis, author = {Blaise Barney}, booktitle = {Introduction to Parallel Computing}, title = {What is Parallel Computing?}, url = {https://computing.llnl.gov/tutorials/parallel_comp/#Whatis}, urldate = {2019-08-06}, publisher = {Lawrence Livermore National Laboratory}, } \end{filecontents} \begin{document} Lorem ipsum \autocite{barney:whatis}. \printbibliography \end{document} Of course it would be possible to define a new entry type called @inonline that relates to @online as @inbook relates to @book. At the moment I doubt this is worth the effort, but it is most certainly doable. See How can I create entirely new data types with BibLaTeX/Biber? for a starter. Alternatively, Bib Formatting Question shows how you could add maintitle to @online entries so that you could have something like @online{barney:whatis, author = {Blaise Barney}, maintitle = {Introduction to Parallel Computing}, title = {What is Parallel Computing?}, url = {https://computing.llnl.gov/tutorials/parallel_comp/#Whatis}, urldate = {2019-08-06}, organization = {Lawrence Livermore National Laboratory}, } • That's a pretty exhaustive yet comprehensible answer. 
I'll now (ab)use the comments section by saying thank you for that. – h4nek Aug 7 at 19:14
2019-09-15 19:02:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46087661385536194, "perplexity": 4118.02272685362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572235.63/warc/CC-MAIN-20190915175150-20190915201150-00450.warc.gz"}
https://math.stackexchange.com/questions/799467/proof-of-cayley-hamilton-theorem-for-diagonalisable-matrices-lay-p326-ch-5-sup
Proof of Cayley-Hamilton Theorem for Diagonalisable Matrices [Lay P326 Ch 5 Sup Q7] Proof for Diagonal Matrices from Page 2 of 7: Let $A \in M_{n}(C)$ be diagonal, to wit, $A _{ii}=\lambda_{i}$. Then $p_{A}(t) = \det(tI-A)= \det \begin{bmatrix} t - \lambda_1 & ~ & ~ \\ ~ & \ddots & ~ \\ ~ & ~ & t - \lambda_n \\ \end{bmatrix} =\prod_{i=1}^{n}(t-\lambda_{i}) \quad (♦)$ and $p_A(A)= \prod_{i=1}^{n}(A- \color{forestgreen}{ \lambda_iI } )$ , a product of diagonal matrices. $1.$ How does $p_A(A)= \prod_{i=1}^{n}(A-\lambda_{i}I)$ ? $(♦)$ contains $\lambda_i$ and NOT $\color{forestgreen}{ \lambda_iI }$ ? What legitimates this? $t$ is a variable but A is a matrix, so they can't be equal? Does the proof repeat this technique for the last line of this proof, denoted with $\color{ orangered }{ ( \yen ) }$? As in the previous examples (on the linked PDF in the first sentence), since $A$ is diagonal, $$p_{A}(A) \mathop{=}^{\color{ red }{\clubsuit} } \begin{bmatrix} p_A(\lambda_1) & ~ & ~ \\ ~ & \ddots & ~ \\ ~ & ~ & p_A(\lambda_n) \\ \end{bmatrix} = \begin{bmatrix} \prod_{i=1}^{n}(\lambda_1 -\lambda_{i}) & ~ & ~ \\ ~ & \ddots & ~ \\ ~ & ~ & \prod_{i=1}^{n}(\lambda_n -\lambda_{i}) \\ \end{bmatrix} = \text{ 0 matrix },$$ where $\prod_{i=1}^{n}(\lambda_n -\lambda_{i}) = ...(\lambda_n -\lambda_{n})= 0$, and the same holds for all the other diagonal entries. $2.$ How does $p_{A}(A)$ equal that diagonal matrix, as denoted with $\color{ red }{ ( \clubsuit )}$ ? Proof for Diagonalisable Matrices: Similar matrices have the same eigenvalues (and thus characteristic polynomials), so suppose for similar matrices A and $B$ (now $A$ may NOT be diagonal): $p_{A}(z)=p_{B}(z)=\displaystyle \sum_{i=0}^{n}c_{i}z^{i} \implies p_{A}(A)=\sum_{i=0}^{n}c_{i}A^{i} \quad \color{ orangered }{ ( \yen ) }$ (I omit the rest of the proof.) • The statement of Cayley Hamilton (for a general matrix) says precisely that when you substitute $A$ for the variable in the characteristic polynomial of $A$, and evaluate the resulting matrix, you will get the zero matrix. No, indeed, $A$ is not the same as the variable $t$, but the resulting product of matrices does make sense. If $A$ is diagonal matrix, then for each eigenvalue $\lambda$ of $A,$ we can consider the matrix $A- \lambda I$. If we take the product of these matrices, as $\lambda$ runs through all eigenvalues of $A$, we get the zero matrix. – Geoff Robinson May 17 '14 at 20:17 • In general, if $D={\rm diag}(a_1,\ldots,a_n)$; $P(A)={\rm diag}(P(a_1),\ldots,P(a_n))$. – Pedro Tamaroff May 17 '14 at 21:43 • @PandaBear Better? – Greek - Area 51 Proposal May 22 '14 at 17:15 I believe what is happening is that the two "functions" $p_A(t)$ and $p_A(A)$ are defined by a product. $\prod_{i=1}^n(t-\lambda_i)$,$\prod_{i=1}^n(A-\lambda_iI)$, In $1$ Dimension, $p_A(t)=\det(tI-A)$, i.e. in this case the product and the determinant are the same. Clearly this is not the same in higher dimensions. Now $p_A(A)$ is itself a diagonal matrix (since it is the product of diagonal matrices). This is because $p_A(A)=\prod_{i=1}^n(A-\lambda_iI)$, for each $i$, $(A-\lambda_iI)$ is a diagonal matrix (because both $A$ and $I$ are). Thus $p_A(A)=\prod_{i=1}^n(A-\lambda_iI)=(A-\lambda_1I)...(A-\lambda_nI)$, is a product of diagonal matrices. We see that the $j$-th diagonal entry is: $\prod_{i=1}^n(A_{jj}-\lambda_i)=\prod_{i=1}^n(\lambda_j-\lambda_i)=p_A(\lambda_j)$. This because: $p_A(A)=\prod_{i=1}^n(A-\lambda_iI)=(A-\lambda_1I)...(A-\lambda_nI)$ $=\left[\begin{array}{ccc} a_{11}-\lambda_1 & 0 & ... 
& 0\\ 0 & a_{22}-\lambda_1& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& a_{nn}-\lambda_1 \end{array} \right]...\left[\begin{array}{ccc} a_{11}-\lambda_n & 0 & ... & 0\\ 0 & a_{22}-\lambda_n& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& a_{nn}-\lambda_n \end{array} \right]$ $=\left[\begin{array}{ccc} (a_{11}-\lambda_1)...(a_{11}-\lambda_n) & 0 & ... & 0\\ 0 & (a_{22}-\lambda_1)...(a_{22}-\lambda_n)& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& (a_{nn}-\lambda_1)...(a_{nn}-\lambda_n) \end{array} \right]$ $=\left[\begin{array}{ccc} \prod_{j=1}^n(a_{11}-\lambda_j) & 0 & ... & 0\\ 0 & \prod_{j=1}^n(a_{22}-\lambda_j)& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& \prod_{j=1}^n(a_{nn}-\lambda_j) \end{array} \right]$ $=\left[\begin{array}{ccc} \prod_{j=1}^n(\lambda_1-\lambda_j) & 0 & ... & 0\\ 0 & \prod_{j=1}^n(\lambda_2-\lambda_j)& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& \prod_{j=1}^n(\lambda_n-\lambda_j) \end{array} \right]$ $=\left[\begin{array}{ccc} p_A(\lambda_1) & 0 & ... & 0\\ 0 & p_A(\lambda_2)& ...& 0 \\ ...& ...& ...& ...\\ 0 & ... & ...& p_A(\lambda_n) \end{array} \right]$ From this we see that each diagonal entry in the matrix correspond to what you have posted. edit They are related in the sense that substituting $A$ for $t$ in $p_A(t)$ will give you $p_A(A)$, but the latter is an $n×n$ matrix, whereas, $p_A(t)$ is a scalar, so they only truly coincide if A is a one dimensional matrix. The reason that substituting $A$ for $t$ in $p_A(t)$ gives us $p_A(A)$, is as follows, Let $t=A$ (and in the formula change $\lambda_i$ to $\lambda_i I$ so that we can add, subtract and multiply with the correct dimensions), and we get: edit You are correct here, because the dimensions don't agree, we need to let $t$ be the matrix $A$ and let $\lambda_i$ be $\lambda_i I$ so really we should have: $t\to A$, $\lambda_i\to \lambda_i I$ $p_A(t)=\prod_{i=1}^n(t-\lambda_i)\to\prod_{i=1}^n(A-\lambda_i I)=p_A(A)$. • It is not quite that $p_A(A)$ is defined as a product. If $p(x)=a_0+a_1x+\dots+a_nx^n$, you define $p(A)=a_0I+a_1A+\dots+a_nA^n$, and then you need to verify that if $p=qr$ (as polynomials), then indeed $p(A)=q(A)r(A)$ (as matrices). In particular, this gives us that for any two polynomials $q,r$, the matrices $q(A)$ and $r(A)$ commute. – Andrés E. Caicedo May 26 '14 at 17:02 • @LePressentiment, hopefully this edit should help :) – Ellya May 26 '14 at 17:38 • +1. Thank you. I see 2 now, thanks to you. But for 1, so is $p_A(A)$ NOT related to $p_A (t)$? Must I memorise $p_A(A)$ as the definition? Is that what Andres Caicedo is saying? – Greek - Area 51 Proposal May 27 '14 at 11:48 • They are related in the sense that substituting $A$ for $t$ in $p_A(t)$ will give you $p_A(A)$, but the latter is an $n\times n$ matrix, whereas, $p_A(t)$ is a scalar, so they only truly coincide if $A$ is a one dimensional matrix. – Ellya May 27 '14 at 12:18 • @LePressentiment, I have put in an edit, I hope it helps :) – Ellya May 28 '14 at 9:15 In answer to 1) you can substitute $t$ for $A$ because we can evaluate $p(x)$ in an algebra over $\mathbb{C}$ (in this case). And yes it does repeat in the last bit of the proof. • Were you able to answer my question 2? Would you also please enlarge on your answer? – Greek - Area 51 Proposal May 22 '14 at 17:14
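A concrete instance of the diagonal computation discussed above (added as an illustration; it is not part of the original question or answers): take $A=\operatorname{diag}(1,2)$, so $p_A(t)=(t-1)(t-2)$ and

$$p_A(A)=(A-I)(A-2I)=\begin{bmatrix}0&0\\0&1\end{bmatrix}\begin{bmatrix}-1&0\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix},$$

so the $j$-th diagonal entry of the product is $p_A(\lambda_j)=0$, exactly as in the general argument.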
2019-06-18 00:37:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571695923805237, "perplexity": 242.98044446331284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998600.48/warc/CC-MAIN-20190618003227-20190618025227-00313.warc.gz"}
https://www.physicsforums.com/threads/since-second-and-third-generation-fermions-are-unstable-and-spontaeously-decompose.132674/
# Since second and third generation fermions are unstable and spontaneously decompose

1. Sep 18, 2006

### bananan

Are they really fundamental? I am under the impression a fundamental particle would be "stable", i.e. first generation fermions. Could second and third generation fermions be composite particles of first generation fermions? Specifically, since the 2nd gen lepton, the muon, decomposes rapidly into an electron and electron-antineutrino, and a muon-neutrino, then the muon is some kind of composite structure made of an electron and electron-antineutrino, and a muon-neutrino momentarily bound together.

Last edited: Sep 18, 2006

2. Sep 19, 2006

### Severian

A muon isn't a bound state of an electron and antineutrino - it is a genuinely different particle. In fact, the muon doesn't even decay into an electron and neutrinos directly. It decays into a neutrino and a W-boson, and the W-boson decays into an electron and antineutrino. The only thing that determines which particle decays to which is the energy available in the mass. So the muon decays to the electron and neutrinos only because the electron is lighter. But being lighter doesn't make it more fundamental. Indeed, if you give an electron enough energy, it could very well 'decay' into a muon and neutrinos (which would of course decay right back again rather quickly).

To put it yet another way, no particle lives forever. If we observe an electron, we are interacting with it by hitting it with a photon. It isn't really the same particle afterwards, so the old electron has been turned into the new one. Therefore one should not use how long a particle lives to determine whether or not it is fundamental.

3. Sep 19, 2006

### arivero

If you keep looking, also one quark of the first generation in nucleon beta decay will "decompose" into a different quark plus an electron plus a neutrino.

4. Sep 19, 2006

### bananan

So string theory models these particle transformations as changes in tension? How close is it to modelling the MSSM? I wonder if the BT preon-ribbon model works.

5. Sep 19, 2006

### bananan

With a preon model, I wonder if you can model the muon as a bound state of multiple preons, or charged preons, or excited preons, etc., in a sense a composite preon object that is a kind of multiple composite of the bundle that makes up the first generation.

6. Sep 20, 2006

### arivero

different topics

7. Sep 20, 2006

### CarlB

I used to think that the 2nd and 3rd generations were excitations off of the 1st generation. The problem is that there is no easy way of explaining why there is no 4th generation seen. The problem with "seen" is that there are ways of detecting the neutrinos of such a generation, and the minimum limits on their masses are much much larger than any of the other neutrino masses. So if you do model the 2nd and 3rd generations as excitations, then you have to explain why the 4th generation has extremely heavy neutrinos.

Compared to the Planck mass, all these masses are zero, so I think it might be better to model them as if they all have the same energy state to first order, but are split. That way you won't have a potentially infinite sequence of excitations to have to explain away. The next level of excitations would all have energies around the Planck mass and would be outside the range of our ability to create.

Carl

8. Sep 20, 2006

### bananan

How does string theory explain the generations and the lack of a 4th generation?

9. Sep 20, 2006

### Haelfix

It doesn't in general at this time...
There are some compactifications of Calabi-Yau manifolds that imply 3 generations and no more (so in that case you would say it's a feature of the fundamental geometry that decides), but the situation is not generic and thus model dependent.

As far as making them excitations.. Huh! These things carry different quantum numbers!!

10. Sep 21, 2006

### CarlB

If I recall correctly, undergraduate quantum mechanics spends a good bit of time showing that the various excitations of the hydrogen atom carry different quantum numbers.

Carl

11. Sep 26, 2006

### Haelfix

Yes, b/c the hydrogen atom is not fundamental.. We're talking about fundamental particles here, or aren't we?

12. Sep 26, 2006

### CarlB

That was the question posed in the original post. Assuming it is true when answering it would be a pretty obvious case of circular reasoning.

Of course one can always take the approach that what is known in the standard model is true and everything else is lies and pointless speculation. That would remind me of human endeavors other than physics. Uh, that would eliminate string theory from the discussion.

13. Sep 26, 2006

### Severian

Bonuses all round then ;)

Just so!

Last edited by a moderator: Sep 26, 2006

14. Sep 26, 2006

### Haelfix

Well.. here is what I know of attempts to unify generations from phenomenology.

1) Early attempts at placing horizontal group structures amongst generations and then spontaneously breaking them (eg SU(3) horizontal, Froggatt et al). Somewhat contrived and has technical issues.

2) Preons. Hard to make them work, they have strong anomaly matching constraints and issues with chirality.

3) Huge GUTs (like E8) that can presumably fit three generations. Chirality problems again, unless d != 4 (enter string theory phenomenology and extra dimensions)

15. Sep 27, 2006

### bananan

Would you include string theory under (3) huge GUTs? How close to the SM or MSSM has string theory been able to reach, if you fine-tune the moduli vacua?

16. Sep 29, 2006

### Haelfix

I really don't know Bananan, I'm not a string theorist. I think they have some low energy limits where they can presumably output things like E8 GUTs (or some copies thereof) + extra fields, and the fact that ST doesn't naturally live in d = 4 makes it attractive from a phenomenological standpoint.

17. Oct 2, 2006

### bananan

But why couldn't you model the muon as a bound state of an electron and neutrino, with the electron as the ground state? The same for other fermions.

18. Oct 2, 2006

### CarlB

Well, you'd have to include two neutrinos, one an antiparticle. That is, muons don't decay into just electrons and "muon neutrinos", but you also get an "electron anti-neutrino". Here's a link showing the decay of a $$\mu^+$$, the antiparticle of the $$\mu^-$$:

http://cmms.triumf.ca/intro/ppt/intro/img9.html

The worst part of this is that the "electron neutrino" is not a true particle, if you define the true particles as the things that are eigenstates of mass. The electron neutrino is a combination of three neutrinos, the $$\nu_1, \nu_2, \nu_3$$, as is the muon neutrino. Quite a complicated bound state.

If on the other hand you don't define the true particles as the things that are eigenstates of mass, then you've lost the ability to distinguish between the electron, muon and tau. You could replace them with various linear combinations and call them the elementary particles.

And then there's the problem with modeling bound states. You'll have to specify a force. Hmmmm.
The real problem with this sort of speculation is that the standard model is very tightly knit together. You can't modify one small part of it without having repercussions all over the place.

Let me quote Feynman. From the book "Genius, the life and science of Richard Feynman", paperback edition page 368-9:

My feeling is that the standard model is a "perfect thing", and making small modifications of it (well, other than neutrino masses or, for that matter, making changes to the masses or couplings of any of the various elementary particles) is not possible. Especially in the area of eliminating muons as fundamental particles. But I should also admit that I don't think that the muons are fundamental. I think all the quarks and leptons are composite.

Carl

Last edited: Oct 2, 2006

19. Oct 3, 2006

### bananan

What would happen to the SM if 2nd and 3rd generation fermions are "excited" states of the first gen, with the first gen as a ground state?

Last edited: Oct 3, 2006

20. Oct 3, 2006

### bananan

A muon could decay into an electron and photon. Is there a reason this process doesn't happen?
2016-12-11 04:41:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7153290510177612, "perplexity": 1278.3216298609495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544097.11/warc/CC-MAIN-20161202170904-00209-ip-10-31-129-80.ec2.internal.warc.gz"}
http://www.cfd-online.com/W/index.php?title=Peclet_number&diff=9486&oldid=2496
# Peclet number

The Peclet number is defined as

$Pe \equiv \frac{U_\infty L}{\alpha}$

Also

$Pe \equiv Re Pr$

where

• $U_\infty$ is the freestream velocity
• $L$ is the characteristic dimension of the problem
• $\alpha$ is the thermal diffusion coefficient
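For completeness, a tiny helper evaluating the definition (my illustration; the variable names and the sample numbers are made up, not from the wiki page):

```c
#include <stdio.h>

/* Peclet number from the definition Pe = U_inf * L / alpha,
   which also equals Re * Pr. */
static double peclet(double u_inf, double l, double alpha)
{
    return u_inf * l / alpha;
}

int main(void)
{
    /* Example inputs only: 1 m/s flow, 0.1 m length scale,
       thermal diffusivity of roughly air-like magnitude. */
    printf("Pe = %g\n", peclet(1.0, 0.1, 2.2e-5));
    return 0;
}
```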
2016-09-25 15:43:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9909847378730774, "perplexity": 7592.524065586103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660242.52/warc/CC-MAIN-20160924173740-00136-ip-10-143-35-109.ec2.internal.warc.gz"}
https://scholar.social/@SetTheoryTalks
Pinned toot YSTW2015: Saharon Shelah is telling a joke..: Damjan Kalajdzievski: How to show Con(ZFC + omega_1=u<a) from Con(ZFC) Place: Fields Institute (Room 210) Date: February 8 , 2019 (13:30-15:00) Speaker: Damjan Kalajdzievski Title: How to show Con(ZFC + omega_1=u<a) from Con(ZFC) Abstract: I will outline how to prove the result in the title by joint work with Osvaldo Guzman settheory.mathtalks.org/damjan Berkeley conference on inner model theory, July 08–19, 2019 Berkeley conference on inner model theory July 08--19, 2019 Organizers: Ralf Schindler (Münster) and John Steel (Berkeley). This conference will be a sequel to the 1st Conference on the core model induction and hod mice that was held in Münster (FRG), July 19 -- August 06, 2010, to the 2nd Conference on the core model indu settheory.mathtalks.org/berkel Logic Fest in the Windy City, Chicago, May 30 – June 2, 2019 The conference will take place at the University of Illinois at Chicago on May 30 - June 2. We will cover topics in set theory, descriptive set theory, model theory, and various applications. The workshop is aimed at graduate students and settheory.mathtalks.org/logicf Hossein Lamei Ramandi: $\Sigma^2_2$-absoluteness Place: Fields Institute (Room 210) Date: January 25 , 2018 (13:30-15:00) Speaker: Hossein Lamei Ramandi Title: $\Sigma^2_2$-absoluteness Abstract:  We will show there is a $\Sigma^2_2$ sentence $\Phi$ such that both $\Phi$ and $\neg \Phi$ are consistent with $\diamondsuit$. This answers a question due to Woodin. Benjamin Vejnar – Complexity of the homeomorphism relation between compact spaces Talk held by Benjamin Vejnar (Charles University, Prague, Czech Republic) at the KGRC seminar on 2019-01-24. Abstract: We study the complexity of the homeomorphism relation of compact metrizable spaces when restricted to some subclasses such as continua, regular continua or regular compacta. [...] settheory.mathtalks.org/benjam Marcin Sabok: Measurable Hall’s theorem for actions of Z^n Place: Fields Institute (Room 210) Date: January 18, 2018 (13:30-15:00) Speaker: Marcin Sabok Title: Measurable Hall's theorem for actions of Z^n Abstract: In the 1920's Tarski asked if it is possible to divide the unit square into finitely many pieces, rearrange them by translations and get a disc of area 1. It turns out that this is[...] settheory.mathtalks.org/marcin Moritz Müller – Forcing against bounded arithmetic Talk held by Moritz Müller (Universitat Politècnica de Catalunya, Barcelona, Spain) at the KGRC seminar on 2019-01-17. Abstract: We study the following problem. Given a nonstandard model of arithmetic we want to expand it by a binary relation that does something prohibitive, e.g. violates the pigeonhole principle in the sense that it is the graph…[...] settheory.mathtalks.org/moritz Menachem Magidor: Omitting types in the logic of metric structures HUJI Logic Seminar 16/Jan/2019, 11-13, Ross 63. Speaker: Menachem Magidor Title: Omitting types in the logic of metric structures Abstract. (joint work with I. Farah) The logic of metric structures was introduced by Ben Yaacov, Berenstein , Henson and Usvyatsov. It is a version of continuous logic which allows fruitful mode[...] settheory.mathtalks.org/menach Natasha Dobrinen – Mini-course on Infinitary Ramsey theory Time and Place: Tuesday, January 8 and Wednesday, January 9  at 10:30am in the KGRC lecture room (both parts) at the KGRC. Part I.    Topological Ramsey spaces and applications to ultrafilters Part II.   
Ramsey theory on trees and applications to big Ramsey degrees The Infinite Ramsey Theorem states that given $n,r\ge 1$ and a color[...] settheory.mathtalks.org/natash Natasha Dobrinen – Ramsey Theory of the Henson graphs Abstract: A central question in the theory of ultrahomogeneous relational structures asks, How close of an analogue to the Infinite Ramsey Theorem does it carry? An infinite structure S is ultrahomogeneous if any isomorphism between two finitely generated substructures of S can be extended to an automorphism of S. We say that S has finite [...] settheory.mathtalks.org/natash Miguel Moreno: An introduction to generalized descriptive set theory, part 2 BIU Infinite Combinatorics Seminar Date : 31/12/2018 - 13:00 - 15:00 Speaker: Miguel Moreno (BIU) Title : An introduction to generalized descriptive set theory, part 2 Abstract. After introducing the notions of $\kappa$-Borel class, $\kappa$-$\Delta_1^1$ class, $\kappa$-Borel^* class in the previous talk,[...] settheory.mathtalks.org/miguel Ur Yaar: The Modal Logic of Forcing HUJI Set Theory Seminar On Wednesday, December 26, Ur Yaar will talk about the modal logic of forcing. Title: The Modal Logic of Forcing   Abstract: Modal logic is used to study various modalities, i.e. various ways in which statements can be true, the most notable of which are the modalities of necessity and possibility. In set-theory,…[...] settheory.mathtalks.org/ur-yaa Matt Foreman: Games on weakly compact cardinals TAU Forcing Seminar Tuesday, 25/12/18 Speaker: Matt Foreman Title: Games on weakly compact cardinals Abstract: Attached.[...] settheory.mathtalks.org/matt-f Assaf Rinot: Hindman’s theorem and uncountable Abelian groups Colloquium, Hebrew University of Jerusalem Thu, 20/12/2018 - 14:30 to 15:30 Location: Manchester Building (Hall 2), Hebrew University Jerusalem Speaker: Assaf Rinot Title: Hindman’s theorem and uncountable Abelian groups Abstract. In the early 1970’s, Hindman proved a beautiful theorem in additive Ramsey theory asserting that [...] settheory.mathtalks.org/assaf- Daniel T. Soukup – New aspects of ladder system uniformization II Talk held by Daniel Soukup (KGRC) at the KGRC seminar on 2018-12-13. Abstract: We continue the previous lecture and present proofs for some of the new results. We show that $\diamondsuit$ implies that for any Aronszajn-tree $T$, there is a ladder system with a 2-colouring with no $T$-uniformization. However, if $\diamondsuit^[...] settheory.mathtalks.org/daniel Asaf Karagila: On countable unions of countable sets BIU seminar in Set Theory December 17, 2018 Speaker: Asaf Karagila (UEA) Title: On countable unions of countable sets Abstract. How big can countable unions of countable sets be? Assuming the axiom of choice, countable. Not assuming the axiom of choice, it is not hard to arrange situation where there are many incomparable cardinals which…[...] settheory.mathtalks.org/asaf-k Ilijas Farah: On the model theory of C*-algebras HUJI Logic Seminar 12/December/18, 11 am, in Ross 63. Speaker: Ilijas Farah Title: On the model theory of C*-algebras Abstract. Ultrapowers and reduced products play a central role in the Elliott classification program for separable (nuclear, etc.) C*-algebras. Although an ultrapower of a separable C*-algebra A is quite different from the reduced product$\e[...] settheory.mathtalks.org/ilijas Clovis Hamel: Stability and Definability in Continuous Logics, Cp-theory and the Tsirelson space. 
Place: Fields Institute Library Date: December 7, 2018 (13:30-15:00) Speaker: Clovis Hamel Title: Stability and Definability in Continuous Logics, Cp-theory and the Tsirelson space. Abstract: An old question in Functional Analysis inquired whether there is a Ba[...] settheory.mathtalks.org/clovis Stevo Todorcevic – Ramsey degrees of topological spaces Talk held by Stevo Todorcevic (University of Toronto, Canada) at the KGRC seminar on 2018-12-04. Abstract: This will be an overview of structural Ramsey theory when the objects are topological spaces. Open problems and directions for further research in this area will also be examined.[...] settheory.mathtalks.org/stevo- James Cummings: Regular cardinals and compactness Mathematical logic seminar - Dec 4 2018 Time: 3:30pm - 4:30 pm Room: Wean Hall 8220 Speaker: James Cummings, Department of Mathematical Sciences, CMU Title: Regular cardinals and compactness Abstract: This talk is a sequel of sorts to last week's talk on singular compactness, but is completely independent of it. I will [...] settheory.mathtalks.org/james-
2019-06-26 13:50:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6394884586334229, "perplexity": 3076.3166603080476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00038.warc.gz"}
https://socratic.org/questions/5576e8c2581e2a11c218c623
# Question #8c623

Jun 9, 2015

1) $o \to p$ Av(speed)$= 20 \frac{m}{s}$ Av(velocity)$= 20 \frac{m}{s}$

2) $o \to p \to o$ Av(speed)$= 20 \frac{m}{s}$ Av(velocity)$= 0 \frac{m}{s}$

#### Explanation:

I considered only the distance op (I didn't use oq). The average velocity is the change of position $x$, indicated as $\Delta x$ (known as displacement), divided by the corresponding change in time $\Delta t$:

${v}_{av} = \frac{\Delta x}{\Delta t} = \frac{{x}_{f} - {x}_{i}}{{t}_{f} - {t}_{i}}$

where ${x}_{i}$ is the position at $o$, which I took as the origin, so ${x}_{i} = 0$.

The average speed $s$ is a little bit more subtle... it is the total distance travelled, $d$, divided by the time needed:

${s}_{av} = \frac{d}{t}$

So you get: for $o \to p$ the average speed and average velocity are both $20 \frac{m}{s}$, while for the round trip $o \to p \to o$ the average speed is still $20 \frac{m}{s}$ but the average velocity is $0 \frac{m}{s}$, because the total displacement is zero.
2019-05-23 12:46:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961070716381073, "perplexity": 849.1850068001991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257244.16/warc/CC-MAIN-20190523123835-20190523145835-00404.warc.gz"}
http://tex.stackexchange.com/questions/23480/creating-a-node-fitting-the-horizontal-width-of-two-other-nodes?answertab=active
# Creating a node fitting the horizontal width of two other nodes I have the two PGF nodes foo and bar positioned in a row. \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning} \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \end{tikzpicture} \end{document} How can I create a third node below those two nodes with the width of foo, bar and the node distance between them? Is there a way to do this with the fit library or is this the wrong approach? - –  Qrrbrbirlbel Dec 19 '13 at 17:46 You can use yshift together with fit and inner sep=0pt to get a node of the same height and width as the other nodes, but shifted vertically. Note that the placement of the node text is different than in a normal node, so I would suggest you use the label=center:<text> option to place the text instead. As Martin points out, you should also set the outer sep of the nodes you're fitting around to 0pt, as otherwise your new node will be too large by \pgflinewidth. \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning,fit} \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={outer sep=0pt, draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \node [ mynode, inner sep=0pt, yshift=-1cm, fit={(foo) (bar)}, label=center:foobar] {}; \end{tikzpicture} \end{document} Below is an approach to get the vertical spacing between the old nodes and the newly created one right. Using the calc library, you can shift the new node down by the height of the old nodes by using ($(foo.south) - (foo.north)$) You can't directly read the value of node distance, so I've appended code to store the value in a new key that can be read in a yshift. \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning,fit,calc} \pgfkeys{ /tikz/node distance/.append code={ \pgfkeyssetvalue{/tikz/node distance value}{#1} } } \begin{document} \begin{tikzpicture}[ node distance=0.2cm, mynode/.style={ draw, outer sep=0pt }] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \node [ mynode, inner sep=0pt, shift=($(foo.south)-(foo.north)$), yshift=-\pgfkeysvalueof{/tikz/node distance value}, fit={(foo) (bar)}, label=center:foobar] {}; \end{tikzpicture} \end{document} A different approach is using the let syntax to calculate the difference between bar.east and foo.west, and using that to set the minimum width of the new node: \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning,calc} \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \path let \p1=(foo.west), \p2=(bar.east) in node [ mynode, below=of foo.south west, anchor=north west, minimum width=\x2-\x1-\pgflinewidth ] {foobar}; \end{tikzpicture} \end{document} - The inner sep was what I didn't know about yet. If someone knew a way to calculate the yshift automatically I'd be interested to hear it. –  h0b0 Jul 19 '11 at 12:20 The outer sep of the first two nodes should also create issues. It places the anchors on the outside of the node border, not in the middle of it. This should cause the fitted node to be 2x.5\pgflinewidth wider. –  Martin Scharrer Jul 19 '11 at 12:35 @Martin: Good point, thanks for that! –  Jake Jul 19 '11 at 12:57 @h0b0: I assume you want the same gap between the old and the new node as you have between the two old ones? 
I've edited my answer to show one possible way of doing this. –  Jake Jul 19 '11 at 12:58 @Jake: I suffered from that dearly. The issue is also that you can't get the original outer sep of the referenced nodes afterwards. –  Martin Scharrer Jul 19 '11 at 12:59 Another one with a getdist=p1 and p2 syntax. It gets the left border of the first and right of the second. I'm not sure if this is simpler but slightly cleaner. \documentclass[border=3mm]{standalone} \usepackage{tikz} \makeatletter \tikzset{ getdist/.style args={#1 and #2}{ getdistc={#1}{#2},minimum width=\mylength-\pgflinewidth }, getdistc/.code 2 args={ \pgfextra{ \pgfpointdiff{\pgfpointanchor{#1}{west}}{\pgfpointanchor{#2}{east}} \xdef\mylength{\the\pgf@x} } } } \makeatother \begin{document} \begin{tikzpicture} \node[ultra thick][draw] (f) {foo}; \node[draw,ultra thin] at (1cm,0.5cm) (b) {bar}; \node[anchor=west,getdist=f and b,draw] at ([yshift=-6mm]f.west) {foobar}; \end{tikzpicture} \end{document} - With .code 2 args (fine idea !), you don't need \pgfextra. –  Alain Matthes Jul 24 '12 at 6:58 @Altermundus Ah, that's from an earlier version where I did all of this inside a path. Indeed it's not needed. –  percusse Jul 24 '12 at 7:58 Update Without fit and calc \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning} \makeatletter \tikzset{minimum dist/.style 2 args={% insert path={% \pgfextra{% \path (#1); \pgfgetlastxy{\xa}{\ya} \path (#2); \pgfgetlastxy{\xb}{\yb} \pgfpointdiff{\pgfpoint{\xa}{\ya}}% {\pgfpoint{\xb}{\yb}}% \pgf@xa=\pgf@x} }, minimum width=\pgf@xa} } \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \node [mynode,below= 1cm of foo.south west, anchor=west, minimum dist={foo.south west}{bar.north east} ] {foobar}; \end{tikzpicture} \end{document} Another variant with fit: \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning,fit,calc} \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \node [mynode,below=1cm of foo.south west,inner sep=0pt, anchor=west, fit={($(foo.south west)+(.5*\pgflinewidth,0)$) ($(bar.north east)-(.5*\pgflinewidth,0)$)}, label=center:foobar] {}; \end{tikzpicture} \end{document} - If you fit using the anchor nodes, why not move the anchor nodes instead of yshifting? –  krlmlr Jul 18 '12 at 17:24 What is your idea? What you want to say by "moving" the anchor nodes ? –  Alain Matthes Jul 18 '12 at 18:22 I mean adding a vertical shift to the nodes used for fit instead of yshift-ing the node. –  krlmlr Jul 18 '12 at 21:02 Ah! Finally I understand your request (perhaps , I'm not sure) below=1cm for example. Personally I don't use positioning and I prefer yshift-ing ! because it's more easy to scale the picture. –  Alain Matthes Jul 19 '12 at 7:47 Do you want to substitute the first solution with the two-argument variant from the linked question? 
–  krlmlr Jul 19 '12 at 18:04 Another solution, which does not involve the fit library, but computes instead the required width of the node: \documentclass{article} \usepackage{tikz} \begin{document} \usetikzlibrary{positioning,calc} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \path let \p1=($(foo.west)-(bar.east)$), \n1 = {veclen(\p1)-0.4pt} % 0.4pt is the width of the border line in node[mynode, below=of foo.south west, anchor=north west, minimum width=\n1] {foobar}; \end{tikzpicture} \end{document} Resulting in: - Nice. Is there a way to programmatically retrieve the value of the border line width (0.4pt)? –  krlmlr Jul 18 '12 at 22:38 Oh, I just noticed that Jake's answer also containes this solution, but even better, because he uses \pgflinewidth instead of the hardcoded value 0.4pt. Shoud I retire my answer? –  JLDiaz Jul 18 '12 at 23:01 Indeed, but your code is shorter. –  krlmlr Jul 18 '12 at 23:03 A variant using fit library but without outer sep=0pt: \documentclass{standalone} \usepackage{tikz} \usetikzlibrary{positioning,fit} \begin{document} \begin{tikzpicture}[node distance=0.2cm,mynode/.style={rectangle,draw,line width=2pt}] \node[mynode] (foo) {foo}; \node[mynode] (bar) [right=of foo] {bar}; \node[fit=(foo)(bar),yshift=-1cm,% line width=1pt, % inner sep=-.5\pgflinewidth, % -1/2 of current line width draw,label=center:foobar]{}; \end{tikzpicture} \end{document} -
2015-07-02 16:43:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619077801704407, "perplexity": 5223.059030520632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095632.21/warc/CC-MAIN-20150627031815-00140-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.cut-the-knot.org/Probability/ActingAsATeam.shtml
# Acting As a Team I ### Strategy 1 Each player chooses the color randomly. The probability of success is $\displaystyle P=\left(\frac{1}{2}\right)^3=\frac{1}{8}.$ ### Strategy 2 There are eight possible combinations of hat colors. Each player has to choose between $R(ed),$ $B(lue),$ or $P(ass),$ making $27$ possible team answers. Each player makes a random choice, now between three possibilities. If all pass, the team loses. With two passes, there are $6$ possible answers, each with probability of $\displaystyle \frac{1}{2}$ of a correct guess. With one pass, there are $12$ possible answers, each with probability of $\displaystyle \frac{1}{4}$ of a correct guess. With no passes, there are $8$ possible answers, each with probability of $\displaystyle \frac{1}{8}$ of being right. The total comes to \displaystyle\begin{align}P&=\frac{1}{27}\left(0\cdot 1+\frac{1}{2}\cdot 6+\frac{1}{4}\cdot 12+\frac{1}{8}\cdot 8\right)\\ &=\frac{3+3+1}{27}=\frac{7}{27}\gt \frac{7}{28}=\frac{1}{4}. \end{align} ### Strategy 3 Strategies 1 and 2 are stupid: the fewer people make a guess, the higher is the probability of success. However, in order to win at least one guess ought to be made. The probability of success is $\displaystyle \frac{1}{2}.$ ### Elwin Berlekamp's Strategy 1. If you see two hats of the same color, guess the other color. 2. If you see two hats of different colors, pass. The results of the strategy are summarized in the table below: $\begin{array}{ccccccccc} A&B&C&&A&B&C&&Outcome\\ \hline R&R&R&&GBW&GBW&GBW&&lose\\ R&R&B&&pass&pass&GBC&&win\\ R&B&R&&pass&GBC&pass&&win\\ R&B&B&&GRC&pass&pass&&win\\ B&R&R&&GRC&pass&pass&&win\\ B&R&B&&pass&GRC&pass&&win\\ B&B&R&&pass&pass&GRC&&win\\ B&B&B&&GRW&GRW&GRW&&lose \end{array}$ In the table, $GBW=guess\;blue:\;wrong,$ $GBC=guess\;blue:\;correct,$ $GRW=guess\;red:\;wrong,$ $GRC=guess\;red:\;correct.$ The strategy gives six team wins out of eight: $\displaystyle \frac{6}{8}=\frac{3}{4}.$ ### Acknowledgment This is one of the problems from Chapter 6 of J. Havil's Impossible? Surprising Solutions to Counterintuitive Conundrums (Princeton University Press, 2008). Previously, the problem appeared in P. Winkler's Mathematical Puzzles: A Connoisseur's Collection (A K Peters, 2004)
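The $3/4$ success rate is easy to confirm by brute force. The short C program below (my addition, not part of the article) enumerates all $8$ hat assignments and applies Berlekamp's rule, scoring a win when at least one player guesses and every guess made is correct; it should report 6 wins out of 8, matching the table above.

```c
#include <stdio.h>

/* Brute-force check of Berlekamp's strategy: colors are 0 = red, 1 = blue. */
int main(void)
{
    int wins = 0;
    for (int mask = 0; mask < 8; mask++) {
        int hat[3] = { (mask >> 0) & 1, (mask >> 1) & 1, (mask >> 2) & 1 };
        int correct = 0, wrong = 0;
        for (int i = 0; i < 3; i++) {
            int a = hat[(i + 1) % 3], b = hat[(i + 2) % 3];
            if (a == b) {                  /* sees two hats of the same color */
                int guess = 1 - a;         /* guess the other color */
                if (guess == hat[i]) correct++; else wrong++;
            }                              /* sees two different colors: pass */
        }
        if (correct > 0 && wrong == 0) wins++;
    }
    printf("wins: %d out of 8\n", wins);
    return 0;
}
```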
2019-04-21 00:45:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7855839729309082, "perplexity": 1034.162148056411}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421022555-00208.warc.gz"}
https://tex.stackexchange.com/questions/126236/draw-grid-in-pgfplots-when-axis-line-none
# draw grid in pgfplots when axis line=none I would like to have a grid in the y axis although the axis y line is not drawn, either by setting axis y line=none or hide y axis. In the MWE below the horizontal line along y=0 is not drawn when I use the extra y ticks method described in one of the answers here. How could it work when the y axis is not displayed? \documentclass{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ extra y ticks = 0, extra y tick style = { grid = major }, domain=0:360, hide y axis, ] \end{axis} \end{tikzpicture} \end{document} The key hide y axis completely disables all ticks and lines, extra or otherwise. Instead, you can stop the axis line from being drawn by setting separate axis lines, y axis line style= { draw opacity=0 } The separate axis lines is necessary to be able to assign the line style only to the y axis, since usually the axes are drawn with a single path. Note that you have to use draw opacity=0 to hide the y axis, the usual approach draw=none doesn't work here. \documentclass{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ domain=0:360, ytick=0, separate axis lines, y axis line style= { draw opacity=0 }, ymajorgrids, tick pos = left ] \end{axis} \end{tikzpicture} \end{document} In case you want to do away with the tick mark and the tick label, the easiest thing might be to use one of the approaches from How can I add a zero line to a plot? to draw the zero line without the tick/grid mechanism. \documentclass[border=5mm]{standalone} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[ domain=0:360, hide y axis, before end axis/.code={ \draw [/pgfplots/every axis grid] ({rel axis cs:0,0}|-{axis cs:0,0}) -- ({rel axis cs:1,0}|-{axis cs:0,0}); }, axis lines*=left ] • @MigueldeVal-Borro: Yeah, to get rid of the tick mark you would have to set major tick length=0pt, but that would apply to the x axis as well. I've edited my answer to show a different approach. – Jake Jul 31 '13 at 18:00 • Excellent! Since I am using the extra x ticks method for the x axis and would like to have the same style, is the default extra tick style gray, very thin or gray, ultra thin ? – gypaetus Jul 31 '13 at 19:34 • @MigueldeVal-Borro: The default style is thin,black!25. You can also use the actual style that's used to draw the grid lines by adding /pgfplots/every axis grid to the \draw options. – Jake Jul 31 '13 at 20:10 • With the release of PGFPlots v1.13 the bug has been fixed and now also y axis line style={draw=none} works fine. – Stefan Pinnow Jan 10 '16 at 6:57
2021-01-22 20:01:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6173262596130371, "perplexity": 1700.2324745057663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00339.warc.gz"}
https://www.teamstudy.com/resources/chapter-5-circular-motion-and-gravitation-EipKDoX2
# Chapter 5: Circular Motion and Gravitation

Centripetal force on the FBD: the net force points directly toward the center of the circle

• Mass is never a factor in any uniform circular motion

### Uniform Circular Motion

Motion of an object traveling at a constant speed on a circular path. Centripetal force is the $\sum F$ that keeps an object on the circular path:

$F_{rad} = ma_{c} = mv^{2}/r$

$v=2\pi r/T$, where $T$ represents the period of the motion

$a_{c} = v^{2}/r$, where $a_{c}$ represents the centripetal acceleration

$a_{c} = 4\pi^{2}R/T^{2}$, where $R$ represents the radius

• An object will travel tangentially to its original circular motion once it loses the centripetal force. (**Inertia**)

• The centripetal force is not drawn as its own separate arrow on the FBD

### Circular motion in reality

Friction is responsible for the centripetal force on a car making a curve with a specific $\mu$:

$F_{c}= \mu_{s}F_{N}=\mu_{s}mg=mv^{2}/r \Rightarrow v= \sqrt{\mu_{s}gr}$

### Banking

On a tilt, the motion of an object in a circular direction:

$F_c = F_N \sin\Theta = mv^2 / r$

$F_{N}\cos\Theta = mg$

$\tan\Theta = v^2 /(rg)$

#### On a Ferris wheel

$N_B - mg = mv^2 / R \Rightarrow N_B = mg + mv^2 /R = m(g+v^2/R)$

$mg - N_T = mv^2 / R \Rightarrow N_T = mg - mv^2 /R = m(g-v^2/R)$

• The normal force is greater at the bottom, giving a greater pressing force on a person.

• At $v^2 / R = g$ the normal force at the top goes to $0$: the rider feels weightless at the top.

• Walking can be treated as uniform circular motion: $F_{c} = mg = mv^2 /r \Rightarrow v = \sqrt{gr}$

### Newton's Law of Gravitation

$G = 6.674 \cdot 10^{-11} \frac{N \cdot m^2}{kg^2}$ - Gravitational Constant

$F_G = Gm_1m_2 / r^2$ - the attraction force between 2 masses, where $m_1, m_2$ are the 2 masses and $r$ is the distance between them; $F_{g,12}=-F_{g,21}$

Weight $= mg = GmM_{earth} / r_{earth}^2$

#### Circular motion around the earth

$F_c = GmM_E / r^2 = mv^2/r \Rightarrow v = \sqrt{GM_E/r}$, where $r$ is the radius from the center of the earth to the object

#### Kepler's law

$v = \sqrt{GM/r} = 2\pi r/T \Rightarrow T = 2\pi r^{3/2}/\sqrt{GM}$ and $M = 4\pi^{2}r^3 / (GT^2)$

Escape Velocity - the speed required to leave earth's gravitational pull

### Apparent Weight

• The weight that we feel when other applied forces act on us.

• It is different from $F_g$, the actual gravitational force.
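To make the orbit formulas above concrete, here is a small C sketch (my addition, not part of the notes); the chosen radius, Earth radius plus 400 km, is only an illustrative example.

```c
#include <math.h>
#include <stdio.h>

/* Numerical illustration of the orbit formulas above.
   The orbital radius (Earth radius + 400 km, roughly an ISS-like orbit)
   is just an example value, not something taken from the notes. */
int main(void)
{
    const double PI = acos(-1.0);
    const double G  = 6.674e-11;        /* N*m^2/kg^2 */
    const double Me = 5.972e24;         /* mass of the Earth, kg */
    const double r  = 6.371e6 + 4.0e5;  /* orbital radius, m */

    double v = sqrt(G * Me / r);        /* v = sqrt(G*M_E/r) */
    double T = 2.0 * PI * r / v;        /* T = 2*pi*r/v */

    printf("orbital speed  v = %.0f m/s\n", v);
    printf("orbital period T = %.0f s (about %.1f min)\n", T, T / 60.0);
    return 0;
}
```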
2021-11-27 17:06:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.301227331161499, "perplexity": 2386.184704480354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00255.warc.gz"}